WorldWideScience

Sample records for extremely large number

  1. Analytical robustness of nine common assays: frequency of outliers and extreme differences identified by a large number of duplicate measurements.

    Science.gov (United States)

    Neubig, Stefanie; Grotevendt, Anne; Kallner, Anders; Nauck, Matthias; Petersmann, Astrid

    2017-02-15

    Duplicate measurements can be used to describe the performance and analytical robustness of assays and to identify outliers. We performed about 235,000 duplicate measurements of nine routinely measured quantities and evaluated the observed differences between the replicates to develop new markers for analytical performance and robustness. Catalytic activity concentrations of aspartate aminotransferase (AST), alanine aminotransferase (ALT), and concentrations of calcium, cholesterol, creatinine, C-reactive protein (CRP), lactate, triglycerides and thyroid-stimulating hormone (TSH) in 237,261 patient plasma samples were measured in replicates using routine methods. The performance of duplicate measurements was evaluated in scatterplots with a variable and symmetrical zone of acceptance (A-zone) around the line of equality. Two quality markers were established: 1) AZ95: the width of the A-zone within which 95% of all duplicate measurements fall; and 2) OPM (outliers per mille): the relative number of outliers when an A-zone width of 5% is applied. The AZ95 ranges from 3.2% for calcium to 11.5% for CRP and the OPM from 5 (calcium) to 250 (creatinine). Calcium, TSH and cholesterol have an AZ95 of less than 5% and an OPM of less than 50. Duplicate measurements of a large number of patient samples identify even low frequencies of extreme differences and the outliers defined by them. We suggest two additional quality markers, AZ95 and OPM, to complement the description of assay performance and robustness. This approach can aid the selection of measurement procedures in view of clinical needs.
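
    The two markers lend themselves to direct computation from paired results. Below is a minimal Python sketch, assuming that the quantity compared against the symmetric acceptance zone is the absolute difference of each pair expressed as a percentage of the pair mean; the abstract does not spell out the exact difference metric, so this definition and the synthetic data are illustrative only.

```python
import numpy as np

def az95_and_opm(x1, x2, opm_zone_pct=5.0):
    """Compute the two quality markers described in the abstract.

    AZ95: width (in percent) of the symmetric acceptance zone around the line
          of equality that contains 95% of all duplicate pairs.
    OPM:  outliers per mille, i.e. pairs per 1000 falling outside a fixed
          5% acceptance zone.

    The relative-difference definition used here (|x1 - x2| over the pair
    mean) is an assumption; the abstract does not specify it.
    """
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    rel_diff_pct = 100.0 * np.abs(x1 - x2) / ((x1 + x2) / 2.0)
    az95 = np.percentile(rel_diff_pct, 95)                # zone width covering 95% of pairs
    opm = 1000.0 * np.mean(rel_diff_pct > opm_zone_pct)   # outliers per mille at the 5% zone
    return az95, opm

# Synthetic duplicates, for illustration only
rng = np.random.default_rng(0)
truth = rng.uniform(2.0, 3.0, 10_000)
dup1 = truth * (1 + rng.normal(0, 0.008, truth.size))
dup2 = truth * (1 + rng.normal(0, 0.008, truth.size))
print(az95_and_opm(dup1, dup2))
```

    Under this reading, AZ95 is simply the 95th percentile of the relative pair differences, and OPM is the fraction of pairs exceeding the fixed 5% zone, scaled to per mille.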

  2. Estimating Large Numbers

    Science.gov (United States)

    Landy, David; Silbert, Noah; Goldin, Aleah

    2013-01-01

    Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions…

  3. Large Extremity Peripheral Nerve Repair

    Science.gov (United States)

    2015-10-01

    IL, Kochevar IE, Redmond RW. Large extremity peripheral nerve repair. Military Health System Research Symposium (MHSRS) Fort Lauderdale, FL. August...some notable discoveries that may impact military health care in the near future. There is a clear need in military medicine to improve outcomes in...membranes or "caul" intact was considered extremely lucky. Children were gifted with life-long happiness, the ability to see spirits, and protection

  4. Large Extremity Peripheral Nerve Repair

    Science.gov (United States)

    2015-10-01

    MB, Roberts AB, Wakersfield LM, de Crombrugghe B. Some recent advances in the chemistry and biology of transforming growth factor-beta. J Cell Biol...animal facility and had access to food and water as required. 59 Copyright © 2015 American Society of Plastic Surgeons. Unauthorized reproduction...s): F1 Art: PRS182917 Input-nlm 69 Manuscript 3: Large Gap Nerve Reconstruction Using Acellular Nerve Allografts And Photochemical Tissue

  5. Numbers Defy the Law of Large Numbers

    Science.gov (United States)

    Falk, Ruma; Lann, Avital Lavie

    2015-01-01

    As the number of independent tosses of a fair coin grows, the rates of heads and tails tend to equality. This is misinterpreted by many students as being true also for the absolute numbers of the two outcomes, which, conversely, depart unboundedly from each other in the process. Eradicating that misconception, as by coin-tossing experiments,…
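
    A few lines of Python make the distinction concrete: the difference between the rates of heads and tails shrinks as the number of tosses grows, while the absolute difference between the counts typically keeps growing, roughly like the square root of the number of tosses. This simulation is illustrative only and is not part of the article.

```python
import random

def coin_experiment(n_tosses, checkpoints=(100, 10_000, 1_000_000), seed=1):
    """Track the rate difference and the absolute count difference of heads vs. tails."""
    rng = random.Random(seed)
    heads = 0
    for i in range(1, n_tosses + 1):
        heads += rng.random() < 0.5
        tails = i - heads
        if i in checkpoints:
            print(f"n={i:>9}: |rate difference| = {abs(heads - tails) / i:.5f}, "
                  f"|count difference| = {abs(heads - tails)}")

coin_experiment(1_000_000)
```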

  6. Dynamos at extreme magnetic Prandtl numbers

    CERN Document Server

    Verma, Mahendra K

    2015-01-01

    We present an MHD shell model suitable for the computation of various energy fluxes of magnetohydrodynamic turbulence for very small and very large magnetic Prandtl numbers $\mathrm{Pm}$; such computations are inaccessible to direct numerical simulations. For small $\mathrm{Pm}$, we observe that both kinetic and magnetic energy spectra scale as $k^{-5/3}$ in the inertial range, but the dissipative magnetic energy scales as $k^{-17/3}$. Here, the kinetic energy at large length scale feeds the large-scale magnetic field that cascades to small-scale magnetic field, which gets dissipated by Joule heating. The large $\mathrm{Pm}$ dynamo has a similar behaviour except that the dissipative kinetic energy scales as $k^{-13/3}$. For this case, the large-scale velocity field transfers energy to large-scale magnetic field, which gets transferred to small-scale velocity and magnetic fields. The energy of the small-scale magnetic field also gets transferred to the small-scale velocity field. The energy accumulated at s...

  7. CFRP lightweight structures for extremely large telescopes

    DEFF Research Database (Denmark)

    Jessen, Niels Christian; Nørgaard-Nielsen, Hans Ulrik; Schroll, J.

    2008-01-01

    Telescope structures are traditionally built out of steel. To improve the possibility of realizing the ambitious extremely large telescopes, materials with a higher specific stiffness and a lower coefficient of thermal expansion are needed. An important possibility is Carbon Fibre Reinforced Plastic (CFRP). The advantages of using CFRP for the secondary mirror support structure of the European Overwhelmingly Large Telescope are discussed.

  8. Control challenges for extremely large telescopes

    Science.gov (United States)

    MacMartin, Douglas G.

    2003-08-01

    The next generation of large ground-based optical telescopes is likely to involve a highly segmented primary mirror that must be controlled in the presence of wind and other disturbances, resulting in a new set of challenges for control. The current design concept for the California Extremely Large Telescope (CELT) includes 1080 segments in the primary mirror, with the out-of-plane degrees of freedom actively controlled. In addition to the 3240 primary mirror actuators, the secondary mirror of the telescope will also require control of at least 5 degrees of freedom. The bandwidth of both control systems will be limited by coupling to structural modes. I discuss three control issues for extremely large telescopes in the context of the CELT design, describing both the status and remaining challenges. First, with many actuators and sensors, the cost and reliability of the control hardware is critical; the hardware requirements and current actuator design are discussed. Second, wind buffeting due to turbulence inside the telescope enclosure is likely to drive the control bandwidth higher, and hence limitations resulting from control-structure interaction must be understood. Finally, the impact on the control architecture is briefly discussed.

  9. Instrumentation for the California Extremely Large Telescope

    Science.gov (United States)

    Taylor, Keith; McLean, Ian S.

    2003-03-01

    The Phase A study for the California Extremely Large Telescope (CELT) Project has recently been completed. As part of this exercise a working group was set up to evolve instrumentation strategies matched to the scientific case for the CELT facility. We report here on the proposed initial instrument suite, which includes not only massively multiplexed seeing-limited multi-object spectroscopy but also wide-field adaptive-optics-fed integral-field spectroscopy and imaging at, or approaching, CELT's diffraction limit.

  10. Extremely Large Images: Considerations for Contemporary Approach

    CERN Document Server

    Kitaeff, Slava; Wu, Chen; Taubman, David

    2013-01-01

    The new wide-field radio telescopes, such as ASKAP, MWA, LOFAR, eVLA and SKA, will produce spectral-imaging data-cubes (SIDC) of unprecedented volumes, on the order of hundreds of petabytes. Serving such data as images to the end-user may encounter challenges unforeseen during the development of IVOA SIAP. We discuss the requirements for extremely large SIDC, and in this light we analyse the applicability of the approach taken in the ISO/IEC 15444 (JPEG2000) standards.

  11. Reading the World through Very Large Numbers

    Science.gov (United States)

    Greer, Brian; Mukhopadhyay, Swapna

    2010-01-01

    One original, and continuing, source of interest in large numbers is observation of the natural world, such as trying to count the stars on a clear night or contemplation of the number of grains of sand on the seashore. Indeed, a search of the internet quickly reveals many discussions of the relative numbers of stars and grains of sand. Big…

  12. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Haiganoush K. Preisler; Jeff Eidenshink; Stephen Howard; Robert E. Burgan

    2015-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the...

  13. Large Numbers and Calculators: A Classroom Activity.

    Science.gov (United States)

    Arcavi, Abraham; Hadas, Nurit

    1989-01-01

    Described is an activity demonstrating how a scientific calculator can be used in a mathematics classroom to introduce new content while studying a conventional topic. Examples of reading and writing large numbers, and reading hidden results are provided. (YP)

  14. Large number discrimination in newborn fish.

    Directory of Open Access Journals (Sweden)

    Laura Piffer

    Full Text Available Quantitative abilities have been reported in a wide range of species, including fish. Recent studies have shown that adult guppies (Poecilia reticulata) can spontaneously select the larger number of conspecifics. In particular, the evidence collected in the literature suggests the existence of two distinct systems of number representation: a precise system up to 4 units, and an approximate system for larger numbers. Spontaneous numerical abilities, however, seem to be limited to 4 units at birth and it is currently unclear whether or not the large number system is absent during the first days of life. In the present study, we investigated whether newborn guppies can be trained to discriminate between large quantities. Subjects were required to discriminate between groups of dots with a 0.50 ratio (e.g., 7 vs. 14) in order to obtain a food reward. To dissociate the roles of number and of the continuous quantities that co-vary with numerical information (such as cumulative surface area, space and density), three different experiments were set up: in Exp. 1 number and continuous quantities were simultaneously available; in Exp. 2 we controlled for continuous quantities and only numerical information was available; in Exp. 3 numerical information was made irrelevant and only continuous quantities were available. Subjects successfully solved the tasks in Exp. 1 and 2, providing the first evidence of large number discrimination in newborn fish. No discrimination was found in Exp. 3, meaning that number acuity is better than spatial acuity. A comparison with the onset of numerical abilities observed in shoal-choice tests suggests that training procedures can promote the development of numerical abilities in guppies.

  15. Large numbers hypothesis. I - Classical formalism

    Science.gov (United States)

    Adams, P. J.

    1982-01-01

    A self-consistent formulation of physics at the classical level embodying Dirac's large numbers hypothesis (LNH) is developed based on units covariance. A scalar 'field' phi(x) is introduced and some fundamental results are derived from the resultant equations. Some unusual properties of phi are noted such as the fact that phi cannot be the correspondence limit of a normal quantum scalar field.

  16. Large numbers hypothesis. II - Electromagnetic radiation

    Science.gov (United States)

    Adams, P. J.

    1983-01-01

    This paper develops the theory of electromagnetic radiation in the units covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t to the 1/4, precisely in accord with LNH. The cosmological red-shift law is also derived and it is shown to differ considerably from the standard form νR = const.

  17. Estimation of the number of extreme pathways for metabolic networks

    Directory of Open Access Journals (Sweden)

    Thiele Ines

    2007-09-01

    Full Text Available Abstract Background The set of extreme pathways (ExPa), $\{p_i\}$, defines the convex basis vectors used for the mathematical characterization of the null space of the stoichiometric matrix for biochemical reaction networks. ExPa analysis has been used for a number of studies to determine properties of metabolic networks as well as to obtain insight into their physiological and functional states in silico. However, the number of ExPas, $p = |\{p_i\}|$, grows with the size and complexity of the network being studied, and this poses a computational challenge. For this study, we investigated the relationship between the number of extreme pathways and simple network properties. Results We established an estimating function for the number of ExPas using these easily obtainable network measurements. In particular, it was found that $\log p$ had an exponential relationship with $\log\!\left[\sum_{i=1}^{R} d_{-i}\, d_{+i}\, c_i\right]$, where $R = |R_{\mathrm{eff}}|$ is the number of active reactions in a network, and $d_{-i}$ and $d_{+i}$ ...
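
    As a rough illustration of how such a predictor could be evaluated, the sketch below computes the sum appearing in the estimating function from a stoichiometric matrix. Treating $d_{-i}$ and $d_{+i}$ as the numbers of metabolites consumed and produced by reaction $i$, and $c_i$ as an optional per-reaction weight, is an assumption on our part, since the abstract is truncated before these terms are defined.

```python
import numpy as np

def expa_predictor(S, c=None):
    """Evaluate sum_i d_minus_i * d_plus_i * c_i for a stoichiometric matrix S
    (rows = metabolites, columns = reactions).

    Assumed interpretations (the abstract is truncated before defining them):
      d_minus_i : number of metabolites consumed by reaction i (negative entries)
      d_plus_i  : number of metabolites produced by reaction i (positive entries)
      c_i       : optional per-reaction weight, defaulting to 1
    """
    S = np.asarray(S)
    d_minus = (S < 0).sum(axis=0)
    d_plus = (S > 0).sum(axis=0)
    c = np.ones(S.shape[1]) if c is None else np.asarray(c, float)
    return float(np.sum(d_minus * d_plus * c))

# Toy 3-metabolite, 4-reaction network; the abstract relates log(p) to the log of this sum
S = np.array([[-1,  1,  0,  0],
              [ 1, -1, -1,  0],
              [ 0,  0,  1, -1]])
print(np.log(expa_predictor(S)))
```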

  18. Extremes in Otolaryngology Resident Surgical Case Numbers: An Update.

    Science.gov (United States)

    Baugh, Tiffany P; Franzese, Christine B

    2017-06-01

    Objectives The purpose of this study is to examine the effect of minimum case numbers on otolaryngology resident case log data and understand differences in minimum, mean, and maximum among certain procedures as a follow-up to a prior study. Study Design Cross-sectional survey using a national database. Setting Academic otolaryngology residency programs. Subjects and Methods Review of otolaryngology resident national data reports from the Accreditation Council for Graduate Medical Education (ACGME) resident case log system performed from 2004 to 2015. Minimum, mean, standard deviation, and maximum values for total number of supervisor and resident surgeon cases and for specific surgical procedures were compared. Results The mean total number of resident surgeon cases for residents graduating from 2011 to 2015 ranged from 1833.3 ± 484 in 2011 to 2072.3 ± 548 in 2014. The minimum total number of cases ranged from 826 in 2014 to 1004 in 2015. The maximum total number of cases increased from 3545 in 2011 to 4580 in 2015. Multiple key indicator procedures had fewer than the required minimum reported in 2015. Conclusion Despite the ACGME instituting required minimum numbers for key indicator procedures, residents have graduated without meeting these minimums. Furthermore, there continue to be large variations in the minimum, mean, and maximum numbers for many procedures. Variation among resident case numbers is likely multifactorial. Ensuring proper instruction on coding and case role as well as emphasizing frequent logging by residents will ensure programs have the most accurate data to evaluate their case volume.

  19. Dynamos at extreme magnetic Prandtl numbers: insights from shell models

    Science.gov (United States)

    Verma, Mahendra K.; Kumar, Rohit

    2016-12-01

    We present an MHD shell model suitable for computation of various energy fluxes of magnetohydrodynamic turbulence for very small and very large magnetic Prandtl numbers $\mathrm{Pm}$; such computations are inaccessible to direct numerical simulations. For small $\mathrm{Pm}$, we observe that both kinetic and magnetic energy spectra scale as $k^{-5/3}$ in the inertial range, but the dissipative magnetic energy scales as $k^{-11/3}\exp(-k/k_\eta)$. Here, the kinetic energy at large length scale feeds the large-scale magnetic field that cascades to small-scale magnetic field, which gets dissipated by Joule heating. The large-$\mathrm{Pm}$ dynamo has a similar behaviour except that the dissipative kinetic energy scales as $k^{-13/3}$. For this case, the large-scale velocity field transfers energy to the large-scale magnetic field, which gets transferred to small-scale velocity and magnetic fields; the energy of the small-scale magnetic field also gets transferred to the small-scale velocity field, and the energy thus accumulated is dissipated by the viscous force.

  20. Simulation of the RCS Range Resolution of Extremely Large Target

    Institute of Scientific and Technical Information of China (English)

    WANG Sheng; XIONG Qian; JIANG Ai-ping; XIA Ying-qing; XU Peng-gen

    2005-01-01

    The high-frequency hybrid technique based on an iterative Physical Optics (PO) and method of equivalent currents (MEC) approach is developed for predicting the range resolution of the Radar Cross Section (RCS) in the spatial domain. We introduce this hybrid high-frequency method to simulate the range resolution of extremely large targets in the near zone, and apply it to the range resolution of two 1 m × 1 m plates and a ship. The study improves the speed of simulating the range resolution of extremely large targets and prepares for the application of extrapolation and interpolation in the spatial domain.

  1. Report from the 4th Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Jacek Becla

    2011-02-01

    Full Text Available Academic and industrial users are increasingly facing the challenge of petabytes of data, but managing and analyzing such large data sets still remains a daunting task. The 4th Extremely Large Databases workshop was organized to examine the needs of communities facing these issues that were under-represented at the past workshops. Approaches to big-data statistical analytics, as well as opportunities related to emerging hardware technologies, were also debated. Writable extreme scale databases and the science benchmark were discussed. This paper is the final report of the discussions and activities at this workshop.

  2. Classical to quantum in large number limit

    CERN Document Server

    Modi, Kavan; Pascazio, Saverio; Vedral, Vlatko; Yuasa, Kazuya

    2011-01-01

    We construct a quantumness witness following the work of Alicki and van Ryn (AvR) in "A simple test of quantumness for a single system" [J. Phys. A: Math. Theor., 41, 062001 (2008)]. The AvR test is designed to detect quantumness. We reformulate the AvR test by defining it for quantum states rather than for observables. This allows us to identify the necessary quantities and resources to detect quantumness for any given system. The first quantity turns out to be the purity of the system. When applying the witness to a system with even moderate mixedness the protocol is unable to reveal any quantumness. We then show that having many copies of the system leads the witness to reveal quantumness. This seems contrary to the Bohr correspondence, which asserts that in the large number limit quantum systems become classical, while the witness shows quantumness when several non-quantum systems, as determined by the witness, are considered together. However, the resources required to detect the quantumness increase dra...

  3. Design concepts for the California Extremely Large Telescope (CELT)

    Science.gov (United States)

    Nelson, Jerry E.

    2000-08-01

    The California Extremely Large Telescope is a study currently underway by the University of California and the California Institute of Technology to assess the feasibility of building a 30-m ground-based telescope that will push the frontiers of observational astronomy. The telescope will be fully steerable, with a large field of view, and be able to work both in a seeing-limited regime and as a diffraction-limited telescope with adaptive optics.

  4. Laws of small numbers extremes and rare events

    CERN Document Server

    Falk, Michael; Hüsler, Jürg

    2004-01-01

    Since the publication of the first edition of this seminar book in 1994, the theory and applications of extremes and rare events have enjoyed an enormous and still increasing interest. The intention of the book is to give a mathematically oriented development of the theory of rare events underlying various applications. This characteristic of the book was strengthened in the second edition by incorporating various new results on about 130 additional pages. Part II, which has been added in the second edition, discusses recent developments in multivariate extreme value theory. Particularly notable is a new spectral decomposition of multivariate distributions into univariate ones, which makes multivariate questions more accessible in theory and practice. One of the most innovative and fruitful topics during the last decades was the introduction of generalized Pareto distributions in the univariate extreme value theory. Such a statistical modelling of extremes is now systematically developed in the multivariate fram...

  5. Imaging extrasolar planets with the European Extremely Large Telescope

    Directory of Open Access Journals (Sweden)

    Jolissaint L.

    2011-07-01

    Full Text Available The European Extremely Large Telescope (E-ELT) is the most ambitious of the ELTs being planned. With a diameter of 42 m and being fully adaptive from the start, the E-ELT will be more than one hundred times more sensitive than the present-day largest optical telescopes. Discovering and characterising planets around other stars will be one of the most important aspects of the E-ELT science programme. We model an extreme adaptive optics instrument on the E-ELT. The resulting contrast curves translate to the detectability of exoplanets.

  6. Report from the 3rd Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Jacek Becla

    2010-02-01

    Full Text Available Academic and industrial users are increasingly facing the challenge of petabytes of data, but managing and analyzing such large data sets still remains a daunting task. Both the database and the map/reduce communities worldwide are working on addressing these issues. The 3rd Extremely Large Databases workshop was organized to examine the needs of scientific communities beginning to face these issues, to reach out to European communities working on extremely large scale data challenges, and to brainstorm possible solutions. The science benchmark that emerged from the 2nd workshop in this series was also debated. This paper is the final report of the discussions and activities at this workshop.

  7. Large Chip Production Mechanism under the Extreme Load Cutting Conditions

    Institute of Scientific and Technical Information of China (English)

    LIU Xianli; HE Genghuang; YAN Fugang; CHENG Yaonan; LIU Li

    2015-01-01

    There is a great deal of theoretical research on chip production and chip-breaking characteristics under conventional and high-speed cutting conditions; however, there is insufficient research on the chip formation mechanism, and on its influence on the cutting state, for large workpieces under extreme load cutting. This paper presents a model of the large saw-tooth chip obtained by finite element simulation, which provides a detailed analysis of the characteristics of extreme load cutting as well as the morphology and removal of the large chip. In addition, a formula that quantitatively describes the saw-tooth level of the large chip is established on the basis of cutting experiments on the high-temperature, high-strength steel 2.25Cr-1Mo-0.25V. The cutting experiments use scanning electron microscopy and super-depth-of-field electron microscopy to measure the large chips produced under different cutting parameters, which verifies the validity of the established model. The results show that the large saw-toothed chip is produced by the squeezing action between the workpiece and the cutting tool; the chip develops a hardened layer where it contacts the cutting tool, and the saw-teeth tend to form in the transverse direction. This research establishes a theoretical model for the large chip, performs cutting experiments under extreme load cutting conditions, and analyzes the production mechanism of the large chip at the macro and micro scales. The proposed research can therefore provide theoretical guidance and technical support for improving productivity and cutting technology research.

  8. Acceleration Detection of Large (Probably Prime) Numbers

    Directory of Open Access Journals (Sweden)

    Dragan Vidakovic

    2013-02-01

    Full Text Available In order to avoid unnecessary applications of the Miller-Rabin algorithm to the number in question, we resort to trial division by a few initial prime numbers, since such division takes less time. How far we should go with such division is the question that we are trying to answer in this paper. In theory the matter is fully resolved; in practice, however, that is of little use. Therefore, we present a solution that is probably irrelevant to theorists, but very useful to people who have spent many nights producing large (probably prime) numbers using their own software.
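
    The strategy described, trial division by a few small primes followed by Miller-Rabin only for the survivors, can be sketched in a few lines of Python. The cut-off of twelve small primes below is arbitrary; choosing that cut-off is precisely the question the paper addresses.

```python
import random

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]  # arbitrary cut-off

def miller_rabin(n, rounds=40):
    """Standard probabilistic Miller-Rabin primality test."""
    if n < 2:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def is_probable_prime(n):
    """Cheap trial division by a few small primes first, then Miller-Rabin."""
    for p in SMALL_PRIMES:
        if n == p:
            return True
        if n % p == 0:
            return False
    return miller_rabin(n)

# Search for a probable prime near a random 512-bit odd number
candidate = random.getrandbits(512) | 1
while not is_probable_prime(candidate):
    candidate += 2
print(candidate)
```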

  9. Properties of Zerodur mirror blanks for extremely large telescopes

    Science.gov (United States)

    Döhring, Thorsten; Hartmann, Peter; Jedamzik, Ralf; Thomas, Armin; Lentes, Frank-Thomas

    2006-02-01

    SCHOTT has been producing the zero-expansion glass ceramic ZERODUR for 35 years. More than 250 ZERODUR mirror blanks have already been delivered for the large segmented mirror telescopes KECK I, KECK II, HET, GTC, and LAMOST. Now several extremely large telescope (ELT) projects are in discussion, which are designed with even larger primary mirrors (TMT, OWL, EURO50, JELT, CFGT, GMT). These telescopes, too, can only be achieved by segmentation of the primary mirror. Based on the results of the recent production of segment blanks for the GTC project, the general requirements of mirror blanks for future extremely large telescope projects have been evaluated. The specification regarding the material quality and blank geometry is discussed in detail. As the planned mass production of mirror blanks for ELTs will last for several years, economic factors are becoming even more important for the success of the projects. SCHOTT is a global enterprise with a solid economical basis and therefore an ideal partner for the mirror blank delivery of extremely large telescopes.

  10. Report from the 2nd Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Jacek Becla

    2009-03-01

    Full Text Available The complexity and sophistication of large scale analytics in science and industry have advanced dramatically in recent years. Analysts are struggling to use complex techniques such as time series analysis and classification algorithms because their familiar, powerful tools are not scalable and cannot effectively use scalable database systems. The 2nd Extremely Large Databases (XLDB) workshop was organized to understand these issues, examine their implications, and brainstorm possible solutions. The design of a new open-source science database, SciDB, that emerged from the first workshop in this series was also debated. This paper is the final report of the discussions and activities at this workshop.

  11. Direct Imaging of Exoplanets, from very large to extremely large telescopes

    Science.gov (United States)

    Kasper, M.

    2015-10-01

    Presently, dedicated instruments at 8-m class telescopes (SPHERE for the VLT, GPI for Gemini) are about to discover and explore self-luminous giant planets by direct imaging and spectroscopy. In a decade, the next generation of 30-40 m ground-based Extremely Large Telescopes (ELTs) has the potential to dramatically enlarge the discovery space towards older giant planets seen in reflected light and ultimately even a small number of rocky planets. In order to fulfill the demanding contrast requirements of a part in a million to a part in a billion at separations of one tenth of an arcsecond, the seeing-limited PSF contrast must gradually be improved by eXtreme Adaptive Optics (XAO), non-common-path aberration compensation, coronagraphy, and science image post-processing. None of these steps alone is sufficient to achieve this enormous contrast. High-contrast imaging (HCI) from the ground encompasses all of these disciplines, which have to be considered in a system approach. The presentation will introduce the principle of HCI and present the current implementation in SPHERE, ESO's imager for giant exoplanets at the VLT. It will then discuss the requirements and necessary R&D to reach the ultimate goal, observing terrestrial exoplanets with the next generation of instruments for the ELTs.

  12. Challenges in optics for Extremely Large Telescope instrumentation

    CERN Document Server

    Spanò, P; Norrie, C J; Cunningham, C R; Strassmeier, K G; Bianco, A; Blanche, P A; Bougoin, M; Ghigo, M; Hartmann, P; Zago, L; Atad-Ettedgui, E; Delabre, B; Dekker, H; Melozzi, M; Snyders, B; Takke, R; Walker, D D

    2006-01-01

    We describe and summarize the optical challenges for future instrumentation for Extremely Large Telescopes (ELTs). Knowing the complex instrumental requirements is crucial for the successful design of 30-60 m aperture telescopes. After all, the success of ELTs will heavily rely on their instrumentation and this, in turn, will depend on the ability to produce large and ultra-precise optical components like light-weight mirrors, aspheric lenses, segmented filters, and large gratings. New materials and manufacturing processes are currently under study, both at research institutes and in industry. In the present paper, we report on their progress with particular emphasis on volume-phase-holographic gratings, photochromic materials, sintered silicon-carbide mirrors, ion-beam figuring, ultra-precision surfaces, and free-form optics. All are promising technologies opening new degrees of freedom to optical designers. New optronic-mechanical systems will enable efficient use of the very large focal planes. We also provide...

  13. Large Scale Meteorological Pattern of Extreme Rainfall in Indonesia

    Science.gov (United States)

    Kuswanto, Heri; Grotjahn, Richard; Rachmi, Arinda; Suhermi, Novri; Oktania, Erma; Wijaya, Yosep

    2014-05-01

    Extreme Weather Events (EWEs) cause negative impacts socially, economically, and environmentally. Considering these facts, forecasting EWEs is crucial work. Indonesia has been identified as being among the countries most vulnerable to the risk of natural disasters, such as floods, heat waves, and droughts. Current forecasting of extreme events in Indonesia is carried out by interpreting synoptic maps for several fields without taking into account the link between the observed events in the 'target' area and remote conditions. This situation may cause misidentification of the event, leading to an inaccurate prediction. Grotjahn and Faure (2008) compute composite maps from extreme events (including heat waves and intense rainfall) to help forecasters identify such events in model output. The composite maps show the large-scale meteorological patterns (LSMP) that occurred during historical EWEs. Vital information about the EWEs can be acquired from studying such maps, in addition to providing forecaster guidance. Such maps have shown robust mid-latitude meteorological patterns (for Sacramento and California Central Valley, USA, EWEs). We study the performance of the composite approach for tropical weather conditions such as those in Indonesia. Initially, the composite maps are developed to identify and forecast extreme weather events in Indramayu district, West Java, the main rice-producing district in Indonesia, which contributes about 60% of the national total rice production. Studying extreme weather events happening in Indramayu is important since EWEs there affect national agricultural and fisheries activities. During a recent EWE more than a thousand houses in Indramayu suffered serious flooding, with each home more than one meter underwater. The flood also destroyed a thousand hectares of rice plantings in 5 regencies. Identifying the dates of extreme events is one of the most important steps and has to be carried out carefully. An approach has been applied to identify the
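
    As a rough sketch of the composite-map idea referenced above (averaging a large-scale field such as Z500 anomalies over the dates of identified extreme events), the following Python/xarray fragment is illustrative only; the file name, variable name and anomaly definition are assumptions, not the authors' workflow.

```python
import xarray as xr

def composite_map(z500, event_dates):
    """Average Z500 anomalies over the dates of identified extreme events.

    z500        : xarray.DataArray with dimensions ('time', 'lat', 'lon')
    event_dates : list of timestamps on which extreme rainfall was observed
    Returns the composite anomaly map (event-day mean minus daily climatology).
    """
    climatology = z500.groupby("time.dayofyear").mean("time")
    anomalies = z500.groupby("time.dayofyear") - climatology
    return anomalies.sel(time=event_dates).mean("time")

# Usage (file and variable names are hypothetical):
# z500 = xr.open_dataset("era40_z500.nc")["z500"]
# lsmp = composite_map(z500, extreme_rain_dates)
```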

  14. Report from the 5th Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Jacek Becla

    2012-03-01

    Full Text Available The 5th XLDB workshop brought together scientific and industrial users, developers, and researchers of extremely large data and focused on emerging challenges in the healthcare and genomics communities, spreadsheet-based large scale analysis, and challenges in applying statistics to large scale analysis, including machine learning. Major problems discussed were the lack of scalable applications, the lack of expertise in developing solutions, the lack of respect for or attention to big data problems, data volume growth exceeding Moore's Law, poorly scaling algorithms, and poor data quality and integration. More communication between users, developers, and researchers is sorely needed. A variety of future work to help all three groups was discussed, ranging from collecting challenge problems to connecting with particular industrial or academic sectors.

  15. Progress on the California Extremely Large Telescope (CELT)

    Science.gov (United States)

    Nelson, Jerry E.

    2003-01-01

    The California Extremely Large Telescope (CELT) is a joint project of the University of California and the California Institute of Technology to build and operate a 30-meter diameter telescope for research in astronomy at visible and infrared wavelengths. The current optical design calls for a primary, secondary, and tertiary mirror with Ritchey-Chrétien foci at two Nasmyth platforms. The primary mirror is a mosaic of 1080 actively stabilized hexagonal segments. This paper summarizes the recent progress on the conceptual design of this telescope.

  16. Report from the 6th Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Daniel Liwei Wang

    2013-05-01

    Full Text Available Petascale data management and analysis remain one of the main unresolved challenges in today's computing. The 6th Extremely Large Databases workshop was convened alongside the XLDB conference to discuss the challenges in the health care, biology, and natural resources communities. The role of cloud computing, the dominance of file-based solutions in science applications, in-situ and predictive analysis, and commercial software use in academic environments were discussed in depth as well. This paper summarizes the discussions of this workshop.

  17. On Some Numbers Related to Extremal Combinatorial Sum Problems

    Directory of Open Access Journals (Sweden)

    D. Petrassi

    2014-01-01

    Full Text Available Let $n$, $d$, and $r$ be three integers such that $1 \le r, d \le n$. Chiaselotti (2002) defined $\gamma(n,d,r)$ as the minimum number of the nonnegative partial sums with $d$ summands of a sum $\sum_{i=1}^{n} a_i \ge 0$, where $a_1, \ldots, a_n$ are $n$ real numbers arbitrarily chosen in such a way that $r$ of them are nonnegative and the remaining $n-r$ are negative. Chiaselotti (2002) and Chiaselotti et al. (2008) determine the values of $\gamma(n,d,r)$ for particular infinite ranges of the integer parameters $n$, $d$, and $r$. In this paper we continue their approach on this problem and we prove the following results: (i) $\gamma(n,d,r) \le \binom{r}{d} + \binom{r}{d-1}$ for all values of $n$, $d$, and $r$ such that $\frac{d-1}{d}n - 1 \le r \le \frac{d-1}{d}n$; (ii) $\gamma(d+2,d,d) = d+1$.

  18. Extremely Large Telescope Project Selected in ESFRI Roadmap

    Science.gov (United States)

    2006-10-01

    In its first Roadmap, the European Strategy Forum on Research Infrastructures (ESFRI) chose the European Extremely Large Telescope (ELT), for which ESO is presently developing a Reference Design, as one of the large-scale projects to be conducted in astronomy, and the only one in optical astronomy. The aim of the ELT project is to build before the end of the next decade an optical/near-infrared telescope with a diameter in the 30-60 m range. The ESFRI Roadmap states: "Extremely Large Telescopes are seen world-wide as one of the highest priorities in ground-based astronomy. They will vastly advance astrophysical knowledge allowing detailed studies of inter alia planets around other stars, the first objects in the Universe, super-massive Black Holes, and the nature and distribution of the Dark Matter and Dark Energy which dominate the Universe. The European Extremely Large Telescope project will maintain and reinforce Europe's position at the forefront of astrophysical research." Said Catherine Cesarsky, Director General of ESO: "In 2004, the ESO Council mandated ESO to play a leading role in the development of an ELT for Europe's astronomers. To that end, ESO has undertaken conceptual studies for ELTs and is currently also leading a consortium of European institutes engaged in studying enabling technologies for such a telescope. The inclusion of the ELT in the ESFRI roadmap, together with the comprehensive preparatory work already done, paves the way for the next phase of this exciting project, the design phase." ESO is currently working, in close collaboration with the European astronomical community and the industry, on a baseline design for an Extremely Large Telescope. The plan is a telescope with a primary mirror between 30 and 60 metres in diameter and a financial envelope of about 750 million Euros. It aims at more than a factor of ten improvement in overall performance compared to the current leader in ground-based astronomy: the ESO Very Large

  19. Extreme value statistics and thermodynamics of earthquakes: large earthquakes

    Directory of Open Access Journals (Sweden)

    B. H. Lavenda

    2000-06-01

    Full Text Available A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Fréchet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into nonscaling exponential distributions so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Fréchet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same Catalogue of Chinese Earthquakes. An analogy is drawn between large earthquakes and high-energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Fréchet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.
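
    The statement that the energy-magnitude (logarithmic) transformation turns the Fréchet distribution into the Gumbel distribution can be checked numerically: if X is Fréchet distributed, then log X is Gumbel distributed. The sketch below uses synthetic scipy samples with an arbitrary shape parameter and is not based on the earthquake catalogue analysed in the paper.

```python
import numpy as np
from scipy import stats

# If X follows a Frechet (inverse Weibull) distribution, then log(X) follows a
# Gumbel distribution; this mirrors the energy -> magnitude (log) transformation.
rng = np.random.default_rng(0)
shape = 2.5                                   # arbitrary Frechet shape parameter
energies = stats.invweibull.rvs(shape, size=100_000, random_state=rng)
magnitudes = np.log(energies)                 # "magnitude" ~ log(energy), up to constants

loc, scale = stats.gumbel_r.fit(magnitudes)
ks = stats.kstest(magnitudes, "gumbel_r", args=(loc, scale))
print(f"fitted Gumbel loc={loc:.3f}, scale={scale:.3f}, KS statistic={ks.statistic:.4f}")
```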

  20. Extreme value statistics and thermodynamics of earthquakes. Large earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Lavenda, B. [Camerino Univ., Camerino, MC (Italy); Cipollone, E. [ENEA, Centro Ricerche Casaccia, S. Maria di Galeria, RM (Italy). National Centre for Research on Thermodynamics

    2000-06-01

    A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Frechet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into nonscaling exponential distributions so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Frechet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same catalogue of Chinese earthquakes. An analogy is drawn between large earthquakes and high-energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Frechet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.

  1. Challenges in optics for Extremely Large Telescope instrumentation

    Science.gov (United States)

    Spanò, P.; Zerbi, F. M.; Norrie, C. J.; Cunningham, C. R.; Strassmeier, K. G.; Bianco, A.; Blanche, P. A.; Bougoin, M.; Ghigo, M.; Hartmann, P.; Zago, L.; Atad-Ettedgui, E.; Delabre, B.; Dekker, H.; Melozzi, M.; Snÿders, B.; Takke, R.

    2006-08-01

    We describe and summarize the optical challenges for future instrumentation for Extremely Large Telescopes (ELTs). Knowing the complex instrumental requirements is crucial for the successful design of 30-60 m aperture telescopes. After all, the success of ELTs will heavily rely on their instrumentation and this, in turn, will depend on the ability to produce large and ultra-precise optical components like light-weight mirrors, aspheric lenses, segmented filters, and large gratings. New materials and manufacturing processes are currently under study, both at research institutes and in industry. In the present paper, we report on their progress with particular emphasis on volume-phase-holographic gratings, photochromic materials, sintered silicon-carbide mirrors, ion-beam figuring, ultra-precision surfaces, and free-form optics. All are promising technologies opening new degrees of freedom to optical designers. New optronic-mechanical systems will enable efficient use of the very large focal planes. We also provide exploratory descriptions of "old" and "new" optical technologies together with suggestions to instrument designers to overcome some of the challenges placed by ELT instrumentation.

  2. Control of the California Extremely Large Telescope primary mirror

    Science.gov (United States)

    MacMartin, Douglas G.; Chanan, Gary A.

    2003-01-01

    The current design concept for the California Extremely Large Telescope (CELT) includes 1080 segments in the primary mirror, with the out-of-plane degrees of freedom actively controlled. We construct the control matrix for this active control system, and describe its singular modes and sensor noise propagation. Data from the Keck telescopes are used to generate realistic estimates of the control system contributions to the CELT wavefront error and wavefront gradient error. Based on these estimates, control system noise will not significantly degrade either seeing-limited or diffraction-limited observations. The use of supplemental wavefront information for real-time control is therefore not necessary. We also comment briefly on control system bandwidth requirements and limitations.

  3. Origin of the extremely large magnetoresistance in the semimetal YSb

    Science.gov (United States)

    Xu, J.; Ghimire, N. J.; Jiang, J. S.; Xiao, Z. L.; Botana, A. S.; Wang, Y. L.; Hao, Y.; Pearson, J. E.; Kwok, W. K.

    2017-08-01

    Electron-hole (e-h) compensation is a hallmark of multiband semimetals with extremely large magnetoresistance (XMR) and has been considered to be the basis for XMR. Recent spectroscopic experiments, however, reveal that YSb with nonsaturating magnetoresistance is uncompensated, questioning the e-h compensation scenario for XMR. Here we demonstrate with magnetoresistivity and angle-dependent Shubnikov-de Haas (SdH) quantum oscillation measurements that YSb does have nearly perfect e-h compensation, with a density ratio of ~0.95 for electrons and holes. The density and mobility anisotropy of the charge carriers revealed in the SdH experiments allow us to quantitatively describe the magnetoresistance with an anisotropic multiband model that includes contributions from all Fermi pockets. We elucidate the role of compensated multibands in the occurrence of XMR by demonstrating the evolution of calculated magnetoresistances for a single band and for various combinations of electron and hole Fermi pockets.

  4. European Extremely Large Telescope Site Characterization I: Overview

    Science.gov (United States)

    Vernin, Jean; Muñoz-Tuñón, Casiana; Sarazin, Marc; Vazquez Ramió, Héctor; Varela, Antonia M.; Trinquet, Hervé; Delgado, José Miguel; Jiménez Fuensalida, Jesús; Reyes, Marcos; Benhida, Abdelmajid; Benkhaldoun, Zouhair; García Lambas, Diego; Hach, Youssef; Lazrek, M.; Lombardi, Gianluca; Navarrete, Julio; Recabarren, Pablo; Renzi, Victor; Sabil, Mohammed; Vrech, Rubén

    2011-11-01

    The site for the future European Extremely Large Telescope (E-ELT) is already known to be Armazones, near Paranal (Chile). The selection was based on a variety of considerations, with an important one being the quality of the atmosphere for the astronomy planned for the ELT. We present an overview of the characterization of the atmospheric parameters of candidate sites, making use of standard procedures and instruments as carried out within the Framework Programme VI (FP6) of the European Union. We have achieved full characterization of the selected sites for the parameters considered. Further details on adaptive optics results and climatology will be the subject of two forthcoming articles. A summary of the results of the FP6 site-testing campaigns at the different sites is provided.

  5. Stellar Crowding and the Science Case for Extremely Large Telescopes

    CERN Document Server

    Olsen, K A G; Rigaut, F; Olsen, Knut A.G.; Blum, Robert D.; Rigaut, Francois

    2003-01-01

    We present a study of the effect of crowding on stellar photometry. We develop an analytical model through which we are able to predict the error in magnitude and color for a given star for any combination of telescope resolution, stellar luminosity function, background surface brightness, and distance. We test our predictions with Monte Carlo simulations of the LMC globular cluster NGC 1835, for resolutions corresponding to a seeing-limited telescope, the HST, and an AO-corrected 30-m (near diffraction-limited) telescope. Our analytically predicted magnitude errors agree with the simulation results to within ~20%. The analytical model also predicts that errors in color are strongly affected by the correlation of crowding-induced photometric errors between bands, as is seen in the simulations. Using additional Monte Carlo simulations and our analytical crowding model, we investigate the photometric accuracy which 30-m and 100-m Extremely Large Telescopes (ELTs) will be able to achieve at distances exte...

  6. Extreme Associated Functions: Optimally Linking Local Extremes to Large-scale Atmospheric Circulation Structures

    CERN Document Server

    Panja, Debabrata

    2007-01-01

    We present a new statistical method to optimally link local weather extremes to large-scale atmospheric circulation structures. The method is illustrated using July-August daily mean temperature at 2m height (T2m) time-series over the Netherlands and 500 hPa geopotential height (Z500) time-series over the Euroatlantic region of the ECMWF reanalysis dataset (ERA40). The method identifies patterns in the Z500 time-series that optimally describe, in a precise mathematical sense, the relationship with local warm extremes in the Netherlands. Two patterns are identified; the most important one corresponds to a blocking high pressure system leading to subsidence and calm, dry and sunny conditions over the Netherlands. The second one corresponds to a rare, easterly flow regime bringing warm, dry air into the region. The patterns are robust; they are also identified in shorter subsamples of the total dataset. The method is generally applicable and might prove useful in evaluating the performance of climate models in s...

  7. Parallel multiple instance learning for extremely large histopathology image analysis.

    Science.gov (United States)

    Xu, Yan; Li, Yeshu; Shen, Zhengyang; Wu, Ziwei; Gao, Teng; Fan, Yubo; Lai, Maode; Chang, Eric I-Chao

    2017-08-03

    Histopathology images are critical for medical diagnosis, e.g., cancer and its treatment. A standard histopathology slice can be easily scanned at a high resolution of, say, 200,000×200,000 pixels. These high resolution images can make most existing image processing tools infeasible or less effective when operated on a single machine with limited memory, disk space and computing power. In this paper, we propose an algorithm tackling this new emerging "big data" problem utilizing parallel computing on High-Performance-Computing (HPC) clusters. Experimental results on a large-scale data set (1318 images at a scale of 10 billion pixels each) demonstrate the efficiency and effectiveness of the proposed algorithm for low-latency real-time applications. The proposed framework provides an effective and efficient system for extremely large histopathology image analysis. It is based on the multiple instance learning formulation for weakly-supervised learning for image classification, segmentation and clustering. When a max-margin concept is adopted for different clusters, we obtain further improvement in clustering performance.

  8. Single sided tomography of extremely large dense objects

    Energy Technology Data Exchange (ETDEWEB)

    Thoe, R.S.

    1993-03-24

    One can envision many circumstances where radiography could be valuable but is frustrated by the geometry of the object to be radiographed. For example, extremely large objects, the separation of rocket propellants from the skin of a solid fuel rocket motor, the structural integrity of an underground tank or the hull of a ship, the location of buried objects, inspection of large castings, etc. The author has been investigating ways to do this type of radiography and as a result has developed a technique which can be used to obtain three dimensional radiographs using Compton scattered radiation from a monochromatic source and a high efficiency, high resolution germanium spectrometer. This paper gives specific details of the reconstruction technique, presents the results of numerous numerical simulations, and compares these simulations to spectra obtained in the laboratory. In addition, the author presents the results of calculations made for the development of an alternative single sided radiography technique which will permit inspection of the interior of large objects. As a benchmark the author seeks to obtain three dimensional images with a resolution of about one cubic centimeter in a concrete cube 30 centimeters on a side. Such a device must use photons of very high energy. For example, 30 cm of concrete represents about 15 mean free paths for photons of 100 keV, whereas at 1 MeV the attenuation is down to about five mean free paths. At these higher energies Compton scattering becomes much more probable. Although this would appear to be advantageous for single sided imaging techniques, such techniques are hampered by two side effects. In this paper the results are given of numerous Monte Carlo calculations detailing the extent of the multiple scattering, and the feasibility of a variety of imaging schemes is explored.
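
    A quick worked check of the attenuation figures quoted above: the fraction of an unscattered beam transmitted after n mean free paths is exp(-n), so the quoted path lengths imply the following transmission through 30 cm of concrete.

```python
import math

# Transmission through 30 cm of concrete, using the path lengths quoted in the
# abstract (~15 mean free paths at 100 keV, ~5 mean free paths at 1 MeV).
for energy_keV, mean_free_paths in [(100, 15), (1000, 5)]:
    transmission = math.exp(-mean_free_paths)
    print(f"{energy_keV:>5} keV: {mean_free_paths} mean free paths -> "
          f"transmitted fraction ~ {transmission:.1e}")
```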

  9. Design issues for the active control system of the California Extremely Large Telescope (CELT)

    Science.gov (United States)

    Chanan, Gary A.; Nelson, Jerry E.; Ohara, Catherine M.; Sirko, Edwin

    2000-08-01

    We explore the issues in the control and alignment of the primary mirror of the proposed 30 meter California Extremely Large Telescope and other very large telescopes with segmented primaries (consisting of 1000 or more segments). We show that as the number of segments increases, the noise in the telescope active control system (ACS) increases, roughly as √n. This likely means that, for a thousand-segment telescope like CELT, Keck-style capacitive sensors will not be able to adequately monitor the lowest spatial frequency degrees of freedom of the primary mirror, and will therefore have to be supplemented by a Shack-Hartmann-type wavefront sensor. However, in the case of segment phasing, which is governed by a 'control matrix' similar to that of the ACS, the corresponding noise is virtually independent of n. It follows that reasonably straightforward extensions of current techniques should be adequate to phase the extremely large telescopes of the future.
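
    The √n noise growth can be illustrated with a toy model. The sketch below uses a 1-D chain of segments read out by relative (edge-type) sensors and reconstructed with a pseudoinverse; it is not the CELT or Keck sensor geometry, only a minimal analogue of how sensor noise accumulates into low-spatial-frequency segment errors roughly as the square root of the segment count.

```python
import numpy as np

def rms_segment_error(n_segments, n_trials=200, sensor_noise=1.0, seed=0):
    """Toy 1-D analogue of a segmented-mirror active control system.

    Segments form a chain with relative ("edge") sensors s_i = x_{i+1} - x_i.
    Segment positions are reconstructed from noisy readings with a pseudoinverse
    (the unobservable global piston is removed), and the RMS reconstruction
    error is returned. This is NOT the CELT geometry, just an illustration.
    """
    rng = np.random.default_rng(seed)
    A = np.zeros((n_segments - 1, n_segments))          # difference (sensor) operator
    idx = np.arange(n_segments - 1)
    A[idx, idx] = -1.0
    A[idx, idx + 1] = 1.0
    A_pinv = np.linalg.pinv(A)

    errors = []
    for _ in range(n_trials):
        noise = rng.normal(0.0, sensor_noise, n_segments - 1)
        x_hat = A_pinv @ noise          # true positions are zero, so this is pure error
        x_hat -= x_hat.mean()           # remove global piston
        errors.append(np.sqrt(np.mean(x_hat ** 2)))
    return np.mean(errors)

for n in (30, 120, 480, 1080):
    print(f"n = {n:>4}: RMS segment error ~ {rms_segment_error(n):.2f} (sensor noise units)")
```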

  10. On the white dwarf cooling sequence with extremely large telescopes

    CERN Document Server

    Bono, G; Gilmozzi, R

    2012-01-01

    We present new diagnostics of white dwarf (WD) cooling sequences and luminosity functions (LFs) in the near-infrared (NIR) bands that will exploit the sensitivity and resolution of future extremely large telescopes. The collision-induced absorption (CIA) of molecular hydrogen causes a clearly defined blue turn-off along the WD (WDBTO) cooling sequences and a bright secondary maximum in the WD LFs. These features are independent of age over a broad age range and are minimally affected by metal abundance. This means that the NIR magnitudes of the WDBTO are very promising distance indicators. The interplay between the cooling time of progressively more massive WDs and the onset of CIA causes a red turn-off along the WD (WDRTO) cooling sequences and a well defined faint peak in the WD LFs. These features are very sensitive to the cluster age, and indeed the K-band magnitude of the faint peak increases by 0.2-0.25 mag/Gyr for ages between 10 and 14 Gyr. On the other hand, the faint peak in the optical WD LF incre...

  11. A BRIGHTEST CLUSTER GALAXY WITH AN EXTREMELY LARGE FLAT CORE

    Energy Technology Data Exchange (ETDEWEB)

    Postman, Marc; Coe, Dan; Koekemoer, Anton; Bradley, Larry [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21208 (United States); Lauer, Tod R. [National Optical Astronomy Observatory, P.O. Box 26732, Tucson, AZ 85726 (United States); Donahue, Megan [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Graves, Genevieve [Department of Astronomy, 601 Campbell Hall, University of California, Berkeley, CA 94720 (United States); Moustakas, John [Center for Astrophysics and Space Sciences, University of California, La Jolla, CA 92093 (United States); Ford, Holland C.; Lemze, Doron; Medezinski, Elinor [Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Grillo, Claudio [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Mariesvej 30, DK-2100 Copenhagen (Denmark); Zitrin, Adi [University of Heidelberg, Albert-Ueberle-Str. 2, D-69120 Heidelberg (Germany); Broadhurst, Tom [Department of Theoretical Physics, University of the Basque Country UPV/EHU, Bizkaia, E-48940 Leioa (Spain); Moustakas, Leonidas [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Ascaso, Begona [Instituto de Astrofisica de Andalucia (CSIC), C/Camino Bajo de Huetor 24, E-18008 Granada (Spain); Kelson, Daniel [The Observatories of the Carnegie Institution of Washington, 813 Santa Barbara Street, Pasadena, CA 91101 (United States)

    2012-09-10

    Hubble Space Telescope images of the galaxy cluster A2261, obtained as part of the Cluster Lensing And Supernova survey with Hubble, show that the brightest galaxy in the cluster, A2261-BCG, has the largest core yet detected in any galaxy. The cusp radius of A2261-BCG is 3.2 kpc, twice as big as the next largest core known, and ≈3× bigger than those typically seen in the most luminous brightest cluster galaxies. The morphology of the core in A2261-BCG is also unusual, having a completely flat interior surface brightness profile, rather than the typical shallow cusp rising into the center. This implies that the galaxy has a core with constant or even centrally decreasing stellar density. Interpretation of the core as an end product of the 'scouring' action of a binary supermassive black hole implies a total black hole mass ≈10^10 M_Sun from the extrapolation of most relationships between core structure and black hole mass. The core falls 1σ above the cusp radius versus galaxy luminosity relation. Its large size in real terms, and the extremely large black hole mass required to generate it, raises the possibility that the core has been enlarged by additional processes, such as the ejection of the black holes that originally generated the core. The flat central stellar density profile is consistent with this hypothesis. The core is also displaced by 0.7 kpc from the center of the surrounding envelope, consistent with a local dynamical perturbation of the core.

  12. Observing Planetary Nebulae with JWST and Extremely Large Telescopes

    Science.gov (United States)

    Sahai, Raghvendra

    2015-01-01

    Most stars in the Universe that leave the main sequence in a Hubble time will end their lives evolving through the Planetary Nebula (PN) evolutionary phase. The heavy mass loss which occurs during the preceding AGB phase is important across astrophysics, dramatically changing the course of stellar evolution, dominantly contributing to the dust content of the interstellar medium, and influencing its chemical composition. The evolution from the AGB phase to the PN phases remains poorly understood, especially the dramatic transformation that occurs in the morphology of the mass-ejecta as AGB stars and their round circumstellar envelopes evolve into mostly PNe, the majority of which deviate strongly from spherical symmetry. In addition, although the PN [OIII] luminosity function (PNLF) has been used as a standard candle (on par with distance indicators such as Cepheids), we do not understand why it works. It has been argued that the resolution of these issues may be linked to binarity and associated processes such as mass transfer and common envelope evolution.Thus, understanding the formation and evolution of PNe is of wide astrophysical importance. PNe have long been known to emit across a very large span of wavelengths, from the radio to X-rays. Extensive use of space-based observatories at X-ray (Chandra/ XMM-Newton), optical (HST) and far-infrared (Spitzer, Herschel) wavelengths in recent years has produced significant new advances in our knowledge of these objects. Given the expected advent of the James Webb Space Telescope in the near future, and ground-based Extremely Large Telescope(s) somewhat later, this talk will focus on future high-angular-resolution, high-sensitivity observations at near and mid-IR wavelengths with these facilities that can help in addressing the major unsolved problems in the study of PNe.

  13. Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers

    Science.gov (United States)

    Balasubramaniam, R.; Subramanian, R. S.

    1996-01-01

    The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.
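
    For orientation, one common convention in this literature (an assumption on our part; the abstract does not spell out its nondimensionalization) takes the thermocapillary reference velocity and the Marangoni and Reynolds numbers as

        v_0 = \frac{|\mathrm{d}\sigma/\mathrm{d}T|\,|\nabla T_\infty|\,R}{\mu},
        \qquad
        Ma = \frac{v_0 R}{\alpha},
        \qquad
        Re = \frac{v_0 R}{\nu},

    where R is the bubble radius and μ, α and ν are the dynamic viscosity, thermal diffusivity and kinematic viscosity of the continuous phase; the large-Ma limit treated above is then the statement that convection dominates conduction in the energy balance.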

  14. Resurrection of large lepton number asymmetries from neutrino flavor oscillations

    CERN Document Server

    Barenboim, Gabriela; Park, Wan-Il

    2016-01-01

    We numerically solve the evolution equations of neutrino three-flavor density matrices, and show that, even if neutrino oscillations mix neutrino flavors, large lepton number asymmetries are still allowed in certain limits by Big Bang Nucleosynthesis (BBN).

  15. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena i

  17. Optical Design for Extremely Large Telescope Adaptive Optics Systems

    Energy Technology Data Exchange (ETDEWEB)

    Bauman, B J

    2003-11-26

    Designing an adaptive optics (AO) system for extremely large telescopes (ELT's) will present new optical engineering challenges. Several of these challenges are addressed in this work, including first-order design of multi-conjugate adaptive optics (MCAO) systems, pyramid wavefront sensors (PWFS's), and laser guide star (LGS) spot elongation. MCAO systems need to be designed in consideration of various constraints, including deformable mirror size and correction height. The y, ȳ method of first-order optical design is a graphical technique that uses a plot with marginal and chief ray heights as coordinates; the optical system is represented as a segmented line. This method is shown to be a powerful tool in designing MCAO systems. From these analyses, important conclusions about configurations are derived. PWFS's, which offer an alternative to Shack-Hartmann (SH) wavefront sensors (WFS's), are envisioned as the workhorse of layer-oriented adaptive optics. Current approaches use a 4-faceted glass pyramid to create a WFS analogous to a quad-cell SH WFS. PWFS's and SH WFS's are compared and some newly-considered similarities and PWFS advantages are presented. Techniques to extend PWFS's are offered: First, PWFS's can be extended to more pixels in the image by tiling pyramids contiguously. Second, pyramids, which are difficult to manufacture, can be replaced by less expensive lenslet arrays. An approach is outlined to convert existing SH WFS's to PWFS's for easy evaluation of PWFS's. Also, a demonstration of PWFS's in sensing varying amounts of an aberration is presented. For ELT's, the finite altitude and finite thickness of LGS's means that the LGS will appear elongated from the viewpoint of subapertures not directly under the telescope. Two techniques for dealing with LGS spot elongation in SH WFS's are presented. One method assumes that the laser will be pulsed and uses a segmented micro

  18. Scaling in large Prandtl number turbulent thermal convection

    CERN Document Server

    Dubrulle, B

    2011-01-01

    We study the scaling properties of heat transfer Nu in turbulent thermal convection at large Prandtl number Pr using a quasi-linear theory. We show that two regimes arise, depending on the Reynolds number Re. At low Reynolds number, Nu Pr^{-1/2} and Re are functions of Ra Pr^{-3/2}. At large Reynolds number, Nu Pr^{1/3} and Re Pr are functions only of Ra Pr^{2/3} (within logarithmic corrections). In practice, since Nu is always close to Ra^{1/3}, this corresponds to a much weaker dependence of the heat transfer on the Prandtl number at low Reynolds number than at large Reynolds number. This difference may solve an existing controversy between measurements in SF6 (large Re) and in alcohol/water (lower Re). We link these regimes with a possible global bifurcation in the turbulent mean flow. We further show how a scaling theory could be used to describe these two regimes through a single universal function. This function presents a bimodal character for intermediate range of Reynolds num...

  19. A new concept for large deformable mirrors for extremely large telescopes

    Science.gov (United States)

    Andersen, Torben; Owner-Petersen, Mette; Ardeberg, Arne; Korhonen, Tapio

    2006-06-01

    For extremely large telescopes, there is strong need for thin deformable mirrors in the 3-4 m class. So far, feasibility of such mirrors has not been demonstrated. Extrapolation from existing techniques suggests that the mirrors could be highly expensive. We give a progress report on a study of an approach for construction of large deformable mirrors with a moderate cost. We have developed low-cost actuators and deflection sensors that can absorb mounting tolerances in the millimeter range, and we have tested prototypes in the laboratory. Studies of control laws for mirrors with thousands of sensors and actuators are in good progress and simulations have been carried out. Manufacturing of thin, glass mirror blanks is being studied and first prototypes have been produced by a slumping technique. Development of polishing procedures for thin mirrors is in progress.

  20. Explicit and probabilistic constructions of distance graphs with small clique numbers and large chromatic numbers

    Science.gov (United States)

    Kupavskii, A. B.

    2014-02-01

    We study distance graphs with exponentially large chromatic numbers and without k-cliques, that is, complete subgraphs of size k. Explicit constructions of such graphs use vectors in the integer lattice. For a large class of graphs we find a sharp threshold for containing a k-clique. This enables us to improve the lower bounds for the maximum of the chromatic numbers of such graphs. We give a new probabilistic approach to the construction of distance graphs without k-cliques, and this yields better lower bounds for the maximum of the chromatic numbers for large k.

  1. Exotic baryon multiplets at large number of colours

    CERN Document Server

    Diakonov, D; Diakonov, Dmitri; Petrov, Victor

    2003-01-01

    We generalize the usual octet, decuplet and exotic antidecuplet and higher baryon multiplets to any number of colours Nc. We show that the multiplets fall into a sequence of bands with O(1/Nc) splittings inside the band and O(1) splittings between the bands characterized by "exoticness", that is the number of extra quark-antiquark pairs needed to compose the multiplet. Unless exoticness becomes very large, all multiplets can be reliably described at large Nc as collective rotational excitations of a chiral soliton.

  2. The Law of Large Numbers for the Free Multiplicative Convolution

    DEFF Research Database (Denmark)

    Haagerup, Uffe; Möller, Sören

    2013-01-01

    In classical probability the law of large numbers for the multiplicative convolution follows directly from the law for the additive convolution. In free probability this is not the case. The free additive law was proved by D. Voiculescu in 1986 for probability measures with bounded support...... for the case of bounded support. In contrast to the classical multiplicative convolution case, the limit measure for the free multiplicative law of large numbers is not a Dirac measure, unless the original measure is a Dirac measure. We also show that the mean value of ln x is additive with respect to the free...

  3. Non-Maxwellian Molecular Velocity Distribution at Large Knudsen Numbers

    OpenAIRE

    Shim, Jae Wan

    2012-01-01

    We have derived a non-Maxwellian molecular velocity distribution at large Knudsen numbers for ideal gas. This distribution approaches Maxwellian molecular velocity distribution as the Knudsen number approaches zero. We have found that the expectation value of the square of velocity is the same in the non-Maxwellian molecular velocity distribution as it is in the Maxwellian distribution; however, the expectation value of the speed is not the same.

  4. Laws of large numbers for ratios of uniform random variables

    Directory of Open Access Journals (Sweden)

    Adler André

    2015-09-01

    Let {Xn, n ≥ 1} and {Yn, n ≥ 1} be two sequences of uniform random variables. We obtain various strong and weak laws of large numbers for the ratio of these two sequences. Even though these are uniform and naturally bounded random variables, the ratios are not bounded and have an unusual behaviour creating Exact Strong Laws.
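
    To see why weighted "exact" laws are needed here, note that for independent X, Y ~ U(0,1) the ratio X/Y has infinite mean, since E[1/Y] diverges, so the classical strong law gives no finite limit for plain sample means. The sketch below is an illustrative simulation of that failure only; it does not implement the specific weighted laws proved in the paper.

        import numpy as np

        # Sample means of X/Y for X, Y ~ U(0,1): E[X/Y] is infinite, so the
        # running mean keeps drifting upward instead of settling (illustration).
        rng = np.random.default_rng(0)
        for n in (10**3, 10**5, 10**7):
            x = rng.random(n)
            y = rng.random(n)
            print(n, (x / y).mean())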

  5. The strong law of large numbers for random quadratic forms

    NARCIS (Netherlands)

    Mikosch, T

    1996-01-01

    The paper establishes strong laws of large numbers for the quadratic forms [GRAPHICS] and the bilinear forms [GRAPHICS] where X = (X(n)) is a sequence of independent random variables and Y = (Y-n) is an independent copy of it. In the case of independent identically distributed symmetric p-stable ran

  7. Large negative numbers in number theory, thermodynamics, information theory, and human thermodynamics

    Science.gov (United States)

    Maslov, V. P.

    2016-10-01

    We show how the abstract analytic number theory of Maier, Postnikov, and others can be extended to include negative numbers and apply this to thermodynamics, information theory, and human thermodynamics. In particular, we introduce a certain large number N_0 on the "zero level" with a high multiplicity number q_i ≫ 1 related to the physical concept of gap in the spectrum. We introduce a general notion of "hole," similar to the Dirac hole in physics, in the theory. We also consider analogs of thermodynamical notions in human thermodynamics, in particular, in connection with the role of the individual in history.

  8. Fake Superpotential for Large and Small Extremal Black Holes

    CERN Document Server

    Andrianopoli, L; Ferrara, S; Trigiante, M

    2010-01-01

    We consider the first-order, gradient-flow description of the scalar fields coupled to spherically symmetric, asymptotically flat black holes in extended supergravities. Using the identification of the fake superpotential with Hamilton's characteristic function we clarify some of its general properties, showing in particular (besides reviewing the issue of its duality invariance) that W has the properties of a Liapunov function, which implies that its extrema (associated with the horizon of extremal black holes) are asymptotically stable equilibrium points of the corresponding first order dynamical system (in the sense of Liapunov). Moreover, we show that the fake superpotential W has, along the entire radial flow, the same flat directions which exist at the attractor point. This allows one to study properties of the ADM mass also for small black holes where in fact W has no critical points at finite distance in moduli space. In particular the W function for small non-BPS black holes can always be computed anal...

  9. California Extremely Large Telescope : conceptual design for a thirty-meter telescope

    Science.gov (United States)

    Following great success in the creation of the Keck Observatory, scientists at the California Institute of Technology and the University of California have begun to explore the scientific and technical prospects for a much larger telescope. The Keck telescopes will remain the largest telescopes in the world for a number of years, with many decades of forefront research ahead after that. Though these telescopes have produced dramatic discoveries, it is already clear that even larger telescopes must be built if we are to address some of the most profound questions about our universe. The time required to build a larger telescope is approximately ten years, and the California community is presently well-positioned to begin its design and construction. The same scientists who conceived, led the design, and guided the construction of the Keck Observatory have been intensely engaged in a study of the prospects for an extremely large telescope. Building on our experience with the Keck Observatory, we have concluded that the large telescope is feasible and is within the bounds set by present-day technology. Our reference telescope has a diameter of 30 meters, the largest size we believe can be built with acceptable risk. The project is currently designated the California Extremely Large Telescope (CELT).

  10. Synchronizing large number of nonidentical oscillators with small coupling

    Science.gov (United States)

    Wu, Ye; Xiao, Jinghua; Hu, Gang; Zhan, Meng

    2012-02-01

    The topic of synchronization of oscillators has attracted great and persistent interest, and previous results and intuition have suggested that large coupling is required for synchronizing a large number of coupled nonidentical oscillators. Here the influences of different spatial frequency distributions on the efficiency of frequency synchronization are investigated by studying arrays of coupled oscillators with diverse natural frequency distributions. A universal log-normal distribution of the critical coupling strength Kc for synchronization, irrespective of the initial natural frequency, is found. In particular, a physical quantity "roughness" R of the spatial frequency configuration is defined, and it is found that the efficiency of synchronization increases monotonically with R. For large R we can reach full synchronization of arrays with a large number of oscillators at finite Kc. Two typical kinds of synchronization, the "multiple-clustering" one and the "single-center-clustering" one, are identified for small and large R's, respectively. The mechanism of the latter type is the key reason for synchronizing long arrays with finite Kc.
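
    For orientation, here is a toy numerical sketch of the kind of locally coupled array studied above: a nearest-neighbour Kuramoto-type chain of nonidentical oscillators integrated with a simple Euler scheme. The model, the uniform frequency distribution, the coupling values and the synchronization measure are generic choices of ours and are not taken from the paper.

        import numpy as np

        def frequency_spread(omega, K, T=200.0, dt=0.01, seed=0):
            """Euler-integrate a nearest-neighbour Kuramoto-type chain and return
            the spread (std) of the time-averaged effective frequencies; a small
            spread indicates frequency synchronization."""
            n = omega.size
            theta = np.random.default_rng(seed).uniform(0.0, 2.0 * np.pi, n)
            theta0 = theta.copy()
            for _ in range(int(T / dt)):
                coupling = np.zeros(n)
                coupling[:-1] += np.sin(theta[1:] - theta[:-1])   # right neighbour
                coupling[1:] += np.sin(theta[:-1] - theta[1:])    # left neighbour
                theta = theta + dt * (omega + K * coupling)
            return ((theta - theta0) / T).std()

        omega = np.random.default_rng(1).uniform(-0.5, 0.5, 50)   # nonidentical frequencies
        for K in (0.5, 2.0, 8.0):
            print(f"K = {K}: frequency spread = {frequency_spread(omega, K):.4f}")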

  11. New challenges for Adaptive Optics Extremely Large Telescopes

    CERN Document Server

    Le Louarn, M; Sarazin, M; Tokovinin, A

    2000-01-01

    The performance of an adaptive optics (AO) system on a 100m diameter ground based telescope working in the visible range of the spectrum is computed using an analytical approach. The target Strehl ratio of 60% is achieved at 0.5 µm with a limiting magnitude of the AO guide source near R~10, at the cost of an extremely low sky coverage. To alleviate this problem, the concept of tomographic wavefront sensing in a wider field of view using either natural guide stars (NGS) or laser guide stars (LGS) is investigated. These methods use 3 or 4 reference sources and up to 3 deformable mirrors, which increase up to 8-fold the corrected field size (up to 60 arcsec at 0.5 µm). Operation with multiple NGS is limited to the infrared (in the J band this approach yields a sky coverage of 50% with a Strehl ratio of 0.2). The option of open-loop wavefront correction in the visible using several bright NGS is discussed. The LGS approach involves the use of a faint (R ~22) NGS for low-order correction, which results in a sky cov...

  12. Oscillations of a Simple Pendulum with Extremely Large Amplitudes

    Science.gov (United States)

    Butikov, Eugene I.

    2012-01-01

    Large oscillations of a simple rigid pendulum with amplitudes close to 180° are treated on the basis of a physically justified approach in which the cycle of oscillation is divided into several stages. The major part of the almost closed circular path of the pendulum is approximated by the limiting motion, while the motion in the vicinity…
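
    For reference, the exact period of an undamped simple pendulum at arbitrary amplitude has the standard closed form T = (4/ω0) K(m) with m = sin²(θ0/2), where K is the complete elliptic integral of the first kind. The sketch below evaluates that textbook result near 180°; it is quoted for context and is not necessarily the staged approximation developed in the paper.

        import numpy as np
        from scipy.special import ellipk

        def period_ratio(theta0_deg):
            """Ratio T/T0 of the exact pendulum period to the small-amplitude
            period, using T = (4/omega0) * K(m), m = sin^2(theta0/2)."""
            m = np.sin(np.radians(theta0_deg) / 2.0) ** 2
            return 4.0 * ellipk(m) / (2.0 * np.pi)

        for amp in (30, 90, 150, 179, 179.9):
            print(f"{amp:7.1f} deg: T/T0 = {period_ratio(amp):.3f}")

    The ratio grows without bound as the amplitude approaches 180°, which is why this regime needs the special treatment described above.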

  13. Dissecting the Genetic Basis of Extremely Large Grain Shape in Rice Cultivar ‘JZ1560'

    Institute of Scientific and Technical Information of China (English)

    Jie-Zheng Ying; Ji-Ping Gao; Jun-Xiang Shan; Mei-Zhen Zhu; Min Shi; Hong-Xuan Lin

    2012-01-01

    Rice grain shape traits, grain length (GL), width (GW), thickness (GT) and length-to-width ratio (LWR), are usually controlled by multiple quantitative trait loci (QTLs). To elucidate the genetic basis of extremely large grain shape, QTL analysis was performed using an F2 population derived from a cross between the japonica cultivar 'JZ1560' (extremely large grain) and a contrasting indica cultivar 'FAZ1' (small grain). A total of 24 QTLs were detected on seven different chromosomes. QTLs for GL, GW, GT and LWR explained 11.6%, 95.62%, 91.5% and 89.9% of total phenotypic variation, respectively. Many QTLs pleiotropically controlled different grain traits, contributing to the complex correlation among traits. GW2 and qSW5/GW5, which have previously been cloned as controllers of GW, showed chromosomal locations similar to qGW2-1/qGT2-1/qLWR2-2 and qGW5-2/qLWR5-1 and should be the right candidate genes. Plants pyramiding GW2 and qSW5/GW5 showed a significant increase in GW compared with those carrying only one of the two major QTLs. Furthermore, no significant QTL interaction was observed between GW2 and qSW5/GW5. These results suggested that GW2 and qSW5/GW5 might work in independent pathways to regulate grain traits. 'JZ1560' alleles underlying all QTLs contributed an increase in GW and GT, and the accumulation of additive effects generates the extremely large grain shape in 'JZ1560'.

  14. Rotating thermal convection at very large Rayleigh numbers

    Science.gov (United States)

    Weiss, Stephan; van Gils, Dennis; Ahlers, Guenter; Bodenschatz, Eberhard

    2016-11-01

    The large scale thermal convection systems in geo- and astrophysics are usually influenced by Coriolis forces caused by the rotation of their celestial bodies. To better understand the influence of rotation on the convective flow field and the heat transport at these conditions, we study Rayleigh-Bénard convection, using pressurized sulfur hexafluoride (SF6) at up to 19 bars in a cylinder of diameter D=1.12 m and a height of L=2.24 m. The gas is heated from below and cooled from above and the convection cell sits on a rotating table inside a large pressure vessel (the "Uboot of Göttingen"). With this setup Rayleigh numbers of up to Ra = 10^15 can be reached, while Ekman numbers as low as Ek = 10^-8 are possible. The Prandtl number in these experiments is kept constant at Pr = 0.8. We report on heat flux measurements (expressed by the Nusselt number Nu) as well as measurements from more than 150 temperature probes inside the flow. We thank the Deutsche Forschungsgemeinschaft (DFG) for financial support through SFB963: "Astrophysical Flow Instabilities and Turbulence". The work of GA was supported in part by the US National Science Foundation through Grant DMR11-58514.

  15. Analog Magnitudes Support Large Number Ordinal Judgments in Infancy.

    Science.gov (United States)

    vanMarle, Kristy; Mou, Yi; Seok, Jin H

    2016-01-01

    Few studies have explored the source of infants' ordinal knowledge, and those that have are equivocal regarding the underlying representational system. The present study sought clear evidence that the approximate number system, which underlies children's cardinal knowledge, may also support ordinal knowledge in infancy. Ten- to 12-month-old infants were tested with large sets (>3) in an ordinal choice task in which they were asked to choose between two hidden sets of food items. The difficulty of the comparison varied as a function of the ratio between the sets. Infants reliably chose the greater quantity when the sets differed by a 2:3 ratio (4v6 and 6v9), but not when they differed by a 3:4 ratio (6v8) or a 7:8 ratio (7v8). This discrimination function is consistent with previous studies testing the precision of number and time representations in infants of roughly this same age, thus providing evidence that the approximate number system can support ordinal judgments in infancy. The findings are discussed in light of recent proposals that different mechanisms underlie infants' reasoning about small and large numbers.

  16. Design and preliminary test of precision segment positioning actuator for the California Extremely Large Telescope

    Science.gov (United States)

    Lorell, Kenneth R.; Aubrun, Jean-Noel; Clappier, Robert R.; Shelef, Ben; Shelef, Gad

    2003-01-01

    In order for the California Extremely Large Telescope (CELT) to achieve the required optical performance, each of its 1000 primary mirror segments must be positioned relative to adjacent segments with nanometer-level accuracy. This can be accomplished using three actuators for each segment to actively control the segment in tip, tilt, and piston. The Keck telescopes utilize a segmented primary mirror similar to CELT's, employing a highly successful actuator design. However, because of its size and the sheer number of actuators (3000 vs. 108 for Keck), CELT will require a different design. Sensitivity to wind loads and structural vibrations, the large dynamic range, low operating power, and extremely reliable operation, all achieved at an affordable unit cost, are the most demanding design requirements. This paper examines four actuator concepts and presents a trade-off between them. The concept that best met the CELT requirements is described along with an analysis of its performance. The concept is based on techniques that achieve the required accuracy while providing a substantial amount of vibration attenuation and damping. A prototype actuator has been built to validate this concept. Preliminary tests confirm predicted behavior and future tests will establish a sound baseline for final design and production.

  17. Large multi-allelic copy number variations in humans

    Science.gov (United States)

    Handsaker, Robert E.; Van Doren, Vanessa; Berman, Jennifer R.; Genovese, Giulio; Kashin, Seva; Boettger, Linda M.; McCarroll, Steven A.

    2015-01-01

    Thousands of genome segments appear to be present in widely varying copy number in different human genomes. We developed ways to use increasingly abundant whole genome sequence data to identify the copy numbers, alleles and haplotypes present at most large, multi-allelic CNVs (mCNVs). We analyzed 849 genomes sequenced by the 1000 Genomes Project to identify most large (>5 kb) mCNVs, including 3,878 duplications, of which 1,356 appear to have three or more segregating alleles. We find that mCNVs give rise to most human gene-dosage variation – exceeding sevenfold the contribution of deletions and biallelic duplications – and that this variation in gene dosage generates abundant variation in gene expression. We describe “runaway duplication haplotypes” in which genes, including HPR and ORM1, have mutated to high copy number on specific haplotypes. We describe partially successful initial strategies for analyzing mCNVs via imputation and provide an initial data resource to support such analyses. PMID:25621458

  18. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat,

    2008-08-22

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.

  19. Hydrological extremes : Improving simulations of flood and drought in large river basins

    NARCIS (Netherlands)

    Wanders, N.

    2015-01-01

    Hydrological extremes regularly occur in all regions of the world and as such have large impacts on society. Floods and drought are the most severe hydrological extremes, in terms of their societal impact and potential economic damage. These events are amongst the most costly natural disasters, due to their often large spatial extent and high societal impact.

  20. Compact high-resolution spectrographs for large and extremely large telescopes: using the diffraction limit

    Science.gov (United States)

    Robertson, J. Gordon; Bland-Hawthorn, Joss

    2012-09-01

    As telescopes get larger, the size of a seeing-limited spectrograph for a given resolving power becomes larger also, and for ELTs the size will be so great that high resolution instruments of simple design will be infeasible. Solutions include adaptive optics (but not providing full correction for short wavelengths) or image slicers (which give feasible but still large instruments). Here we develop the solution proposed by Bland-Hawthorn and Horton: the use of diffraction-limited spectrographs which are compact even for high resolving power. Their use is made possible by the photonic lantern, which splits a multi-mode optical fiber into a number of single-mode fibers. We describe preliminary designs for such spectrographs, at a resolving power of R ~ 50,000. While they are small and use relatively simple optics, the challenges are to accommodate the longest possible fiber slit (hence maximum number of single-mode fibers in one spectrograph) and to accept the beam from each fiber at a focal ratio considerably faster than for most spectrograph collimators, while maintaining diffraction-limited imaging quality. It is possible to obtain excellent performance despite these challenges. We also briefly consider the number of such spectrographs required, which can be reduced by full or partial adaptive optics correction, and/or moving towards longer wavelengths.
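
    The size argument can be made concrete with the standard Littrow-style estimate for the resolving power of a slit-fed grating spectrograph (a textbook relation quoted here for context, not a formula given in the abstract):

        R \simeq \frac{2\,d\,\tan\theta_B}{\chi\,D_{\mathrm{tel}}},

    where d is the collimated beam diameter, θ_B the grating blaze angle, χ the angular slit width on the sky and D_tel the telescope diameter. In seeing-limited operation χ is set by the seeing, so holding R fixed forces d, and hence the whole instrument, to grow with D_tel; at the diffraction limit χ ≈ λ/D_tel and the telescope diameter cancels, leaving R ≈ 2 d tan θ_B / λ, which is why a diffraction-limited, photonic-lantern-fed design can stay compact even on an ELT.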

  1. Improving CASINO performance for models with large number of electrons

    Energy Technology Data Exchange (ETDEWEB)

    Anton, L; Alfe, D; Hood, R Q; Tanqueray, D

    2009-05-13

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of the memory resources for models with more than 1000 electrons because of the need to store a large amount of orbital-related data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data, implemented with MPI or Unix inter-process communication tools, and (2) second-level parallelism for the configuration computation.
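
    To make option (1) concrete, here is a minimal, hedged sketch in Python with mpi4py (purely illustrative; CASINO itself is a Fortran code and its actual data layout and communication scheme are certainly more involved) of splitting a large orbital-coefficient array across MPI ranks so that each rank stores only a slice, with a reduction used whenever a full orbital value is needed.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_basis = 1_000_000                  # hypothetical basis-set size
        lo = rank * n_basis // size          # this rank's slice of coefficients
        hi = (rank + 1) * n_basis // size
        coeffs = np.random.default_rng(rank).standard_normal(hi - lo)  # stand-in data

        def orbital_value(basis_values_slice):
            """Evaluate sum_k c_k * phi_k(r) when each rank holds only its slice
            of c_k; basis_values_slice is the matching slice of phi_k(r)."""
            partial = float(np.dot(coeffs, basis_values_slice))
            return comm.allreduce(partial, op=MPI.SUM)

        phi_slice = np.ones(hi - lo) / n_basis   # stand-in basis values at some point r
        value = orbital_value(phi_slice)
        if rank == 0:
            print("per-rank storage ~ 1/size of the full array; orbital value:", value)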

  2. Bubble deformations in corrugated microchannels at large capillary numbers

    Science.gov (United States)

    Cubaud, Thomas; Sauzade, Martin

    2016-11-01

    Multiphase flows in confined microgeometries display a variety of intriguing dynamics. Here, we experimentally examine trains of monodisperse gas bubbles of different sizes and concentrations passing through a series of extensions and constrictions from low to large capillary numbers. Using highly viscous carrier fluids, we show in particular that bubbles strongly deform in velocity fields set with the channel geometry. We measure the instantaneous front and rear velocities of periodically distorted capillary surfaces and develop functional relationships for predicting the morphology of multiphase flow patterns at the pore scale. This work is supported by NSF (CBET-1150389).

  3. Supervision in Factor Models Using a Large Number of Predictors

    DEFF Research Database (Denmark)

    Boldrini, Lorenzo; Hillebrand, Eric Tobias

    In this paper we investigate the forecasting performance of a particular factor model (FM) in which the factors are extracted from a large number of predictors. We use a semi-parametric state-space representation of the FM in which the forecast objective, as well as the factors, is included in the state vector. The factors are informed of the forecast target (supervised) through the state equation dynamics. We propose a way to assess the contribution of the forecast objective on the extracted factors that exploits the Kalman filter recursions. We forecast one target at a time based... The approach is compared with, e.g., a standard dynamic factor model with separate forecast and state equations.
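
    Since the proposed supervision diagnostic works through the Kalman filter recursions, the generic one-step predict/update recursion is sketched below for reference. This is only the textbook linear-Gaussian form; the paper's particular state vector (stacking the factors and the forecast objective) and its semi-parametric estimation are not reproduced here.

        import numpy as np

        def kalman_step(x, P, y, A, C, Q, R):
            """One predict/update step for x_t = A x_{t-1} + w_t, y_t = C x_t + v_t,
            with w_t ~ N(0, Q) and v_t ~ N(0, R)."""
            # Predict
            x_pred = A @ x
            P_pred = A @ P @ A.T + Q
            # Update
            S = C @ P_pred @ C.T + R                  # innovation covariance
            K = P_pred @ C.T @ np.linalg.inv(S)       # Kalman gain
            x_new = x_pred + K @ (y - C @ x_pred)
            P_new = (np.eye(P.shape[0]) - K @ C) @ P_pred
            return x_new, P_new

        # Tiny example with a scalar state and one observation.
        x, P = np.zeros(1), np.eye(1)
        A, C, Q, R = 0.9 * np.eye(1), np.eye(1), 0.1 * np.eye(1), 0.5 * np.eye(1)
        x, P = kalman_step(x, P, np.array([1.2]), A, C, Q, R)
        print(x, P)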

  4. Evolution of precipitation extremes in two large ensembles of climate simulations

    Science.gov (United States)

    Martel, Jean-Luc; Mailhot, Alain; Talbot, Guillaume; Brissette, François; Ludwig, Ralf; Frigon, Anne; Leduc, Martin; Turcotte, Richard

    2017-04-01

    Recent studies project significant changes in the future distribution of precipitation extremes due to global warming. It is likely that extreme precipitation intensity will increase in a future climate and that extreme events will be more frequent. In this work, annual maximum daily precipitation series from the Canadian Earth System Model (CanESM2) 50-member large ensemble (spatial resolution of 2.8°x2.8°) and the Community Earth System Model (CESM1) 40-member large ensemble (spatial resolution of 1°x1°) are used to investigate extreme precipitation over the historical (1980-2010) and future (2070-2100) periods. The use of these ensembles results in respectively 1500 (30 years x 50 members) and 1200 (30 years x 40 members) simulated years over both the historical and future periods. These large datasets allow the computation of empirical daily extreme precipitation quantiles for large return periods. Using the CanESM2 and CESM1 large ensembles, extreme daily precipitation amounts with return periods ranging from 2 to 100 years are computed in historical and future periods to assess the impact of climate change. Results indicate that daily precipitation extremes generally increase in the future over most land grid points and that these increases will also impact the 100-year extreme daily precipitation. Considering that many public infrastructures have lifespans exceeding 75 years, the increase in extremes has important implications for the service levels of water infrastructure and public safety. Estimated increases in precipitation associated with very extreme precipitation events (e.g. 100-year events) will drastically change the likelihood and extent of flooding in a future climate. These results, although interesting, need to be extended to sub-daily durations, relevant for urban flooding protection and urban infrastructure design (e.g. sewer networks, culverts). Models and simulations at finer spatial and temporal resolution are therefore needed.
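
    The core computation described here, pooling a single-model ensemble and reading off empirical return levels, is simple enough to sketch. The snippet below uses synthetic annual maxima purely as a placeholder for the CanESM2/CESM1 output; the Gumbel parameters are arbitrary.

        import numpy as np

        # Pool annual daily-precipitation maxima across members and years
        # (synthetic stand-in for 50 members x 30 years = 1500 values).
        rng = np.random.default_rng(0)
        annual_maxima = rng.gumbel(loc=40.0, scale=12.0, size=50 * 30)  # mm/day

        for T in (2, 20, 100):                      # return periods in years
            # Empirical T-year return level: exceeded with probability 1/T per year.
            level = np.quantile(annual_maxima, 1.0 - 1.0 / T)
            print(f"{T:>4}-yr daily precipitation: {level:.1f} mm/day")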

  5. Extreme ultraviolet spectroscopy and atomic models of highly charged heavy ions in the Large Helical Device

    Science.gov (United States)

    Suzuki, C.; Murakami, I.; Koike, F.; Tamura, N.; Sakaue, H. A.; Morita, S.; Goto, M.; Kato, D.; Ohashi, H.; Higashiguchi, T.; Sudo, S.; O'Sullivan, G.

    2017-01-01

    We report recent results of extreme ultraviolet (EUV) spectroscopy of highly charged heavy ions in plasmas produced in the Large Helical Device (LHD). The LHD is an ideal source of experimental databases of EUV spectra because of high brightness and low opacity, combined with the availability of pellet injection systems and reliable diagnostic tools. The measured heavy elements include tungsten, tin, lanthanides and bismuth, which are motivated by ITER as well as a variety of plasma applications such as EUV lithography and biological microscopy. The observed spectral features drastically change between quasicontinuum and discrete depending on the plasma temperature, which leads to some new experimental identifications of spectral lines. We have developed collisional-radiative models for some of these ions based on the measurements. The atomic number dependence of the spectral feature is also discussed.

  6. Hydrological extremes : Improving simulations of flood and drought in large river basins

    OpenAIRE

    N. Wanders

    2015-01-01

    Hydrological extremes regularly occur in all regions of the world and as such have large impacts on society. Floods and drought are the most severe hydrological extremes, in terms of their societal impact and potential economic damage. These events are amongst the most costly natural disasters, due to their often large spatial extent and high societal impact. The main objective of this thesis is: To reduce uncertainty in simulations, reanalysis, monitoring, forecasting and projections of hydr...

  7. Extreme value prediction of the wave-induced vertical bending moment in large container ships

    DEFF Research Database (Denmark)

    Andersen, Ingrid Marie Vincent; Jensen, Jørgen Juncher

    2015-01-01

    Focus in the present paper is on the influence of the hull girder flexibility on the extreme response amidships, namely the wave-induced vertical bending moment (VBM) in hogging, and on the prediction of the extreme value of the same, since the hull girder flexibility can increase the extreme hull girder response significantly. The analysis in the present paper is based on time series of full scale measurements from three large container ships of 8600, 9400 and 14000 TEU. When carrying out the extreme value estimation, the peak-over-threshold (POT) method combined with an appropriate extreme value distribution is applied. The choice of a proper threshold level, as well as the statistical correlation between clustered peaks, influences the extreme value prediction and is taken into consideration in the present paper.
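
    As an illustration of the peak-over-threshold step mentioned above, the sketch below fits a generalized Pareto distribution to synthetic exceedances and converts the fit into a return level using the standard POT formula. The data, threshold choice and return period are placeholders, and the declustering of correlated peaks emphasised in the paper is not treated here.

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(0)
        peaks = rng.exponential(scale=1.0, size=20_000)    # synthetic VBM peak values
        u = np.quantile(peaks, 0.95)                       # example threshold
        excess = peaks[peaks > u] - u

        c, _, scale = genpareto.fit(excess, floc=0)        # shape c and scale sigma
        rate = excess.size / peaks.size                    # exceedance probability

        # Level exceeded on average once per N peaks (standard GPD/POT formula).
        N = 10_000
        if abs(c) > 1e-6:
            x_N = u + scale / c * ((N * rate) ** c - 1.0)
        else:
            x_N = u + scale * np.log(N * rate)
        print(f"threshold = {u:.2f}, shape = {c:.3f}, {N}-peak return level = {x_N:.2f}")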

  8. Electrohydrodynamic deformation of drops and bubbles at large Reynolds numbers

    Science.gov (United States)

    Schnitzer, Ory

    2015-11-01

    In Taylor's theory of electrohydrodynamic drop deformation by a uniform electric field, inertia is neglected at the outset, resulting in fluid velocities that scale with E^2, E being the applied-field magnitude. When considering strong fields and low viscosity fluids, the Reynolds number predicted by this scaling may actually become large, suggesting the need for a complementary large-Reynolds-number analysis. Balancing viscous and electrical stresses reveals that the velocity scales with E^{4/3}. Considering a gas bubble, the external flow is essentially confined to two boundary layers propagating from the poles to the equator, where they collide to form a radial jet. Remarkably, at leading order in the Capillary number the unique scaling allows, through application of integral mass and momentum balances, a closed-form expression for the O(E^2) bubble deformation to be obtained. Owing to a concentrated pressure load in the vicinity of the collision region, the deformed profile features an equatorial dimple which is non-smooth on the bubble scale. The dynamical importance of internal circulation in the case of a liquid drop leads to an essentially different deformation mechanism. This is because the external boundary layer velocity attenuates at a short distance from the interface, while the internal boundary layer matches with a Prandtl-Batchelor (PB) rotational core. The dynamic pressure associated with the internal circulation dominates the interfacial stress profile, leading to an O(E^{8/3}) deformation. The leading-order deformation can be readily determined, up to the PB constant, without solving the circulating boundary-layer problem. To encourage attempts to verify this new scaling, we suggest a favourable experimental setup in which inertia is dominant, while finite-deformation, surface-charge advection, and gravity effects are negligible.
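
    One way to see where the E^{4/3} scaling comes from (our reconstruction from standard boundary-layer estimates, not the author's full analysis): with a boundary layer of thickness δ ~ (ν a/u)^{1/2} on a bubble of radius a, balancing viscous shear against the tangential electric stress gives

        \mu\,\frac{u}{\delta} \sim \varepsilon E^{2}
        \;\Longrightarrow\;
        \mu\,u\left(\frac{u}{\nu a}\right)^{1/2} \sim \varepsilon E^{2}
        \;\Longrightarrow\;
        u \sim \left(\frac{\varepsilon E^{2}}{\mu}\right)^{2/3}(\nu a)^{1/3} \propto E^{4/3},

    in contrast with the creeping-flow (Taylor) balance μ u/a ~ ε E^2, which gives u ∝ E^2.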

  9. Compact high-resolution spectrographs for large and extremely large telescopes: using the diffraction limit

    CERN Document Server

    Robertson, J Gordon

    2012-01-01

    As telescopes get larger, the size of a seeing-limited spectrograph for a given resolving power becomes larger also, and for ELTs the size will be so great that high resolution instruments of simple design will be infeasible. Solutions include adaptive optics (but not providing full correction for short wavelengths) or image slicers (which give feasible but still large instruments). Here we develop the solution proposed by Bland-Hawthorn and Horton: the use of diffraction-limited spectrographs which are compact even for high resolving power. Their use is made possible by the photonic lantern, which splits a multi-mode optical fiber into a number of single-mode fibers. We describe preliminary designs for such spectrographs, at a resolving power of R ~ 50,000. While they are small and use relatively simple optics, the challenges are to accommodate the longest possible fiber slit (hence maximum number of single-mode fibers in one spectrograph) and to accept the beam from each fiber at a focal ratio considerably ...

  10. Large deviations of the shifted index number in the Gaussian ensemble

    Science.gov (United States)

    Pérez Castillo, Isaac

    2016-06-01

    We show that, using the Coulomb fluid approach, we are able to derive a rate function Ψ(c, x) of two variables that captures: (i) the large deviations of bulk eigenvalues; (ii) the large deviations of extreme eigenvalues (both left and right large deviations); (iii) the statistics of the fraction c of eigenvalues to the left of a position x. Thus, Ψ(c, x) explains the full order statistics of the eigenvalues of large random Gaussian matrices as well as the statistics of the shifted index number. All our analytical findings are thoroughly compared with Monte Carlo simulations, obtaining excellent agreement. A summary of preliminary results has already been presented in Pérez Castillo (2014 Phys. Rev. E 90 040102) in the context of one-dimensional trapped spinless fermions in a harmonic potential.

  11. Creepers: Real quadratic orders with large class number

    Science.gov (United States)

    Patterson, Roger

    2007-03-01

    Shanks's sequence of quadratic fields Q(√(S_n)), where S_n = (2^n+1)^2 + 2^{n+2}, instances a class of quadratic fields for which the class number is large and, therefore, the continued fraction period is relatively short. Indeed, that period length increases linearly with n, that is: in arithmetic progression. The fields have regulator O(n^2). In the late nineties, these matters intrigued Irving Kaplansky, and led him to compute period length of the square root of sequences a^2 x^{2n} + b x^n + c for integers a, b, c, and x. In brief, Kap found unsurprisingly that, generically, triples (a,b,c) are 'leapers': they yield sequences with period length increasing at exponential rate. But there are triples yielding sequences with constant period length, Kap's 'sleepers'. Finally, there are triples, as exemplified by the Shanks's sequence, for which the period lengths increase in arithmetic progression. Felicitously, Kaplansky called these 'creepers'. It seems that the sleepers and creepers are precisely those for which one is able to detail the explicit continued fraction expansion for all n. Inter alia, this thesis noticeably extends the known classes of creepers and finds that not all are 'kreepers' (of the shape identified by Kaplansky) and therefore not of the shape of examples studied by earlier authors looking for families of quadratic number fields with explicitly computable unit and of relatively large regulator. The work of this thesis includes the discovery of old and new families of hyperelliptic curves of increasing genus g and torsion divisor of order O(g^2). It follows that the apparent trichotomy leaper/sleeper/creeper coincides with the folk belief that the just-mentioned torsion is maximum possible.

  12. Methods of analysis and number of replicates for trials with large numbers of soybean genotypes

    Directory of Open Access Journals (Sweden)

    Gilvani Matei

    The aim of this study was to evaluate the experimental precision of different methods of statistical analysis for trials with large numbers of soybean genotypes, and their relationship with the number of replicates. Soybean yield data (nine trials; 324 genotypes; 46 cultivars; 278 lines; agricultural harvest of 2014/15) were used. Two of these trials were performed at the same location, side by side, forming a trial with six replicates. Each trial was analyzed by the randomized complete block design, the triple lattice design, and the Papadakis method. The selective accuracy, least significant difference, and Fasoulas differentiation index were estimated, and model assumptions were tested. The resampling method was used to study the influence of the number of replicates, by varying the number of blocks and estimating the precision measurements. The experimental precision indicators of the Papadakis method are more favorable than those of the randomized complete block design and the triple lattice. To obtain selective accuracy above the high experimental precision range in trials with 324 soybean genotypes, two replicates can be used, and data can be analyzed using the randomized complete block design or the Papadakis method.

  13. Generating extreme weather event sets from very large ensembles of regional climate models

    Science.gov (United States)

    Massey, Neil; Guillod, Benoit; Otto, Friederike; Allen, Myles; Jones, Richard; Hall, Jim

    2015-04-01

    Extreme events can have large impacts on societies and are therefore being increasingly studied. In particular, climate change is expected to impact the frequency and intensity of these events. However, a major limitation when investigating extreme weather events is that, by definition, only a few events are present in observations. A way to overcome this issue is to use large ensembles of model simulations. Using the volunteer distributed computing (VDC) infrastructure of weather@home [1], we run a very large number (10'000s) of RCM simulations over the European domain at a resolution of 25km, with an improved land-surface scheme, nested within a free-running GCM. Using VDC allows many thousands of climate model runs to be computed. Using observations for the GCM boundary forcings we can run historical "hindcast" simulations over the past 100 to 150 years. This allows us, due to the chaotic variability of the atmosphere, to ascertain how likely an extreme event was, given the boundary forcings, and to derive synthetic event sets. The events in these sets did not actually occur in the observed record but could have occurred given the boundary forcings, with an associated probability. The event sets contain time-series of fields of meteorological variables that allow impact modellers to assess the loss the event would incur. Projections of events into the future are achieved by modelling projections of the sea-surface temperature (SST) and sea-ice boundary forcings, by combining the variability of the SST in the observed record with a range of warming signals derived from the varying responses of SSTs in the CMIP5 ensemble to elevated greenhouse gas (GHG) emissions in three RCP scenarios. Simulating the future with a

  14. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  15. First Contact with Astronomy for a Large Number of Pupils

    Science.gov (United States)

    Ros, Rosa M.

    The Spanish Royal Society of Physics (RSEF) co-operates with several European institutions to promote Physics and Astronomy in schools through the project "Fisica en Acción". This project started in 2000, integrated with the project "Physics on Stage" created by CERN, ESA and ESO. "Fisica en Acción" is a Spanish competition bringing together a group of teachers in a common endeavour: showing "physics demonstrations" to general audiences and preparing engaging pedagogical presentations to introduce science into the classroom. The national final event of this competition takes place annually in a science museum during one weekend (entrance is free). The Science Fair is especially well received by visitors, who can ask the demonstrator-teachers questions. Younger visitors enjoy experimenting for themselves. After the first year the RSEF introduced special prizes to encourage schools to participate in astronomical categories. The "Centro de Astrobiologia de Madrid" gave a cash prize and a visit to their headquarters to the winners. The "Instituto Astrofísico de Canarias" offered a prize of a trip to its observatories. In summary, the astronomical elements of "Fisica en Acción" stimulate teachers' and students' interest in international activities and have been the first contact with Astronomy for a large number of pupils.

  16. Creepers: Real quadratic orders with large class number

    CERN Document Server

    Patterson, R

    2007-01-01

    Shanks's sequence of quadratic fields Q(√(S_n)), where S_n = (2^n+1)^2 + 2^{n+2}, instances a class of quadratic fields for which the class number is large and, therefore, the continued fraction period is relatively short. Indeed, that period length increases linearly with n, that is: in arithmetic progression. The fields have regulator O(n^2). In the late nineties, these matters intrigued Irving Kaplansky, and led him to compute period length of the square root of sequences a^2 x^{2n} + b x^n + c for integers a, b, c, and x. In brief, Kap found unsurprisingly that, generically, triples (a,b,c) are 'leapers': they yield sequences with period length increasing at exponential rate. But there are triples yielding sequences with constant period length, Kap's 'sleepers'. Finally, there are triples, as exemplified by the Shanks's sequence, for which the period lengths increase in arithmetic progression. Felicitously, Kaplansky called these 'creepers'. It seems that the sleepers and creepers are...

  17. Viscous range of turbulent scalar of large Prandtl number

    Science.gov (United States)

    Qian, J.

    1995-02-01

    The analytical theory of a turbulent scalar, developed in previous papers, is extended to the case of large Prandtl number. The fluctuating character of the least principal rate of strain γ has an important effect upon the scalar spectrum. The scalar variance spectrum in the viscous range is F(k) = 4.472 (ν/ε)^{1/2} χ k^{-1} H(x), where x ≡ (k/k_b)^2 and H(x) is a dimensionless universal function determined by solving numerically the closed spectral dynamical equations. A simple fitting formula for the numerical result is H(x) = 0.7687 exp(-3.79x) + 0.2313 exp(-11.13x), which corresponds to a two-valued fluctuation model of γ. Here ν is the kinematic viscosity, k_b ≡ (ε/(ν μ^2))^{1/4} is the Batchelor wavenumber, μ is the scalar diffusivity, and ε and χ are respectively the energy and scalar-variance dissipation rates.

  18. In vitro leakage associated with three root-filling techniques in large and extremely large root canals.

    Science.gov (United States)

    Mente, Johannes; Werner, Sabine; Koch, Martin Jean; Henschel, Volkmar; Legner, Milos; Staehle, Hans Joerg; Friedman, Shimon

    2007-03-01

    This study assessed the apical leakage of ultrasonically condensed root fillings in extremely large canals, compared to cold lateral condensation and thermoplastic compaction. Ninety single-rooted teeth were used. In 45 teeth canals were enlarged to size 70 (large). The remaining 45 canals were enlarged to size 140 (extremely large). Each set of teeth was subdivided into three root-filling groups (n = 15): (1) cold lateral condensation (LC); (2) thermoplastic compaction (TC); and (3) ultrasonic lateral condensation (UC). Teeth in all six subgroups were subjected to drawing ink penetration, cleared, and evaluated for linear apical dye leakage. Significantly deeper dye penetration (p < 0.04, Wilcoxon rank-sum test) was observed for LC than for UC. TC did not differ significantly from LC and UC. Dye penetration was significantly deeper (p < 0.0001) in canals enlarged to size 140 than to size 70, independent of root-filling method. Apical leakage associated with ultrasonically condensed root fillings was less than that with cold lateral condensation. It was consistently greater in extremely large canals than that in large ones.

  19. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-03-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted.

  20. Trends in Mediterranean gridded temperature extremes and large-scale circulation influences

    Directory of Open Access Journals (Sweden)

    D. Efthymiadis

    2011-08-01

    Full Text Available Two recently-available daily gridded datasets are used to investigate trends in Mediterranean temperature extremes since the mid-20th century. The underlying trends are found to be generally consistent with global trends of temperature and their extremes: cold extremes decrease and warm/hot extremes increase. This consistency is better manifested in the western part of the Mediterranean where changes are most pronounced since the mid-1970s. In the eastern part, a cooling is observed, with a near reversal in the last two decades. This inter-basin discrepancy is clearer in winter, while in summer changes are more uniform and the west-east difference is restricted to the rate of increase of warm/hot extremes, which is higher in central and eastern parts of the Mediterranean over recent decades. Linear regression and correlation analysis reveals some influence of major large-scale atmospheric circulation patterns on the occurrence of these extremes – both in terms of trend and interannual variability. These relationships are not, however, able to account for the most striking features of the observations – in particular the intensification of the increasing trend in warm/hot extremes, which is most evident over the last 15–20 yr in the Central and Eastern Mediterranean.

  1. The large lungs of elite swimmers: an increased alveolar number?

    Science.gov (United States)

    Armour, J; Donnelly, P M; Bye, P T

    1993-02-01

    In order to obtain further insight into the mechanisms relating to the large lung volumes of swimmers, tests of mechanical lung function, including lung distensibility (K) and elastic recoil, pulmonary diffusion capacity, and respiratory mouth pressures, together with anthropometric data (height, weight, body surface area, chest width, depth and surface area), were compared in eight elite male swimmers, eight elite male long distance athletes and eight control subjects. The differences in training profiles of each group were also examined. There was no significant difference in height between the subjects, but the swimmers were younger than both the runners and controls, and both the swimmers and controls were heavier than the runners. Of all the training variables, only the mean total distance in kilometers covered per week was significantly greater in the runners. Whether based on: (a) adolescent predicted values; or (b) adult male predicted values, swimmers had significantly increased total lung capacity ((a) 145 +/- 22%, (mean +/- SD) (b) 128 +/- 15%); vital capacity ((a) 146 +/- 24%, (b) 124 +/- 15%); and inspiratory capacity ((a) 155 +/- 33%, (b) 138 +/- 29%), but this was not found in the other two groups. Swimmers also had the largest chest surface area and chest width. Forced expiratory volume in one second (FEV1) was largest in the swimmers ((b) 122 +/- 17%) and FEV1 as a percentage of forced vital capacity (FEV1/FVC)% was similar for the three groups. Pulmonary diffusing capacity (DLCO) was also highest in the swimmers (117 +/- 18%). All of the other indices of lung function, including pulmonary distensibility (K), elastic recoil and diffusion coefficient (KCO), were similar. These findings suggest that swimmers may have achieved greater lung volumes than either runners or control subjects, not because of greater inspiratory muscle strength, or differences in height, fat free mass, alveolar distensibility, age at start of training or sternal length or

  2. The role of Natural Flood Management in managing floods in large scale basins during extreme events

    Science.gov (United States)

    Quinn, Paul; Owen, Gareth; ODonnell, Greg; Nicholson, Alex; Hetherington, David

    2016-04-01

    There is a strong evidence base showing the negative impacts of land-use intensification and soil degradation in NW European river basins on hydrological response and on flood impact downstream. However, the ability to target zones of high runoff production, and the extent to which we can manage flood risk using nature-based flood management solutions, are less well known. A move to planting more trees and having less intensively farmed landscapes is part of natural flood management (NFM) solutions, and these methods suggest that flood risk can be managed in alternative and more holistic ways. So which local NFM methods should be used, where in a large-scale basin should they be deployed, and how does the flow propagate to any point downstream? More generally, how much intervention is needed, and will it compromise food production systems? If we are observing record levels of rainfall and flow, for example during Storm Desmond in December 2015 in the North West of England, what other flood management options are really needed to complement our traditional defences in large basins in the future? In this paper we will show examples of NFM interventions in the UK that have had an impact at local-scale sites. We will demonstrate the impact of interventions at the local, sub-catchment (meso) scale and finally at the large scale. The tools include observations, process-based models and more generalised flood impact models. Issues of synchronisation and the design level of protection will be debated. By reworking observed rainfall and discharge (runoff) for extreme events in the River Eden and River Tyne during Storm Desmond, we will show how much flood protection is needed in large-scale basins. The research will thus pose a number of key questions as to how floods may have to be managed in large-scale basins in the future. We will seek to support a method of catchment systems engineering that holds water back across the whole landscape as a major opportunity to manage water

  3. Contribution of large-scale circulation anomalies to changes in extreme precipitation frequency in the United States

    Science.gov (United States)

    Yu, Lejiang; Zhong, Shiyuan; Pei, Lisi; Bian, Xindi; Heilman, Warren E.

    2016-04-01

    The mean global climate has warmed as a result of the increasing emission of greenhouse gases induced by human activities. This warming is considered the main reason for the increasing number of extreme precipitation events in the US. While much attention has been given to extreme precipitation events occurring over several days, which are usually responsible for severe flooding over a large region, little is known about how extreme precipitation events that cause flash flooding and occur at sub-daily time scales have changed over time. Here we use the observed hourly precipitation from the North American Land Data Assimilation System Phase 2 forcing datasets to determine trends in the frequency of extreme precipitation events of short (1 h, 3 h, 6 h, 12 h and 24 h) duration for the period 1979-2013. The results indicate an increasing trend in the central and eastern US. Over most of the western US, especially the Southwest and the Intermountain West, the trends are generally negative. These trends can be largely explained by the interdecadal variability of the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation (AMO), with the AMO making a greater contribution to the trends in both warm and cold seasons.

  4. Populating the Large-Wavevector Realm: Bloch Volume Plasmon Polaritons in Hyperbolic and Extremely Anisotropic Metamaterials

    DEFF Research Database (Denmark)

    Zhukovsky, Sergei; Babicheva, Viktoriia; Orlov, A. A.

    2014-01-01

    Optics of hyperbolic metamaterials is revisited in terms of large-wavevector waves, evanescent in isotropic media but propagating in the presence of extreme anisotropy. Identifying the physical nature of these waves as Bloch volume plasmon polaritons, we derive their existence conditions and outline the strategy for tailoring their properties in multiscale metamaterials...

  5. Revisiting extreme storms of the past 100 years for future safety of large water management infrastructures

    Science.gov (United States)

    Chen, Xiaodong; Hossain, Faisal

    2016-07-01

    Historical extreme storm events are widely used to make Probable Maximum Precipitation (PMP) estimates, which form the cornerstone of large water management infrastructure safety. Past studies suggest that extreme precipitation processes can be sensitive to land surface feedback and the planetary warming trend, which makes the future safety of large infrastructures questionable given the projected changes in land cover and temperature in the coming decades. In this study, a numerical modeling framework was employed to reconstruct 10 extreme storms over CONUS that occurred during the past 100 years, which are used by the engineering profession for PMP estimation for large infrastructures such as dams. Results show that the correlation in daily rainfall for such reconstruction can range between 0.4 and 0.7, while the correlation for maximum 3-day accumulation (a standard period used in infrastructure design) is always above 0.5 for post-1948 storms. This suggests that current numerical modeling and reanalysis data allow us to reconstruct big storms after 1948 with acceptable accuracy. For storms prior to 1948, however, reconstruction of storms shows inconsistency with observations. Our study indicates that numerical modeling and data may not have advanced to a sufficient level to understand how such old storms (pre-1948) may behave in future warming and land cover conditions. However, the infrastructure community can certainly rely on the use of model reconstructed extreme storms of the 1948-present period to reassess safety of our large water infrastructures under assumed changes in temperature and land cover.

  6. An Efficient Pipeline Wavefront Phase Recovery for the CAFADIS Camera for Extremely Large Telescopes

    Directory of Open Access Journals (Sweden)

    Eduardo Magdaleno

    2009-12-01

    Full Text Available In this paper we show a fast, specialized hardware implementation of the wavefront phase recovery algorithm using the CAFADIS camera. The CAFADIS camera is a new plenoptic sensor patented by the Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can simultaneously measure the wavefront phase and the distance to the light source in a real-time process. The pipeline algorithm is implemented using Field Programmable Gate Arrays (FPGAs). These devices present an architecture capable of handling the sensor output stream using a massively parallel approach, and they are efficient enough to resolve several Adaptive Optics (AO) problems in Extremely Large Telescopes (ELTs) in terms of processing time requirements. The FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera is based on the very fast computation of two-dimensional fast Fourier Transforms (FFTs). Thus we have carried out a comparison between our novel FPGA 2D-FFT and other implementations.
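
    For orientation, FFT-based wavefront recovery of this kind typically reduces to integrating measured slope maps in the Fourier domain. The sketch below is a generic least-squares Fourier slope integrator in Python, not the CAFADIS/FPGA pipeline itself; the function name and the assumption of square, periodic slope maps are illustrative.

```python
import numpy as np

def fft_wavefront_reconstruct(sx, sy, pitch=1.0):
    """Least-squares wavefront phase from x/y slope maps via 2-D FFTs
    (modal Fourier reconstruction; periodic boundaries assumed)."""
    n = sx.shape[0]
    kx = np.fft.fftfreq(n, d=pitch) * 2 * np.pi
    ky = np.fft.fftfreq(n, d=pitch) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky, indexing="xy")   # rows <-> ky, columns <-> kx
    denom = KX**2 + KY**2
    denom[0, 0] = 1.0                             # avoid division by zero at the piston mode
    Sx, Sy = np.fft.fft2(sx), np.fft.fft2(sy)
    Phi = (-1j * KX * Sx - 1j * KY * Sy) / denom  # least-squares inversion of s = grad(phi)
    Phi[0, 0] = 0.0                               # piston is unobservable from slopes
    return np.real(np.fft.ifft2(Phi))
```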

  7. Monte Carlo modelling of multi-object adaptive optics performance on the European Extremely Large Telescope

    Science.gov (United States)

    Basden, A. G.; Morris, T. J.

    2016-12-01

    The performance of a wide-field adaptive optics (AO) system depends on input design parameters. Here we investigate the performance of a multi-object AO system design for the European Extremely Large Telescope, using an end-to-end Monte Carlo AO simulation tool, Durham adaptive optics simulation platform, with relevance for proposed instruments such as MOSAIC. We consider parameters such as the number of laser guide stars, sodium layer depth, wavefront sensor pixel scale, actuator pitch and natural guide star availability. We provide potential areas where costs savings can be made, and investigate trade-offs between performance and cost, and provide solutions that would enable such an instrument to be built with currently available technology. Our key recommendations include a trade-off for laser guide star wavefront sensor pixel scale of about 0.7 arcsec per pixel, and a field of view of at least 7 arcsec, that electron multiplying CCD technology should be used for natural guide star wavefront sensors even if reduced frame rate is necessary, and that sky coverage can be improved by a slight reduction in natural guide star sub-aperture count without significantly affecting tomographic performance. We find that AO correction can be maintained across a wide field of view, up to 7 arcmin in diameter. We also recommend the use of at least four laser guide stars, and include ground-layer and multi-object AO performance estimates.

  8. Controlling synchronization in large laser networks using number theory

    CERN Document Server

    Nixon, Micha; Ronen, Eitan; Friesem, Asher A; Davidson, Nir; Kanter, Ido

    2011-01-01

    Synchronization in networks with delayed coupling is ubiquitous in nature and plays a key role in almost all fields of science including physics, biology, ecology, climatology and sociology. In general, the published works on network synchronization are based on data analysis and simulations, with little experimental verification. Here we develop and experimentally demonstrate various multi-cluster phase synchronization scenarios within coupled laser networks. Synchronization is controlled by the network connectivity in accordance with number theory, whereby the number of synchronized clusters equals the greatest common divisor of network loops. This dependence enables remote switching mechanisms to control the optical phase coherence among distant lasers by local network connectivity adjustments. Our results serve as a benchmark for a broad range of coupled oscillators in science and technology, and offer feasible routes to achieve multi-user secure protocols in communication networks and parallel distribution...
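
    The number-theoretic rule quoted above is simple to state in code: the predicted number of synchronized clusters is the greatest common divisor of the loop lengths of the coupling network. A minimal sketch, assuming loop lengths are expressed in units of the coupling delay (an assumption for illustration):

```python
from math import gcd
from functools import reduce

def predicted_cluster_count(loop_lengths):
    """Number of phase-synchronized clusters predicted as the GCD of the
    (delay-normalized) loop lengths in the coupled-laser network."""
    return reduce(gcd, loop_lengths)

print(predicted_cluster_count([6, 9]))     # -> 3 clusters
print(predicted_cluster_count([4, 6, 8]))  # -> 2 clusters
```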

  9. Characterizations of Graphs Having Large Proper Connection Numbers

    Directory of Open Access Journals (Sweden)

    Lumduanhom Chira

    2016-05-01

    Full Text Available Let G be an edge-colored connected graph. A path P is a proper path in G if no two adjacent edges of P are colored the same. If P is a proper u − v path of length d(u, v), then P is a proper u − v geodesic. An edge coloring c is a proper-path coloring of a connected graph G if every pair u, v of distinct vertices of G is connected by a proper u − v path in G, and c is a strong proper-path coloring if every two vertices u and v are connected by a proper u − v geodesic in G. The minimum number of colors required for a proper-path coloring or strong proper-path coloring of G is called the proper connection number pc(G) or strong proper connection number spc(G) of G, respectively. If G is a nontrivial connected graph of size m, then pc(G) ≤ spc(G) ≤ m, and pc(G) = m or spc(G) = m if and only if G is the star of size m. In this paper, we determine all connected graphs G of size m for which pc(G) or spc(G) is m − 1, m − 2 or m − 3.
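
    As a concrete illustration of the definitions above, the following sketch checks whether a given edge coloring admits a proper u − v path at all, by a breadth-first search over (vertex, last-edge-color) states. It is a feasibility check only, not an algorithm for computing pc(G) or spc(G); the data-structure conventions are illustrative.

```python
from collections import deque

def has_proper_path(adj, colors, u, v):
    """True if the edge-colored graph has a u-v path with no two consecutive
    edges sharing a color.  adj: {vertex: set of neighbours},
    colors: {frozenset({a, b}): colour of edge ab}."""
    if u == v:
        return True
    seen = set()
    queue = deque()
    for w in adj[u]:
        state = (w, colors[frozenset((u, w))])
        queue.append(state)
        seen.add(state)
    while queue:
        x, last = queue.popleft()
        if x == v:
            return True
        for w in adj[x]:
            c = colors[frozenset((x, w))]
            if c != last and (w, c) not in seen:   # forbid repeating the previous edge color
                seen.add((w, c))
                queue.append((w, c))
    return False
```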

  10. MAD about the Large Magellanic Cloud: preparing for the era of Extremely Large Telescopes

    CERN Document Server

    Fiorentino, G; Diolaiti, E; Valenti, E; Cignoni, M; Mackey, A D

    2011-01-01

    We present J, H, Ks photometry from the Multi-conjugate Adaptive optics Demonstrator (MAD), a visitor instrument at the VLT, of a resolved stellar population in a small crowded field in the bar of the Large Magellanic Cloud near the globular cluster NGC 1928. In total exposure times of 6, 36 and 20 minutes, magnitude limits of J ~ 20.5 mag, H ~ 21 mag, and Ks ~ 20.5 mag, respectively, were achieved with S/N > 10. This does not reach the level of the oldest Main Sequence Turnoffs; however, the resulting Colour-Magnitude Diagrams are the deepest and most accurate obtained so far in the infrared for the LMC bar. We combined our photometry with deep optical photometry from the Hubble Space Telescope/Advanced Camera for Surveys, which is a good match in spatial resolution. The comparison between synthetic and observed CMDs shows that the stellar population of the field we observed is consistent with the star formation history expected for the LMC bar, and that all combinations of IJHKs filters can, with ...

  11. DETERMINING LARGE PRIME NUMBERS TO COMPUTE RSA SYSTEM PARAMETERS

    Directory of Open Access Journals (Sweden)

    Ioan Mang

    2008-05-01

    Full Text Available Cryptography, the secret writing, is probably as old as writing itself and has applications in ensuring data security. There are cryptosystems in which the enciphering algorithm can be public; these are public-key algorithms. Research on public-key algorithms has been concerned with security aspects, and its results have given sufficient confidence to apply public-key cryptography on a larger scale. The most widely used and scrutinized public-key cryptosystem was devised by Rivest, Shamir and Adleman, the so-called RSA system. This paper presents the RSA algorithm. We have realised a program that is able to determine prime numbers with over 100 digits and compute the RSA system parameters.
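
    As a rough illustration of what such a program does, the sketch below generates probable primes with a Miller-Rabin test and derives toy RSA parameters. It is not the program described in the paper; it assumes Python 3.8+ (for the modular inverse via pow(e, -1, phi)), omits padding and other deployment concerns, and the bit lengths are illustrative.

```python
import secrets

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2      # random witness in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    """Random probable prime of the requested bit length."""
    while True:
        cand = secrets.randbits(bits) | (1 << (bits - 1)) | 1  # force size and oddness
        if is_probable_prime(cand):
            return cand

# Toy RSA parameters (a real deployment would use >= 2048-bit moduli and proper padding).
p, q = random_prime(512), random_prime(512)
n, phi, e = p * q, (p - 1) * (q - 1), 65537
d = pow(e, -1, phi)  # private exponent; assumes gcd(e, phi) == 1 (retry with new primes otherwise)
```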

  12. Comparison of coronagraphs for high contrast imaging in the context of Extremely Large Telescopes

    CERN Document Server

    Martínez, P; Kasper, M; Cavarroc, C; Yaitskova, N; Fusco, T; Verinaud, C

    2008-01-01

    We compare coronagraph concepts and investigate their behavior and suitability for planet finder projects with Extremely Large Telescopes (ELTs, 30-42 meters class telescopes). For this task, we analyze the impact of major error sources that occur in a coronagraphic telescope (central obscuration, secondary support, low-order segment aberrations, segment reflectivity variations, pointing errors) for phase, amplitude and interferometric type coronagraphs. This analysis is performed at two different levels of the detection process: under residual phase left uncorrected by an eXtreme Adaptive Optics system (XAO) for a large range of Strehl ratio and after a general and simple model of speckle calibration, assuming common phase aberrations between the XAO and the coronagraph (static phase aberrations of the instrument) and non-common phase aberrations downstream of the coronagraph (differential aberrations provided by the calibration unit). We derive critical parameters that each concept will have to cope with by...

  13. Hall effect in the extremely large magnetoresistance semimetal WTe{sub 2}

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Yongkang, E-mail: ykluo@lanl.gov; Dai, Y. M.; Taylor, A. J.; Yarotski, D. A.; Prasankumar, R. P.; Thompson, J. D. [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Li, H.; Miao, H.; Shi, Y. G. [Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Science, Beijing 100190 (China); Ding, H. [Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Science, Beijing 100190 (China); Collaborative Innovation Center of Quantum Matter, Beijing 100084 (China)

    2015-11-02

    We systematically measured the Hall effect in the extremely large magnetoresistance semimetal WTe{sub 2}. By carefully fitting the Hall resistivity to a two-band model, the temperature dependencies of the carrier density and mobility for both electron- and hole-type carriers were determined. We observed a sudden increase in the hole density below ∼160 K, which is likely associated with the temperature-induced Lifshitz transition reported by a previous photoemission study. In addition, a more pronounced reduction in electron density occurs below 50 K, giving rise to comparable electron and hole densities at low temperature. Our observations indicate a possible electronic structure change below 50 K, which might be the direct driving force of the electron-hole “compensation” and the extremely large magnetoresistance as well. Numerical simulations imply that this material is unlikely to be a perfectly compensated system.
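
    For reference, two-band fits of this kind are commonly written with the generic electron-hole Hall resistivity used below; the exact parametrization employed by the authors is not given in the abstract, so the function, variable names and initial guesses here are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

E = 1.602176634e-19  # elementary charge (C)

def hall_two_band(B, ne, nh, mue, muh):
    """Generic two-band (electron + hole) Hall resistivity rho_xy(B), SI units."""
    num = (nh * muh**2 - ne * mue**2) + (nh - ne) * (mue * muh)**2 * B**2
    den = (ne * mue + nh * muh)**2 + (nh - ne)**2 * (mue * muh)**2 * B**2
    return (B / E) * num / den

# B (T) and rho_xy (ohm m) would come from measurement; p0 is an initial guess.
# popt, _ = curve_fit(hall_two_band, B, rho_xy,
#                     p0=(1e25, 1e25, 0.5, 0.5))  # densities in m^-3, mobilities in m^2/Vs
```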

  14. Large Differences in Bacterial Community Composition among Three Nearby Extreme Waterbodies of the High Andean Plateau.

    Science.gov (United States)

    Aguilar, Pablo; Acosta, Eduardo; Dorador, Cristina; Sommaruga, Ruben

    2016-01-01

    The high Andean plateau or Altiplano contains different waterbodies that are subjected to extreme fluctuations in abiotic conditions on a daily and an annual scale. The bacterial diversity and community composition of those shallow waterbodies is largely unexplored, particularly, of the ponds embedded within the peatland landscape (i.e., Bofedales). Here we compare the small-scale spatial variability (Altiplano peatland ponds represent a hitherto unknown source of microbial diversity.

  15. Large-scale Agroecosystem's Resiliency to Hydrometeorological and Climate Extreme Events in the Missouri River Basin

    Science.gov (United States)

    Munoz-Arriola, F.; Smith, K.; Corzo, G.; Chacon, J.; Carrillo-Cruz, C.

    2015-12-01

    A major challenge for water, energy and food security lies in the capability of agroecosystems and ecosystems to adapt to a changing climate and to land use changes. The interdependency of these forcings, understood through our ability to monitor and model processes across scales, indicates the "depth" of their impact on agroecosystems and ecosystems, and consequently our ability to predict the system's ability to return to a "normal" state. We are particularly interested in exploring two questions: (1) how do hydrometeorological and climate extreme events (HCEs) affect sub-seasonal to interannual changes in evapotranspiration and soil moisture? And (2) how do agroecosystems recover from the effect of such events? To address those questions we use the land surface hydrologic Variable Infiltration Capacity (VIC) model and the Moderate Resolution Imaging Spectroradiometer Leaf Area Index (MODIS-LAI) over two time spans (1950-2013, using a fixed seasonal LAI cycle, and 2001-2013, using the 8-day MODIS-LAI). VIC is forced by daily, 1/16th-degree resolution precipitation, minimum and maximum temperature, and wind speed. In this large-scale experiment, resiliency is defined by the capacity of a particular agroecosystem, represented by a grid cell's ET, SM, and LAI, to return to a historical average. This broad, yet simplistic definition will contribute to identifying the possible components, and their scales, involved in the capacity of agroecosystems and ecosystems to adapt to the incidence of HCEs and to the technologies used to intensify agriculture and diversify their use for food and energy production. Preliminary results show that dynamical changes in land use, tracked by MODIS data, require larger time spans to properly address the influence of technological improvements in crop production as well as the competition for land between biofuel and food production. On the other hand, fixed seasonal changes in land use allow us only to identify hydrologic changes mainly due to climate variability.

  16. The numbers game in wildlife conservation: changeability and framing of large mammal numbers in Zimbabwe

    NARCIS (Netherlands)

    Gandiwa, E.

    2013-01-01

    Wildlife conservation in terrestrial ecosystems requires an understanding of processes influencing population sizes. Top-down and bottom-up processes are important in large herbivore population dynamics, with strength of these processes varying spatially and temporally. However, up until recently th

  19. GPU Implementation of Two-Dimensional Rayleigh-Benard Code with High Resolution and Extremely High Rayleigh Number

    Science.gov (United States)

    Gonzalez, C. M.; Sanchez, D. A.; Yuen, D. A.; Wright, G. B.; Barnett, G. A.

    2010-12-01

    model assumed convection was occurring only from heating below and that no other sources of heating were present, such as the radioactive decay of elements that would normally contribute to heating in the mantle. Our calculations attempted to push the potential computing power of the Tesla C2070 Fermi GPU to see how it would perform under the strain of a large grid size and an extremely large Rayleigh number. The array size of our model was 4500x2500 with a Rayleigh number of 5*10^10 and a Prandtl number of infinity. According to our estimates, each timestep for the 4500x2500 grid would take approximately 1 to 2 seconds per timestep. This calculation was based on the order of tenths of a microsecond per timestep grid point.

  20. European Extremely Large Telescope: some history, and the scientific community's preferences for wavelength

    Science.gov (United States)

    Gilmore, Gerard

    2008-04-01

    Extremely expensive new telescopes involve a compromise between the extreme ambitions of the scientific community, whose support justifies the financial costs, and the need to have a telescope design which can actually be built today at appropriate cost. In this article I provide a brief history of the process which built community support in Europe for what has become the European Extremely Large Telescope project (E-ELT). I then review remaining tensions between the community science case and day-one technical performance. While the range of very strong scientific cases which support the E-ELT project will largely be delivered, and lead to a quite outstanding scientific return, there are - as always! - demands for even more impressive performance. In addition to what the E-ELT will deliver, much of the community wants high spatial resolution at wavelengths shorter than one micron. Affordable adaptive optics systems will work best, initially at somewhat longer wavelengths. Planned performance enhancement during its operational life is very desirable in the E-ELT.

  1. Wintertime connections between extreme wind patterns in Spain and large-scale geopotential height field

    Science.gov (United States)

    Pascual, A.; Martín, M. L.; Valero, F.; Luna, M. Y.; Morata, A.

    2013-03-01

    The present study focuses on the variability and the most significant wind speed patterns in Spain during the winter season, analyzing as well the connections between the wind speed field and the geopotential height at 1000 hPa over an Atlantic area. The daily wind speed variability is investigated by means of principal components using wind speed observations. Five main modes of variation, accounting for 66% of the variance of the original data, have been identified, highlighting their differences in Spanish wind speed behavior. Connections between the wind speeds and the large-scale atmospheric field were underlined by means of composite maps, which were built to give the averaged atmospheric circulation associated with extreme wind speed variability in Spain. Moreover, principal component analysis was also applied to the geopotential heights, providing relationships between the large-scale atmospheric modes and the observed local wind speeds. Such relationships are shown in terms of the cumulative frequency values of wind speed associated with the extreme scores of the obtained large-scale atmospheric modes, indicating which large-scale atmospheric patterns are most dominant in the wind field in Spain.

  2. Number of Black Children in Extreme Poverty Hits Record High. Analysis Background.

    Science.gov (United States)

    Children's Defense Fund, Washington, DC.

    To examine the experiences of black children and poverty, researchers conducted a computer analysis of data from the U.S. Census Bureau's Current Population Survey, the source of official government poverty statistics. The data are through 2001. Results indicated that nearly 1 million black children were living in extreme poverty, with after-tax…

  3. Extreme Learning Machines on High Dimensional and Large Data Applications: A Survey

    Directory of Open Access Journals (Sweden)

    Jiuwen Cao

    2015-01-01

    Full Text Available Extreme learning machine (ELM has been developed for single hidden layer feedforward neural networks (SLFNs. In ELM algorithm, the connections between the input layer and the hidden neurons are randomly assigned and remain unchanged during the learning process. The output connections are then tuned via minimizing the cost function through a linear system. The computational burden of ELM has been significantly reduced as the only cost is solving a linear system. The low computational complexity attracted a great deal of attention from the research community, especially for high dimensional and large data applications. This paper provides an up-to-date survey on the recent developments of ELM and its applications in high dimensional and large data. Comprehensive reviews on image processing, video processing, medical signal processing, and other popular large data applications with ELM are presented in the paper.
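
    The core ELM recipe summarized above (random, fixed input weights; output weights from a single linear solve) fits in a few lines. A minimal sketch in NumPy, with illustrative function names and a sigmoid hidden layer as one common choice:

```python
import numpy as np

def elm_train(X, T, n_hidden=100, seed=0):
    """Minimal single-hidden-layer ELM: random input weights and biases stay fixed,
    output weights come from a least-squares solve."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights, never retrained
    b = rng.standard_normal(n_hidden)                 # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # hidden-layer activations (sigmoid)
    beta = np.linalg.lstsq(H, T, rcond=None)[0]       # linear solve for output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```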

  4. Coupled large-eddy simulation and morphodynamics of a large-scale river under extreme flood conditions

    Science.gov (United States)

    Khosronejad, Ali; Sotiropoulos, Fotis; Stony Brook University Team

    2016-11-01

    We present coupled flow and morphodynamic simulations of extreme flooding in a 3 km long and 300 m wide reach of the Mississippi River in Minnesota, which includes three islands and hydraulic structures. We employ the large-eddy simulation (LES) and bed-morphodynamic modules of the VFS-Geophysics model to investigate the flow and bed evolution of the river during a 500-year flood. The coupling of the two modules is carried out via a fluid-structure interaction approach, using nested domains to enhance the resolution of bridge scour predictions. The geometrical data of the river, islands and structures are obtained from LiDAR, sub-aqueous sonar and in-situ surveying to construct a digital map of the river bathymetry. Our simulation results for the bed evolution of the river reveal complex sediment dynamics near the hydraulic structures. The numerically captured scour depth near some of the structures reaches a maximum of about 10 m. The data-driven simulation strategy we present in this work exemplifies a practical simulation-based engineering approach to investigating the resilience of infrastructure to extreme flood events in intricate field-scale riverine systems. This work was funded by a grant from the Minnesota Dept. of Transportation.

  5. Large-scale and spatio-temporal extreme rain events over India: a hydrometeorological study

    Science.gov (United States)

    Ranade, Ashwini; Singh, Nityanand

    2014-02-01

    Frequency, intensity, areal extent (AE) and duration of rain spells during summer monsoon exhibit large intra-seasonal and inter-annual variations. Important features of the monsoon period large-scale wet spells over India have been documented. A main monsoon wet spell (MMWS) occurs over the country from 18 June to 16 September, during which, 26.5 % of the area receives rainfall 26.3 mm/day. Detailed characteristics of the MMWS period large-scale extreme rain events (EREs) and spatio-temporal EREs (ST-EREs), each concerning rainfall intensity (RI), AE and rainwater (RW), for 1 to 25 days have been studied using 1° gridded daily rainfall (1951-2007). In EREs, `same area' (grids) is continuously wet, whereas in ST-EREs, `any area' on the mean under wet condition for specified durations is considered. For the different extremes, second-degree polynomial gave excellent fit to increase in values from distribution of annual maximum RI and RW series with increase in duration. Fluctuations of RI, AE, RW and date of occurrence (or start) of the EREs and the ST-EREs did not show any significant trend. However, fluctuations of 1° latitude-longitude grid annual and spatial maximum rainfall showed highly significant increasing trend for 1 to 5 days, and unprecedented rains on 26-27 July 2005 over Mumbai could be a realization of this trend. The Asia-India monsoon intensity significantly influences the MMWS RW.

  6. Strong Gravitational Lensing by the Large R-Charged Non-Extremal Black Hole

    CERN Document Server

    Naji, J

    2016-01-01

    In this paper, the gravitational lensing scenario due to the R-charged black hole of five-dimensional supergravity is investigated. We study the effective potential of photons traveling near the R-charged black hole and find some stable orbits for the photons. We also find that the effect of the black hole charges is to increase the effective potential. We show that photons do not cross the horizon of a very large R-charged black hole. Using a numerical study, we find that the black hole charges and the non-extremality parameter decrease the value of the deflection angle.

  7. Operational flood management under large-scale extreme conditions, using the example of the Middle Elbe

    Directory of Open Access Journals (Sweden)

    A. Kron

    2010-06-01

    Full Text Available In addition to precautionary or technical flood protection measures, short-term strategies of operational management, i.e. the initiation and co-ordination of preventive measures during and/or before a flood event, are crucial for the reduction of flood damage. This applies especially to extreme flood events. These events are rare, but may cause a protection measure to be overtopped or even to fail and be destroyed. In such extreme cases, reliable decisions must be made and emergency measures need to be carried out to prevent even larger damage from occurring.

    Based on improved methods for meteorological and hydrological modelling, a range of (physically based) extreme flood scenarios can be derived from historical events by modification of air temperature and humidity, shifting of weather fields and recombination of flood-relevant event characteristics. By coupling the large-scale models with hydraulic and geotechnical models, the whole flood-process chain can be analysed right down to the local scale. With the developed GIS-based tools for hydraulic modelling, FlowGIS, and the Dike-Information-System (IS-dikes), it is possible to quantify the hazard shortly before or even during a flood event, so the decision makers can evaluate possible options for action in operational mode.

  8. Deficits in Approximate Number System Acuity and Mathematical Abilities in 6.5-Year-Old Children Born Extremely Preterm

    Directory of Open Access Journals (Sweden)

    Melissa E. Libertus

    2017-07-01

    Full Text Available Preterm children are at increased risk for poor academic achievement, especially in math. In the present study, we examined whether preterm children differ from term-born children in their intuitive sense of number that relies on an unlearned, approximate number system (ANS) and whether there is a link between preterm children's ANS acuity and their math abilities. To this end, 6.5-year-old extremely preterm (i.e., <27 weeks gestation; n = 82) and term-born children (n = 89) completed a non-symbolic number comparison (ANS acuity) task and a standardized math test. We found that extremely preterm children had significantly lower ANS acuity than term-born children and that these differences could not be fully explained by differences in verbal IQ, perceptual reasoning skills, working memory, or attention. Differences in ANS acuity persisted even when demands on visuo-spatial skills and attention were reduced in the ANS task. Finally, we found that ANS acuity and math ability are linked in extremely preterm children, similar to previous results from term-born children. These results suggest that deficits in the ANS may be at least partly responsible for the deficits in math abilities often observed in extremely preterm children.

  9. GSMT Education: Teaching about Adaptive Optics and Site Selection Using Extremely Large Telescopes

    Science.gov (United States)

    Sparks, R. T.; Pompea, S. M.

    2010-08-01

    Giant Segmented Mirror Telescopes (GSMT) represents the next generation of extremely large telescopes (ELT). Currently there are three active ELT projects, all established as international partnerships to build telescopes of greater than 20 meters aperture. Two of these have major participation by U.S. institutions: the Giant Magellan Telescope and the Thirty Meter Telescope. The ESO-ELT is under development by the European Southern Observatory and other European institutions. We have developed educational activities to accompany the design phase of these projects. The current activities focus on challenges faced in the design and site selection of a large telescope. The first module is on site selection. This online module is based on the successful Astronomy Village program model. Students evaluate several potential sites to decide where to build the GSMT. They must consider factors such as weather, light pollution, seeing, logistics, and geography. The second project has developed adaptive optics teaching units suitable for high school.

  10. Alternative design for extremely large telescopes and options to use the VATT for ELT design demonstration

    Science.gov (United States)

    Ackermann, Mark R.; McGraw, John T.; Zimmer, Peter C.

    2013-12-01

    A variety of optical designs for extremely large telescopes (ELTs) can be found throughout the technical literature. Most feature very fast primary mirrors of either conic or spherical figure. For those designs with conic primary mirrors, many of the optical approaches tend to be derivatives of either the aplanatic Cassegrain or Gregorian systems. The Cassegrain approach is more common as it results in a shorter optical system, but it requires a large convex aspheric secondary mirror, which is extremely difficult and expensive to test. The Gregorian approach is physically longer and suffers from greater field curvature. In some design variations, additional mirrors are added to reimage and possibly flatten a Cassegrain focus. An interesting alternative ELT design uses a small Cassegrain system to image the collimated output of a Gregorian-Mersenne concentrator. Another alternative approach, currently in favor for use on the European ELT, uses three powered mirrors and two flat mirrors to reimage a Cassegrain focus out the side similar to a Nasmyth system. A preliminary examination suggests that a small, fast primary mirror, such as that used on the VATT, might be used for a subscale prototype of current ELT optical design options.

  11. Development of an Evaluation Methodology for Loss of Large Area induced from extreme events

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sok Chul; Park, Jong Seuk; Kim, Byung Soon; Jang, Dong Ju; Lee, Seung Woo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The USNRC has announced several regulatory requirements and guidance documents regarding the event of loss of large area, including 10CFR 50.54(hh), Regulatory Guide 1.214 and SRP 19.4. In Korea, loss of large area has been taken into account only to a limited extent for newly constructed NPPs, on a voluntary basis. In general, it is hardly possible to find available information on methodology and key assumptions for the assessment of LOLA due to the 'need-to-know' approach. An urgent need exists for countries to develop their own regulatory requirements, guidance and evaluation methodology, taking into consideration their geographical, nuclear safety and security environments. Currently, Korea Hydro and Nuclear Power Company (KHNP) has developed an Extended Damage Mitigation Guideline (EDMG) for APR1400 under contract with a foreign consulting company. The submittal guidance NEI 06-12 related to B.5.b Phase 2 and 3 focused on unit-wise mitigation strategies instead of site-level mitigation or response strategies. The Phase 1 mitigating strategy and guideline for LOLA (Loss of Large Area) places emphasis on site-level arrangements, including cooperative networking with outside organizations and an agile command and control system. The Korea Institute of Nuclear Safety has been carrying out a pilot in-house research project since 2014 to develop the methodology and guideline for the evaluation of LOLA. This paper summarizes the major results and outcomes of the aforementioned research project. After the Fukushima Dai-Ichi accident, awareness of the need to counter the event of loss of large area induced by extreme man-made hazards or extreme beyond-design-basis external events has increased. An urgent need exists to develop regulatory guidance for coping with this undesirable situation, which has been left out of consideration in the existing nuclear safety regulatory framework owing to its expected rarity of occurrence.

  12. Impacts of forced and unforced climate variability on extreme floods using a large climate ensemble

    Science.gov (United States)

    Martel, Jean-Luc; Brissette, François; Chen, Jie

    2016-04-01

    Frequency analysis has been widely used for the inference of flood magnitude and rainfall intensity required in engineering design. However, this inference is based on the concept of stationarity. How accurate is it when taking into account climate variability (i.e. both internal- and externally-forced variabilities)? Even in the absence of human-induced climate change, the short temporal horizon of the historical records renders this task extremely difficult to accomplish. To overcome this situation, large ensembles of simulations from a single climate model can be used to assess the impact of climate variability on precipitation and streamflow extremes. Thus, the objective of this project is to determine the reliability of return period estimates using the CanESM2 large ensemble. The spring flood annual maxima metric over snowmelt-dominated watersheds was selected to take into account the limits of global circulation models to properly simulate convective precipitation. The GR4J hydrological model coupled with the CemaNeige snow model was selected and calibrated using gridded observation datasets on snowmelt-dominated watersheds in Quebec, Canada. Using the hydrological model, streamflows were simulated using bias corrected precipitation and temperature data from the 50 members of CanESM2. Flood frequency analyses on the spring flood annual maxima were then computed using the Gumbel distribution with a 90% confidence interval. The 20-year return period estimates were then compared to assess the impact of natural climate variability over the 1971-2000 return period. To assess the impact of global warming, this methodology was then repeated for three time slices: reference period (1971-2000), near future (2036-2065) and far future (2071-2100). Over the reference period results indicate that the relative error between the return period estimates of two members can be up to 25%. Regarding the near future and far future periods, natural climate variability of extreme
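
    For context, the return-period estimates discussed above come from fitting an extreme-value distribution to annual maxima. A minimal sketch of a Gumbel fit and a 20-year return level using SciPy is given below; the synthetic data and function name are illustrative, and the study's 90% confidence intervals would additionally require, e.g., bootstrap resampling over ensemble members.

```python
import numpy as np
from scipy.stats import gumbel_r

def return_level(annual_maxima, T=20.0):
    """Fit a Gumbel distribution to annual flood maxima and return the
    T-year return level (the flow exceeded on average once every T years)."""
    loc, scale = gumbel_r.fit(annual_maxima)
    return gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)

# Example with synthetic spring-flood maxima (m^3/s) standing in for one ensemble member.
maxima = gumbel_r.rvs(loc=800, scale=150, size=30, random_state=1)
q20 = return_level(maxima, T=20)
print(f"20-year return level: {q20:.0f} m^3/s")
```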

  13. Large Scale Influences on Drought and Extreme Precipitation Events in the United States

    Science.gov (United States)

    Collow, A.; Bosilovich, M. G.; Koster, R. D.; Eichmann, A.

    2015-12-01

    Observations indicate that extreme weather events are increasing and it is likely that this trend will continue through the 21st century. However, there is uncertainty and disagreement in recent literature regarding the mechanisms by which extreme temperature and precipitation events are increasing, including the suggestion that enhanced Arctic warming has resulted in an increase in blocking events and a more meridional flow. A steady gradual increase in heavy precipitation events has been observed in the Midwestern and Northeastern United States, while the Southwestern United States, particularly California, has experienced suppressed precipitation and an increase in consecutive dry days over the past few years. The frequency, intensity, and duration of heavy precipitation events in the Midwestern United States and Northeastern United States, as well as drought in the Southwestern United States are examined using the Modern Era Retrospective Analysis for Research and Applications Version-2 (MERRA-2). Indices developed by the Expert Team on Climate Change Detection and Indices representing drought and heavy precipitation events have been calculated using the MERRA-2 dataset for the period of 1980 through 2014. Trends in these indices are analyzed and the indices are compared to large scale circulations and climate modes using a composite and statistical linkages approach. Statistically significant correlations are present in the summer months between heavy precipitation events and meridional flow despite the lack of enhanced Arctic warming, contradicting the suggested mechanisms. Weaker, though still significant, correlations are observed in the winter months when the Arctic is warming more rapidly than the Midlatitudes.

  14. High Precision Astrometry with MICADO at the European Extremely Large Telescope

    CERN Document Server

    Trippe, S; Eisenhauer, F; Förster-Schreiber, N M; Fritz, T K; Genzel, R

    2009-01-01

    In this article we identify and discuss various statistical and systematic effects influencing the astrometric accuracy achievable with MICADO, the near-infrared imaging camera proposed for the 42-metre European Extremely Large Telescope (E-ELT). These effects are instrumental (e.g. geometric distortion), atmospheric (e.g. chromatic differential refraction), and astronomical (reference source selection). We find that there are several phenomena having impact on ~100 micro-arcsec scales, meaning they can be substantially larger than the theoretical statistical astrometric accuracy of an optical/NIR 42m-telescope. Depending on type, these effects need to be controlled via dedicated instrumental design properties or via dedicated calibration procedures. We conclude that if this is done properly, astrometric accuracies of 40 micro-arcsec or better - with 40 micro-arcsec/year in proper motions corresponding to ~20 km/s at 100 kpc distance - can be achieved in one epoch of actual observations

  15. Enhanced azimuthal rotation of the large-scale flow through stochastic cessations in turbulent rotating convection with large Rossby numbers

    CERN Document Server

    Zhong, Jin-Qiang; Wang, Xue-ying

    2016-01-01

    We present measurements of the azimuthal orientation θ(t) and thermal amplitude δ(t) of the large-scale circulation (LSC) of turbulent rotating convection within an unprecedentedly large Rossby-number range (up to 170). We identify the mechanism through which the mean retrograde rotation speed can be enhanced by stochastic cessations in the presence of a weak Coriolis force, and show that a low-dimensional stochastic model predicts the observed large-scale flow dynamics and interprets its retrograde rotation.

  16. Deficits in Approximate Number System Acuity and Mathematical Abilities in 6.5-Year-Old Children Born Extremely Preterm.

    Science.gov (United States)

    Libertus, Melissa E; Forsman, Lea; Adén, Ulrika; Hellgren, Kerstin

    2017-01-01

    Preterm children are at increased risk for poor academic achievement, especially in math. In the present study, we examined whether preterm children differ from term-born children in their intuitive sense of number that relies on an unlearned, approximate number system (ANS) and whether there is a link between preterm children's ANS acuity and their math abilities. To this end, 6.5-year-old extremely preterm (i.e., <27 weeks gestation; n = 82) and term-born children (n = 89) completed a non-symbolic number comparison (ANS acuity) task and a standardized math test. We found that extremely preterm children had significantly lower ANS acuity than term-born children and that these differences could not be fully explained by differences in verbal IQ, perceptual reasoning skills, working memory, or attention. Differences in ANS acuity persisted even when demands on visuo-spatial skills and attention were reduced in the ANS task. Finally, we found that ANS acuity and math ability are linked in extremely preterm children, similar to previous results from term-born children. These results suggest that deficits in the ANS may be at least partly responsible for the deficits in math abilities often observed in extremely preterm children.

  17. Trends in the number of extreme hot SST days along the Canary Upwelling System due to the influence of upwelling

    Directory of Open Access Journals (Sweden)

    Xurxo Costoya

    2014-07-01

    Full Text Available Trends in the number of extreme hot days (days with SST anomalies higher than the 95th percentile) were analyzed along the Canary Upwelling Ecosystem (CUE) over the period 1982-2012 by means of SST data retrieved from the NOAA OI 1/4 Degree dataset. The analysis focuses on the Atlantic Iberian sector and the Moroccan sub-region, where upwelling is seasonal (spring and summer) and permanent, respectively. Trends were analyzed both near the coast and in the adjacent ocean, where the increase in the number of extreme hot days is higher. Changes are clear at the annual scale, with an increment of 9.8±0.3 (9.7±0.1) days dec-1 near the coast and 11.6±0.2 (13.5±0.1) days dec-1 in the ocean in the Atlantic Iberian sector (Moroccan sub-region). The differences between near-shore and ocean trends are especially patent for the months under intense upwelling conditions. During the upwelling season the highest differences in the excess of extreme hot days between coastal and ocean locations, Δn (# days dec-1), occur in those regions where the coastal upwelling increase is high. In fact, Δn and upwelling trends have been shown to be significantly correlated in both areas: R=0.88 (p<0.01) in the Atlantic Iberian sector and R=0.67 (p<0.01) in the Moroccan sub-region.
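
    The diagnostic used above (days exceeding the 95th percentile of SST anomalies, with a trend expressed in days per decade) can be sketched as follows; the array layout and function name are assumptions for illustration.

```python
import numpy as np

def extreme_hot_day_trend(years, sst_anom):
    """Count days per year above the 95th percentile of SST anomalies and
    return the counts plus the linear trend in days per decade."""
    thresh = np.percentile(sst_anom, 95)                      # climatological 95th percentile
    counts = np.array([(sst_anom[i] > thresh).sum() for i in range(len(years))])
    slope, intercept = np.polyfit(years, counts, 1)           # days per year
    return counts, 10.0 * slope                               # trend in days per decade

# sst_anom would be an array of shape (n_years, n_days) of daily anomalies at one grid point.
```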

  18. Experimental study of an optimised Pyramid wave-front sensor for Extremely Large Telescopes

    Science.gov (United States)

    Bond, Charlotte Z.; El Hadi, Kacem; Sauvage, Jean-François; Correia, Carlos; Fauvarque, Olivier; Rabaud, Didier; Lamb, Masen; Neichel, Benoit; Fusco, Thierry

    2016-07-01

    Over the last few years the Laboratoire d'Astrophysique de Marseille (LAM) has been heavily involved in R&D for adaptive optics systems dedicated to future large telescopes, particularly in preparation for the European Extremely Large Telescope (E-ELT). Within this framework an investigation into a Pyramid wave-front sensor is underway. The Pyramid sensor is at the cutting edge of high order, high precision wave-front sensing for ground based telescopes. Investigations have demonstrated the ability to achieve a greater sensitivity than the standard Shack-Hartmann wave-front sensor whilst the implementation of a Pyramid sensor on the Large Binocular Telescope (LBT) has provided compelling operational results.1, 2 The Pyramid now forms part of the baseline for several next generation Extremely Large Telescopes (ELTs). As such its behaviour under realistic operating conditions must be further understood in order to optimise performance. At LAM a detailed investigation into the performance of the Pyramid aims to fully characterise the behaviour of this wave-front sensor in terms of linearity, sensitivity and operation. We have implemented a Pyramid sensor using a high speed OCAM2 camera (with close to 0 readout noise and a frame rate of 1.5kHz) in order to study the performance of the Pyramid within a full closed loop adaptive optics system. This investigation involves tests on all fronts, from theoretical models and numerical simulations to experimental tests under controlled laboratory conditions, with an aim to fully understand the Pyramid sensor in both modulated and non-modulated configurations. We include results demonstrating the linearity of the Pyramid signals, compare measured interaction matrices with those derived in simulation and evaluate the performance in closed loop operation. The final goal is to provide an on sky comparison between the Pyramid and a Shack-Hartmann wave-front sensor, at Observatoire de la Côte d'Azur (ONERA-ODISSEE bench). Here we

  19. West Africa Extreme Rainfall Events and Large-Scale Ocean Surface and Atmospheric Conditions in the Tropical Atlantic

    Directory of Open Access Journals (Sweden)

    S. Ta

    2016-01-01

    Full Text Available Based on daily precipitation from the Global Precipitation Climatology Project (GPCP) data during April–October of the 1997–2014 period, the daily extreme rainfall trends and variability over West Africa are characterized using the 90th-percentile threshold at each grid point. The contribution of the extreme rainfall amount reaches ~50–90% in the northern region, while it is ~30–50% in the south. The yearly cumulated extreme rainfall amount indicates significant negative trends in the 6°N–12°N, 17°W–10°W and 4°N–7°N, 6°E–10°E domains, while the number of days exhibits nonsignificant trends over West Africa. The empirical orthogonal functions performed on the standardized anomalies show four variability modes: one covering all of West Africa with a focus on the Sahelian region, the eastern region including the south of Nigeria, the western part including Guinea, Sierra Leone, Liberia, and Guinea-Bissau, and finally a small region on the coast of Ghana and Togo. These four modes are influenced differently by the large-scale ocean surface and atmospheric conditions in the tropical Atlantic. The results are applicable in planning for the risks associated with these climate hazards, particularly in water resource management and civil defense.

  20. Performance comparison of image feature detectors utilizing a large number of scenes

    Science.gov (United States)

    Ferrarini, Bruno; Ehsan, Shoaib; Rehman, Naveed Ur; McDonald-Maier, Klaus D.

    2016-01-01

    The availability of a large number of local invariant feature detectors has rendered the task of evaluating them an important issue in vision research. However, the maximum number of scenes utilized for performance comparison has so far been relatively small. This paper presents an evaluation framework and results based on it utilizing a large number of scenes, providing insights into the performance of local feature detectors under varying JPEG compression ratio, blur, and uniform light changes.

  1. Extreme-ultraviolet collector mirror measurement using large reflectometer at NewSUBARU synchrotron facility

    Science.gov (United States)

    Iguchi, Haruki; Hashimoto, Hiraku; Kuki, Masaki; Harada, Tetsuo; Kinoshita, Hiroo; Watanabe, Takeo; Platonov, Yuriy Y.; Kriese, Michael D.; Rodriguez, Jim R.

    2016-06-01

    In extreme-ultraviolet (EUV) lithography, the development of high-power EUV sources is one of the critical issues. The EUV output power directly depends on the collector mirror performance. Furthermore, mirrors with large diameters are necessary to achieve high collecting performance and to provide sufficient distance from the radiation point of the source to mitigate heat and debris. Thus, collector-mirror development supported by an accurate reflectometer is important. We have developed a large reflectometer at the BL-10 beamline of the NewSUBARU synchrotron facility that can be used for mirrors with diameters, thicknesses, and weights of up to 800 mm, 250 mm, and 50 kg, respectively. This reflectometer can measure reflectivity with fully s-polarized EUV light. In this study, we measured the reflectance of a 412-mm-diameter EUV collector mirror using a maximum incident angle of 36°. We obtained the peak reflectance, center wavelength, and reflection bandwidth, and compared our results with those from the Physikalisch-Technische Bundesanstalt.

  2. Extremely large-scale simulation of a Kardar-Parisi-Zhang model using graphics cards.

    Science.gov (United States)

    Kelling, Jeffrey; Ódor, Géza

    2011-12-01

    The octahedron model introduced recently has been implemented on graphics cards, which permits extremely large-scale simulations via binary lattice gases and bit-coded algorithms. We confirm scaling behavior belonging to the two-dimensional Kardar-Parisi-Zhang universality class and find a surface growth exponent β = 0.2415(15) on 2^17 × 2^17 systems, ruling out β = 1/4 suggested by field theory. The maximum speedup with respect to a single CPU is 240. The steady state has been analyzed by finite-size scaling and a roughness exponent α = 0.393(4) is found. Correction-to-scaling exponents are computed and the power-spectrum density of the steady state is determined. We calculate the universal scaling functions and cumulants and show that the limit distribution can be obtained at the sizes considered. We provide numerical fitting for the small- and large-tail behavior of the steady-state scaling function of the interface width.
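
    For orientation, the growth exponent quoted above is the slope of the interface width against time on log-log axes in the growth regime, W(t) ∝ t^β. The following is a minimal sketch using synthetic data (illustrative only, not the GPU code of the paper):

```python
import numpy as np

# Illustrative only: synthetic interface-width data obeying W(t) ~ t^beta with beta = 0.2415.
rng = np.random.default_rng(1)
t = np.logspace(1, 5, 40)                               # times in the growth regime
W = t**0.2415 * np.exp(rng.normal(0.0, 0.01, t.size))   # small multiplicative noise

# The growth exponent is the slope of log W versus log t.
beta, log_amplitude = np.polyfit(np.log(t), np.log(W), 1)
print(f"estimated beta = {beta:.4f}")                    # should recover ~0.2415
```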

  3. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    Science.gov (United States)

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend that is emerging in many real world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that the increasing dimensionality poses impeding effects on the performance of classifiers, termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on the Big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of the Big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide and conquer approximation scheme is introduced to maintain computational tractability on high volume data. The resultant algorithm is labeled here as Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess the FSVD-H-ELM against other state-of-the-art algorithms. The results obtained demonstrate the superior generalization performance and efficiency of the FSVD-H-ELM.
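
    The general idea (SVD-derived hidden weights from random subsets, followed by a standard ELM least-squares output layer) can be sketched as follows. This is a schematic illustration only; the subset sizes, tanh activation, and solver are assumptions, not the authors' implementation.

```python
import numpy as np

def fsvd_h_elm_fit(X, Y, n_hidden=100, n_subsets=10, seed=0):
    """Sketch only: hidden weights from SVDs of random data subsets, then an
    ELM-style least-squares output layer. Sizes and activation are assumptions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    per_subset = max(1, n_hidden // n_subsets)
    blocks = []
    for _ in range(n_subsets):
        idx = rng.choice(n, size=min(n, 10 * per_subset), replace=False)
        _, _, Vt = np.linalg.svd(X[idx], full_matrices=False)   # principal directions of the subset
        blocks.append(Vt[:per_subset].T)                        # d x per_subset
    W = np.hstack(blocks)                                       # input-to-hidden weights
    H = np.tanh(X @ W)                                          # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)                # ELM output weights
    return W, beta

def fsvd_h_elm_predict(X, W, beta):
    return np.tanh(X @ W) @ beta
```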

  4. Entanglement and Extreme Spin Squeezing for a Fluctuating Number of Indistinguishable Particles

    CERN Document Server

    Hyllus, Philipp; Smerzi, Augusto; Toth, Geza

    2012-01-01

    We extend the criteria for $k$-particle entanglement from the spin squeezing parameter presented in [A. S. Sørensen and K. Mølmer, Phys. Rev. Lett. 86, 4431 (2001)] to systems with a fluctuating number of particles. We also discuss how other spin squeezing inequalities can be generalized to this situation. Further, we give an operational meaning to the bounds for cases where the individual particles cannot be addressed. As a by-product, this allows us to show that in spin squeezing experiments with cold gases the particles are typically distinguishable in practice. Our results justify the application of the Sørensen-Mølmer bounds in recent experiments on spin squeezing in Bose-Einstein condensates.

  5. Forcings and Feedbacks on Convection in the 2010 Pakistan Flood: Modeling Extreme Precipitation with Interactive Large-Scale Ascent

    CERN Document Server

    Nie, Ji; Sobel, Adam H

    2016-01-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing the extreme precipitation event can be difficult to disentangle. Here, we examine the large-scale forcings and convective heating feedback in the precipitation events that caused the 2010 Pakistan flood within the Column Quasi-Geostrophic framework. A cloud-resolving model (CRM) is forced with the large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation with input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating to large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic li...

  6. Relationship between climate extremes in Romania and their connection to large-scale air circulation

    Science.gov (United States)

    Barbu, Nicu; Ştefan, Sabina

    2015-04-01

    The aim of this paper is to investigate the connection between climate extremes (temperature and precipitation) in Romania and large-scale air circulation. Daily observational data of maximum air temperature and precipitation amount for the period 1961-2010 were used to compute two seasonal indices quantifying the frequency of extremes: the frequency of very warm days (FTmax90; daily maximum temperature ≥ 90th percentile) and the frequency of very wet days (FPp90; daily precipitation amount ≥ 90th percentile). Seasonal frequencies of circulation types were calculated from daily circulation types determined by using two objective catalogues (GWT - GrossWetter-Typen and WLK - WetterLagenKlassifikation) from the COST733 Action. Daily reanalysis data sets (sea level pressure, geopotential height at 925 and 500 hPa, u and v components of the wind vector at 700 hPa, and precipitable water content for the entire atmospheric column) produced by NCEP/NCAR, with 2.5°/2.5° lat/lon spatial resolution, were used to determine the circulation types. In order to select the optimal domain size related to FTmax90 and FPp90, the explained variance (EV) was used. The EV relates the variance among circulation types to the total variance of the variable under consideration, and thus quantifies the discriminatory power of a classification. The relationships between climate extremes in Romania and large-scale air circulation were investigated using a multiple linear regression model (MLRM), with FTmax90 and FPp90 as predictands and the circulation types as predictors. To select independent predictors for the MLRM, collinearity and multicollinearity analyses were performed. The study period is divided into two parts: 1961-2000 is used to train the MLRM and 2001-2010 to validate it. The analytical relationship obtained by using the MLRM can be used for future projection
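
    A minimal sketch of the regression step described above, assuming seasonal circulation-type frequencies as the predictor matrix and a seasonal extreme index (e.g. FTmax90) as the predictand. Variable names, the ordinary least-squares solver, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_mlrm(type_freq_train, index_train):
    """Ordinary least squares: index ~ intercept + type_freq @ coef."""
    X = np.column_stack([np.ones(len(type_freq_train)), type_freq_train])
    coef, *_ = np.linalg.lstsq(X, index_train, rcond=None)
    return coef

def predict_mlrm(coef, type_freq):
    X = np.column_stack([np.ones(len(type_freq)), type_freq])
    return X @ coef

# Illustrative use: rows = seasons in 1961-2000 (training) and 2001-2010 (validation),
# columns = frequencies of the retained circulation types.
rng = np.random.default_rng(0)
freq_train, freq_valid = rng.random((160, 8)), rng.random((40, 8))
ftmax90_train = freq_train @ rng.normal(size=8) + rng.normal(0, 0.1, 160)
coef = fit_mlrm(freq_train, ftmax90_train)
ftmax90_pred = predict_mlrm(coef, freq_valid)
```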

  7. The 3D MHD code GOEMHD3 for large-Reynolds-number astrophysical plasmas

    CERN Document Server

    Skála, J; Büchner, J; Rampp, M

    2014-01-01

    The numerical simulation of turbulence and flows in almost ideal, large-Reynolds-number astrophysical plasmas motivates the implementation of almost conservative MHD computer codes. Such codes should calculate efficiently, use highly parallelized schemes that scale well to large numbers of CPU cores, allow a high grid resolution over large simulation domains, be easily adaptable to new computer architectures as well as to new initial and boundary conditions, and allow modular extensions. The new massively parallel simulation code GOEMHD3 enables efficient and fast simulations of almost ideal, large-Reynolds-number astrophysical plasma flows, well resolved and on huge grids covering large domains. Its abilities are validated by major tests of ideal and weakly dissipative plasma phenomena. The high-resolution ($2048^3$ grid points) simulation of a large part of the solar corona above an observed active region proved the excellent parallel scalability of the code using more than 30,000 processor cores...

  8. APERTURE: a precise extremely large reflective telescope using re-configurable elements

    Science.gov (United States)

    Ulmer, M. P.; Coverstone, V. L.; Cao, J.; Chung, Y.-W.; Corbineau, M.-C.; Case, A.; Murchison, B.; Lorenz, C.; Luo, G.; Pekosh, J.; Sepulveda, J.; Schneider, A.; Yan, X.; Ye, S.

    2016-07-01

    One of the pressing needs for the UV-Vis is a design that allows even larger mirrors than the JWST primary at an affordable cost. We report here the results of a NASA Innovative Advanced Concepts phase 1 study. Our project is called A Precise Extremely large Reflective Telescope Using Reconfigurable Elements (APERTURE). The idea is to deploy a continuous membrane-like mirror. The mirror figure will be corrected after deployment to bring it to within lambda/20 (or better) of the prescribed mirror shape. The basic concept is not new. What is new is to use a different approach from the classical piezoelectric-patch technology. Instead, our concept is based on a contiguous coating of a so-called magnetic smart material (MSM). After deployment, a magnetic write head will move over the non-reflecting side of the mirror and will generate a magnetic field that produces a stress in the MSM, correcting the mirror's deviations from the prescribed shape.

  9. Modified and double-clad large mode-area leakage channel fibers for extreme temperature conditions

    Science.gov (United States)

    Thavasi Raja, G.; Varshney, Shailendra K.

    2015-03-01

    Recently large-mode-area hybrid leakage channel fibers (HLCFs) were reported to overcome the limitation on mode area with single-mode (SM) operation for the practical bending radius of 7.5 cm at the preferred wavelength of 1064 nm. In this paper, we present the effects of a thermally induced refractive index change on the mode area of bend-compensated extremely LMA modified HLCFs (M-HLCFs) and double-clad M-HLCFs. A full-vectorial finite-element-method-based modal solver is used to obtain the modal characteristics of M-HLCFs under various heat load conditions. Numerical simulations reveal that the effective mode area of M-HLCFs is ~1433 μm² at room temperature, which marginally decreases to ~1387 μm² while SM operation is maintained when the temperature distribution rises to ~125 °C over the fiber geometry during high-power operation. We have also investigated a double-clad M-HLCF design exhibiting a mode area > ~1000 μm² for all heat load density variations up to a maximum of 12 × 10⁹ W m⁻³, corresponding to a 250 °C temperature in the center of the fiber core region.

  10. Resistivity plateau and extremely large magnetoresistance in NbAs2 and TaAs2

    Science.gov (United States)

    Wang, Yi-Yan; Yu, Qiao-He; Guo, Peng-Jie; Liu, Kai; Xia, Tian-Long

    2016-07-01

    In topological insulators (TIs), metallic surface conductance saturates the insulating bulk resistance with decreasing temperature, resulting in a resistivity plateau at low temperatures as a transport signature originating from metallic surface modes protected by time reversal symmetry (TRS). Such a characteristic has been found in several materials including Bi2Te2Se, SmB6, etc. Recently, similar behavior has been observed in the metallic compound LaSb, accompanied by an extremely large magnetoresistance (XMR). Shubnikov-de Haas (SdH) oscillations at low temperatures further confirm the metallic behavior of the plateau region under magnetic fields. LaSb [Tafti et al., Nat. Phys. 12, 272 (2015), 10.1038/nphys3581] has been proposed by the authors as a possible topological semimetal (TSM), although negative magnetoresistance has so far not been observed. Here, high quality single crystals of NbAs2/TaAs2 with inversion symmetry have been grown, and the resistivity under magnetic field is systematically investigated. Both of them exhibit metallic behavior under zero magnetic field, and a metal-to-insulator transition occurs when a nonzero magnetic field is applied, resulting in XMR (1.0 × 10^5% for NbAs2 and 7.3 × 10^5% for TaAs2 at 2.5 K and 14 T). With temperature decreased, a resistivity plateau emerges after the insulator-like regime, and SdH oscillations have also been observed in NbAs2 and TaAs2.
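
    For reference, the percentages quoted above follow the usual magnetoresistance convention (a standard definition, not specific to this paper):

```latex
\mathrm{MR}(B) \;=\; \frac{\rho(B)-\rho(0)}{\rho(0)} \times 100\%,
\qquad \mathrm{MR} = 1.0\times10^{5}\% \;\Longrightarrow\; \rho(14\,\mathrm{T}) \approx 10^{3}\,\rho(0).
```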

  11. European Extremely Large Telescope Site Characterization II: High angular resolution parameters

    CERN Document Server

    Ramió, Héctor Vázquez; Muñoz-Tuñón, Casiana; Sarazin, Marc; Varela, Antonia M; Trinquet, Hervé; Delgado, José Miguel; Fuensalida, Jesús J; Reyes, Marcos; Benhida, Abdelmajid; Benkhaldoun, Zouhair; Lambas, Diego García; Hach, Youssef; Lazrek, M; Lombardi, Gianluca; Navarrete, Julio; Recabarren, Pablo; Renzi, Victor; Sabil, Mohammed; Vrech, Rubén

    2012-01-01

    This is the second article of a series devoted to European Extremely Large Telescope (E-ELT) site characterization. In this article we present the main properties of the parameters involved in high angular resolution observations from the data collected in the site testing campaign of the E-ELT during the Design Study (DS) phase. Observations were made in 2008 and 2009, in the four sites selected to shelter the future E-ELT (characterized under the ELT-DS contract): Aklim mountain in Morocco, Observatorio del Roque de los Muchachos (ORM) in Spain, Macón range in Argentina, and Cerro Ventarrones in Chile. The same techniques, instruments and acquisition procedures were taken on each site. A Multiple Aperture Scintillation Sensor (MASS) and a Differential Image Motion Monitor (DIMM) were installed at each site. Global statistics of the integrated seeing, the free atmosphere seeing, the boundary layer seeing and the isoplanatic angle were studied for each site, and the results are presented here. In order to e...

  12. European Extremely Large Telescope Site Characterization. II. High Angular Resolution Parameters

    Science.gov (United States)

    Vázquez Ramió, Héctor; Vernin, Jean; Muñoz-Tuñón, Casiana; Sarazin, Marc; Varela, Antonia M.; Trinquet, Hervé; Delgado, José Miguel; Fuensalida, Jesús J.; Reyes, Marcos; Benhida, Abdelmajid; Benkhaldoun, Zouhair; García Lambas, Diego; Hach, Youssef; Lazrek, M.; Lombardi, Gianluca; Navarrete, Julio; Recabarren, Pablo; Renzi, Victor; Sabil, Mohammed; Vrech, Rubén

    2012-08-01

    This is the second article of a series devoted to European Extremely Large Telescope (E-ELT) site characterization. In this article we present the main properties of the parameters involved in high angular resolution observations from the data collected in the site testing campaign of the E-ELT during the design study (DS) phase. Observations were made in 2008 and 2009, in the four sites selected to shelter the future E-ELT (characterized under the ELT-DS contract): Aklim mountain in Morocco, Observatorio del Roque de los Muchachos (ORM) in Spain, Macón range in Argentina, and Cerro Ventarrones in Chile. The same techniques, instruments, and acquisition procedures were taken on each site. A multiple aperture scintillation sensor (MASS) and a differential image motion monitor (DIMM) were installed at each site. Global statistics of the integrated seeing, the free atmosphere seeing, the boundary layer seeing, and the isoplanatic angle were studied for each site, and the results are presented here. In order to estimate other important parameters, such as the coherence time of the wavefront and the overall parameter “coherence étendue,” additional information of vertical profiles of the wind speed was needed. Data were retrieved from the National Oceanic and Atmospheric Administration (NOAA) archive. Ground wind speed was measured by automatic weather stations (AWS). More aspects of the turbulence parameters, such as their seasonal trend, their nightly evolution, and their temporal stability, were also obtained and analyzed.

  13. Tomographic control for wide field AO systems on extremely large telescopes

    Science.gov (United States)

    Petit, C.; Conan, J.-M.; Fusco, T.; Neichel, B.

    2010-07-01

    In this article we investigate tomographic control using both Laser and Natural Guide Stars (LGS and NGS) in the particular framework of the European Extremely Large Telescope (E-ELT) Wide Field Adaptive Optics (WFAO) modules design. A similar global control strategy has indeed been derived for both the Laser Tomographic Adaptive Optics (LTAO) and Multi-Conjugate Adaptive Optics (MCAO) modules of the E-ELT, due to similar constraints. In both cases this control strategy leads to a split control, in which low order modes are measured with the NGS and high order modes with the LGS. We investigate here this split tomographic control and compare it to an optimal coupled solution. To support our analysis, a dedicated simulation code has been developed. Indeed, due to the huge complexity of the E-ELT, fast simulation tools must be considered to explore the tomographic issues quickly. We describe the control strategy which has led us to consider split tomographic control. First results on tomography for E-ELT WFAO systems are then presented and discussed.

  14. Double stage Lyot coronagraph with the apodized reticulated stop for extremely large telescope

    CERN Document Server

    Yaitskova, N

    2005-01-01

    One of the science drivers for the extremely large telescope (ELT) is imaging and spectroscopy of exo-solar planets located as close as 20 mas to their parent star. This application requires a well thought-out design of the high contrast imaging instrumentation. Several working coronagraphic concepts have already been developed for monolithic telescopes with diameters up to 8 m. Nevertheless, the conclusions drawn about the performance of these systems cannot be applied directly to telescopes of 30-100 m diameter. The existing schemes need to be reconsidered taking into account the specific characteristics of a segmented surface. We start this work with the classical system, the Lyot coronagraph. We show that while the increase in telescope diameter is an advantage for high contrast science, the segmentation sets a limit on the performance of the coronagraph. Diffraction from intersegment gaps sets a floor to the achievable extinction of the starlight. Masking out the bright segment g...

  15. Towards Precision Photometry with Extremely Large Telescopes: the Double Subgiant Branch of NGC 1851

    CERN Document Server

    Turri, P; Stetson, P B; Fiorentino, G; Andersen, D R; Véran, J -P; Bono, G

    2015-01-01

    The Extremely Large Telescopes currently under construction have a collecting area that is an order of magnitude larger than that of the present largest optical telescopes. For seeing-limited observations the performance will scale as the collecting area but, with the successful use of adaptive optics, for many applications it will scale as $D^4$ (where $D$ is the diameter of the primary mirror). Central to the success of the ELTs, therefore, is the successful use of multi-conjugate adaptive optics (MCAO), which applies a high degree of correction over a field of view larger than the few arcseconds that limit classical adaptive optics systems. In this letter, we report on the analysis of crowded-field images taken of the central region of the Galactic globular cluster NGC 1851 in the $K_s$ band using GeMS at the Gemini South telescope, the only science-grade MCAO system in operation. We use this cluster as a benchmark to verify the ability to achieve precise near-infrared photometry by presenting the deepest $K_s$ photometry...

  16. MRI findings of the wrist in patients with multiple osteonecrosis in large joints of the extremities

    Energy Technology Data Exchange (ETDEWEB)

    Saitoh, Shinobu; Ebata, Tatsuki; Abe, Kazuhiro [Chiba Univ. (Japan). School of Medicine; Imai, Katsumi; Rokkaku, Tomoyuki

    1998-02-01

    We evaluated MRI findings of the wrist in patients who had multiple osteonecrosis in the large joints of their extremities (hips, knees, shoulders, and ankles) and compared these with the clinical symptoms and radiographic findings. Sixty wrists of 30 patients (3 males and 27 females) with multiple osteonecrosis were studied. Subjects ranged in age from 16 to 59 years. Their primary diseases were SLE in 24 patients, alcoholic osteonecrosis in two, Sjögren's syndrome in one, dermatomyositis in one, leukemia in one, and MCTD in one patient. Using MRI, we found osteonecrosis in seven wrists of four patients. Lesions were seen in six scaphoids of three patients, in two lunates of two patients, and in one capitate. We noted a reduced range of motion in three of the seven wrists with osteonecrosis. Two of the seven had wrist pain on motion, although three wrists were symptom free. Radiographically, an abnormality was recognized in two of the seven wrists. Generally, osteonecrosis of the lunate (Kienböck's disease) is more frequent than that of the scaphoid (Preiser's disease). However, in the present series, we found a higher osteonecrosis rate in the scaphoid than in the lunate using MRI. The discrepancy can be explained by the vascularity. In 1986, Gelberman reported that the scaphoid, the capitate, and 8% of lunates had either vessels entering only one surface or large areas of bone that were dependent on a single vessel. The present study is consistent with these anatomical features. In other words, the present results demonstrate that Kienböck's disease can be induced not only by a deficient blood supply but also by some additional factors. (author)

  17. Numerical and analytical approaches to an advection-diffusion problem at small Reynolds number and large Péclet number

    CERN Document Server

    Fuller, Nathaniel J

    2016-01-01

    Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we consider a two-dimensional advection-diffusion problem at small Reynolds number and large Péclet number. We discuss the problem of mass transport for a circular cell in a uniform far-field flow. We approach the problem numerically, and also analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the cell demonstrates quantitative agreement between the numerical and analytical approaches.

  18. A strong law of large numbers for harmonizable isotropic random fields

    Directory of Open Access Journals (Sweden)

    Randall J. Swift

    1997-01-01

    The class of harmonizable fields is a natural extension of the class of stationary fields. This paper considers a strong law of large numbers for the spherical average of a harmonizable isotropic random field.

  19. On the Strong Law of Large Numbers for Non-Independent B-Valued Random Variables

    Institute of Scientific and Technical Information of China (English)

    Gan Shi-xin

    2004-01-01

    This paper investigates some conditions which imply the strong laws of large numbers for Banach space valued random variable sequences. Some generalizations of the Marcinkiewicz-Zygmund theorem and the Hoffmann-Jørgensen and Pisier theorem are obtained.
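
    For orientation, the classical scalar i.i.d. form of the Marcinkiewicz-Zygmund strong law that such results generalize reads as follows (standard textbook statement, not taken from the paper itself):

```latex
% If X_1, X_2, \dots are i.i.d. with E|X_1|^p < \infty for some 0 < p < 2
% (and E X_1 = 0 when 1 \le p < 2), then
\frac{1}{n^{1/p}} \sum_{i=1}^{n} X_i \;\xrightarrow{\ \text{a.s.}\ }\; 0 .
```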

  20. The three-large-primes variant of the number field sieve

    OpenAIRE

    Cavallar, S.H.

    2002-01-01

    The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard in 1988. Since then several variants have been implemented with the objective of improving the siever, which is the most time consuming part of this method (but fortunately, also the easiest to parallelise). Pollard's original method allowed one large prime. After that the two-large-primes variant led to substantial improvements. In this paper we investigate ...
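
    To make the "large primes" idea concrete, the sketch below classifies a candidate relation value by whether it factors over a factor base of small primes, allowing an optional budget of larger primes. It is purely illustrative: real sievers never fully factor candidates (they use sieving and cheap cofactor tests), and the bounds and helper name here are hypothetical.

```python
from sympy import factorint  # full factorization used only for illustration

def classify_relation(value, smooth_bound, large_prime_bound, max_large_primes=3):
    """Hypothetical helper: accept `value` if all its prime factors are <= smooth_bound,
    except for at most `max_large_primes` primes in (smooth_bound, large_prime_bound]."""
    large_primes = []
    for p, e in factorint(abs(value)).items():
        if p <= smooth_bound:
            continue                      # part of the smooth factor-base portion
        if p <= large_prime_bound:
            large_primes.extend([p] * e)  # a "large prime" factor
        else:
            return None                   # factor too large: relation rejected
    return large_primes if len(large_primes) <= max_large_primes else None

# e.g. classify_relation(2**5 * 3 * 1009 * 1013, smooth_bound=100, large_prime_bound=10_000)
```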

  1. How Extremely Large Telescopes (ELTs) will Acquire the First Spectra of Rocky Habitable Planets

    Science.gov (United States)

    Guyon, Olivier; Martinache, F.

    2013-01-01

    ELTs will offer angular resolution around 10 mas in the near-IR and unprecedented sensitivity. While direct imaging of Earth-like exoplanets around Sun-like stars will stay out of reach of ELTs, habitable planets around nearby M-type main sequence stars can be directly imaged with a system optimized for small inner working angle, high contrast imaging. For about 300 nearby M dwarfs, the angular separation at maximum elongation is at or beyond 1 lambda/D in the near-IR for an ELT. The planet-to-star reflected light contrast is 1e-7 to 1e-8, similar to what the upcoming generation of Extreme-AO systems will achieve on 8-m telescopes, and the potential planets are sufficiently bright for near-IR spectroscopy. We show that this scientific goal is enabled by two major technologies: (1) newly developed high efficiency coronagraphs that are compatible with segmented/sparse ELT pupils. We describe the PIAACMC coronagraph as an example; it can deliver full starlight rejection, 100% throughput and sub-lambda/D IWA for the E-ELT, GMT and TMT pupils. (2) Wavefront sensing techniques making full use of spatial coherence across the pupil, thus offering several orders of magnitude gain over conventional systems. We conclude that large ground-based telescopes will be able to acquire the first high quality spectra of habitable planets orbiting M-type stars, while future space mission(s) will later target habitable planets around F-G-K type stars.

  2. Industrial Process Design for Manufacturing Inconel 718 Extremely Large Forged Rings

    Science.gov (United States)

    Ambielli, John F.

    2011-12-01

    Inconel 718 is a Ni-Fe-based superalloy that has been central to the gas turbine industry since its discovery in 1963. While much more difficult to process than carbon or stainless steels, among its superalloy peers Inconel 718 has relatively high forgeability and has been used to make discs, rings, shells, and structural components. A metal forming process design algorithm is presented to incorporate key criteria relevant to superalloy processing. This algorithm was applied to conceptual forging and heat treating of extremely large rings of Inconel 718, of diameter 1956 mm (77 in) and weight 3252 kg (7155 lb). A standard 3-stage thermomechanical processing (TMP) was used, where Stage 1 strain varied from 0.1190 to 0.2941, Stage 2 from 0.0208 to 0.0357, and Stage 3 from 0.0440 to 0.0940. This was followed by heat treatment consisting of a solution anneal (954°C/1750°F with a 4 hour hold), air cool, then a double aging (718°C/1325°F with an 8 hour hold; furnace cool to 621°C/1150°F at 56°C/100°F per hr; 18 hours total time for both aging steps). Preliminary mechanical testing was performed. Average yield strengths of 951 MPa/138 ksi (longitudinal) and 979 MPa/142 ksi (axial) were achieved. Tensile strengths were 1276 MPa/185 ksi (longitudinal) and 1255 MPa/182 ksi (axial). Elongations attained were 18 (longitudinal) and 25 (axial), and reductions of area were 28 (longitudinal) and 27 (axial).

  3. Thermocapillary migration of a droplet with a thermal source at large Reynolds and Marangoni numbers

    CERN Document Server

    Wu, Zuo-Bing

    2014-01-01

    The unsteady process of thermocapillary droplet migration at large Reynolds and Marangoni numbers has been previously reported by identifying a nonconservative integral thermal flux across the surface in the steady thermocapillary droplet migration [Wu and Hu, J. Math. Phys. 54, 023102 (2013)]. Here we add a thermal source in the droplet to keep the integral thermal flux across the surface conservative, so that thermocapillary droplet migration at large Reynolds and Marangoni numbers can reach a quasi-steady process. Under the assumptions of quasi-steady state and non-deformation of the droplet, we derive an analytical result for the steady thermocapillary migration of a droplet with the thermal source at large Reynolds and Marangoni numbers. The result shows that the thermocapillary droplet migration speed slowly increases with increasing Marangoni number.

  4. Large-scale drivers of local precipitation extremes in convection-permitting climate simulations

    Science.gov (United States)

    Chan, Steven C.; Kendon, Elizabeth J.; Roberts, Nigel M.; Fowler, Hayley J.; Blenkinsop, Stephen

    2016-04-01

    The Met Office 1.5-km UKV convection-permitting model (CPM) is used to downscale present-climate and RCP8.5 60-km HadGEM3 GCM simulations. Extreme UK hourly precipitation intensities increase with local near-surface temperature and humidity; for temperature, the simulated increase rate in the present-climate simulation is about 6.5% K^-1, which is consistent with observations and theoretical expectations. While extreme intensities are higher in the RCP8.5 simulation as higher temperatures are sampled, there is a decline at the highest temperatures due to circulation and relative humidity changes. Extending the analysis to the broader synoptic scale, it is found that circulation patterns, as diagnosed by MSLP or circulation type, play an increased role in the probability of extreme precipitation in the RCP8.5 simulation. Nevertheless, for both CPM simulations, vertical instability is the principal driver of extreme precipitation.

  5. Solar concentration properties of flat fresnel lenses with large F-numbers

    Science.gov (United States)

    Cosby, R. M.

    1978-01-01

    The solar concentration performance of flat, line-focusing, sun-tracking Fresnel lenses with selected f-numbers between 0.9 and 2.0 was analyzed. Lens transmittance was found to have a weak dependence on f-number, with a 2% increase occurring as the f-number is increased from 0.9 to 2.0. The geometric concentration ratio for perfectly tracking lenses peaked for an f-number near 1.35. Intensity profiles were more uniform over the image extent for large f-number lenses when compared to the f/0.9 lens results. Substantial decreases in geometric concentration ratio were observed for transverse tracking errors at or below 1 degree for all f-numbers. With respect to tracking errors, the solar performance is optimum for f-numbers between 1.25 and 1.5.
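
    As a hedged aside (a textbook estimate, not taken from the paper): for an ideal line-focus lens of aperture width W and focal length f, the solar disk's half-angle θ_s sets a minimum image width of about 2 f θ_s, so the aberration-free geometric concentration ratio is bounded by

```latex
C_{\mathrm{geo}} \;\lesssim\; \frac{W}{2 f \tan\theta_s} \;\approx\; \frac{1}{2 N \theta_s},
\qquad N = \frac{f}{W}\ \text{(f-number)}, \quad \theta_s \approx 4.65\ \mathrm{mrad}.
```

    This bound alone would favor small f-numbers; the optimum near f/1.35 reported above presumably arises because the aberrations neglected in this estimate grow rapidly as the f-number decreases.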

  6. A Projection of Changes in Landfalling Atmospheric River Frequency and Extreme Precipitation over Western North America from the Large Ensemble CESM Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hagos, Samson M.; Leung, Lai-Yung R.; Yoon, Jin-Ho; Lu, Jian; Gao, Yang

    2016-02-06

    Simulations from the Community Earth System Model Large Ensemble project are analyzed to investigate the impact of global warming on atmospheric rivers (ARs). The model has notable biases in simulating the subtropical jet position and the relationship between extreme precipitation and moisture transport. After accounting for these biases, the model projects an ensemble mean increase of 35% in the number of landfalling AR days between the last twenty years of the 20th and 21st centuries. However, the number of AR-associated extreme precipitation days increases only by 28%, because the moisture transport required to produce extreme precipitation also increases with warming. Internal variability introduces an uncertainty of ±8% and ±7% in the projected changes in AR days and associated extreme precipitation days, respectively. In contrast, accounting for model biases changes the projections by only about 1%. The significantly larger mean changes compared to internal variability and to the effects of model biases highlight the robustness of the AR response to global warming.

  7. Large Eddy Simulations of Kelvin Helmholtz instabilities at high Reynolds number stratified flows

    Science.gov (United States)

    Brown, Dana; Goodman, Lou; Raessi, Mehdi

    2015-11-01

    Simulations of Kelvin Helmholtz Instabilities (KHI) at high Reynolds numbers are performed using the Large Eddy Simulation technique. Reynolds numbers up to 100,000 are achieved using our model. The resulting data set is used to examine the effect of Reynolds number on various statistics, including dissipation flux coefficient, turbulent kinetic energy budget, and Thorpe length scale. It is shown that KHI are qualitatively different at high Re, up to and including the onset of vortex pairing and billow collapse and quantitatively different afterward. The effect of Richardson number is also examined. The results are discussed as they apply to ocean experiments.

  8. Science case and requirements for the MOSAIC concept for a multi-object spectrograph for the European Extremely Large Telescope

    NARCIS (Netherlands)

    Evans, C. J.; Puech, M.; Barbuy, B.; Bonifacio, P.; Cuby, J.-G.; Guenther, E.; Hammer, F.; Jagourel, P.; Kaper, L.; Morris, S. L.; Afonso, J.; Amram, P.; Aussel, H.; Basden, A.; Bastian, N.; Battaglia, G.; Biller, B.; Bouché, N.; Caffau, E.; Charlot, S.; Clénet, Y.; Combes, F.; Conselice, C.; Contini, T.; Dalton, G.; Davies, B.; Disseau, K.; Dunlop, J.; Fiore, F.; Flores, H.; Fusco, T.; Gadotti, D.; Gallazzi, A.; Giallongo, E.; Gonçalves, T.; Gratadour, D.; Hill, V.; Huertas-Company, M.; Ibata, R.; Larsen, S.; Le Fèvre, O.; Lemasle, B.; Maraston, C.; Mei, S.; Mellier, Y.; Östlin, G.; Paumard, T.; Pello, R.; Pentericci, L.; Petitjean, P.; Roth, M.; Rouan, D.; Schaerer, D.; Telles, E.; Trager, S.; Welikala, N.; Zibetti, S.; Ziegler, B.

    2014-01-01

    Over the past 18 months we have revisited the science requirements for a multi-object spectrograph (MOS) for the European Extremely Large Telescope (E-ELT). These efforts span the full range of E-ELT science and include input from a broad cross-section of astronomers across the ESO partner countries

  9. Identification of genetic variants associated with maize flowering time using an extremely large multi-genetic background population

    Science.gov (United States)

    Flowering time is one of the major adaptive traits in domestication of maize and an important selection criterion in breeding. To detect more maize flowering time variants we evaluated flowering time traits using an extremely large multi- genetic background population that contained more than 8000 l...

  10. Monte-Carlo modelling of multi-object adaptive optics performance on the European Extremely Large Telescope

    Science.gov (United States)

    Basden, A. G.; Morris, T. J.

    2016-09-01

    The performance of a wide-field adaptive optics system depends on input design parameters. Here we investigate the performance of a multi-object adaptive optics system design for the European Extremely Large Telescope, using an end-to-end Monte-Carlo adaptive optics simulation tool, DASP, with relevance for proposed instruments such as MOSAIC. We consider parameters such as the number of laser guide stars, sodium layer depth, wavefront sensor pixel scale, actuator pitch and natural guide star availability. We provide potential areas where cost savings can be made, and investigate trade-offs between performance and cost, and provide solutions that would enable such an instrument to be built with currently available technology. Our key recommendations include a trade-off for laser guide star wavefront sensor pixel scale of about 0.7 arcseconds per pixel, and a field of view of at least 7 arcseconds, that EMCCD technology should be used for natural guide star wavefront sensors even if a reduced frame rate is necessary, and that sky coverage can be improved by a slight reduction in natural guide star sub-aperture count without significantly affecting tomographic performance. We find that adaptive optics correction can be maintained across a wide field of view, up to 7 arcminutes in diameter. We also recommend the use of at least 4 laser guide stars, and include ground-layer and multi-object adaptive optics performance estimates.

  11. Monte-Carlo modelling of multi-object adaptive optics performance on the European Extremely Large Telescope

    CERN Document Server

    Basden, Alastair

    2016-01-01

    The performance of a wide-field adaptive optics system depends on input design parameters. Here we investigate the performance of a multi-object adaptive optics system design for the European Extremely Large Telescope, using an end-to-end Monte-Carlo adaptive optics simulation tool, DASP, with relevance for proposed instruments such as MOSAIC. We consider parameters such as the number of laser guide stars, sodium layer depth, wavefront sensor pixel scale, actuator pitch and natural guide star availability. We provide potential areas where cost savings can be made, and investigate trade-offs between performance and cost, and provide solutions that would enable such an instrument to be built with currently available technology. Our key recommendations include a trade-off for laser guide star wavefront sensor pixel scale of about 0.7 arcseconds per pixel, and a field of view of at least 7 arcseconds, that EMCCD technology should be used for natural guide star wavefront sensors even if reduced frame rate is nece...

  12. A highly efficient multi-core algorithm for clustering extremely large datasets

    Directory of Open Access Journals (Sweden)

    Kraus Johann M

    2010-04-01

    Background: In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results: We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared-memory parallel algorithms are shown to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions: Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer.
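
    The original implementation is in Java with transactional-memory design principles; as a rough illustration of the same idea (parallelizing the expensive assignment step across the cores of one machine), here is a minimal Python sketch. The chunking, initialization, and fixed iteration count are simplifications, not the authors' algorithm.

```python
import numpy as np
from multiprocessing import Pool

def _nearest_centroid(args):
    """Assignment step for one chunk of points (runs in a worker process)."""
    chunk, centroids = args
    d2 = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def parallel_kmeans(X, k, n_iter=50, n_workers=4, seed=0):
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    chunks = np.array_split(X, n_workers)
    with Pool(n_workers) as pool:   # needs the usual __main__ guard on spawn-based platforms
        for _ in range(n_iter):
            labels = np.concatenate(
                pool.map(_nearest_centroid, [(c, centroids) for c in chunks]))
            for j in range(k):      # update step (cheap, done serially here)
                members = X[labels == j]
                if len(members):
                    centroids[j] = members.mean(axis=0)
    return centroids, labels
```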

  13. SECRET SHARING SCHEMES WITH STRONG MULTIPLICATION AND A LARGE NUMBER OF PLAYERS FROM TORIC VARIETIES

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2016-01-01

    This article considers Massey's construction for constructing linear secret sharing schemes from toric varieties over a finite field $\mathbb{F}_q$ with $q$ elements. The number of players can be as large as $(q-1)^r-1$ for $r\geq 1$. The schemes have strong multiplication; such schemes can be utilized in ... For schemes where the number of players can be as large as $(q-1)^2-1$, we determine bounds for the reconstruction and privacy thresholds and conditions for strong multiplication using the cohomology and the intersection theory on toric surfaces.

  14. Computational performance comparison of wavefront reconstruction algorithms for the European Extremely Large Telescope on multi-CPU architecture.

    Science.gov (United States)

    Feng, Lu; Fedrigo, Enrico; Béchet, Clémentine; Brunner, Elisabeth; Pirani, Werther

    2012-06-01

    The European Southern Observatory (ESO) is studying the next generation giant telescope, called the European Extremely Large Telescope (E-ELT). With a 42 m diameter primary mirror, it is a significant step beyond currently existing telescopes. Therefore, the E-ELT with its instruments poses new challenges in terms of cost and computational complexity for the control system, including its adaptive optics (AO). Since the conventional matrix-vector multiplication (MVM) method successfully used so far for AO wavefront reconstruction cannot be efficiently scaled to the size of the AO systems on the E-ELT, faster algorithms are needed. Among the recently developed wavefront reconstruction algorithms, three are studied in this paper from the point of view of design, implementation, and absolute speed on three multicore multi-CPU platforms. We focus on a single-conjugate AO system for the E-ELT. The algorithms are the MVM, the Fourier transform reconstructor (FTR), and the fractal iterative method (FRiM). This study examines the scaling of these algorithms with an increasing number of CPUs involved in the computation. We discuss implementation strategies, depending on various CPU architecture constraints, and we present the first quantitative execution times so far at the E-ELT scale. MVM suffers from a large computational burden, making the current computing platform undersized to reach timings short enough for AO wavefront reconstruction. In our study, the FTR currently provides the fastest reconstruction. FRiM is a recently developed algorithm, and several strategies are investigated and presented here in order to implement it for real-time AO wavefront reconstruction and to optimize its execution time. The difficulty of parallelizing the algorithm on such an architecture is highlighted. We also show that FRiM can provide interesting scalability using a sparse matrix approach.
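
    To give a sense of the MVM workload discussed above, the toy benchmark below times a single reconstructor-matrix multiply on random data. The matrix dimensions and single-precision choice are illustrative guesses, not the actual E-ELT single-conjugate AO sizes.

```python
import time
import numpy as np

# Illustrative sizes only (not the actual E-ELT SCAO dimensions).
n_slopes, n_actuators = 2 * 84 * 84, 5000
R = np.random.randn(n_actuators, n_slopes).astype(np.float32)   # reconstructor matrix
s = np.random.randn(n_slopes).astype(np.float32)                # one frame of WFS slopes

t0 = time.perf_counter()
commands = R @ s                                                 # the per-frame MVM step
dt_ms = 1e3 * (time.perf_counter() - t0)
print(f"MVM {R.shape[0]}x{R.shape[1]}: {dt_ms:.2f} ms per frame")
```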

  15. Screening large numbers of recombinant plasmids: modifications and additions to alkaline lysis for greater efficiency

    Institute of Scientific and Technical Information of China (English)

    XU Yibing; N.V. CHANDRASEKHARAN; Daniel L. SIMMONS

    2006-01-01

    Selecting bacteria transformed with a recombinant plasmid is a laborious step in gene cloning experiments. This selection process is even more tedious when large numbers of clones need to be screened. We describe here modifications to the ultra-fast plasmid preparation method described previously by Law and Crickmore. The modified method is coupled to an efficient PCR step to rapidly determine the orientation of the inserts. Compared to traditional methods of analysis requiring growth of overnight cultures, plasmid isolation, and restriction enzyme digestion to determine orientation, this procedure allows for the analysis and storage of a large number of recombinants within a few hours.

  16. Paediatric extremity vascular injuries - experience from a large urban trauma centre in India.

    Science.gov (United States)

    Jaipuria, Jiten; Sagar, Sushma; Singhal, Maneesh; Bagdia, Amit; Gupta, Amit; Kumar, Subodh; Mishra, Biplab

    2014-01-01

    Paediatric extremity vascular injuries are infrequent, and management protocols draw significantly from adult vascular trauma experience, necessitating a continuous review of evidence. A retrospective registry review of all consecutive patients younger than 18 years of age treated for extremity vascular trauma from 2007 to 2012 was carried out. The diagnostic algorithm relied little on measurement of pressure indices. Data were collected on demographics, time since injury, pattern of injury, ISS, initial GCS and presence of shock, results of the diagnostic modality, and treatment given with associated complications. Patients completing 2 years of follow-up were assessed for functional disability and vascular patency. A multivariable regression model was used to evaluate the effects of ISS, orthopaedic injury, soft tissue injury, neural injury, and arterial patency at the end of 2 years on functional disability. Paediatric extremity vascular injuries accounted for 0.68% of hospital admissions, with a median delay of 8 h from injury. Eighty-two patients were included, with 50 cases examined for long-term outcome. The patient cohort was overwhelmingly male, with 'fall', 'road traffic injury' and 'glass cut' being the most common injury mechanisms. The CT angiography and duplex scan based diagnostic algorithm performed satisfactorily, further identifying missed injuries and aiding complex orthopaedic reconstruction. Brachial and femoral vessels were most commonly injured. Lower extremity vascular injury was found to be associated with significantly higher ISS and requirement for fasciotomy. Upper extremity vascular injury was associated with higher odds of neural injury. Younger children were at higher risk of combined radial and ulnar vessel injury. No patient satisfactorily complied with post-operative anticoagulant/antithrombotic prophylaxis. Twenty-eight patients had a good functional outcome, with unsatisfactory functional outcome found to be associated with significantly higher ISS, presence of orthopaedic

  17. Methods to produce and safely work with large numbers of Toxoplasma gondii oocysts and bradyzoite cysts

    OpenAIRE

    H. Fritz; Barr, B.; Packham, A.; Melli, A.; Conrad, P A

    2011-01-01

    Two major obstacles to conducting studies with Toxoplasma gondii oocysts are the difficulty in reliably producing large numbers of this life stage and safety concerns because the oocyst is the most environmentally resistant stage of this zoonotic organism. Oocyst production requires oral infection of the definitive feline host with adequate numbers of T. gondii organisms to obtain unsporulated oocysts that are shed in the feces for 3-10 days after infection. Since the most successful and comm...

  18. Terminal states of thermocapillary migration of a planar droplet at moderate and large Marangoni numbers

    OpenAIRE

    Wu, Zuo-Bing

    2017-01-01

    In this paper, thermocapillary migration of a planar droplet at moderate and large Marangoni numbers is investigated analytically and numerically. By using the dimension-analysis method, the thermal diffusion time scale is determined as the controlling one of the thermocapillary droplet migration system. During this time, the whole thermocapillary migration process is fully developed. By using the front-tracking method, the steady/unsteady states as the terminal ones at moderate/large Marango...

  19. Structure of Wall-Eddies at Very Large Reynolds Number--A Large-Scale PIV Study

    Science.gov (United States)

    Hommema, S. E.; Adrian, R. J.

    2000-11-01

    The results of an experiment performed in the first 5 m of the neutral atmospheric boundary layer are presented. Large-scale PIV measurements (up to 2 m × 2 m field-of-view) were obtained in the streamwise / wall-normal plane of a very-large Reynolds number (Re_θ > 10^6, based on momentum thickness and freestream velocity), flat-plate, zero-pressure-gradient boundary layer. Measurements were obtained at the SLTEST facility in the U.S. Army's Dugway Proving Grounds. Coherent packets of ramp-like structures with downstream inclination are observed and show a remarkable resemblance to those observed in typical laboratory-scale experiments at far lower Reynolds number. The results are interpreted in terms of a vortex packet paradigm (Adrian, R.J., C.D. Meinhart, and C.D. Tomkins, Vortex organization in the outer region of the turbulent boundary layer, to appear in J. Fluid Mech., 2000) and begin to extend the model to high Reynolds numbers of technological importance. Additional results obtained during periods of non-neutral atmospheric stability are contrasted with those of the canonical neutral boundary layer. Sample smoke visualization images (3 m × 15 m field-of-view) are available online from the author.

  20. Eulerian models for particle trajectory crossing in turbulent flows over a large range of Stokes numbers

    Science.gov (United States)

    Fox, Rodney O.; Vie, Aymeric; Laurent, Frederique; Chalons, Christophe; Massot, Marc

    2012-11-01

    Numerous applications involve a disperse phase carried by a gaseous flow. To simulate such flows, one can resort to a number density function (NDF) governed by a kinetic equation. Traditionally, Lagrangian Monte-Carlo methods are used to solve for the NDF, but are expensive as the number of numerical particles needed must be large to control statistical errors. Moreover, such methods are not well adapted to high-performance computing because of the intrinsic inhomogeneity of the NDF. To overcome these issues, Eulerian methods can be used to solve for the moments of the NDF, resulting in an unclosed Eulerian system of hyperbolic conservation laws. To obtain closure, in this work a multivariate bi-Gaussian quadrature is used, which can account for particle trajectory crossing (PTC) over a large range of Stokes numbers. This closure uses up to four quadrature points in 2-D velocity phase space to capture large-scale PTC, and an anisotropic Gaussian distribution around each quadrature point to model small-scale PTC. Simulations of 2-D particle-laden isotropic turbulence at different Stokes numbers are employed to validate the Eulerian models against results from the Lagrangian approach. Good agreement is found for the number density fields over the entire range of Stokes numbers tested. Research carried out at the Center for Turbulence Research 2012 Summer Program.

  1. Coupled coagulation of aerosol particles at large Knudsen and small Péclet numbers

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    At large Knudsen number, the medium can be approximated by a molecular system, and the equation for the pair-distribution function is then established. When the Péclet number is small, the matched asymptotic expansions of singular perturbation theory are used to solve the equation for the pair-distribution function. A third-order expansion for the dimensionless coagulation rate (Nusselt number) is thus obtained. Comparison of numerical results for the coagulation rate in a molecular system and in a continuous medium shows that the coagulation rate in a molecular system is much larger than that in a continuous medium.

  2. John Cage's Number Pieces as Stochastic Processes: a Large-Scale Analysis

    CERN Document Server

    Popoff, Alexandre

    2013-01-01

    The Number Pieces are a corpus of works by composer John Cage, which rely on a particular time-structure used for determining the temporal location of sounds, named the "time-bracket". The time-bracket system is an inherently stochastic process, which complicates the analysis of the Number Pieces as it leads to a large number of possibilities in terms of sonic content instead of one particular fixed performance. The purpose of this paper is to propose a statistical approach to the Number Pieces by assimilating them to stochastic processes. Two Number Pieces, "Four" and "Five", are studied here in terms of pitch-class set content: the stochastic processes at hand lead to a collection of random variables indexed over time giving the distribution of the possible pitch-class sets. This approach allows for a static and dynamic analysis of the score encompassing all the possible outcomes during the performance of these works.
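
    As a rough illustration of the stochastic element described above, the sketch below draws one realization of a single time bracket: a start time uniform in its start interval and an end time uniform in its end interval, re-constrained so the end does not precede the start. The uniform draws, the constraint handling, and the example bracket values are simplifying assumptions, not Cage's exact performance instructions.

```python
import random

def realize_time_bracket(start_lo, start_hi, end_lo, end_hi, rng=random):
    """One realization of a time bracket (simplified model)."""
    t_start = rng.uniform(start_lo, start_hi)
    t_end = rng.uniform(max(end_lo, t_start), end_hi)  # assumes end_hi >= start_hi
    return t_start, t_end

# Hypothetical bracket (seconds): start within 0-45 s, end within 30-75 s.
print(realize_time_bracket(0, 45, 30, 75))
```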

  3. Dispersion in the large-deviation regime. Part II: cellular flow at large Péclet number

    CERN Document Server

    Haynes, P H

    2014-01-01

    A standard model for the study of scalar dispersion through advection and molecular diffusion is a two-dimensional periodic flow with closed streamlines inside periodic cells. Over long time scales, the dispersion of a scalar in this flow can be characterised by an effective diffusivity that is a factor $\mathrm{Pe}^{1/2}$ larger than molecular diffusivity when the Péclet number $\mathrm{Pe}$ is large. Here we provide a more complete description of dispersion in this regime by applying the large-deviation theory developed in Part I of this paper. We derive approximations to the rate function governing the scalar concentration at large time $t$ by carrying out an asymptotic analysis of the relevant family of eigenvalue problems. We identify two asymptotic regimes and make predictions for the rate function and spatial structure of the scalar. Regime I applies to distances from the release point that satisfy $|\boldsymbol{x}| = O(\mathrm{Pe}^{1/4} t)$. The concentration in this regime is isotropic at large sc...

  4. Strong Laws of Large Numbers for Arrays of Rowwise NA and LNQD Random Variables

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2011-01-01

    Some strong laws of large numbers and strong convergence properties for arrays of rowwise negatively associated and linearly negative quadrant dependent random variables are obtained. The results obtained not only generalize the result of Hu and Taylor to negatively associated and linearly negative quadrant dependent random variables, but also improve it.

  5. Laws of Large Numbers of Negatively Correlated Random Variables for Capacities

    Institute of Scientific and Technical Information of China (English)

    Wen-juan LI; Zeng-jing CHEN

    2011-01-01

    Our aim is to present some limit theorems for capacities. We consider a sequence of pairwise negatively correlated random variables. We obtain laws of large numbers for upper probabilities and 2-alternating capacities, using some results from classical probability theory and non-additive versions of Chebyshev's inequality and the Borel-Cantelli lemma for capacities.

  6. STRONG LAW OF LARGE NUMBERS AND GROWTH RATE FOR NOD SEQUENCES

    Institute of Scientific and Technical Information of China (English)

    MA Song-lin; WANG Xue-jun

    2015-01-01

    In this paper, we obtain precise Hájek-Rényi type inequalities for the partial sums of negatively orthant dependent sequences, which improve the results of Theorem 3.1 and Corollary 3.2 in Kim (2006), as well as the strong law of large numbers and strong growth rate for negatively orthant dependent sequences.

  7. Gravitational effect of centre mass with electric charge and a large number of magnetic monopoles

    Institute of Scientific and Technical Information of China (English)

    Gong Tian-Xi; Li Ai-Gen; Wang Yong-Jiu

    2005-01-01

    In this paper, using an elegant mathematical method advanced by us, we calculate the orbital effect in the gravitational field of a centre mass with electric charge and a large number of magnetic monopoles. Generalizing the effect in the Schwarzschild field, we obtain interesting results by discussing the parameters of the celestial body, which provide a feasible experimental verification of general relativity.

  8. The three-large-primes variant of the number field sieve

    NARCIS (Netherlands)

    Cavallar, S.H.

    2002-01-01

    The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard in 1988. Since then several variants have been implemented with the objective of improving the siever, which is the most time-consuming part of this method (b

  9. Weak laws of large numbers for arrays of rowwise negatively dependent random variables

    Directory of Open Access Journals (Sweden)

    R. L. Taylor

    2001-01-01

    Full Text Available Weak laws of large numbers for arrays of rowwise negatively dependent random variables are obtained in this paper. The more general hypothesis of negative dependence relaxes the usual assumption of independence. The moment conditions are similar to previous results, and the stochastic bounded condition also provides a generalization of the usual distributional assumptions.

  10. Unusually Large Number of Mutations in Asexually Reproducing Clonal Planarian Dugesia japonica.

    Directory of Open Access Journals (Sweden)

    Osamu Nishimura

    Full Text Available We established a laboratory clonal strain of freshwater planarian (Dugesia japonica) that was derived from a single individual and that continued to undergo autotomous asexual reproduction for more than 20 years, and we performed large-scale genome sequencing and transcriptome analysis on it. Despite the fact that a completely clonal strain of the planarian was used, an unusually large number of mutations were detected. To enable quantitative genetic analysis of such a unique organism, we developed a new model called the Reference Gene Model, and used it to conduct large-scale transcriptome analysis. The results revealed large numbers of mutations not only outside but also inside gene-coding regions. Non-synonymous SNPs were detected in 74% of the genes for which valid ORFs were predicted. Interestingly, the high-mutation genes, such as metabolism- and defense-related genes, were correlated with genes that were previously identified as diverse genes among different planarian species. Although a large number of amino acid substitutions were apparently accumulated during asexual reproduction over this long period of time, the planarian maintained normal body-shape, behaviors, and physiological functions. The results of the present study reveal a unique aspect of asexual reproduction.

  11. Unusually Large Number of Mutations in Asexually Reproducing Clonal Planarian Dugesia japonica.

    Science.gov (United States)

    Nishimura, Osamu; Hosoda, Kazutaka; Kawaguchi, Eri; Yazawa, Shigenobu; Hayashi, Tetsutaro; Inoue, Takeshi; Umesono, Yoshihiko; Agata, Kiyokazu

    2015-01-01

    We established a laboratory clonal strain of freshwater planarian (Dugesia japonica) that was derived from a single individual and that continued to undergo autotomous asexual reproduction for more than 20 years, and we performed large-scale genome sequencing and transcriptome analysis on it. Despite the fact that a completely clonal strain of the planarian was used, an unusually large number of mutations were detected. To enable quantitative genetic analysis of such a unique organism, we developed a new model called the Reference Gene Model, and used it to conduct large-scale transcriptome analysis. The results revealed large numbers of mutations not only outside but also inside gene-coding regions. Non-synonymous SNPs were detected in 74% of the genes for which valid ORFs were predicted. Interestingly, the high-mutation genes, such as metabolism- and defense-related genes, were correlated with genes that were previously identified as diverse genes among different planarian species. Although a large number of amino acid substitutions were apparently accumulated during asexual reproduction over this long period of time, the planarian maintained normal body-shape, behaviors, and physiological functions. The results of the present study reveal a unique aspect of asexual reproduction.

  12. Lyapunov exponents for particles advected in compressible random velocity fields at small and large Kubo numbers

    CERN Document Server

    Gustavsson, K

    2013-01-01

    We calculate the Lyapunov exponents describing spatial clustering of particles advected in one- and two-dimensional random velocity fields at finite Kubo number Ku (a dimensionless parameter characterising the correlation time of the velocity field). In one dimension we obtain accurate results up to Ku ~ 1 by resummation of a perturbation expansion in Ku. At large Kubo numbers we compute the Lyapunov exponent by taking into account the fact that the particles follow the minima of the potential function corresponding to the velocity field. In two dimensions we compute the first four non-vanishing terms in the small-Ku expansion of the Lyapunov exponents. For large Kubo numbers we estimate the Lyapunov exponents by assuming that the particles sample stagnation points of the velocity field with det A > 0 and Tr A < 0 where A is the matrix of flow-velocity gradients.
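
    As a schematic illustration only (not the resummation in Ku or the large-Kubo-number analysis used in the paper), the following Python sketch estimates a spatial Lyapunov exponent for particles advected in a synthetic one-dimensional random velocity field by tracking the growth rate of an infinitesimal separation; the field model and all parameters are invented.

        import numpy as np

        rng = np.random.default_rng(0)

        def random_velocity(x, phases, k_modes):
            """Synthetic smooth random velocity field: a sum of a few sine modes."""
            return np.sum(np.sin(np.outer(x, k_modes) + phases), axis=1) / len(k_modes)

        def lyapunov_estimate(n_steps=20000, dt=0.01, tau=1.0, dx0=1e-8):
            """Estimate the Lyapunov exponent from the growth of an infinitesimal separation.
            The velocity field is redrawn every correlation time tau."""
            k_modes = np.arange(1.0, 6.0)
            steps_per_tau = max(1, round(tau / dt))
            phases = rng.uniform(0.0, 2.0 * np.pi, len(k_modes))
            x = np.array([0.0, dx0])                 # two nearby particles
            log_growth = 0.0
            for step in range(n_steps):
                if step % steps_per_tau == 0:        # decorrelate the field
                    phases = rng.uniform(0.0, 2.0 * np.pi, len(k_modes))
                x = x + dt * random_velocity(x, phases, k_modes)
                sep = abs(x[1] - x[0])
                log_growth += np.log(sep / dx0)
                x[1] = x[0] + dx0 * np.sign(x[1] - x[0])   # renormalize the separation
            return log_growth / (n_steps * dt)

        print("estimated Lyapunov exponent:", lyapunov_estimate())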

  13. Similarities between 2D and 3D convection for large Prandtl number

    Indian Academy of Sciences (India)

    PANDEY AMBRISH; VERMA MAHENDRA K; CHATTERJEE ANANDO G; DUTTA BIPLAB

    2016-07-01

    Using direct numerical simulations of Rayleigh–Bénard convection (RBC), we perform a comparative study of the spectra and fluxes of energy and entropy, and the scaling of large-scale quantities, for large and infinite Prandtl numbers in two (2D) and three (3D) dimensions. We observe close similarities between the 2D and 3D RBC, in particular, the kinetic energy spectrum $E^{u}(k) ∼ k^{−13/3}$, and the entropy spectrum exhibits a dual branch with a dominant $k^{−2}$ spectrum. We show that the dominant Fourier modes in the 2D and 3D flows are very close. Consequently, the 3D RBC is quasi-two-dimensional, which is the reason for the similarities between the 2D and 3D RBC for large and infinite Prandtl numbers.

  14. Large Chern-number topological superfluids in a coupled-layer system

    Science.gov (United States)

    Huang, Beibing; Chan, Chun Fai; Gong, Ming

    2015-04-01

    Large Chern-number topological phases are an important topic in modern physics. Here we investigate topological superfluids in a coupled-layer system, in which transitions between different topological superfluids can be realized by controlling the binding energy, interlayer tunneling, and layer asymmetry, etc. These topological transitions are characterized by energy gap closing and reopening at the critical points at zero momentum, where the Chern number and the sign of the Pfaffian undergo a discontinuous change. Topologically protected edge modes at the boundaries are ensured by the bulk-edge correspondence. In a trapped potential the edge modes are spatially localized at the interfaces between distinct topological superfluids, where the number of edge modes is equal to the Chern-number difference between the left and right superfluids. These topological transitions can be detected by the spin texture at or near zero momentum, which changes discretely across the critical points due to band inversion. The model can be generalized to a multilayer system in which the Chern number can be equal to any positive integer. These large Chern-number topological superfluids provide fertile grounds for exploring exotic quantum matter in the context of ultracold atoms.

  15. Weather extremes in very large, high-resolution ensembles: the weatherathome experiment

    Science.gov (United States)

    Allen, M. R.; Rosier, S.; Massey, N.; Rye, C.; Bowery, A.; Miller, J.; Otto, F.; Jones, R.; Wilson, S.; Mote, P.; Stone, D. A.; Yamazaki, Y. H.; Carrington, D.

    2011-12-01

    Resolution and ensemble size are often seen as alternatives in climate modelling. Models with sufficient resolution to simulate many classes of extreme weather cannot normally be run often enough to assess the statistics of rare events, still less how these statistics may be changing. As a result, assessments of the impact of external forcing on regional climate extremes must be based either on statistical downscaling from relatively coarse-resolution models, or statistical extrapolation from 10-year to 100-year events. Under the weatherathome experiment, part of the climateprediction.net initiative, we have compiled the Met Office Regional Climate Model HadRM3P to run on personal computers volunteered by the general public at 25 and 50 km resolution, embedded within the HadAM3P global atmosphere model. With a global network of about 50,000 volunteers, this allows us to run time-slice ensembles of essentially unlimited size, exploring the statistics of extreme weather under a range of scenarios for surface forcing and atmospheric composition, allowing for uncertainty in both boundary conditions and model parameters. Current experiments, developed with the support of Microsoft Research, focus on three regions: the Western USA, Europe and Southern Africa. We initially simulate the period 1959-2010 to establish which variables are realistically simulated by the model and on what scales. Our next experiments are focussing on the Event Attribution problem, exploring how the probability of various types of extreme weather would have been different over the recent past in a world unaffected by human influence, following the design of Pall et al (2011), but extended to a longer period and higher spatial resolution. We will present the first results of this unique, global, participatory experiment and discuss the implications for the attribution of recent weather events to anthropogenic influence on climate.

  16. Forcings and feedbacks on convection in the 2010 Pakistan flood: Modeling extreme precipitation with interactive large-scale ascent

    Science.gov (United States)

    Nie, Ji; Shaevitz, Daniel A.; Sobel, Adam H.

    2016-09-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing the extreme precipitation event can be difficult to disentangle. Here we examine the large-scale forcings and convective heating feedback in the precipitation events that caused the 2010 Pakistan flood, within the Column Quasi-Geostrophic framework. A cloud-resolving model (CRM) is forced with large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation using input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating to large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic lifting is the most important dynamic forcing in both events, while differential potential vorticity advection also contributes to the triggering of the first event. Horizontal moisture advection modulates the extreme events mainly by setting the environmental humidity, which modulates the amplitude of the convection's response to the dynamic forcings. When the CRM is replaced by either a single-column model (SCM) with parameterized convection or a dry model with a reduced effective static stability, the model results show substantial discrepancies compared with reanalysis data. The reasons for these discrepancies are examined, and the implications for global models and theoretical models are discussed.

  17. Large-scale magnetic fields at high Reynolds numbers in magnetohydrodynamic simulations.

    Science.gov (United States)

    Hotta, H; Rempel, M; Yokoyama, T

    2016-03-25

    The 11-year solar magnetic cycle shows a high degree of coherence in spite of the turbulent nature of the solar convection zone. It has been found in recent high-resolution magnetohydrodynamics simulations that the maintenance of a large-scale coherent magnetic field is difficult with small viscosity and magnetic diffusivity (≲10^12 cm^2 s^-1). We reproduced previous findings that indicate a reduction of the energy in the large-scale magnetic field for lower diffusivities and demonstrate the recovery of the global-scale magnetic field using unprecedentedly high resolution. We found an efficient small-scale dynamo that suppresses small-scale flows, which mimics the properties of large diffusivity. As a result, the global-scale magnetic field is maintained even in the regime of small diffusivities, that is, large Reynolds numbers.

  18. Evaluation of large-scale meteorological patterns associated with temperature extremes in the NARCCAP regional climate model simulations

    Science.gov (United States)

    Loikith, Paul C.; Waliser, Duane E.; Lee, Huikyo; Neelin, J. David; Lintner, Benjamin R.; McGinnis, Seth; Mearns, Linda O.; Kim, Jinwon

    2015-12-01

    Large-scale meteorological patterns (LSMPs) associated with temperature extremes are evaluated in a suite of regional climate model (RCM) simulations contributing to the North American Regional Climate Change Assessment Program. LSMPs are characterized through composites of surface air temperature, sea level pressure, and 500 hPa geopotential height anomalies concurrent with extreme temperature days. Six of the seventeen RCM simulations are driven by boundary conditions from reanalysis while the other eleven are driven by one of four global climate models (GCMs). Four illustrative case studies are analyzed in detail. Model fidelity in LSMP spatial representation is high for cold winter extremes near Chicago. Winter warm extremes are captured by most RCMs in northern California, with some notable exceptions. Model fidelity is lower for cool summer days near Houston and extreme summer heat events in the Ohio Valley. Physical interpretation of these patterns and identification of well-simulated cases, such as for Chicago, boosts confidence in the ability of these models to simulate days in the tails of the temperature distribution. Results appear consistent with the expectation that the ability of an RCM to reproduce a realistically shaped frequency distribution for temperature, especially at the tails, is related to its fidelity in simulating LSMPs. Each ensemble member is ranked for its ability to reproduce LSMPs associated with observed warm and cold extremes, identifying systematically high performing RCMs and the GCMs that provide superior boundary forcing. The methodology developed here provides a framework for identifying regions where further process-based evaluation would improve the understanding of simulation error and help guide future model improvement and downscaling efforts.

  19. A law of large numbers approximation for Markov population processes with countably many types

    CERN Document Server

    Barbour, A D

    2010-01-01

    When modelling metapopulation dynamics, the influence of a single patch on the metapopulation depends on the number of individuals in the patch. Since the population size has no natural upper limit, this leads to systems in which there are countably infinitely many possible types of individual. Analogous considerations apply in the transmission of parasitic diseases. In this paper, we prove a law of large numbers for rather general systems of this kind, together with a rather sharp bound on the rate of convergence in an appropriately chosen weighted $\\ell_1$ norm.

  20. Towards large genus asymptotics of intersection numbers on moduli spaces of curves

    CERN Document Server

    Mirzakhani, Maryam

    2011-01-01

    We explicitly compute the diverging factor in the large genus asymptotics of the Weil-Petersson volumes of the moduli spaces of $n$-pointed complex algebraic curves. Modulo a universal multiplicative constant we prove the existence of a complete asymptotic expansion of the Weil-Petersson volumes in the inverse powers of the genus with coefficients that are polynomials in $n$. This is done by analyzing various recursions for the more general intersection numbers of tautological classes, whose large genus asymptotic behavior is also extensively studied.

  1. A Unification of General Theory of Relativity with Dirac's Large Number Hypothesis

    Institute of Scientific and Technical Information of China (English)

    PENG Huan-Wu

    2004-01-01

    Taking a hint from Dirac's large number hypothesis, we note the existence of cosmologically combined conservation laws that hold over cosmologically long times. We thus modify Einstein's theory of general relativity with fixed gravitational constant G to a theory for varying G, with a tensor term arising naturally from the derivatives of G in place of the cosmological constant term usually introduced ad hoc. The modified theory, when applied to cosmology, is consistent with Dirac's large number hypothesis, and gives a theoretical Hubble's relation not contradicting the observational data. For phenomena of duration and distance short compared with those of the universe, our theory reduces to Einstein's theory with G being constant outside the gravitating matter, and thus also passes the crucial tests of Einstein's theory.

  2. A Unification of General Theory of Relativity with Dirac's Large Number Hypothesis

    Institute of Scientific and Technical Information of China (English)

    PENG Huan-Wu

    2004-01-01

    Taking a hint from Dirac's large number hypothesis, we note the existence of cosmologically combined conservation laws that hold over cosmologically long times. We thus modify Einstein's theory of general relativity with fixed gravitational constant G to a theory for varying G, with a tensor term arising naturally from the derivatives of G in place of the cosmological constant term usually introduced ad hoc. The modified theory, when applied to cosmology, is consistent with Dirac's large number hypothesis, and gives a theoretical Hubble's relation not contradicting the observational data. For phenomena of duration and distance short compared with those of the universe, our theory reduces to Einstein's theory with G being constant outside the gravitating matter, and thus also passes the crucial tests of Einstein's theory.

  3. Dynamical decoherence in a cavity with a large number of two-level atoms

    CERN Document Server

    Frasca, M

    2004-01-01

    We consider a large number of two-level atoms interacting with the mode of a cavity in the rotating-wave approximation (Tavis-Cummings model). We apply the Holstein-Primakoff transformation to study the model in the limit of the number of two-level atoms, all in their ground state, becoming very large. The unitary evolution that we obtain in this approximation is applied to a macroscopic superposition state, showing that, when the coherent states forming the superposition are sufficiently distant, the state collapses onto a single coherent state describing a classical radiation mode. This appears to be a true dynamical effect that could be observed in experiments with cavities.

  4. Clustering large number of extragalactic spectra of galaxies and quasars through canopies

    CERN Document Server

    De, Tuli; Chattopadhyay, Asis Kumar

    2013-01-01

    Cluster analysis is the distribution of objects into different groups, or more precisely the partitioning of a data set into subsets (clusters), so that the data in each subset share some common trait according to some distance measure. Unlike classification, in clustering one has to first decide the optimum number of clusters and then assign the objects into different clusters. Solution of such problems for a large number of high-dimensional data points is quite complicated and most of the existing algorithms will not perform properly. In the present work a new clustering technique applicable to large data sets has been used to cluster the spectra of 702248 galaxies and quasars having 1540 points in the wavelength range imposed by the instrument. The proposed technique has successfully discovered five clusters from this 702248 × 1540 data matrix.
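
    To illustrate the canopy idea named in the title, here is a rough Python sketch of generic canopy pre-clustering (not the authors' pipeline; the distance thresholds and toy data are invented): points are grouped into cheap, overlapping canopies with a fast distance so that a more expensive clustering step only has to compare points sharing a canopy.

        import numpy as np

        def canopy_clustering(X, t1, t2, rng=np.random.default_rng(0)):
            """Group rows of X into overlapping canopies using a cheap L2 distance.
            t1 > t2: points within t1 of a centre join the canopy; points within t2
            are removed from further consideration as centres."""
            assert t1 > t2
            remaining = list(range(len(X)))
            canopies = []
            while remaining:
                # pick a random remaining point as the canopy centre
                centre = remaining[rng.integers(len(remaining))]
                dists = np.linalg.norm(X[remaining] - X[centre], axis=1)
                members = [remaining[i] for i in np.flatnonzero(dists < t1)]
                canopies.append((centre, members))
                # drop tightly-bound points so they cannot seed new canopies
                remaining = [remaining[i] for i in np.flatnonzero(dists >= t2)]
            return canopies

        # Toy example: 1000 random "spectra" with 50 wavelength points each.
        X = np.random.default_rng(1).normal(size=(1000, 50))
        for centre, members in canopy_clustering(X, t1=10.5, t2=9.5)[:5]:
            print(f"canopy around point {centre}: {len(members)} members")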

  5. Secret Sharing Schemes with a large number of players from Toric Varieties

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    A general theory for constructing linear secret sharing schemes over a finite field $\\Fq$ from toric varieties is introduced. The number of players can be as large as $(q-1)^r-1$ for $r\\geq 1$. We present general methods for obtaining the reconstruction and privacy thresholds as well as conditions...... for multiplication on the associated secret sharing schemes. In particular we apply the method on certain toric surfaces. The main results are ideal linear secret sharing schemes where the number of players can be as large as $(q-1)^2-1$. We determine bounds for the reconstruction and privacy thresholds...... and conditions for strong multiplication using the cohomology and the intersection theory on toric surfaces....

  6. The holographic dual of a Riemann problem in a large number of dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, Christopher P.; Spillane, Michael [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy,Stony Brook University, Stony Brook, NY 11794 (United States); Yarom, Amos [Department of Physics, Technion,Haifa 32000 (Israel)

    2016-08-22

    We study properties of a non equilibrium steady state generated when two heat baths are initially in contact with one another. The dynamics of the system we study are governed by holographic duality in a large number of dimensions. We discuss the “phase diagram” associated with the steady state, the dual, dynamical, black hole description of this problem, and its relation to the fluid/gravity correspondence.

  7. The holographic dual of a Riemann problem in a large number of dimensions

    Science.gov (United States)

    Herzog, Christopher P.; Spillane, Michael; Yarom, Amos

    2016-08-01

    We study properties of a non equilibrium steady state generated when two heat baths are initially in contact with one another. The dynamics of the system we study are governed by holographic duality in a large number of dimensions. We discuss the "phase diagram" associated with the steady state, the dual, dynamical, black hole description of this problem, and its relation to the fluid/gravity correspondence.

  8. Marcinkiewicz's strong law of large numbers for non-additive expectation

    OpenAIRE

    Zhang, Lixin; Lin, Jinghang

    2017-01-01

    The sub-linear expectation space is a nonlinear expectation space that has the advantage of modelling uncertainty in probability and distribution. In the sub-linear expectation space, capacity and sub-linear expectation replace the probability and expectation of classical probability theory. In this paper, the method of selecting subsequences is used to prove a Marcinkiewicz-type strong law of large numbers in the sub-linear expectation space. This result is a natural extension of the classi...

  9. Periodic response of nonlinear dynamical system with large number of degrees of freedom

    Indian Academy of Sciences (India)

    B P Patel; S M Ibrahim; Y Nath

    2009-12-01

    In this paper, a methodology based on a shooting technique and Newmark's time integration scheme is proposed for predicting the periodic responses of nonlinear systems directly from the solution of the second-order equations of motion, without transforming them into twice as many first-order equations. The proposed methodology is quite suitable for systems with a large number of degrees of freedom, such as the banded systems of equations arising from finite element discretization.
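
    As a minimal sketch of these two ingredients (not the authors' formulation, and for a single degree of freedom rather than a large finite element system; all parameters are illustrative), the following Python code advances a second-order equation of motion with the Newmark average-acceleration scheme and evaluates the periodicity residual that a shooting method would drive to zero, for example by Newton iteration on the initial state.

        import numpy as np

        def newmark_linear(m, c, k, f, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
            """Newmark (average acceleration) integration of m*u'' + c*u' + k*u = f(t)."""
            u, v = u0, v0
            a = (f(0.0) - c * v - k * u) / m
            keff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
            for n in range(n_steps):
                t1 = (n + 1) * dt
                rhs = (f(t1)
                       + m * (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
                       + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                              + dt * (0.5 * gamma / beta - 1.0) * a))
                u_new = rhs / keff
                a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
                v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
                u, v, a = u_new, v_new, a_new
            return u, v

        def shooting_residual(x, m=1.0, c=0.05, k=1.0, omega=1.2, amp=0.3, n_steps=2000):
            """Residual of the periodicity condition: state after one forcing period minus initial state.
            A shooting method drives this residual to zero."""
            T = 2.0 * np.pi / omega
            f = lambda t: amp * np.cos(omega * t)
            uT, vT = newmark_linear(m, c, k, f, x[0], x[1], T / n_steps, n_steps)
            return np.array([uT - x[0], vT - x[1]])

        print(shooting_residual(np.array([0.0, 0.0])))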

  10. Stimulated excitation of resonant Cherenkov radiation at a large number of neighbouring waveguide modes

    CERN Document Server

    Grigoryan, L Sh; Khachatryan, H F; Grigoryan, M L

    2012-01-01

    The resonance Cherenkov radiation generated from a train of equally-spaced unidimensional electron bunches travelling along the axis of a hollow channel inside an infinite cylindrical waveguide filled with (weakly dispersing) transparent dielectric has been investigated. It was shown that its excitation might be stimulated at a large number of neighboring modes of the waveguide. A visual explanation of this effect is given and the possibility of its observation in the range of terahertz radiation is discussed.

  11. Space experimental device on Marangoni drop migrations of large Reynolds numbers

    Institute of Scientific and Technical Information of China (English)

    张璞; 胡良; 刘方; 姚永龙; 解京昌; 林海; 胡文瑞

    2001-01-01

    The space experimental device for testing Marangoni drop migrations is discussed in the present paper. The experiment is one of the spaceship projects of China. In comparison with similar devices, it has the ability to complete all the scientific experiments by both automatic control and telescience methods. It can not only perform drop migration experiments at large Reynolds numbers but also has an equal-thickness interference system.

  12. The holographic dual of a Riemann problem in a large number of dimensions

    OpenAIRE

    Herzog, Christopher; Spillane, Michael; Yarom, Amos

    2016-01-01

    We study properties of a non equilibrium steady state generated when two heat baths are initially in contact with one another. The dynamics of the system we study are governed by holographic duality in a large number of dimensions. We discuss the "phase diagram" associated with the steady state; the dual, dynamical, black hole description of this problem; and its relation to the fluid/gravity correspondence.

  13. An autostereoscopic display with high resolution and large number of view zones

    Science.gov (United States)

    Chen, Wu-Li; Hsu, Wei-Liang; Tsai, Chao-Hsu; Wang, Chy-Lin; Wu, Chang-Shuo; Yang, Jinn-Cherng; Cheng, Shu-Chuan

    2008-02-01

    For a spatial-multiplexed 3D display, a trade-off between resolution and the number of view-zones is usually unavoidable due to the limited number of pixels on the screen. In this paper, we present a new autostereoscopic system, named the "integrated-screen system," to substantially increase the total number of pixels on the screen, which in turn increases both the resolution and the number of view-zones. In the integrated-screen system, a large number of mini-projectors are arrayed and the images are tiled together without seams in between. For displaying 3D images, a lenticular screen with a predesigned tilt angle is used for distributing the different viewing zones. In order to achieve good performance, we designed a brand-new projector with a special lens set to meet the low-distortion requirement, because distortion of the image will induce serious crosstalk between view-zones. The proposed system has two advantages. One is the extensibility of the screen size. The size of the display can be chosen based on the application we deal with, including the size of the projected pixel and the number of viewing zones. The other advantage is that the integrated-screen system provides projected pixels in great density, solving the major problem of poor resolution that a lenticular-type 3D display has.

  14. Gallstone ileus of the sigmoid colon: an extremely rare cause of large bowel obstruction detected by multiplanar CT.

    Science.gov (United States)

    Carlsson, Tarryn; Gandhi, Sanjay

    2015-12-18

    Gallstone ileus of the sigmoid colon is an important, though extremely rare, cause of large bowel obstruction. The gallstone often enters the large bowel through a fistula formation between the gallbladder and colon, and impacts at a point of narrowing, causing large bowel obstruction. We describe the case of an 80-year-old woman who presented with features of bowel obstruction. Multiplanar abdominal CT demonstrated a cholecystocolonic fistula in exquisite detail. The scan also showed obstruction of the colon due to a large gallstone impacted just proximal to a stricture in the sigmoid. Owing to inflammatory adhesions and a stricture from extensive diverticular disease, the gallstone could not be retrieved. This frail and elderly woman was treated with a loop colostomy to relieve bowel obstruction. The patient made an uneventful recovery.

  15. Pyro-Align: Sample-Align based Multiple Alignment system for Pyrosequencing Reads of Large Number

    CERN Document Server

    Saeed, Fahad

    2009-01-01

    Pyro-Align is a multiple alignment program specifically designed for very large numbers of pyrosequencing reads. Multiple sequence alignment is known to be NP-hard, and heuristics are designed for approximate solutions. Multiple sequence alignment of pyrosequencing reads is complex mainly because of two factors. The first is the huge number of reads, which makes the use of traditional heuristics, which scale very poorly with the number of sequences, unsuitable. The second reason is that the alignment cannot be performed arbitrarily, because the position of the reads with respect to the original genome is important and has to be taken into account. In this report we present a short description of the multiple alignment system for pyrosequencing reads.
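
    To illustrate the second point, that the reads' positions relative to the original genome constrain the alignment, here is a hedged sketch (not Pyro-Align itself, and ignoring insertions and deletions): if each read's approximate start coordinate on the reference is known, a crude multiple alignment can be laid out by padding each read to its coordinate instead of comparing all reads against each other.

        def position_anchored_alignment(reads):
            """reads: list of (start_position, sequence) pairs, positions on a common reference.
            Returns gap-padded rows of equal length forming a crude multiple alignment."""
            width = max(start + len(seq) for start, seq in reads)
            rows = []
            for start, seq in sorted(reads):
                rows.append("-" * start + seq + "-" * (width - start - len(seq)))
            return rows

        # Toy pyrosequencing-like reads with hypothetical mapping coordinates.
        reads = [(0, "ACGTACGT"), (4, "ACGTTTGA"), (9, "TGACCA")]
        for row in position_anchored_alignment(reads):
            print(row)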

  16. Large-eddy simulation of high-Schmidt number mass transfer in a turbulent channel flow

    Science.gov (United States)

    Calmet, Isabelle; Magnaudet, Jacques

    1997-02-01

    Mass transfer through the solid boundary of a turbulent channel flow is analyzed by means of large-eddy simulation (LES) for Schmidt numbers Sc=1, 100, and 200. For that purpose the subgrid stresses and fluxes are closed using the Dynamic Mixed Model proposed by Zang et al. [Phys. Fluids A 5, 3186 (1993)]. At each Schmidt number the mass transfer coefficient given by the LES is found to be in very good quantitative agreement with that measured in the experiments. At high Schmidt number this coefficient behaves like Sc-2/3, as predicted by standard theory and observed in most experiments. The main statistical characteristics of the fluctuating concentration field are analyzed in connection with the well-documented statistics of the turbulent motions. It is observed that concentration fluctuations have a significant intensity throughout the channel at Sc=1 while they are negligible out of the wall region at Sc=200. The maximum intensity of these fluctuations depends on both the Schmidt and Reynolds numbers and is especially influenced by the intensity of the velocity fluctuations present in the buffer layer of the concentration field. At Sc=1, strong similarities are observed between the various terms contributing to the turbulent kinetic energy budget and their counterpart in the budget of the variance of concentration fluctuations. At high Schmidt number, the latter budget is much more influenced by the small turbulent structures subsisting in the viscous sublayer. The instantaneous correlation between the spatial characteristics of the concentration field and those of the velocity field is clearly demonstrated by the presence of low- and high-concentration streaks close to the wall. The geometrical characteristics of these structures are found to be highly Sc dependent. In particular their spanwise wavelength is identical to that of the streamwise velocity streaks at Sc=1 while it is reduced by half at Sc=200. Analysis of the co-spectra between concentration and

  17. Large scale dynamics in a turbulent compressible rotor/stator cavity flow at high Reynolds number

    Science.gov (United States)

    Lachize, C.; Verhille, G.; Le Gal, P.

    2016-08-01

    This paper reports an experimental investigation of a turbulent flow confined within a rotor/stator cavity of aspect ratio close to unity at high Reynolds number. The experiments have been driven by changing both the rotation rate of the disk and the thermodynamical properties of the working fluid. This fluid is sulfur hexafluoride (SF6) whose physical properties are adjusted by imposing the operating temperature and the absolute pressure in a pressurized vessel, especially near the critical point of SF6 reached for T_c = 45.58 °C, P_c = 37.55 bar. This original set-up allows to obtain Reynolds numbers as high as 2 × 10^7 together with compressibility effects as the Mach number can reach 0.5. Pressure measurements reveal that the resulting fully turbulent flow shows both a direct and an inverse cascade as observed in rotating turbulence and in accordance with Kraichnan's conjecture for 2D-turbulence. The spectra are however dominated by low-frequency peaks, which are subharmonics of the rotating disk frequency, involving large scale structures at small azimuthal wavenumbers. These modes appear for a Reynolds number around 10^5 and experience a transition at a critical Reynolds number Re_c ≈ 10^6. Moreover they show an unexpected nonlinear behavior that we understand with the help of low-dimensional amplitude equations.

  18. Extreme maximum temperature events and their relationships with large-scale modes: potential hazard on the Iberian Peninsula

    Science.gov (United States)

    Merino, Andrés; Martín, M. L.; Fernández-González, S.; Sánchez, J. L.; Valero, F.

    2017-07-01

    The aim of this paper is to analyze the spatiotemporal distribution of maximum temperatures in the Iberian Peninsula (IP) by using various extreme maximum temperature indices. Thresholds for determining temperature extreme event (TEE) severity are defined using 99th percentiles of daily temperature time series for the period 1948 to 2009. The synoptic-scale fields of such events were analyzed in order to better understand the related atmospheric processes. The results indicate that the regions with a higher risk of extreme maximum temperatures are located in the river valleys of the southwest and northeast of the IP, while the Cantabrian coast and mountain ranges are characterized by lower risk. The TEEs were classified, by means of several synoptic fields (sea level pressure, temperature, and geopotential height at 850 and 500 hPa), into four clusters that largely explain their spatiotemporal distribution on the IP. The results of this study show that TEEs mainly occur in association with a ridge elongated from Subtropical areas. The relationships of TEEs with teleconnection patterns, such as the North Atlantic Oscillation (NAO), Western Mediterranean Oscillation (WeMO), and Mediterranean Oscillation (MO), showed that the interannual variability of extreme maximum temperatures is largely controlled by the dominant phase of the WeMO in all seasons except wintertime, where the NAO is prevailing. Results related to the MO pattern show less relevance in the maximum temperature variability. The correct identification of the synoptic patterns linked with the most extreme temperature events associated with each cluster will assist the prediction of events that can pose a natural hazard, thereby providing useful information for decision making and warning systems.
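
    As a minimal illustration of the thresholding step described above (a sketch only; the station data, seasons and exact index definitions of the study are not reproduced), a temperature extreme event threshold can be taken as the 99th percentile of a daily series and used to flag exceedance days:

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical daily maximum temperatures (°C) for one station, 1948-2009 (~62 years).
        daily_tmax = 20.0 + 10.0 * np.sin(2 * np.pi * np.arange(62 * 365) / 365) + rng.normal(0, 3, 62 * 365)

        threshold = np.percentile(daily_tmax, 99)      # 99th-percentile threshold
        extreme_days = np.flatnonzero(daily_tmax > threshold)

        print(f"threshold = {threshold:.1f} °C, {extreme_days.size} extreme days flagged")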

  19. The genus Drosophila is characterized by a large number of sibling species showing evolutionary significance

    Indian Academy of Sciences (India)

    BASHISTH N. SINGH

    2016-12-01

    Mayr (1942) defined sibling species as sympatric forms which are morphologically very similar or indistinguishable, but which possess specific biological characteristics and are reproductively isolated. Another term, cryptic species, has also been used for such species. However, this concept changed later. Sibling species are as similar as twins. This category does not necessarily include phylogenetic siblings as members of a superspecies. Since the term sibling species was defined by Mayr, a large number of cases of sibling species pairs/groups have been reported and thus they are widespread in the animal kingdom. However, they seem to be more common in some groups such as insects. In insects, they have been reported in diptera, lepidoptera, coleoptera, orthoptera, hymenoptera and others. Sibling species are widespread among the dipteran insects and as such are well studied because some species are important medically (mosquitoes), genetically (Drosophila) and cytologically (Sciara and Chironomus). The well-studied classical pairs of sibling species in Drosophila are: D. pseudoobscura and D. persimilis, and D. melanogaster and D. simulans. Subsequently, a number of sibling species have been added to these pairs and a large number of other sibling species pairs/groups in different species groups of the genus Drosophila have been reported in literature. The present review briefly summarizes the cases of sibling species pairs/groups in the genus Drosophila with their evolutionary significance.

  20. Optimizing of large-number-patterns string matching algorithms based on definite-state automata

    Institute of Scientific and Technical Information of China (English)

    CHEN Xun-xun; FANG Bin-xing

    2007-01-01

    Because of the small cache size of computers, the scanning speed of DFA-based multi-pattern string matching algorithms slows down rapidly, especially when the number of patterns is very large. To solve this problem, we reduce the scanning time of such DFA-based algorithms by rearranging the state table and shrinking the DFA alphabet size. Both methods decrease the probability of large-scale random memory accesses and increase the probability of contiguous memory accesses. The hit rate of the cache is thereby increased and the searching time on the DFA is reduced. Shrinking the alphabet size of the DFA also reduces the storage complexity. The AC++ algorithm, obtained by optimizing the Aho-Corasick (AC) algorithm with these methods, confirms the theoretical analysis. The experimental results show that the scanning time and storage occupied by AC++ are better than those of AC in most cases, and the improvement is most pronounced when the number of patterns is very large. Because the DFA is a widely used building block in many string matching algorithms, such as DAWG and SBOM, the optimization method discussed is significant in practice.
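
    For context, the baseline that AC++ optimizes is the classic Aho-Corasick automaton. The Python sketch below is the textbook construction (goto, failure and output tables plus a scan), not the AC++ variant with its cache-oriented state-table rearrangement and alphabet shrinking:

        from collections import deque

        def build_aho_corasick(patterns):
            """Build the goto, fail and output tables of the Aho-Corasick automaton."""
            goto = [{}]            # goto[state][char] -> next state
            output = [set()]       # set of patterns recognised at each state
            for pat in patterns:
                state = 0
                for ch in pat:
                    if ch not in goto[state]:
                        goto.append({})
                        output.append(set())
                        goto[state][ch] = len(goto) - 1
                    state = goto[state][ch]
                output[state].add(pat)
            fail = [0] * len(goto)
            queue = deque(goto[0].values())
            while queue:                      # BFS to fill failure links
                s = queue.popleft()
                for ch, t in goto[s].items():
                    queue.append(t)
                    f = fail[s]
                    while f and ch not in goto[f]:
                        f = fail[f]
                    fail[t] = goto[f][ch] if ch in goto[f] else 0
                    output[t] |= output[fail[t]]
            return goto, fail, output

        def search(text, goto, fail, output):
            """Yield (end_position, pattern) for every pattern occurrence in text."""
            state = 0
            for i, ch in enumerate(text):
                while state and ch not in goto[state]:
                    state = fail[state]
                state = goto[state].get(ch, 0)
                for pat in output[state]:
                    yield i, pat

        patterns = ["he", "she", "his", "hers"]
        tables = build_aho_corasick(patterns)
        print(list(search("ushers", *tables)))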

  1. Large area UV SiPMs with extremely low cross-talk

    Science.gov (United States)

    Dolgoshein, B.; Mirzoyan, R.; Popova, E.; Buzhan, P.; Ilyin, A.; Kaplin, V.; Stifutkin, A.; Teshima, M.; Zhukov, A.

    2012-12-01

    For about ten years the collaboration between MEPhI and the Max Planck Institute for Physics in Munich has been developing SiPMs for the MAGIC and EUSO astro-particle physics experiments. The aim was to develop UV-sensitive sensors of very high photon detection efficiency, substantially exceeding that of classical photomultiplier tubes. For very high photon detection efficiency one needs to operate the SiPM at the highest Geiger efficiency, i.e., to apply a high over-voltage. This means operating the SiPM at high gain, which in turn produces very high cross-talk. For suppressing the latter adverse effect we used isolating trenches and a second p-n junction, as well as special implantation profiles and layers. We produced UV-sensitive SiPMs of sizes 1 mm×1 mm and 3 mm×3 mm showing a peak photon detection efficiency in the range of 50-60% at a cross-talk level of only 3-5%. One of the outstanding features of the new SiPMs is their extremely low gain sensitivity to temperature variations, amounting to 0.5%/°C. Below we report on the new SiPMs.

  2. Floating sphere telescope: a new design for a 40-m Extremely Large Telescope

    Science.gov (United States)

    Marchiori, Gianpietro; Rampini, Francesco

    2006-06-01

    This paper reports the results of the Preliminary Design Phase of the Floating Sphere Telescope, presented at AOMATT in Xi'an, China, in November 2005. The FST represents a new design for the realization of an ELT with a 40-metre primary mirror. The innovative concept of the structure and its sub-systems, as well as the use of new materials and technologies, makes it possible to obtain an instrument able to comply with the very demanding specifications of structures such as ELTs. The design improves the stiffness-to-weight ratio of the structure and introduces higher damping while keeping construction and maintenance costs under control. In comparison with the previous study, the following steps have been implemented: • Refining and optimizing the structural design and the FEA model; in particular, we have included a realistic model of the constraint provided by the fluid used for flotation, characterizing its viscous and elastic properties in order to estimate the additional modal damping introduced by the flotation as a function of fluid properties and geometry. • Designing (and introducing in the FEA model) various types of drives such as friction drives, tensioned ropes in a "hexapod" configuration, "gravity" drives (moving ballast) and combinations of them to evaluate potential tracking performance. • Designing the necessary connections for various types of utilities (power, data, cooling). • Including in the structural design a more elaborate optical design to satisfy specific science requirements (e.g. multiconjugate AO).

  3. A PROCESS FOR SOLVING A FEW EXTREME EIGENPAIRS OF LARGE SPARSE POSITIVE DEFINITE GENERALIZED EIGENVALUE PROBLEM

    Institute of Scientific and Technical Information of China (English)

    Chong-hua Yu; O. Axelsson

    2000-01-01

    In this paper, an algorithm for computing some of the largest (smallest) generalized eigenvalues with corresponding eigenvectors of a sparse symmetric positive definite matrix pencil is presented. The algorithm uses an iteration function and an inverse power iteration process to get the largest one first, then executes m-1 Lanczos-like steps to get initial approximations of the next m-1 ones, without computing any Ritz pair, after which a procedure combining Rayleigh quotient iteration with shifted inverse power iteration is used to obtain more accurate eigenvalues and eigenvectors. This algorithm keeps the advantages of preserving the sparsity of the original matrices, as in the Lanczos method and RQI, converges at a higher rate than the method described in [12], and provides a simple technique to compute initial approximate pairs which are guaranteed to converge to the wanted m largest eigenpairs using RQI. In addition, it avoids some of the disadvantages of Lanczos and RQI for solving extreme eigenproblems. When symmetric positive definite linear systems must be solved in the process, an algebraic multilevel iteration method (AMLI) is applied. The algorithm is fully parallelizable.
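
    The Rayleigh-quotient/shifted-inverse-power ingredient of such algorithms can be sketched as follows in Python for a small dense generalized eigenproblem A x = λ B x (this is not the paper's parallel sparse algorithm: the Lanczos-like start-up and the AMLI solver are omitted, and dense solves stand in for the inner linear systems):

        import numpy as np

        def rayleigh_quotient_iteration(A, B, x0, iters=20):
            """Minimal Rayleigh-quotient iteration for the generalized problem A x = lam B x."""
            x = x0 / np.sqrt(x0 @ (B @ x0))
            for _ in range(iters):
                lam = (x @ (A @ x)) / (x @ (B @ x))          # Rayleigh quotient
                try:
                    y = np.linalg.solve(A - lam * B, B @ x)  # shifted inverse iteration step
                except np.linalg.LinAlgError:
                    break                                    # shift hit an exact eigenvalue
                x = y / np.sqrt(y @ (B @ y))                 # B-normalize
            return lam, x

        # Toy symmetric positive definite pencil.
        rng = np.random.default_rng(0)
        M = rng.normal(size=(6, 6))
        A = M @ M.T + 6 * np.eye(6)
        B = np.eye(6) + 0.1 * np.diag(np.arange(6))

        lam, x = rayleigh_quotient_iteration(A, B, rng.normal(size=6))
        print("converged eigenvalue:", lam)
        print("residual norm:", np.linalg.norm(A @ x - lam * B @ x))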

  4. Combining large model ensembles with extreme value statistics to improve attribution statements of rare events

    Directory of Open Access Journals (Sweden)

    Sebastian Sippel

    2015-09-01

    In conclusion, our study shows that EVT and empirical estimates based on numerical simulations can indeed be used to productively inform each other, for instance to derive appropriate EVT parameters for short observational time series. Further, the combination of ensemble simulations with EVT allows us to significantly reduce the number of simulations needed for statements about the tails.
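
    As a small, self-contained illustration of combining ensemble output with extreme value theory (the data below are synthetic and the fitting choices are not those of the study), one can fit a generalized extreme value distribution to a sample of seasonal maxima with SciPy and read off a return level:

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(0)
        # Synthetic "ensemble" of seasonal-maximum temperatures (°C), standing in for model output.
        seasonal_maxima = rng.gumbel(loc=30.0, scale=2.0, size=500)

        # Fit a GEV distribution (SciPy's shape convention: c = -xi).
        c, loc, scale = genextreme.fit(seasonal_maxima)

        # 100-season return level: the value exceeded with probability 1/100 per season.
        return_level_100 = genextreme.ppf(1.0 - 1.0 / 100.0, c, loc=loc, scale=scale)
        print(f"fitted shape={c:.3f}, loc={loc:.2f}, scale={scale:.2f}")
        print(f"estimated 100-season return level: {return_level_100:.1f} °C")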

  5. Analogue algorithm for parallel factorization of an exponential number of large integers: I. Theoretical description

    Science.gov (United States)

    Tamma, Vincenzo

    2016-12-01

    We describe a novel analogue algorithm that allows the simultaneous factorization of an exponential number of large integers with a polynomial number of experimental runs. It is the interference-induced periodicity of "factoring" interferograms measured at the output of an analogue computer that allows the selection of the factors of each integer. At the present stage, the algorithm manifests an exponential scaling which may be overcome by an extension of this method to correlated qubits emerging from n-order quantum correlations measurements. We describe the conditions for a generic physical system to compute such an analogue algorithm. A particular example given by an "optical computer" based on optical interference will be addressed in the second paper of this series (Tamma in Quantum Inf Process 11128:1189, 2015).
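
    One well-known way to realize such interference-induced periodicity, offered here only as a generic, hedged illustration and not necessarily the specific interferometric scheme of the paper, is the truncated Gauss sum test: the summed signal has unit magnitude exactly when the trial divisor divides the integer.

        import numpy as np

        def truncated_gauss_sum(N, trial_factor, M=20):
            """|signal| equals 1 when trial_factor divides N and is typically much smaller otherwise;
            this periodicity is what an interferometric output can encode."""
            m = np.arange(M + 1)
            return abs(np.exp(2j * np.pi * m**2 * N / trial_factor).mean())

        N = 993          # = 3 * 331, a small example integer
        for l in range(2, 40):
            signal = truncated_gauss_sum(N, l)
            if signal > 0.99:
                print(f"trial factor {l}: signal {signal:.3f}  -> divides {N}")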

  6. Large dynamic range operation of ultra-higher number MWFLs affected by MZI-SI

    Science.gov (United States)

    Narimah Aziz, Siti; Arsad, Norhana; Ashrif Abu Bakar, Ahmad; Sushita Menon, P.; Shaari, Sahbudin

    2016-11-01

    This report presents the large-dynamic-range operation of a large number of laser lines that have been periodically filtered using an MZI-SI filter effect. A 70 nm span range for a wide periodic comb filter with 0.60 nm wavelength spacing was achieved through an advanced triple-loop ring-cavity fiber laser combined with an MZI-SI. Almost all of the 95 wavelengths are flattened to within 6 dB of peak power fluctuation over a 52 nm range. By adjusting the rotation angles of a polarization controller (PC), the ultra-wide-range multiwavelength spectrum has been shifted by 36 nm within the range 1522.8-1558.6 nm.

  7. Estimating the effective Reynolds number in implicit large-eddy simulation.

    Science.gov (United States)

    Zhou, Ye; Grinstein, Fernando F; Wachtor, Adam J; Haines, Brian M

    2014-01-01

    In implicit large-eddy simulation (ILES), energy-containing large scales are resolved, and physics-capturing numerics are used to spatially filter out unresolved scales and to implicitly model subgrid-scale effects. From an applied perspective, it is highly desirable to estimate a characteristic Reynolds number (Re), and therefore a relevant effective viscosity, so that the impact of resolution on predicted flow quantities and their macroscopic convergence can usefully be characterized. We argue in favor of obtaining robust Re estimates away from the smallest scales of the simulated flow, where numerically controlled dissipation takes place, and we propose a theoretical basis and framework to determine such measures. ILES examples include forced turbulence as a steady flow case, the Taylor-Green vortex to address transition and decaying turbulence, and simulations of a laser-driven reshock experiment illustrating a fairly complex turbulence problem of current practical interest.

  8. Extremely large bandwidth and ultralow-dispersion slow light in photonic crystal waveguides with magnetically controllability

    DEFF Research Database (Denmark)

    Pu, Shengli; Wang, Haotian; Wang, Ning;

    2013-01-01

    A line-defect waveguide within a two-dimensional magnetic-fluid-based photonic crystal with a 45°-rotated square lattice is presented to have excellent slow light properties. The bandwidth centered at λ_0 = 1550 nm of our designed W1 waveguide is around 66 nm, which is very large tha...

  9. DISCOVERY OF MASSIVE, MOSTLY STAR FORMATION QUENCHED GALAXIES WITH EXTREMELY LARGE Lyα EQUIVALENT WIDTHS AT z ∼ 3

    Energy Technology Data Exchange (ETDEWEB)

    Taniguchi, Yoshiaki; Kajisawa, Masaru; Kobayashi, Masakazu A. R.; Nagao, Tohru; Shioya, Yasuhiro [Research Center for Space and Cosmic Evolution, Ehime University, Bunkyo-cho, Matsuyama 790-8577 (Japan); Scoville, Nick Z.; Capak, Peter L. [Department of Astronomy, California Institute of Technology, MS 105-24, Pasadena, CA 91125 (United States); Sanders, David B. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Koekemoer, Anton M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Toft, Sune [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Mariesvej 30, DK-2100 Copenhagen (Denmark); McCracken, Henry J. [Institut d’Astrophysique de Paris, UMR7095 CNRS, Université Pierre et Marie Curie, 98 bis Boulevard Arago, F-75014 Paris (France); Le Fèvre, Olivier; Tasca, Lidia; Ilbert, Olivier [Aix Marseille Université, CNRS, LAM (Laboratoire d’Astrophysique de Marseille), UMR 7326, F-13388 Marseille (France); Sheth, Kartik [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States); Renzini, Alvio [Dipartimento di Astronomia, Universita di Padova, vicolo dell’Osservatorio 2, I-35122 Padua (Italy); Lilly, Simon; Carollo, Marcella; Kovač, Katarina [Department of Physics, ETH Zurich, 8093 Zurich (Switzerland); Schinnerer, Eva, E-mail: tani@cosmos.phys.sci.ehime-u.ac.jp [MPI for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); and others

    2015-08-10

    We report a discovery of six massive galaxies with both extremely large Lyα equivalent widths (EWs) and evolved stellar populations at z ∼ 3. These MAssive Extremely STrong Lyα emitting Objects (MAESTLOs) have been discovered in our large-volume systematic survey for strong Lyα emitters (LAEs) with 12 optical intermediate-band data taken with Subaru/Suprime-Cam in the COSMOS field. Based on the spectral energy distribution fitting analysis for these LAEs, it is found that these MAESTLOs have (1) large rest-frame EWs of EW_0(Lyα) ∼ 100–300 Å, (2) M_⋆ ∼ 10^10.5–10^11.1 M_⊙, and (3) relatively low specific star formation rates of SFR/M_⋆ ∼ 0.03–1 Gyr^−1. Three of the six MAESTLOs have extended Lyα emission with a radius of several kiloparsecs, although they show very compact morphology in the HST/ACS images, which correspond to the rest-frame UV continuum. Since the MAESTLOs do not show any evidence for active galactic nuclei, the observed extended Lyα emission is likely to be caused by a star formation process including the superwind activity. We suggest that this new class of LAEs, MAESTLOs, provides a missing link from star-forming to passively evolving galaxies at the peak era of the cosmic star formation history.

  10. Discovery of Massive, Mostly Star-formation Quenched Galaxies with Extremely Large Lyman-alpha Equivalent Widths at z ~ 3

    CERN Document Server

    Taniguchi, Yoshiaki; Kobayashi, Masakazu A R; Nagao, Tohru; Shioya, Yasuhiro; Scoville, Nick Z; Sanders, David B; Capak, Peter L; Koekemoer, Anton M; Toft, Sune; McCracken, Henry J; Fevre, Olivier Le; Tasca, Lidia; Sheth, Kartik; Renzini, Alvio; Lilly, Simon; Carollo, Marcella; Kovac, Katarina; Ilbert, Olivier; Schinnerer, Eva; Fu, Hai; Tresse, Laurence; Griffiths, Richard E; Civano, Francesca

    2015-01-01

    We report a discovery of 6 massive galaxies with both extremely large Lya equivalent width and evolved stellar population at z ~ 3. These MAssive Extremely STrong Lya emitting Objects (MAESTLOs) have been discovered in our large-volume systematic survey for strong Lya emitters (LAEs) with twelve optical intermediate-band data taken with Subaru/Suprime-Cam in the COSMOS field. Based on the SED fitting analysis for these LAEs, it is found that these MAESTLOs have (1) large rest-frame equivalent width of EW_0(Lya) ~ 100--300 A, (2) M_star ~ 10^10.5--10^11.1 M_sun, and (3) relatively low specific star formation rates of SFR/M_star ~ 0.03--1 Gyr^-1. Three of the 6 MAESTLOs have extended Lya emission with a radius of several kpc although they show very compact morphology in the HST/ACS images, which correspond to the rest-frame UV continuum. Since the MAESTLOs do not show any evidence for AGNs, the observed extended Lya emission is likely to be caused by star formation process including the superwind activit...

  11. Observation of an Extremely Large-Density Heliospheric Plasma Sheet Compressed by an Interplanetary Shock at 1 AU

    Science.gov (United States)

    Wu, Chin-Chun; Liou, Kan; Lepping, R. P.; Vourlidas, Angelos; Plunkett, Simon; Socker, Dennis; Wu, S. T.

    2017-08-01

    At 11:46 UT on 9 September 2011, the Wind spacecraft encountered an interplanetary (IP) fast-forward shock. The shock was followed almost immediately by a short-duration (˜ 35 minutes) extremely dense pulse (with a peak ˜ 94 cm^-3). The pulse induced an extremely large positive impulse (SYM-H = 74 nT and Dst = 48 nT) on the ground. A close examination of other in situ parameters from Wind shows that the density pulse was associated with i) a spike in the plasma β (ratio of thermal to magnetic pressure), ii) multiple sign changes in the azimuthal component of the magnetic field (B_φ), iii) a depressed magnetic field magnitude, iv) a small radial component of the magnetic field, and v) a large (> 90°) change in the suprathermal (˜ 255 eV) electron pitch angle across the density pulse. We conclude that the density pulse is associated with the heliospheric plasma sheet (HPS). The thickness of the HPS is estimated to be ˜ 8.2 × 10^5 km. The HPS density peak is about five times the value of a medium-sized density peak inside the HPS (˜ 18 cm^-3) at 1 AU. Our global three-dimensional magnetohydrodynamic simulation results (Wu et al. in J. Geophys. Res. 212, 1839, 2016) suggest that the extremely large density pulse may be the result of the compression of the HPS by an IP shock crossing or an interaction between an interplanetary shock and a corotating interaction region.

  12. Law of large numbers for non-elliptic random walks in dynamic random environments

    CERN Document Server

    Hollander, Frank den; Sidoravicius, Vladas

    2011-01-01

    We prove a law of large numbers for a class of $\Z^d$-valued random walks in dynamic random environments, including non-elliptic examples. We assume that the random environment has a mixing property called conditional cone-mixing and that the random walk tends to stay inside space-time cones. The proof is based on a generalization of the regeneration scheme developed by Comets and Zeitouni for static random environments, which was adapted by Avena, den Hollander and Redig to dynamic random environments. We exhibit some one-dimensional examples to which our result applies. In some cases, the sign of the speed can be determined.

  13. Impact factors for Reggeon-gluon transition in N = 4 SYM with large number of colours

    CERN Document Server

    Fadin, Victor S

    2014-01-01

    We calculate impact factors for Reggeon-gluon transition in supersymmetric Yang-Mills theory with four supercharges at large number of colours Nc. In the next-to-leading order impact factors are not uniquely defined and must accord with BFKL kernels and energy scales. We obtain the impact factor corresponding to the kernel and the energy evolution parameter, which is invariant under Moebius transformation in momentum space, and show that it is also Moebius invariant up to terms taken into account in the BDS ansatz.

  14. Impact factors for Reggeon-gluon transition in N=4 SYM with large number of colours

    Energy Technology Data Exchange (ETDEWEB)

    Fadin, V.S., E-mail: fadin@inp.nsk.su [Budker Institute of Nuclear Physics of SD RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation); Fiore, R., E-mail: roberto.fiore@cs.infn.it [Dipartimento di Fisica, Università della Calabria, and Istituto Nazionale di Fisica Nucleare, Gruppo collegato di Cosenza, Arcavacata di Rende, I-87036 Cosenza (Italy)

    2014-06-27

    We calculate impact factors for Reggeon-gluon transition in supersymmetric Yang–Mills theory with four supercharges at large number of colours N_c. In the next-to-leading order impact factors are not uniquely defined and must accord with BFKL kernels and energy scales. We obtain the impact factor corresponding to the kernel and the energy evolution parameter, which is invariant under Möbius transformation in momentum space, and show that it is also Möbius invariant up to terms taken into account in the BDS ansatz.

  15. Exponential inequalities for associated random variables and strong laws of large numbers

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Some exponential inequalities for partial sums of associated random variables are established. These inequalities improve the corresponding results obtained by Ioannides and Roussas (1999) and Oliveira (2005). As an application, some strong laws of large numbers are given. For the case of geometrically decreasing covariances, we obtain the rate of convergence n^{-1/2}(log log n)^{1/2}(log n), which is close to the optimal achievable convergence rate for independent random variables under an iterated logarithm, whereas Ioannides and Roussas (1999) and Oliveira (2005) only obtained n^{-1/3}(log n)^{2/3} and n^{-1/3}(log n)^{5/3}, respectively.

  16. Exponential inequalities for associated random variables and strong laws of large numbers

    Institute of Scientific and Technical Information of China (English)

    Shan-chao YANG; Min CHEN

    2007-01-01

    Some exponential inequalities for partial sums of associated random variables are established. These inequalities improve the corresponding results obtained by Ioannides and Roussas (1999) and Oliveira (2005). As an application, some strong laws of large numbers are given. For the case of geometrically decreasing covariances, we obtain the rate of convergence n^{-1/2}(log log n)^{1/2}(log n), which is close to the optimal achievable convergence rate for independent random variables under an iterated logarithm, whereas Ioannides and Roussas (1999) and Oliveira (2005) only obtained n^{-1/3}(log n)^{2/3} and n^{-1/3}(log n)^{5/3}, respectively.

  17. A note on strong law of large numbers of random variables

    Institute of Scientific and Technical Information of China (English)

    LIN Zheng-yan; SHEN Xin-mei

    2006-01-01

    In this paper, Chung's strong law of large numbers is generalized to random variables that need not satisfy the condition of independence, while the sequence of Borel functions verifies conditions weaker than those in Chung's theorem. Some convergence theorems for martingale difference sequences, such as Lp martingale difference sequences, are particular cases of the results achieved in this paper. Finally, the convergence theorem for A-summability of a sequence of random variables is proved, where A is a suitable real infinite matrix.

  18. On the Linkage between the Extreme Drought and Pluvial Patterns in China and the Large-Scale Atmospheric Circulation

    Directory of Open Access Journals (Sweden)

    Zengxin Zhang

    2016-01-01

    Full Text Available China is a nation that is affected by a multitude of natural disasters, including droughts and floods. In this paper, the variations of extreme drought and pluvial patterns and their relations to the large-scale atmospheric circulation have been analyzed based on monthly precipitation data from 483 stations during the period 1958–2010 in China. The results show the following: (1) the extreme drought and pluvial events in China increase significantly during that period. During the 1959–1966 timeframe, more droughts occur in South China and more pluvial events are found in North China (DSC-PNC pattern); for the period 1997–2003 (PSC-DNC pattern), the situation is the opposite. (2) There are good relationships among the extreme drought and pluvial events and the Western Pacific Subtropical High, meridional atmospheric moisture flux, atmospheric moisture content, and summer precipitation. (3) A cyclonic atmospheric circulation anomaly occurs in North China, followed by an obvious negative height anomaly and a southerly wind anomaly at 850 hPa and 500 hPa for the DSC-PNC pattern during the summer, and a massive ascending airflow from South China extends to North China at ~50°N. For the PSC-DNC pattern, the situation contrasts sharply with the DSC-PNC pattern.

  19. Global and Regional Variations in Mean Temperature and Warm Extremes in Large-Member Historical AGCM Simulation

    Science.gov (United States)

    Kamae, Y.; Shiogama, H.; Imada, Y.; Mori, M.; Arakawa, O.; Mizuta, R.; Yoshida, K.; Ishii, M.; Watanabe, M.; Kimoto, M.; Ueda, H.

    2015-12-01

    The frequency of heat extremes during the summer season has increased continuously since the late 20th century despite the global warming hiatus. In previous studies, anthropogenic influences, natural variations in sea surface temperature (SST), and internal atmospheric variability have been suggested as factors contributing to the increase in the frequency of warm extremes. Here, 100-member ensemble historical simulations were performed (called the "database for Probabilistic Description of Future climate"; d4PDF) to examine the physical mechanisms responsible for the increasing hot summers and to attribute them to anthropogenic influences or natural climate variability. The 60 km resolution MRI-AGCM ensemble simulations can reproduce historical variations in the mean temperature and warm extremes. Natural SST variability in the Pacific and Atlantic Oceans contributes to the decadal variation in the frequency of hot summers in the Northern Hemisphere middle latitudes. For example, the surface temperature over western North America, including California, is largely influenced by the anomalous atmospheric circulation pattern associated with Pacific SST variability. Future projections based on anomalous SST patterns derived from coupled climate model simulations will also be introduced.

  20. Variability of rRNA Operon Copy Number and Growth Rate Dynamics of Bacillus Isolated from an Extremely Oligotrophic Aquatic Ecosystem.

    Science.gov (United States)

    Valdivia-Anistro, Jorge A; Eguiarte-Fruns, Luis E; Delgado-Sapién, Gabriela; Márquez-Zacarías, Pedro; Gasca-Pineda, Jaime; Learned, Jennifer; Elser, James J; Olmedo-Alvarez, Gabriela; Souza, Valeria

    2015-01-01

    The ribosomal RNA (rrn) operon is a key suite of genes related to the production of protein synthesis machinery and thus to bacterial growth physiology. Experimental evidence has suggested an intrinsic relationship between the number of copies of this operon and environmental resource availability, especially the availability of phosphorus (P), because bacteria that live in oligotrophic ecosystems usually have few rrn operons and a slow growth rate. The Cuatro Ciénegas Basin (CCB) is a complex aquatic ecosystem that contains an unusually high microbial diversity that is able to persist under highly oligotrophic conditions. These environmental conditions impose a variety of strong selective pressures that shape the genome dynamics of their inhabitants. The genus Bacillus is one of the most abundant cultivable bacterial groups in the CCB and usually possesses a relatively large number of rrn operon copies (6-15 copies). The main goal of this study was to analyze the variation in the number of rrn operon copies of Bacillus in the CCB and to assess their growth-related properties as well as their stoichiometric balance (N and P content). We defined 18 phylogenetic groups within the Bacilli clade and documented a range of six to 14 copies of the rrn operon. The growth dynamics of these Bacilli were heterogeneous and did not show a direct relation to the number of operon copies. Physiologically, our results were not consistent with the Growth Rate Hypothesis, since the copies of the rrn operon were decoupled from growth rate. However, we speculate that the diversity of the growth properties of these Bacilli, as well as the low P content of their cells across a wide range of rrn copy numbers, is an adaptive response to the oligotrophy of the CCB and could represent an ecological mechanism that allows these taxa to coexist. These findings increase the knowledge of the variability in the number of copies of the rrn operon in the genus Bacillus and give insights about the

  1. Rain Characteristics and Large-Scale Environments of Precipitation Objects with Extreme Rain Volumes from TRMM Observations

    Science.gov (United States)

    Zhou, Yaping; Lau, William K M.; Liu, Chuntao

    2013-01-01

    This study adopts a "precipitation object" approach by using 14 years of Tropical Rainfall Measuring Mission (TRMM) Precipitation Feature (PF) and National Centers for Environmental Prediction (NCEP) reanalysis data to study rainfall structure and environmental factors associated with extreme heavy rain events. Characteristics of instantaneous extreme volumetric PFs are examined and compared to those of intermediate and small systems. It is found that instantaneous PFs exhibit a much wider scale range compared to the daily gridded precipitation accumulation range. The top 1% of the rainiest PFs contribute over 55% of total rainfall and have rain volumes two orders of magnitude greater than those of the median PFs. We find a threshold near the top 10% beyond which the PFs grow exponentially into larger, deeper, and colder rain systems. NCEP reanalyses show that midlevel relative humidity and total precipitable water increase steadily with increasingly larger PFs, along with a rapid increase of 500 hPa upward vertical velocity beyond the top 10%. This provides the necessary moisture convergence to amplify and sustain the extreme events. The rapid increase in vertical motion is associated with the release of convective available potential energy (CAPE) in mature systems, as is evident in the increase in CAPE of PFs up to the top 10% and the subsequent dropoff. The study illustrates distinct stages in the development of an extreme rainfall event including: (1) a systematic buildup in large-scale temperature and moisture, (2) a rapid change in rain structure, (3) explosive growth of the PF size, and (4) a release of CAPE before the demise of the event.

  2. Spatial variation in particle number size distributions in a large metropolitan area

    Directory of Open Access Journals (Sweden)

    J. F. Mejía

    2007-11-01

    Full Text Available Air quality studies have indicated that particle number size distribution (NSD) is unevenly spread in urban air. To date, these studies have focussed on differences in concentration levels between sampling locations rather than differences in the underlying geometries of the distributions. As a result, the existing information on the spatial variation of the NSD in urban areas remains incomplete. To investigate this variation in a large metropolitan area in the southern hemisphere, NSD data collected at nine different locations during different campaigns of varying duration were compared using statistical methods. The spectra were analysed in terms of their modal structures (the graphical representation of the number size distribution function), cumulative distribution and number median diameter (NMD). The study found that, with the exception of one site, all distributions were bimodal or suggestive of bimodality. In general, peak concentrations were below 30 nm and NMDs below 50 nm, except at a site dominated by diesel trucks, where they shifted to around 50 and 60 nm, respectively. Ultrafine particles (UFPs) contributed 82–90% of the particle number, and nanoparticles (<50 nm) around 60–70%, except at the diesel traffic site, where their contribution dropped to 50%. Statistical analyses found that the modal structures were heterogeneously distributed throughout Brisbane, whereas this was not always the case for the NMD. The discussion led to the following site classification: (1) urban sites dominated by petrol traffic, (2) urban sites affected by the proximity to the road and (3) an isolated site dominated by diesel traffic. Comparisons of weekday and weekend data indicated that the distributions were not statistically different. The only exception occurred at one site, where there is a significant drop in the number of diesel buses on the weekend. The differences in sampling period between sites did not affect the results. The statistics instead suggested

  3. Superposition of elliptic functions as solutions for a large number of nonlinear equations

    Energy Technology Data Exchange (ETDEWEB)

    Khare, Avinash [Raja Ramanna Fellow, Indian Institute of Science Education and Research (IISER), Pune 411021 (India); Saxena, Avadh [Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)

    2014-03-15

    For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λϕ^4, the discrete MKdV as well as for several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn^2(x, m), it also admits solutions in terms of dn^2(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m)dn(x, m) is not a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations.
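
    Restated in display form for readability (this only transcribes the two superposition families named in the abstract; any amplitude or parameter rescalings that a specific equation may require are not shown):

        \[
        \mathrm{cn}(x,m),\ \mathrm{dn}(x,m)\ \text{solutions}\ \Longrightarrow\ \mathrm{cn}(x,m)\pm\mathrm{dn}(x,m)\ \text{also a solution},
        \]
        \[
        \mathrm{dn}^{2}(x,m)\ \text{a solution}\ \Longrightarrow\ \mathrm{dn}^{2}(x,m)\pm\sqrt{m}\,\mathrm{cn}(x,m)\,\mathrm{dn}(x,m)\ \text{also a solution}.
        \]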

  4. An Efficient Approach to Obtaining Large Numbers of Distant Supernova Host Galaxy Redshifts

    CERN Document Server

    Lidman, C; Sullivan, M; Myzska, J; Dobbie, P; Glazebrook, K; Mould, J; Astier, P; Balland, C; Betoule, M; Carlberg, R; Conley, A; Fouchez, D; Guy, J; Hardin, D; Hook, I; Howell, D A; Pain, R; Palanque-Delabrouille, N; Perrett, K; Pritchet, C; Regnault, N; Rich, J

    2012-01-01

    We use the wide-field capabilities of the 2dF fibre positioner and the AAOmega spectrograph on the Anglo-Australian Telescope (AAT) to obtain redshifts of galaxies that hosted supernovae during the first three years of the Supernova Legacy Survey (SNLS). With exposure times ranging from 10 to 60 ksec per galaxy, we were able to obtain redshifts for 400 host galaxies in two SNLS fields, thereby substantially increasing the total number of SNLS supernovae with host galaxy redshifts. The median redshift of the galaxies in our sample that hosted photometrically classified Type Ia supernovae (SNe Ia) is 0.77, which is 25% higher than the median redshift of spectroscopically confirmed SNe Ia in the three-year sample of the SNLS. Our results demonstrate that one can use wide-field fibre-fed multi-object spectrographs on 4m telescopes to efficiently obtain redshifts for large numbers of supernova host galaxies over the large areas of sky that will be covered by future high-redshift supernova surveys, such as the Dark...

  5. A top-down model to generate ensembles of runoff from a large number of hillslopes

    Directory of Open Access Journals (Sweden)

    P. R. Furey

    2013-09-01

    Full Text Available We hypothesize that total hillslope water loss for a rainfall–runoff event is inversely related to a function of a lognormal random variable, based on basin- and point-scale observations taken from the 21 km2 Goodwin Creek Experimental Watershed (GCEW in Mississippi, USA. A top-down approach is used to develop a new runoff generation model both to test our physical-statistical hypothesis and to provide a method of generating ensembles of runoff from a large number of hillslopes in a basin. The model is based on the assumption that the probability distributions of a runoff/loss ratio have a space–time rescaling property. We test this assumption using streamflow and rainfall data from GCEW. For over 100 rainfall–runoff events, we find that the spatial probability distributions of a runoff/loss ratio can be rescaled to a new distribution that is common to all events. We interpret random within-event differences in runoff/loss ratios in the model to arise from soil moisture spatial variability. Observations of water loss during events in GCEW support this interpretation. Our model preserves water balance in a mean statistical sense and supports our hypothesis. As an example, we use the model to generate ensembles of runoff at a large number of hillslopes for a rainfall–runoff event in GCEW.

  6. A top-down model to generate ensembles of runoff from a large number of hillslopes

    Science.gov (United States)

    Furey, P. R.; Gupta, V. K.; Troutman, B. M.

    2013-09-01

    We hypothesize that total hillslope water loss for a rainfall-runoff event is inversely related to a function of a lognormal random variable, based on basin- and point-scale observations taken from the 21 km2 Goodwin Creek Experimental Watershed (GCEW) in Mississippi, USA. A top-down approach is used to develop a new runoff generation model both to test our physical-statistical hypothesis and to provide a method of generating ensembles of runoff from a large number of hillslopes in a basin. The model is based on the assumption that the probability distributions of a runoff/loss ratio have a space-time rescaling property. We test this assumption using streamflow and rainfall data from GCEW. For over 100 rainfall-runoff events, we find that the spatial probability distributions of a runoff/loss ratio can be rescaled to a new distribution that is common to all events. We interpret random within-event differences in runoff/loss ratios in the model to arise from soil moisture spatial variability. Observations of water loss during events in GCEW support this interpretation. Our model preserves water balance in a mean statistical sense and supports our hypothesis. As an example, we use the model to generate ensembles of runoff at a large number of hillslopes for a rainfall-runoff event in GCEW.
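
    As a purely illustrative sketch of the ensemble-generation idea above (not the authors' model: the lognormal parameters, the event rainfall depth, and the runoff-ratio mapping below are hypothetical), one can draw one lognormal variate per hillslope and convert it to a runoff fraction:

        import numpy as np

        rng = np.random.default_rng(0)

        n_hillslopes = 10_000        # hillslopes in the basin (hypothetical)
        rainfall_mm = 50.0           # event rainfall depth (hypothetical)
        mu, sigma = 0.0, 0.5         # lognormal parameters (hypothetical)

        # One lognormal variate per hillslope represents within-event spatial
        # variability (e.g., of soil moisture).
        w = rng.lognormal(mean=mu, sigma=sigma, size=n_hillslopes)

        # Toy partitioning: losses fall as w grows, so the runoff ratio
        # increases with w but stays in [0, 1).
        runoff_ratio = w / (1.0 + w)
        runoff_mm = rainfall_mm * runoff_ratio

        print(f"mean runoff: {runoff_mm.mean():.1f} mm")
        print("5-95% range:", np.percentile(runoff_mm, [5, 95]))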

  7. The Number Density Evolution of Extreme Emission Line Galaxies in 3D-HST: Results from a Novel Automated Line Search Technique for Slitless Spectroscopy

    CERN Document Server

    Maseda, Michael V; Rix, Hans-Walter; Momcheva, Ivelina; Brammer, Gabriel B; Franx, Marijn; Lundgren, Britt F; Skelton, Rosalind E; Whitaker, Katherine E

    2016-01-01

    The multiplexing capability of slitless spectroscopy is a powerful asset in creating large spectroscopic datasets, but issues such as spectral confusion make the interpretation of the data challenging. Here we present a new method to search for emission lines in the slitless spectroscopic data from the 3D-HST survey utilizing the Wide-Field Camera 3 on board the Hubble Space Telescope. Using a novel statistical technique, we can detect compact (extended) emission lines at 90% completeness down to fluxes of 1.5 (3.0) times 10^{-17} erg/s/cm^2, close to the noise level of the grism exposures, for objects detected in the deep ancillary photometric data. Unlike previous methods, the Bayesian nature allows for probabilistic line identifications, namely redshift estimates, based on secondary emission line detections and/or photometric redshift priors. As a first application, we measure the comoving number density of Extreme Emission Line Galaxies (restframe [O III] 5007 equivalent widths in excess of 500 Angstroms)...

  8. A photonic-crystal optical antenna for extremely large local-field enhancement.

    Science.gov (United States)

    Chang, Hyun-Joo; Kim, Se-Heon; Lee, Yong-Hee; Kartalov, Emil P; Scherer, Axel

    2010-11-08

    We propose a novel design of an all-dielectric optical antenna based on photonic-band-gap confinement. Specifically, we have engineered the photonic-crystal dipole mode to have a broad spectral response (Q ~ 70) and well-directed vertical radiation by introducing a plane mirror below the cavity. A considerably large local electric-field intensity enhancement of ~4,500 is expected from the proposed design for a normally incident plane wave. Furthermore, an analytic model developed based on coupled-mode theory predicts that the electric-field intensity enhancement can easily be over 100,000 by employing reasonably high-Q (~10,000) resonators.

  9. Methods to produce and safely work with large numbers of Toxoplasma gondii oocysts and bradyzoite cysts.

    Science.gov (United States)

    Fritz, H; Barr, B; Packham, A; Melli, A; Conrad, P A

    2012-01-01

    Two major obstacles to conducting studies with Toxoplasma gondii oocysts are the difficulty in reliably producing large numbers of this life stage and safety concerns because the oocyst is the most environmentally resistant stage of this zoonotic organism. Oocyst production requires oral infection of the definitive feline host with adequate numbers of T. gondii organisms to obtain unsporulated oocysts that are shed in the feces for 3-10 days after infection. Since the most successful and common mode of experimental infection of kittens with T. gondii is by ingestion of bradyzoite tissue cysts, the first step in successful oocyst production is to ensure a high bradyzoite tissue cyst burden in the brains of mice that can be used for the oral inoculum. We compared two methods for producing bradyzoite brain cysts in mice, by infecting them either orally or subcutaneously with oocysts. In both cases, oocysts derived from a low passage T. gondii Type II strain (M4) were used to infect eight-ten week-old Swiss Webster mice. First the number of bradyzoite cysts that were purified from infected mouse brains was compared. Then to evaluate the effect of the route of oocyst inoculation on tissue cyst distribution in mice, a second group of mice was infected with oocysts by one of each route and tissues were examined by histology. In separate experiments, brains from infected mice were used to infect kittens for oocyst production. Greater than 1.3 billion oocysts were isolated from the feces of two infected kittens in the first production and greater than 1.8 billion oocysts from three kittens in the second production. Our results demonstrate that oral delivery of oocysts to mice results in both higher cyst loads in the brain and greater cyst burdens in other tissues examined as compared to those of mice that received the same number of oocysts subcutaneously. The ultimate goal in producing large numbers of oocysts in kittens is to generate adequate amounts of starting material

  10. The power of sensitivity analysis and thoughts on models with large numbers of parameters

    Energy Technology Data Exchange (ETDEWEB)

    Havlacek, William [Los Alamos National Laboratory

    2008-01-01

    The regulatory systems that allow cells to adapt to their environments are exceedingly complex, and although we know a great deal about the intricate mechanistic details of many of these systems, our ability to make accurate predictions about their system-level behaviors is severely limited. We would like to make such predictions for a number of reasons. How can we reverse dysfunctional molecular changes of these systems that cause disease? More generally, how can we harness and direct cellular activities for beneficial purposes? Our ability to make accurate predictions about a system is also a measure of our fundamental understanding of that system. As evidenced by our mastery of technological systems, a useful understanding of a complex system can often be obtained through the development and analysis of a mathematical model, but predictive modeling of cellular regulatory systems, which necessarily relies on quantitative experimentation, is still in its infancy. There is much that we need to learn before modeling for practical applications becomes routine. In particular, we need to address a number of issues surrounding the large number of parameters that are typically found in a model for a cellular regulatory system.

  11. Laws of large numbers and langevin approximations for stochastic neural field equations.

    Science.gov (United States)

    Riedler, Martin G; Buckwar, Evelyn

    2013-01-23

    In this study, we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson-Cowan equation can be obtained as the limit in uniform convergence on compacts in probability for a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity. This result also allows one to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably re-scaled, converges to a centred Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the chemical Langevin equation in the present setting. On a technical level, we apply recently developed laws of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. These theorems are valid for processes taking values in Hilbert spaces, and are thereby able to incorporate spatial structures of the underlying model. Mathematics Subject Classification (2000): 60F05, 60J25, 60J75, 92C20.
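
    For orientation, a commonly quoted deterministic neural field equation of Wilson-Cowan/Amari type (a standard textbook form, not necessarily the exact formulation used in the paper) is

        \[
        \tau\,\partial_t u(x,t) = -u(x,t) + \int_{\Omega} w(x,y)\, f\big(u(y,t)\big)\,\mathrm{d}y ,
        \]

    where u is the population activity, w the connectivity kernel, f a firing-rate nonlinearity, and τ a time constant; the law of large numbers in the paper identifies equations of this type as the macroscopic limit of the microscopic stochastic models.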

  12. Number Density Distribution of Small Particles around a Large Particle: Structural Analysis of a Colloidal Suspension.

    Science.gov (United States)

    Amano, Ken-Ichi; Iwaki, Mitsuhiro; Hashimoto, Kota; Fukami, Kazuhiro; Nishi, Naoya; Takahashi, Ohgi; Sakka, Tetsuo

    2016-10-11

    Some colloidal suspensions contain two types of particles-small and large particles-to improve the lubricating ability, light absorptivity, and so forth. Structural and chemical analyses of such colloidal suspensions are often performed to understand their properties. In a structural analysis study, the observation of the number density distribution of small particles around a large particle (gLS) is difficult because these particles are randomly moving within the colloidal suspension by Brownian motion. We obtain gLS using the data from a line optical tweezer (LOT) that can measure the potential of mean force between two large colloidal particles (ΦLL). We propose a theory that transforms ΦLL into gLS. The transform theory is explained in detail and tested. We demonstrate for the first time that LOT can be used for the structural analysis of a colloidal suspension. LOT combined with the transform theory will facilitate structural analyses of the colloidal suspensions, which is important for both understanding colloidal properties and developing colloidal products.

  13. Extremes of N vicious walkers for large N: application to the directed polymer and KPZ interfaces

    CERN Document Server

    Schehr, Gregory

    2012-01-01

    We compute the joint probability distribution function (jpdf) P_N(M, \\tau_M) of the maximum M and its position \\tau_M for N non-intersecting Brownian excursions, on the unit time interval, in the large N limit. For N \\to \\infty, this jpdf is peaked around M = \\sqrt{2N} and \\tau_M = 1/2, while the typical fluctuations behave for large N like M - \\sqrt{2N} \\propto s N^{-1/6} and \\tau_M - 1/2 \\propto w N^{-1/3} where s and w are correlated random variables. One obtains an explicit expression of the limiting jpdf P(s,w) in terms of the Tracy-Widom distribution for the Gaussian Orthogonal Ensemble (GOE) of Random Matrix Theory and the psi-function for the Hastings-McLeod solution to the Painlev\\'e II equation. Our result yields, up to a rescaling of the random variables s and w, an expression for the jpdf of the maximum and its position for the Airy_2 process minus a parabola. This latter describes the fluctuations in many different physical systems belonging to the Kardar-Parisi-Zhang (KPZ) universality class in ...

  14. Exoplanet Science with the European Extremely Large Telescope. The Case for Visible and Near-IR Spectroscopy at High Resolution

    CERN Document Server

    Udry, S; Bouchy, F; Cameron, A Collier; Henning, T; Mayor, M; Pepe, F; Piskunov, N; Pollacco, D; Queloz, D; Quirrenbach, A; Rauer, H; Rebolo, R; Santos, N C; Snellen, I; Zerbi, F

    2014-01-01

    Exoplanet science is booming. In 20 years our knowledge has expanded considerably, from the first discovery of a Hot Jupiter, to the detection of a large population of Neptunes and super-Earths, to the first steps toward the characterization of exoplanet atmospheres. Between today and 2025, the field will evolve at an even faster pace with the advent of several space-based transit search missions, ground-based spectrographs, high-contrast imaging facilities, and the James Webb Space Telescope. Especially the ESA M-class PLATO mission will be a game changer in the field. From 2024 onwards, PLATO will find transiting terrestrial planets orbiting within the habitable zones of nearby, bright stars. These objects will require the power of Extremely Large Telescopes (ELTs) to be characterized further. The technique of ground-based high-resolution spectroscopy is establishing itself as a crucial pathway to measure chemical composition, atmospheric structure and atmospheric circulation in transiting exoplanets. A hig...

  15. MRI induced second-degree burn in a patient with extremely large uterine leiomyomas: A case report

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chul Min; Kang, Bo Kyeong; Song, Soon Young; Koh, Byung Hee; Choi, Joong Sub; Lee, Won Moo [Hanyang University Medical Center, Hanyang University College of Medicine, Seoul (Korea, Republic of)

    2015-12-15

    Burns and thermal injuries related to magnetic resonance imaging (MRI) are rare. Previous literature indicates that medical devices with cables, cosmetics, and tattoos are known risk factors for burns and thermal injuries. However, there is no report of MRI-related burns in Korea. Herein, we report a case of a deep second-degree burn after MRI in a 38-year-old female patient with multiple uterine leiomyomas, including some that were large and degenerated. On retrospective analysis, the anterior abdominal wall, which protruded because of the large uterine leiomyomas and was in direct contact with the body coil during MRI, was suspected as the cause of injury. Therefore, awareness of MRI-related thermal injury, together with extreme care during MRI, is necessary to prevent this hazard.

  16. The Vascularized Fibular Graft in the Pediatric Upper Extremity: A Durable, Biological Solution to Large Oncologic Defects

    Directory of Open Access Journals (Sweden)

    Nicki Zelenski

    2013-01-01

    Full Text Available Skeletal reconstruction after large tumor resection is challenging. The free vascularized fibular graft (FVFG offers the potential for rapid autograft incorporation as well as growing physeal transfer in pediatric patients. We retrospectively reviewed eleven pediatric patients treated with FVFG reconstructions of the upper extremity after tumor resection. Eight male and three female patients were identified, including four who underwent epiphyseal transfer. All eleven patients retained a functional salvaged limb. Nonunion and graft fracture were the most common complications relating to graft site (27%. Peroneal nerve palsy occurred in 4/11 patients, all of whom received epiphyseal transfer. Patients receiving epiphyseal transplant had a mean annual growth of 1.7 cm/year. Mean graft hypertrophy index increased by more than 10% in all cases. Although a high complication rate may be anticipated, the free vascularized fibula may be used to reconstruct large skeletal defects in the pediatric upper extremity after oncologic resection. Transferring the vascularized physis is a viable option when longitudinal growth is desired.

  17. The OVLA 1.5-m primary as a segment for an Extremely Large Telescope?

    Science.gov (United States)

    Arnold, L.; Lardière, O.; Dejonghe, J.

    The Optical Very Large Array (OVLA) 1.5 m prototype telescope is under construction at Observatoire de Haute Provence. This telescope features a thin active parabolic f/1.7 mirror, weighing 100 kg/m^2 with the active cell. The meniscus-shaped mirror, made of low-cost ordinary window glass, is 24.1 mm thick and supported by 32 actuators, each ensuring both axial and lateral support via a glued triple contact point under the mirror. The active optics system is briefly described, as well as the mirror thermal behaviour and how we plan to correct in situ the related deformations. We discuss the characteristics of this mirror concept (weight, low cost, thermal behaviour, wind buffeting) versus its application to ELT primary mirror active segments.

  18. An overview of techniques for dealing with large numbers of independent variables in epidemiologic studies.

    Science.gov (United States)

    Dohoo, I R; Ducrot, C; Fourichon, C; Donald, A; Hurnik, D

    1997-01-01

    Many studies of health and production problems in livestock involve the simultaneous evaluation of large numbers of risk factors. These analyses may be complicated by a number of problems including: multicollinearity (which arises because many of the risk factors may be related (correlated) to each other), confounding, interaction, problems related to sample size (and hence the power of the study), and the fact that many associations are evaluated from a single dataset. This paper focuses primarily on the problem of multicollinearity and discusses a number of techniques for dealing with this problem. However, some of the techniques discussed may also help to deal with the other problems identified above. The first general approach to dealing with multicollinearity involves reducing the number of independent variables prior to investigating associations with the disease. Techniques to accomplish this include: (1) excluding variables after screening for associations among independent variables; (2) creating indices or scores which combine data from multiple factors into a single variable; (3) creating a smaller set of independent variables through the use of multivariable techniques such as principal components analysis or factor analysis. The second general approach is to use appropriate steps and statistical techniques to investigate associations between the independent variables and the dependent variable. A preliminary screening of these associations may be performed using simple statistical tests. Subsequently, multivariable techniques such as linear or logistic regression or correspondence analysis can be used to identify important associations. The strengths and limitations of these techniques are discussed and the techniques are demonstrated using a dataset from a recent study of risk factors for pneumonia in swine. Emphasis is placed on comparing correspondence analysis with other techniques as it has been used less in the epidemiology literature.
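
    A minimal illustration of one strategy described above (compressing many correlated risk factors into a few principal components before regressing on the outcome), using scikit-learn; the data are simulated and the variable names are hypothetical, not taken from the swine pneumonia study:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(42)

        # Simulate 200 herds with 30 correlated risk factors (hypothetical).
        n_herds, n_factors = 200, 30
        latent = rng.normal(size=(n_herds, 3))          # three underlying drivers
        loadings = rng.normal(size=(3, n_factors))
        X = latent @ loadings + 0.5 * rng.normal(size=(n_herds, n_factors))

        # Binary disease outcome driven by the first latent factor.
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-latent[:, 0])))

        # Standardize, reduce the collinear factors to a few components,
        # then fit a logistic regression on the component scores.
        model = make_pipeline(StandardScaler(), PCA(n_components=5),
                              LogisticRegression())
        model.fit(X, y)
        print("training accuracy:", model.score(X, y))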

  19. Reynolds number dependence of large-scale friction control in turbulent channel flow

    Science.gov (United States)

    Canton, Jacopo; Örlü, Ramis; Chin, Cheng; Schlatter, Philipp

    2016-12-01

    The present work investigates the effectiveness of the control strategy introduced by Schoppa and Hussain [Phys. Fluids 10, 1049 (1998), 10.1063/1.869789] as a function of Reynolds number (Re). The skin-friction drag reduction method proposed by these authors, consisting of streamwise-invariant, counter-rotating vortices, was analyzed by Canton et al. [Flow, Turbul. Combust. 97, 811 (2016), 10.1007/s10494-016-9723-8] in turbulent channel flows for friction Reynolds numbers (Reτ) corresponding to the value of the original study (i.e., 104) and 180. For these Re, a slightly modified version of the method proved to be successful and was capable of providing a drag reduction of up to 18%. The present study analyzes the Reynolds number dependence of this drag-reducing strategy by performing two sets of direct numerical simulations (DNS) for Reτ=360 and 550. A detailed analysis of the method as a function of the control parameters (amplitude and wavelength) and Re confirms, on the one hand, the effectiveness of the large-scale vortices at low Re and, on the other hand, the decreasing and finally vanishing effectiveness of this method for higher Re. In particular, no drag reduction can be achieved for Reτ=550 for any combination of the parameters controlling the vortices. For low Reynolds numbers, the large-scale vortices are able to affect the near-wall cycle and alter the wall-shear-stress distribution to cause an overall drag reduction effect, in accordance with most control strategies. For higher Re, instead, the present method fails to penetrate the near-wall region and cannot induce the spanwise velocity variation observed in other more established control strategies, which focus on the near-wall cycle. Despite the negative outcome, the present results demonstrate the shortcomings of the control strategy and show that future focus should be on methods that directly target the near-wall region or other suitable alternatives.

  20. Assessing the impact of climate and land use changes on extreme floods in a large tropical catchment

    Science.gov (United States)

    Jothityangkoon, Chatchai; Hirunteeyakul, Chow; Boonrawd, Kowit; Sivapalan, Murugesu

    2013-05-01

    In the wake of the recent catastrophic floods in Thailand, there is considerable concern about the safety of large dams designed and built some 50 years ago. In this paper a distributed rainfall-runoff model appropriate for extreme flood conditions is used to generate revised estimates of the Probable Maximum Flood (PMF) for the Upper Ping River catchment (area 26,386 km2) in northern Thailand, upstream of location of the large Bhumipol Dam. The model has two components: a continuous water balance model based on a configuration of parameters estimated from climate, soil and vegetation data and a distributed flood routing model based on non-linear storage-discharge relationships of the river network under extreme flood conditions. The model is implemented under several alternative scenarios regarding the Probable Maximum Precipitation (PMP) estimates and is also used to estimate the potential effects of both climate change and land use and land cover changes on the extreme floods. These new estimates are compared against estimates using other hydrological models, including the application of the original prediction methods under current conditions. Model simulations and sensitivity analyses indicate that a reasonable Probable Maximum Flood (PMF) at the dam site is 6311 m3/s, which is only slightly higher than the original design flood of 6000 m3/s. As part of an uncertainty assessment, the estimated PMF is sensitive to the design method, input PMP, land use changes and the floodplain inundation effect. The increase of PMP depth by 5% can cause a 7.5% increase in PMF. Deforestation by 10%, 20%, 30% can result in PMF increases of 3.1%, 6.2%, 9.2%, respectively. The modest increase of the estimated PMF (to just 6311 m3/s) in spite of these changes is due to the factoring of the hydraulic effects of trees and buildings on the floodplain as the flood situation changes from normal floods to extreme floods, when over-bank flows may be the dominant flooding process, leading

  1. Wind and Wave Extremes over the World Oceans From Very Large Forecast Ensembles

    CERN Document Server

    Breivik, Øyvind; Abdalla, Saleh; Bidlot, Jean-Raymond

    2013-01-01

    Global return values of significant wave height and 10-m neutral wind speed are estimated from very large aggregations of archived ECMWF ensemble forecasts at +240-h lead time from the period 2003-2012. The upper percentiles are found to match ENVISAT wind speed better than ERA-Interim (ERA-I), which tends to be biased low. The return estimates are significantly higher for both wind speed and wave height in the extratropics and the subtropics than what is found from ERA-I, but lower than what is reported by Caires and Sterl (2005) and Vinoth and Young (2011). The highest discrepancies between ERA-I and ENS240 are found in the hurricane-prone areas, suggesting that the ensemble comes closer than ERA-I in capturing the intensity of tropical cyclones. The widths of the confidence intervals are typically reduced by 70% due to the size of the data sets. Finally, non-parametric estimates of return values were computed from the tail of the distribution. These direct return estimates compare very well with Ge...
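
    A toy version of the non-parametric idea mentioned at the end of the abstract (reading a return value directly off the empirical tail of a very large sample); the synthetic Weibull data, the sampling rate, and the return period below are invented and are not derived from the ECMWF ensemble:

        import numpy as np

        rng = np.random.default_rng(1)

        # Stand-in for a huge aggregated sample of significant wave height (m).
        sample = rng.weibull(1.6, size=5_000_000) * 2.5

        samples_per_year = 4 * 365        # hypothetical: 4 independent values per day
        return_period_years = 100.0

        # Empirical return value: the quantile whose exceedance probability
        # corresponds to one event per return period.
        p_exceed = 1.0 / (return_period_years * samples_per_year)
        hs_100yr = np.quantile(sample, 1.0 - p_exceed)
        print(f"100-year return value (toy data): {hs_100yr:.1f} m")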

  2. Properties of sound attenuation around a two-dimensional underwater vehicle with a large cavitation number

    Science.gov (United States)

    Ye, Peng-Cheng; Pan, Guang

    2015-06-01

    Due to the high speed of underwater vehicles, cavitation is generated inevitably along with the sound attenuation when the sound signal traverses through the cavity region around the underwater vehicle. The linear wave propagation is studied to obtain the influence of bubbly liquid on the acoustic wave propagation in the cavity region. The sound attenuation coefficient and the sound speed formula of the bubbly liquid are presented. Based on the sound attenuation coefficients with various vapor volume fractions, the attenuation of sound intensity is calculated under large cavitation number conditions. The result shows that the sound intensity attenuation is fairly small in a certain condition. Consequently, the intensity attenuation can be neglected in engineering. Project supported by the National Natural Science Foundation of China (Grant Nos. 51279165 and 51479170) and the National Defense Basic Scientific Research Program of China (Grant No. B2720133014).

  3. Construction of Supersaturated Design with Large Number of Factors by the Complementary Design Method

    Institute of Scientific and Technical Information of China (English)

    Yan LIU; Min-Qian LIU

    2013-01-01

    Supersaturated designs (SSDs) have been widely used in factor screening experiments. The present paper aims to prove that the maximal balanced designs are a special kind of optimal SSD under the E(f_NOD) criterion. We also propose a new method, called the complementary design method, for constructing E(f_NOD)-optimal SSDs. The basic principle of this method is that, for any existing E(f_NOD)-optimal SSD whose E(f_NOD) value reaches its lower bound, its complementary design in the corresponding maximal balanced design is also E(f_NOD)-optimal. This method applies to both symmetrical and asymmetrical (mixed-level) cases. It provides a convenient and efficient way to construct many new designs with relatively large numbers of factors. Some newly constructed designs are given as examples.

  4. BLOCK CODING SCHEME FOR REDUCING PAPR IN OFDM SYSTEMS WITH LARGE NUMBER OF SUBCARRIERS

    Institute of Scientific and Technical Information of China (English)

    Jiang Tao; Zhu Guangxi; Zheng Jianbin

    2004-01-01

    The major drawback of Orthogonal Frequency Division Multiplexing (OFDM) systems is their high Peak-to-Average Power Ratio (PAPR), which means the performance of the system is significantly degraded by the nonlinearity of a High Power Amplifier (HPA) in the transmitter. In order to mitigate this distortion, a block coding scheme for reducing PAPR in OFDM systems with a large number of subcarriers, based on complementary sequences and predistortion, is proposed, which is capable of both error correction and PAPR reduction. Computer simulation results show that, when an HPA is employed, the proposed scheme significantly improves Bit Error Rate (BER) performance compared to an uncoded system or to a coded system without predistortion.
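
    For reference, the PAPR of a discrete-time OFDM symbol is the ratio of its peak instantaneous power to its average power. A minimal computation for uncoded QPSK symbols (so this illustrates the quantity being reduced, not the coding scheme proposed in the paper) could look like this:

        import numpy as np

        rng = np.random.default_rng(0)

        n_subcarriers = 2048
        oversample = 4          # oversampling approximates the analog peak

        # Random QPSK symbols on each subcarrier.
        bits = rng.integers(0, 2, size=(2, n_subcarriers))
        symbols = (2 * bits[0] - 1 + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

        # Zero-pad in the frequency domain and IFFT to the time domain.
        padded = np.zeros(n_subcarriers * oversample, dtype=complex)
        padded[:n_subcarriers // 2] = symbols[:n_subcarriers // 2]
        padded[-n_subcarriers // 2:] = symbols[n_subcarriers // 2:]
        x = np.fft.ifft(padded)

        papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
        print(f"PAPR of one OFDM symbol: {papr_db:.1f} dB")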

  5. Large Number, Dark Matter, Dark Energy, and Superstructures in the Universe

    Institute of Scientific and Technical Information of China (English)

    HUANG Wu-Liang; HUANG Xiao-Dong

    2009-01-01

    Since there may exist dark matter particles ν and δ with mass ~ 10^-1 eV in the universe, superstructures with a scale of 10^19 solar masses (large number A ~ 10^19) appeared during the era near and before the hydrogen recombination. Since there are superstructures in the universe, there may be no necessity for the existence of dark energy. To check for superstructures in the universe by CMB anisotropy, we need to measure the CMB angular power spectrum - especially around ten degrees across the sky - in more detail. While the neutrino ν is related to electroweak unification, the fourth stable elementary particle δ may be related to strong-gravity unification, which suggests p + p → n + δ and that some new baryons appear in the TeV region.

  6. A Refinement of the Kolmogorov-Marcinkiewicz-Zygmund Strong Law of Large Numbers

    CERN Document Server

    Li, Deli; Rosalsky, Andrew

    2010-01-01

    For the partial sums formed from a sequence of i.i.d. random variables having a finite absolute p'th moment for some p in (0,2), we extend the recent and striking discovery of Hechner and Heinkel (Journal of Theoretical Probability (2010)) concerning "complete moment convergence" to the two cases 0 < p < 1 and 1 < p < 2, thereby obtaining a refinement of the Kolmogorov-Marcinkiewicz-Zygmund strong law of large numbers. Versions of the above results in a Banach space setting are also presented.

  7. Inclined layer convection in a colloidal suspension with negative Soret coefficient at large solutal Rayleigh numbers.

    Science.gov (United States)

    Italia, Matteo; Croccolo, Fabrizio; Scheffold, Frank; Vailati, Alberto

    2014-10-01

    Convection in an inclined layer of fluid is affected by the presence of a component of the acceleration of gravity perpendicular to the density gradient that drives the convective motion. In this work we investigate the solutal convection of a colloidal suspension characterized by a negative Soret coefficient. Convection is induced by heating the suspension from above, and at large solutal Rayleigh numbers (of the order of 10^7-10^8) convective spoke patterns form. We show that in the presence of a marginal inclination of the cell as small as 19 mrad the isotropy of the spoke pattern is broken and the convective patterns tend to align in the direction of the inclination. At intermediate inclinations of the order of 33 mrad, ordered square patterns are obtained, while at inclinations of the order of 67 mrad the strong shear flow determined by the inclination gives rise to ascending and descending sheets of fluid aligned parallel to the direction of inclination.

  8. GPU-Based Parallel Integration of Large Numbers of Independent ODE Systems

    CERN Document Server

    Niemeyer, Kyle E

    2016-01-01

    The task of integrating a large number of independent ODE systems arises in various scientific and engineering areas. For nonstiff systems, common explicit integration algorithms can be used on GPUs, where individual GPU threads concurrently integrate independent ODEs with different initial conditions or parameters. One example is the fifth-order adaptive Runge-Kutta-Cash-Karp (RKCK) algorithm. In the case of stiff ODEs, standard explicit algorithms require impractically small time-step sizes for stability reasons, and implicit algorithms are therefore commonly used instead to allow larger time steps and reduce the computational expense. However, typical high-order implicit algorithms based on backwards differentiation formulae (e.g., VODE, LSODE) involve complex logical flow that causes severe thread divergence when implemented on GPUs, limiting the performance. Therefore, alternate algorithms are needed. A GPU-based Runge-Kutta-Chebyshev (RKC) algorithm can handle moderate levels of stiffness and performs s...
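
    The core idea (every thread, or here every vector lane, advances one independent ODE system with its own parameters) can be sketched in NumPy with a fixed-step classical Runge-Kutta method; this is a simplified stand-in for the adaptive RKCK/RKC schemes discussed in the paper, applied to a made-up batch of decay ODEs:

        import numpy as np

        def rk4_step(f, y, t, dt, params):
            """One classical 4th-order Runge-Kutta step applied to the whole batch."""
            k1 = f(t, y, params)
            k2 = f(t + dt / 2, y + dt / 2 * k1, params)
            k3 = f(t + dt / 2, y + dt / 2 * k2, params)
            k4 = f(t + dt, y + dt * k3, params)
            return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        def rhs(t, y, k):
            # Hypothetical independent systems: exponential decay dy/dt = -k*y,
            # with one rate constant k per system.
            return -k * y

        n_systems = 100_000
        rng = np.random.default_rng(0)
        k = rng.uniform(0.1, 10.0, size=n_systems)   # one parameter set per system
        y = np.ones(n_systems)                       # initial conditions

        t, dt = 0.0, 1e-3
        for _ in range(1000):                        # integrate to t = 1
            y = rk4_step(rhs, y, t, dt, k)
            t += dt

        print("max error vs exact solution:", np.max(np.abs(y - np.exp(-k))))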

  9. Measured Variation in performance of handheld antennas for a large number of test persons

    DEFF Research Database (Denmark)

    Pedersen, Gert Frølund; Nielsen, Jesper Ødum; Olesen, Kim;

    This work investigates the variation in the mean effective gain (MEG) for a large number of test persons in order to find how much the difference in anatomy and persons who wear glasses, etc., changes the MEG (i.e., the received signal power with respect to a reference). The evaluation was carried out in a typical GSM-1800 urban micro cell with the base station located outdoors approximately 700 m from the mobile. The mobile was located in office-like environments. Peak variations in the MEG among different persons of more than 10 dB were found, and the difference between "no person present" and a person present is on average 3 dB for a directive patch antenna, 6 dB for a whip antenna and 10 dB for a helical antenna.

  10. Steady imperfect bifurcation with generic 3D bluff bodies at large Reynolds numbers

    Science.gov (United States)

    Cadot, Olivier; Pastur, Luc; Evrard, Antoine; Soyer, Guillaume

    2014-11-01

    The turbulent wake of parallelepiped bodies exhibits a strong bi-modal behavior. The wake randomly undergoes symmetry-breaking reversals between two mirror-asymmetric steady modes (RSB modes). The characteristic time for reversals is about two or three orders of magnitude larger than the natural time for vortex shedding. Such dynamics have recently been observed on a real car, which points to their importance for industrial applications. Both the viscosity and the proximity of a wall near the parallelepiped body (similar to the road beneath a car model) stabilize the RSB modes into a single symmetric mode. It is shown that these stabilizations occur through imperfect fork bifurcations at large Reynolds numbers. The extra drag due to the presence of the RSB modes is evidenced.

  11. Expected z>5 QSO number counts in large area deep near-infrared surveys

    CERN Document Server

    Fontanot, Fabio; Jester, Sebastian

    2007-01-01

    The QSO luminosity function at z>5 provides strong constraints on models of the joint evolution of QSOs and their hosts. However, these observations are challenging because the low space densities of these objects necessitate surveying large areas in order to obtain statistically meaningful samples, while at the same time cosmological redshifting and dimming mean that rather deep Near Infrared (NIR) imaging must be carried out. Several upcoming and proposed facilities with wide-field NIR imaging capabilities will open up this new region of parameter space. In this paper we present predictions for the expected number counts of z>5 QSOs, based on simple empirical models of QSO evolution, as a function of redshift, depth and surveyed area. We compute the evolution of observed-frame QSO magnitudes and colors in a representative photometric system covering the wavelength range 550 nm < \\lambda < 1800 nm, and combine this information with different estimates for the evolution of the QSO luminosity function. We ...
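
    The generic bookkeeping behind such predictions (standard survey-count arithmetic, not a formula quoted from the paper) integrates the luminosity function over the survey volume and over the luminosities that stay above the flux limit:

        \[
        N(<m_{\mathrm{lim}}) \;=\; \Omega \int_{z_{\min}}^{z_{\max}}
        \frac{\mathrm{d}V}{\mathrm{d}z\,\mathrm{d}\Omega}
        \int_{L_{\min}(m_{\mathrm{lim}},\,z)}^{\infty} \Phi(L,z)\,\mathrm{d}L\,\mathrm{d}z ,
        \]

    where Ω is the surveyed solid angle, Φ(L, z) the QSO luminosity function, and L_min the luminosity corresponding to the limiting magnitude at redshift z.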

  12. PAPR Reduction in OFDM Systems with Large Number of Sub-Carriers by Carrier Interferometry Approaches

    Institute of Scientific and Technical Information of China (English)

    HE Jian-hui; QUAN Zi-yi; MEN Ai-dong

    2004-01-01

    High Peak-to-Average Power Ratio (PAPR) is one of the major drawbacks of Orthogonal Frequency Division Multiplexing (OFDM) systems. This paper presents the structures of the particular bit sequences leading to the maximum PAPR (PAPRmax) in Carrier-Interferometry OFDM (CI/OFDM) and Pseudo-Orthogonal Carrier-Interferometry OFDM (PO-CI/OFDM) systems for Binary Phase Shift Keying (BPSK) modulation. Furthermore, the simulation and analysis of PAPRmax and the PAPR cumulative distribution in CI/OFDM and PO-CI/OFDM systems with 2048 sub-carriers are presented. The results show that the PAPR of OFDM systems with a large number of sub-carriers is reduced markedly via CI approaches.

  13. Volume changes of extremely large and giant intracranial aneurysms after treatment with flow diverter stents

    Energy Technology Data Exchange (ETDEWEB)

    Carneiro, Angelo; Byrne, James V. [John Radcliffe Hospital, Oxford Neurovascular and Neuroradiology Research Unit, Nuffield Department of Surgical Sciences, Oxford (United Kingdom); Rane, Neil; Kueker, Wilhelm; Cellerini, Martino; Corkill, Rufus [John Radcliffe Hospital, Department of Neuroradiology, Oxford (United Kingdom)

    2014-01-15

    This study assessed volume changes of unruptured large and giant aneurysms (greatest diameter >20 mm) after treatment with flow diverter (FD) stents. Clinical audit of the cases treated in a single institution, over a 5-year period. Demographic and clinical data were retrospectively collected from the hospital records. Aneurysm volumes were measured by manual outlining at sequential slices using computerised tomography (CT) or magnetic resonance (MR) angiography data. The audit included eight patients (seven females) with eight aneurysms. Four aneurysms involved the cavernous segment of the internal carotid artery (ICA), three the supraclinoid ICA and one the basilar artery. Seven patients presented with signs and symptoms of mass effect and one with seizures. All but one aneurysm was treated with a single FD stent; six aneurysms were also coiled (either before or simultaneously with FD placement). Minimum follow-up time was 6 months (mean 20 months). At follow-up, three aneurysms decreased in size, three were unchanged and two increased. Both aneurysms that increased in size showed persistent endosaccular flow at follow-up MR; in one case, failure was attributed to suboptimal position of the stent; in the other case, it was attributed to persistence of a side branch originating from the aneurysm (similar to the endoleak phenomenon of aortic aneurysms). At follow-up, five aneurysms were completely occluded; none of these increased in volume. Complete occlusion of the aneurysms leads, in most cases, to its shrinkage. In cases of late aneurysm growth or regrowth, consideration should be given to possible endoleak as the cause. (orig.)

  14. Cosmonumerology, Cosmophysics, and the Large Numbers Hypothesis: British Cosmology in the 1930s

    Science.gov (United States)

    Durham, Ian

    2001-04-01

    A number of unorthodox cosmological models were developed in the 1930s, many by British theoreticians. Three of the most notable of these theories included Eddington's cosmonumerology, Milne's cosmophysics, and Dirac's large numbers hypothesis (LNH). Dirac's LNH was based partly on the other two and it has been argued that modern steady-state theories are based partly on Milne's cosmophysics. But what influenced Eddington and Milne? Both were products of the late Victorian education system in Britain and could conceivably have been influenced by Victorian thought which, in addition to its strict (though technically unofficial) social caste system, had a flair for the unusual. Victorianism was filled with a fascination for the occult and the supernatural, and science was not insulated from this trend (witness the Henry Slade trial in 1877). It is conceivable that the normally strict mentality of the scientific process in the minds of Eddington and Milne was affected, indirectly, by this trend for the unusual, possibly pushing them into thinking "outside the box" as it were. In addition, cosmonumerology and the LNH exhibit signs of Pythagorean and Aristotelian thought. It is the aim of this ongoing project at St. Andrews to determine the influences and characterize the relations existing in and within these and related theories.

  15. A historical law of large numbers for the Marcus Lushnikov process

    CERN Document Server

    Jacquot, Stéphanie

    2009-01-01

    The Marcus-Lushnikov process is a finite stochastic particle system, in which each particle is entirely characterized by its mass. Each pair of particles with masses $x$ and $y$ merges into a single particle at a given rate $K(x,y)$. Under certain assumptions, this process converges to the solution of the Smoluchowski equation, as the number of particles increases to infinity. The Marcus-Lushnikov process gives at each time the distribution of masses of the particles present in the system, but does not retain the history of formation of the particles. In this paper, we set up a historical analogue of the Marcus-Lushnikov process (built according to the rules of construction of the usual Marcus-Lushnikov process), each time giving what we call the historical tree of a particle. The historical tree of a particle present in the Marcus-Lushnikov process at a given time $t$ encodes information about the times and masses of the coagulation events that have formed that particle. We prove a law of large numbers for the empir...
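
    A minimal Gillespie-style simulation of the (non-historical) Marcus-Lushnikov dynamics described above; the multiplicative kernel K(x, y) = x*y, the unit initial masses, and the stopping time are hypothetical choices, and the historical trees studied in the paper would additionally require recording which particles merged at which times:

        import numpy as np

        rng = np.random.default_rng(0)

        def marcus_lushnikov(n_particles=200, t_max=0.002, kernel=lambda x, y: x * y):
            """Each unordered pair with masses (x, y) merges into one particle of
            mass x + y at rate K(x, y)."""
            masses = np.ones(n_particles)
            t = 0.0
            while len(masses) > 1:
                rates = kernel(masses[:, None], masses[None, :])
                iu = np.triu_indices(len(masses), k=1)    # unordered pairs
                pair_rates = rates[iu]
                total_rate = pair_rates.sum()
                t += rng.exponential(1.0 / total_rate)    # time to next coagulation
                if t > t_max:
                    break
                # Pick the merging pair with probability proportional to its rate.
                idx = rng.choice(len(pair_rates), p=pair_rates / total_rate)
                i, j = iu[0][idx], iu[1][idx]
                merged = masses[i] + masses[j]
                masses = np.delete(masses, [i, j])
                masses = np.append(masses, merged)
            return masses

        final = marcus_lushnikov()
        print(f"{len(final)} particles remain; largest mass = {final.max():.0f}")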

  16. Conflict of interest policies for organizations producing a large number of clinical practice guidelines.

    Science.gov (United States)

    Norris, Susan L; Holmer, Haley K; Burda, Brittany U; Ogden, Lauren A; Fu, Rongwei

    2012-01-01

    Conflict of interest (COI) of clinical practice guideline (CPG) sponsors and authors is an important potential source of bias in CPG development. The objectives of this study were to describe the COI policies for organizations currently producing a significant number of CPGs, and to determine if these policies meet 2011 Institute of Medicine (IOM) standards. We identified organizations with five or more guidelines listed in the National Guideline Clearinghouse between January 1, 2009 and November 5, 2010. We obtained the COI policy for each organization from publicly accessible sources, most often the organization's website, and compared those policies to IOM standards related to COI. 37 organizations fulfilled our inclusion criteria, of which 17 (46%) had a COI policy directly related to CPGs. These COI policies varied widely with respect to types of COI addressed, from whom disclosures were collected, monetary thresholds for disclosure, approaches to management, and updating requirements. Not one organization's policy adhered to all seven of the IOM standards that were examined, and nine organizations did not meet a single one of the standards. COI policies among organizations producing a large number of CPGs currently do not measure up to IOM standards related to COI disclosure and management. CPG developers need to make significant improvements in these policies and their implementation in order to optimize the quality and credibility of their guidelines.

  17. Conflict of interest policies for organizations producing a large number of clinical practice guidelines.

    Directory of Open Access Journals (Sweden)

    Susan L Norris

    Full Text Available BACKGROUND: Conflict of interest (COI) of clinical practice guideline (CPG) sponsors and authors is an important potential source of bias in CPG development. The objectives of this study were to describe the COI policies for organizations currently producing a significant number of CPGs, and to determine if these policies meet 2011 Institute of Medicine (IOM) standards. METHODOLOGY/PRINCIPAL FINDINGS: We identified organizations with five or more guidelines listed in the National Guideline Clearinghouse between January 1, 2009 and November 5, 2010. We obtained the COI policy for each organization from publicly accessible sources, most often the organization's website, and compared those policies to IOM standards related to COI. 37 organizations fulfilled our inclusion criteria, of which 17 (46%) had a COI policy directly related to CPGs. These COI policies varied widely with respect to types of COI addressed, from whom disclosures were collected, monetary thresholds for disclosure, approaches to management, and updating requirements. Not one organization's policy adhered to all seven of the IOM standards that were examined, and nine organizations did not meet a single one of the standards. CONCLUSIONS/SIGNIFICANCE: COI policies among organizations producing a large number of CPGs currently do not measure up to IOM standards related to COI disclosure and management. CPG developers need to make significant improvements in these policies and their implementation in order to optimize the quality and credibility of their guidelines.

  18. A Clustering Algorithm for Planning the Integration Process of a Large Number of Conceptual Schemas

    Institute of Scientific and Technical Information of China (English)

    Carlo Batini; Paola Bonizzoni; Marco Comerio; Riccardo Dondi; Yuri Pirola; Francesco Salandra

    2015-01-01

    When tens and even hundreds of schemas are involved in the integration process, criteria are needed for choosing clusters of schemas to be integrated, so as to deal with the integration problem through an efficient iterative process. Schemas in clusters should be chosen according to cohesion and coupling criteria that are based on similarities and dissimilarities among schemas. In this paper, we propose an algorithm for a novel variant of the correlation clustering approach that addresses the problem of assisting a designer in integrating a large number of conceptual schemas. The novel variant introduces upper and lower bounds on the number of schemas in each cluster, in order to avoid too complex and too simple integration contexts, respectively. Since the problem is an NP-hard combinatorial problem, we give a heuristic for solving it. An experimental activity demonstrates an appreciable increase in the effectiveness of the schema integration process when clusters are computed by means of the proposed algorithm compared with clusters manually defined by an expert.
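
    The paper's exact heuristic is not reproduced in this record; the following toy greedy sketch only illustrates the general idea of grouping items by pairwise similarity while enforcing lower and upper bounds on cluster size (all names, thresholds and the similarity input are hypothetical).

        def bounded_similarity_clusters(similarity, lower=2, upper=5):
            """Greedy sketch: cluster items by pairwise similarity with size bounds.

            `similarity` maps frozenset({i, j}) -> a score in [0, 1].  Pairs are
            visited from most to least similar; two unassigned items open a new
            cluster, and an unassigned item may join its partner's cluster if the
            `upper` bound allows.  Undersized clusters are then folded in where
            room remains.
            """
            items = {x for pair in similarity for x in pair}
            cluster_of = {}                 # item -> index into `clusters`
            clusters = []

            for pair in sorted(similarity, key=similarity.get, reverse=True):
                i, j = tuple(pair)
                ci, cj = cluster_of.get(i), cluster_of.get(j)
                if ci is None and cj is None:
                    clusters.append({i, j})
                    cluster_of[i] = cluster_of[j] = len(clusters) - 1
                elif ci is None and cj is not None and len(clusters[cj]) < upper:
                    clusters[cj].add(i)
                    cluster_of[i] = cj
                elif cj is None and ci is not None and len(clusters[ci]) < upper:
                    clusters[ci].add(j)
                    cluster_of[j] = ci

            # Items never covered by an accepted pair become singletons.
            clusters += [{x} for x in items if x not in cluster_of]

            # Fold undersized clusters into the smallest cluster that still has room.
            final = [c for c in clusters if len(c) >= lower]
            for c in (c for c in clusters if len(c) < lower):
                target = next((f for f in sorted(final, key=len)
                               if len(f) + len(c) <= upper), None)
                if target is not None:
                    target.update(c)
                else:
                    final.append(c)
            return final

        # Hypothetical pairwise schema similarities.
        example = {frozenset({"orders", "invoices"}): 0.9,
                   frozenset({"hr", "payroll"}): 0.8,
                   frozenset({"orders", "customers"}): 0.7}
        print(bounded_similarity_clusters(example, lower=2, upper=3))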

  19. Large eddy simulation of the FDA benchmark nozzle for a Reynolds number of 6500.

    Science.gov (United States)

    Janiga, Gábor

    2014-04-01

    This work investigates the flow in a benchmark nozzle model of an idealized medical device proposed by the FDA using computational fluid dynamics (CFD). It has previously been shown that proper modeling of the transitional flow features is particularly challenging, leading to large discrepancies and inaccurate predictions from the different research groups using Reynolds-averaged Navier-Stokes (RANS) modeling. In spite of the relatively simple, axisymmetric computational geometry, the resulting turbulent flow is fairly complex and non-axisymmetric, in particular due to the sudden expansion. The resulting flow cannot be well predicted with simple modeling approaches. Due to the varying diameters and flow velocities encountered in the nozzle, different typical flow regions and regimes can be distinguished, from laminar to transitional to weakly turbulent. The purpose of the present work is to re-examine the FDA-CFD benchmark nozzle model at a Reynolds number of 6500 using large eddy simulation (LES). The LES results are compared with published experimental data obtained by Particle Image Velocimetry (PIV), and excellent agreement is observed for the temporally averaged flow velocities. Different flow regimes are characterized by computing the temporal energy spectra at different locations along the main axis.

  20. LARGE AERODYNAMIC FORCES ON A SWEEPING WING AT LOW REYNOLDS NUMBER

    Institute of Scientific and Technical Information of China (English)

    SUN Mao; WU Jianghao

    2004-01-01

    The aerodynamic forces and flow structure of a model insect wing are studied by solving the Navier-Stokes equations numerically. After an initial start from rest, the wing is made to execute an azimuthal rotation (sweeping) at a large angle of attack and constant angular velocity. The Reynolds number (Re) considered in the present note is 480 (Re is based on the mean chord length of the wing and the speed at 60% wing length from the wing root). During the constant-speed sweeping motion, stall is absent and large, approximately constant lift and drag coefficients can be maintained. The mechanism for the absence of stall, or the maintenance of large aerodynamic force coefficients, is as follows. Soon after the initial start, a vortex ring, which consists of the leading-edge vortex (LEV), the starting vortex, and the two wing-tip vortices, is formed in the wake of the wing. During the subsequent motion of the wing, a base-to-tip spanwise flow convects the vorticity in the LEV to the wing tip and the LEV keeps an approximately constant strength. This prevents the LEV from shedding. As a result, the size of the vortex ring increases approximately linearly with time, resulting in an approximately constant time rate of change of the first moment of vorticity, or approximately constant lift and drag coefficients. The variation of the relative velocity along the wing span causes a pressure gradient along the wing span. The base-to-tip spanwise flow is mainly maintained by the pressure-gradient force.
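
    For orientation, one standard form of the vorticity-moment (impulse) relation for unbounded incompressible flow, assumed here only as an illustration and not quoted from the note, ties the force to the time rate of change of the first moment of vorticity, so a vortex ring growing linearly in time implies roughly constant force coefficients:

        $\mathbf{F} = -\frac{\rho}{2}\,\frac{\mathrm{d}}{\mathrm{d}t}\int_{V}\mathbf{x}\times\boldsymbol{\omega}\,\mathrm{d}V, \qquad C_L = \frac{L}{\frac{1}{2}\rho U_{\mathrm{ref}}^{2} S}, \quad C_D = \frac{D}{\frac{1}{2}\rho U_{\mathrm{ref}}^{2} S},$

    where $V$ covers the entire vorticity-containing region, $U_{\mathrm{ref}}$ is the reference speed (here the speed at 60% wing length) and $S$ is the wing area.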

  1. Spitzer SAGE-Spec: Near Infrared Spectroscopy, Dust Shells, and Cool Envelopes in Extreme Large Magellanic Cloud AGB Stars

    CERN Document Server

    Blum, R D; Kemper, F; Ling, B; Volk, K

    2014-01-01

    K-band spectra are presented for a sample of 39 Spitzer IRS SAGE-Spec sources in the Large Magellanic Cloud. The spectra exhibit characteristics in very good agreement with their positions in the near infrared - Spitzer color-magnitude diagrams and their properties as deduced from the Spitzer IRS spectra. Specifically, the near infrared spectra show strong atomic and molecular features representative of oxygen-rich and carbon-rich asymptotic giant branch stars, respectively. A small subset of stars were chosen from the luminous and red extreme "tip" of the color magnitude diagram. These objects have properties consistent with dusty envelopes but also cool, carbon-rich "stellar" cores. Modest amounts of dust mass loss combine with the stellar spectral energy distribution to make these objects appear extreme in their near infrared and mid infrared colors. One object in our sample, HV 915, a known post asymptotic giant branch star of the RV Tau type exhibits CO 2.3 micron band head emission consistent with previ...

  2. Porous medium convection at large Rayleigh number: Studies of coherent structure, transport, and reduced dynamics

    Science.gov (United States)

    Wen, Baole

    Buoyancy-driven convection in fluid-saturated porous media is a key environmental and technological process, with applications ranging from carbon dioxide storage in terrestrial aquifers to the design of compact heat exchangers. Porous medium convection is also a paradigm for forced-dissipative infinite-dimensional dynamical systems, exhibiting spatiotemporally chaotic dynamics if not "true" turbulence. The objective of this dissertation research is to quantitatively characterize the dynamics and heat transport in two-dimensional horizontal and inclined porous medium convection between isothermal plane parallel boundaries at asymptotically large values of the Rayleigh number Ra by investigating the emergent, quasi-coherent flow. This investigation employs a complement of direct numerical simulations (DNS), secondary stability and dynamical systems theory, and variational analysis. The DNS confirm the remarkable tendency for the interior flow to self-organize into closely-spaced columnar plumes at sufficiently large Ra (up to Ra ≃ 10^5), with more complex spatiotemporal features being confined to boundary layers near the heated and cooled walls. The relatively simple form of the interior flow motivates investigation of unstable steady and time-periodic convective states at large Ra as a function of the domain aspect ratio L. To gain insight into the development of spatiotemporally chaotic convection, the (secondary) stability of these fully nonlinear states to small-amplitude disturbances is investigated using a spatial Floquet analysis. The results indicate that there exist two distinct modes of instability at large Ra: a bulk instability mode and a wall instability mode. The former usually is excited by long-wavelength disturbances and is generally much weaker than the latter. DNS, strategically initialized to investigate the fully nonlinear evolution of the most dangerous secondary instability modes, suggest that the (long time) mean inter-plume spacing in

  3. CRISPR transcript processing: a mechanism for generating a large number of small interfering RNAs

    Directory of Open Access Journals (Sweden)

    Djordjevic Marko

    2012-07-01

    Full Text Available Abstract Background CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated sequences) is a recently discovered prokaryotic defense system against foreign DNA, including viruses and plasmids. The CRISPR cassette is transcribed as a continuous transcript (pre-crRNA), which is processed by Cas proteins into small RNA molecules (crRNAs) that are responsible for defense against invading viruses. Experiments in E. coli report that overexpression of cas genes generates a large number of crRNAs from only a few pre-crRNAs. Results We here develop a minimal model of CRISPR processing, which we parameterize based on available experimental data. From the model, we show that the system can generate a large amount of crRNAs based on only a small decrease in the amount of pre-crRNAs. The relationship between the decrease of pre-crRNAs and the increase of crRNAs corresponds to strong linear amplification. Interestingly, this strong amplification crucially depends on fast non-specific degradation of pre-crRNA by an unidentified nuclease. We show that overexpression of cas genes above a certain level does not result in a further increase of crRNA, but that this saturation can be relieved if the rate of CRISPR transcription is increased. We furthermore show that a small increase of the CRISPR transcription rate can substantially decrease the extent of cas gene activation necessary to achieve a desired amount of crRNA. Conclusions The simple mathematical model developed here is able to explain existing experimental observations on CRISPR transcript processing in Escherichia coli. The model shows that a competition between specific pre-crRNA processing and non-specific degradation determines the steady-state levels of crRNA and is responsible for strong linear amplification of crRNAs when cas genes are overexpressed. The model further shows how disappearance of only a few pre-crRNA molecules normally present in the cell can lead to a large (two
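
    A minimal rate-equation sketch of that competition (an illustration consistent with the abstract, not the authors' model or parameter values; all rate constants below are hypothetical): pre-crRNA is produced at a constant rate and is lost either to Cas-mediated processing, which yields several crRNAs per precursor, or to non-specific degradation.

        from scipy.integrate import solve_ivp

        # Hypothetical rate constants (arbitrary time units), for illustration only.
        K_TX   = 1.0    # pre-crRNA transcription rate
        K_PROC = 0.5    # Cas-mediated processing rate constant (scaled by cas level)
        K_NS   = 2.0    # non-specific pre-crRNA degradation rate
        N_CR   = 10     # crRNAs produced per processed precursor
        D_CR   = 0.05   # crRNA decay rate

        def crispr_odes(t, y, cas_level):
            pre, cr = y
            processing = K_PROC * cas_level * pre
            d_pre = K_TX - processing - K_NS * pre
            d_cr = N_CR * processing - D_CR * cr
            return [d_pre, d_cr]

        for cas_level in (0.1, 1.0, 10.0):          # mimic increasing cas expression
            sol = solve_ivp(crispr_odes, (0.0, 400.0), [0.0, 0.0], args=(cas_level,))
            pre_ss, cr_ss = sol.y[:, -1]
            print(f"cas = {cas_level:>4}: pre-crRNA ~ {pre_ss:.3f}, crRNA ~ {cr_ss:.1f}")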

  4. Improving attosecond pulse reflection by large angle incidence for a periodic multilayer mirror in the extreme ultraviolet region

    Institute of Scientific and Technical Information of China (English)

    Lin Cheng-You; Chen Shu-Jing; Liu Da-He

    2013-01-01

    The improvement of attosecond pulse reflection by large-angle incidence on a periodic multilayer mirror in the extreme ultraviolet region is discussed. Numerical simulations of both the spectral and the temporal reflection characteristics of periodic multilayer mirrors under various incident angles have been analyzed and compared. It was found that a periodic multilayer mirror under a larger incidence angle can provide not only higher integrated reflectivity but also a broader reflection band with negligible dispersion, making it possible to obtain a better reflected pulse with higher pulse reflection efficiency and shorter pulse duration. In addition, increasing the incident angle is shown to improve the attosecond pulse reflection capability of periodic multilayer mirrors with arbitrary numbers of layers.

  5. Historical changes in the annual number of large floods in North America and Europe

    Science.gov (United States)

    Hodgkins, G. A.; Whitfield, P. H.; Hannaford, J.; Burn, D. H.; Renard, B.; Stahl, K.; Fleig, A.; Madsen, H.; Mediero, L.; Korhonen, J.; Murphy, C.; Crochet, P.; Wilson, D.

    2013-12-01

    Many studies have analyzed historical changes in low magnitude floods, such as the annual peak flow, at a national or regional scale. However, the river basins used have often been influenced by human alterations such as reservoir regulation or urbanization. No known studies have analyzed changes in large floods (greater than 25-year return period) at a continental scale for minimally impacted basins. To fill this research gap, this study analyzed flood flows from reference hydrologic networks (RHNs) or RHN-like gauges in North America (United States and Canada) and Europe (United Kingdom, Ireland, France, Spain, Germany, Switzerland, Austria, Iceland, Norway, Denmark, Sweden, and Finland). RHNs are formally defined networks in several countries that comprise gauging stations with a natural or near-natural flow regime and provide good quality data. Selected RHN-like gauges were included following a major effort to ensure RHN-like status through consultation with local experts. Peak flows with recurrence intervals of 25, 50, and 100 years were estimated using consistent methods for over 1200 study gauges, and peak flows at each gauge that exceeded these flood thresholds in the last 40-100 years were compiled. Continental and regional trends over time in the annual number of large floods, with regions differentiated by type of hydrological regime (pluvial, nival, mixed), are being computed and will be presented at AGU. The unique dataset used for this study is an example of successful international collaboration on hydro-climatic data exchange, which is potentially a step towards establishing RHN or RHN-like networks on a global scale. Analysis of flows from such networks would make a valuable contribution to the understanding of historical global hydrological change and would help inform expected future hydrologic changes.

  6. Wall-modeled large-eddy simulation of transonic airfoil buffet at high Reynolds number

    Science.gov (United States)

    Fukushima, Yuma; Kawai, Soshi

    2016-11-01

    In this study, we conduct wall-modeled large-eddy simulation (LES) of transonic buffet phenomena over the OAT15A supercritical airfoil at high Reynolds number. Transonic airfoil buffet involves shock/turbulent-boundary-layer interactions and shock vibration associated with the flow separation downstream of the shock wave. The wall-modeled LES developed by Kawai and Larsson [Phys. Fluids (2012)] is tuned on the K supercomputer for high-fidelity simulation. We first show the capability of the present wall-modeled LES for transonic airfoil buffet phenomena and then investigate the detailed flow physics of the unsteadiness of shock waves and separated-boundary-layer interaction phenomena. We also focus on the sustaining mechanism of the buffet phenomena, including the source of the pressure waves propagated from the trailing edge and the interactions between the shock wave and the generated sound waves. This work was supported in part by MEXT as a social and scientific priority issue to be tackled by using the post-K computer. Computer resources of the K computer were provided by the RIKEN Advanced Institute for Computational Science (Project ID: hp150254).

  7. Indoor localization based on cellular telephony RSSI fingerprints containing very large numbers of carriers

    Directory of Open Access Journals (Sweden)

    Oussar Yacine

    2011-01-01

    Full Text Available Abstract A new approach to indoor localization is presented, based upon the use of Received Signal Strength (RSS) fingerprints containing data from very large numbers of cellular base stations--up to the entire GSM band of over 500 channels. Machine learning techniques are employed to extract good quality location information from these high-dimensionality input vectors. Experimental results in a domestic and an office setting are presented, in which data were accumulated over a 1-month period in order to assure time robustness. Room-level classification efficiencies approaching 100% were obtained, using Support Vector Machines in one-versus-one and one-versus-all configurations. Promising results using semi-supervised learning techniques, in which only a fraction of the training data is required to have a room label, are also presented. While indoor RSS localization using WiFi, as well as some rather mediocre results with low-carrier-count GSM fingerprints, have been discussed elsewhere, this is to our knowledge the first study to demonstrate that good quality indoor localization information can be obtained, in diverse settings, by applying a machine learning strategy to RSS vectors that contain the entire GSM band.
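
    A minimal sketch of the classification step with scikit-learn (the fingerprint matrix X, one RSSI value per GSM carrier, and the room labels y below are random placeholders, so the printed accuracy is meaningless; with real fingerprints the same pipeline applies):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Placeholder fingerprints: one RSSI value (dBm) per GSM carrier, plus room labels.
        rng = np.random.default_rng(0)
        n_samples, n_carriers, n_rooms = 600, 500, 5
        X = rng.normal(-90.0, 10.0, size=(n_samples, n_carriers))
        y = rng.integers(0, n_rooms, size=n_samples)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        # SVC's built-in multiclass strategy is one-versus-one; wrapping it in
        # sklearn.multiclass.OneVsRestClassifier gives the one-versus-all variant.
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
        clf.fit(X_tr, y_tr)
        print("room-level accuracy:", clf.score(X_te, y_te))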

  8. Large-Scale Fluctuations in the Number Density of Galaxies in Independent Surveys of Deep Fields

    CERN Document Server

    Shirokov, S I; Baryshev, Yu V; Gorokhov, V L

    2016-01-01

    New arguments supporting the reality of large-scale fluctuations in the density of the visible matter in deep galaxy surveys are presented. A statistical analysis of the radial distributions of galaxies in the COSMOS and HDF-N deep fields is presented. Independent spectral and photometric surveys exist for each field, carried out in different wavelength ranges and using different observing methods. Catalogs of photometric redshifts in the optical (COSMOS-Zphot) and infrared (UltraVISTA) were used for the COSMOS field in the redshift interval $0.1 < z < 3.5$, as well as the zCOSMOS (10kZ) spectroscopic survey and the XMM-COSMOS and ALHAMBRA-F4 photometric redshift surveys. The HDFN-Zphot and ALHAMBRA-F5 catalogs of photometric redshifts were used for the HDF-N field. The Pearson correlation coefficient for the fluctuations in the numbers of galaxies obtained for independent surveys of the same deep field reaches $R = 0.70 \\pm 0.16$. The presence of this positive correlation supports the reality of fluctu...

  9. Dynamic non-equilibrium wall-modeling for large eddy simulation at high Reynolds numbers

    Science.gov (United States)

    Kawai, Soshi; Larsson, Johan

    2013-01-01

    A dynamic non-equilibrium wall-model for large-eddy simulation at arbitrarily high Reynolds numbers is proposed and validated on equilibrium boundary layers and a non-equilibrium shock/boundary-layer interaction problem. The proposed method builds on the prior non-equilibrium wall-models of Balaras et al. [AIAA J. 34, 1111-1119 (1996)], 10.2514/3.13200 and Wang and Moin [Phys. Fluids 14, 2043-2051 (2002)], 10.1063/1.1476668: the failure of these wall-models to accurately predict the skin friction in equilibrium boundary layers is shown and analyzed, and an improved wall-model that solves this issue is proposed. The improvement stems directly from reasoning about how the turbulence length scale changes with wall distance in the inertial sublayer, the grid resolution, and the resolution-characteristics of numerical methods. The proposed model yields accurate resolved turbulence, both in terms of structure and statistics for both the equilibrium and non-equilibrium flows without the use of ad hoc corrections. Crucially, the model accurately predicts the skin friction, something that existing non-equilibrium wall-models fail to do robustly.

  10. Laboratory astrophysics using differential rotation of unmagnetized plasma at large magnetic Reynolds number

    Science.gov (United States)

    Weisberg, David

    2016-10-01

    Differentially rotating plasma flow has been measured in the Madison Plasma Dynamo Experiment (MPDX). Spherical cusp-confined plasmas have been stirred both from the plasma boundary using electrostatic stirring in the magnetized edge and in the plasma core using weak global fields and cross-field currents to impose a body-force torque. Laminar velocity profiles conducive to shear-driven MHD instabilities like the dynamo and the MRI are now being generated and controlled with magnetic Reynolds numbers of Rm method for plasma heating, but limits on input heating power have been observed (believed to be caused by the formation of double-layers at anodes). These confinement studies have culminated in large (R = 1.4 m), warm (Te 1), steady-state plasmas. Results of the ambipolar transport model are good fits to measurements of pressure gradients and fluid drifts in the cusp, and offer a predictive tool for future cusp-confined devices. Hydrodynamic modeling is shown to be a good description for measured plasma flows, where ion viscosity proves to be an efficient mechanism for transporting momentum from the magnetized edge into the unmagnetized core. In addition, the body-force stirring technique produces velocity profiles conducive to MRI experiments where dΩ / dr research of flow-driven astrophysical MHD instabilities.

  11. Left-Right Symmetry and Lepton Number Violation at the Large Hadron Electron Collider

    CERN Document Server

    Lindner, Manfred; Rodejohann, Werner; Yaguna, Carlos E

    2016-01-01

    We show that the proposed Large Hadron electron Collider (LHeC) will provide a great opportunity to search for left-right symmetry and establish lepton number violation, complementing current and planned searches based on LHC data and neutrinoless double beta decay. We consider several plausible configurations for the LHeC -- including different electron energies and polarizations, as well as distinct values for the charge misidentification rate. Within left-right symmetric theories we determine the values of right-handed neutrino and gauge boson masses that could be tested at the LHeC after one, five and ten years of operation. Our results indicate that this collider might probe, via the $\Delta L = 2$ signal $e^-p \to e^+jjj$, Majorana neutrino masses up to $1$ TeV and $W_R$ masses up to $\sim 6.5$ TeV. Interestingly, part of this parameter space is beyond the expected reach of the LHC and of future neutrinoless double beta decay experiments.

  12. LARGE-EDDY SIMULATION OF FLOW AROUND CYLINDER ARRAYS AT A SUBCRITICAL REYNOLDS NUMBER

    Institute of Scientific and Technical Information of China (English)

    ZOU Lin; LIN Yu-feng; LAM Kit

    2008-01-01

    The complex three-dimensional turbulent flows around a cylinder array with four cylinders in an in-line square configuration at a subcritical Reynolds number of 1.5 × 10^4, for spacing ratios up to 3.5, were investigated using Large Eddy Simulation (LES). The full-field vorticity and velocity distributions as well as turbulent quantities were calculated in detail and the near-wake structures were presented. The results show that a bi-stable flow nature and distinct vortex shedding of the upstream cylinders were observed, depending on the spacing ratio. The techniques of Laser Doppler Anemometry (LDA) and Digital Particle Image Velocimetry (DPIV) were also employed to validate the present LES method. The results show that the numerical predictions are in excellent agreement with the experimental measurements. Therefore, the full-field instantaneous and mean quantities of the flow field, velocity field and vorticity field can be extracted from the LES results for further study of the complex flow characteristics.

  13. Left-right symmetry and lepton number violation at the Large Hadron electron Collider

    Energy Technology Data Exchange (ETDEWEB)

    Lindner, Manfred; Queiroz, Farinaldo S.; Rodejohann, Werner; Yaguna, Carlos E. [Max-Planck-Institut für Kernphysik,Saupfercheckweg 1, 69117 Heidelberg (Germany)

    2016-06-23

    We show that the proposed Large Hadron electron Collider (LHeC) will provide an opportunity to search for left-right symmetry and establish lepton number violation, complementing current and planned searches based on LHC data and neutrinoless double beta decay. We consider several plausible configurations for the LHeC — including different electron energies and polarizations, as well as distinct values for the charge misidentification rate. Within left-right symmetric theories we determine the values of right-handed neutrino and gauge boson masses that could be tested at the LHeC after one, five and ten years of operation. Our results indicate that this collider might probe, via the ΔL=2 signal e^-p → e^+jjj, Majorana neutrino masses up to 1 TeV and W_R masses up to ∼6.5 TeV. Interestingly, part of this parameter space is beyond the expected reach of the LHC and of future neutrinoless double beta decay experiments.

  14. Science Case and Requirements for the MOSAIC Concept for a Multi-Object Spectrograph for the European Extremely Large Telescope

    CERN Document Server

    Evans, C J; Barbuy, B; Bonifacio, P; Cuby, J -G; Guenther, E; Hammer, F; Jagourel, P; Kaper, L; Morris, S L; Afonso, J; Amram, P; Aussel, H; Basden, A; Bastian, N; Battaglia, G; Biller, B; Bouché, N; Caffau, E; Charlot, S; Clenet, Y; Combes, F; Conselice, C; Contini, T; Dalton, G; Davies, B; Disseau, K; Dunlop, J; Fiore, F; Flores, H; Fusco, T; Gadotti, D; Gallazzi, A; Giallongo, E; Gonçalves, T; Gratadour, D; Hill, V; Huertas-Company, M; Ibata, R; Larsen, S; Fèvre, O Le; Lemasle, B; Maraston, C; Mei, S; Mellier, Y; Östlin, G; Paumard, T; Pello, R; Pentericci, L; Petitjean, P; Roth, M; Rouan, D; Schaerer, D; Telles, E; Trager, S; Welikala, N; Zibetti, S; Ziegler, B

    2014-01-01

    Over the past 18 months we have revisited the science requirements for a multi-object spectrograph (MOS) for the European Extremely Large Telescope (E-ELT). These efforts span the full range of E-ELT science and include input from a broad cross-section of astronomers across the ESO partner countries. In this contribution we summarise the key cases relating to studies of high-redshift galaxies, galaxy evolution, and stellar populations, with a more expansive presentation of a new case relating to detection of exoplanets in stellar clusters. A general requirement is the need for two observational modes to best exploit the large (>40 sq. arcmin) patrol field of the E-ELT. The first mode ('high multiplex') requires integrated-light (or coarsely resolved) optical/near-IR spectroscopy of >100 objects simultaneously. The second ('high definition'), enabled by wide-field adaptive optics, requires spatially-resolved, near-IR spectroscopy of >10 objects/sub-fields. Within the context of the conceptual study for an ELT-MOS called MO...

  15. In silico identification of conserved microRNAs in large number of diverse plant species

    Directory of Open Access Journals (Sweden)

    Jagadeeswaran Guru

    2008-04-01

    Full Text Available Abstract Background MicroRNAs (miRNAs) are recently discovered small non-coding RNAs that play pivotal roles in gene expression, specifically at the post-transcriptional level in plants and animals. Identification of miRNAs in a large number of diverse plant species is important to understand the evolution of miRNAs and miRNA-targeted gene regulation. Nowadays, publicly available databases play a central role in in-silico biology. Because at least ~21 miRNA families are conserved in higher plants, a homology-based search using these databases can help identify orthologs or paralogs in plants. Results We searched all publicly available nucleotide databases of genome survey sequences (GSS), high-throughput genomic sequences (HTGS), expressed sequence tags (ESTs) and non-redundant (NR) nucleotides and identified 682 miRNAs in 155 diverse plant species. We found more than 15 conserved miRNA families in 11 plant species, 10 to 14 families in 10 plant species and 5 to 9 families in 29 plant species. Nineteen conserved miRNA families were identified in important model legumes such as Medicago, Lotus and soybean. Five miRNA families – miR319, miR156/157, miR169, miR165/166 and miR394 – were found in 51, 45, 41, 40 and 40 diverse plant species, respectively. miR403 homologs were found in 16 dicots, whereas miR437 and miR444 homologs, as well as the miR396d/e variant of the miR396 family, were found only in monocots, thus providing large-scale authenticity for the dicot- and monocot-specific miRNAs. Furthermore, we provide computational and/or experimental evidence for the conservation of 6 newly found Arabidopsis miRNA homologs (miR158, miR391, miR824, miR825, miR827 and miR840) and 2 small RNAs (small-85 and small-87) in Brassica spp. Conclusion Using all publicly available nucleotide databases, 682 miRNAs were identified in 155 diverse plant species. By combining the expression analysis with the computational approach, we found that 6 miRNAs and 2
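
    A toy illustration of the homology idea only (not the authors' pipeline, which searched the public GSS/HTGS/EST/NR databases and validated precursor structures): scan candidate nucleotide sequences for windows that differ from a known mature miRNA at no more than a few positions. The sequences below are illustrative placeholders.

        def find_mirna_homologs(known_mirnas, candidate_seqs, max_mismatches=2):
            """Report candidate sequences containing a near-match to a known miRNA.

            `known_mirnas`: dict name -> mature miRNA sequence (roughly 21 nt).
            `candidate_seqs`: dict accession -> nucleotide sequence (e.g. an EST).
            A hit is any window differing from the mature sequence at no more than
            `max_mismatches` positions.  Real pipelines would use BLAST plus checks
            of the precursor hairpin; this only sketches the seed step.
            """
            hits = []
            for name, mirna in known_mirnas.items():
                k = len(mirna)
                for acc, seq in candidate_seqs.items():
                    for start in range(len(seq) - k + 1):
                        window = seq[start:start + k]
                        mismatches = sum(a != b for a, b in zip(window, mirna))
                        if mismatches <= max_mismatches:
                            hits.append((name, acc, start, mismatches))
                            break   # one hit per (miRNA, sequence) is enough here
            return hits

        # Illustrative sequences only (not taken from a database).
        known = {"miR156-like": "UGACAGAAGAGAGUGAGCAC"}
        ests = {"EST001": "AAUGACAGAAGAGAGUGAGCACGGUU"}
        print(find_mirna_homologs(known, ests))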

  16. XCAN project : coherent beam combining of large number fibers in femtosecond regime (Conference Presentation)

    Science.gov (United States)

    Antier, Marie; Le Dortz, Jeremy; Bourderionnet, Jerome; Larat, Christian; Lallier, Eric; Daniault, Louis; Fsaifes, Ihsan; Heilmann, Anke; Bellanger, Severine; Simon-Boisson, Christophe; Chanteloup, Jean-Christophe; Brignon, Arnaud

    2016-10-01

    The XCAN project, a three-year project that began in 2015 and is carried out by Thales and the Ecole Polytechnique, aims at developing a laser system based on the coherent combination of laser beams produced through a network of amplifying optical fibers. This technique provides an attractive means of reaching simultaneously the high peak and high average powers required for various industrial, scientific and defense applications. The architecture has to be compatible with a very large number of fibers (1000-10000). The goal of XCAN is to overcome all the key scientific and technological barriers to the design and development of an experimental laser demonstrator. The coherent addition of multiple individually phased beams is intended to provide tens of gigawatts of peak power at a 50 kHz repetition rate. Coherent beam combining (CBC) of fiber amplifiers involves a master oscillator whose output is split into N fiber channels and then amplified through a series of polarization-maintaining fiber pre-amplifiers and amplifiers. In the so-called tiled-aperture configuration, the N fibers are arranged in an array and collimated in the near field of the laser output. The N beamlets then interfere constructively in the far field and give a bright central lobe. CBC techniques with active phase locking involve phase-mismatch detection, calculation of the correction, and phase compensation of each amplifier by means of phase modulators. Interferometric phase measurement has proven to be particularly well suited to phase-locking a very large number of fibers in the continuous regime. A small fraction of the N beamlets is imaged onto a camera, where the beamlets interfere separately with a reference beam. The phase mismatch of each beam is then calculated from the positions of the interference fringes. In this presentation, we demonstrate the phase locking of 19 fibers in the femtosecond pulse regime with this technique. In our first experiment, a master oscillator generates pulses of 300 fs (chirped to 200 ps). The beam is
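
    A small numpy sketch of why the phase locking matters in a tiled-aperture geometry (illustrative only, unrelated to the XCAN hardware): with equal beamlet amplitudes the on-axis combining efficiency is |sum_k exp(i*phi_k)|^2 / N^2, which drops quickly as the residual phase error grows.

        import numpy as np

        def combining_efficiency(phases):
            """On-axis (central-lobe) efficiency of equal-amplitude tiled beamlets."""
            n = len(phases)
            return np.abs(np.exp(1j * phases).sum()) ** 2 / n ** 2

        rng = np.random.default_rng(1)
        n_fibers = 19                        # matching the 19-fiber demonstration above
        for rms_rad in (0.0, 0.1, 0.5, 1.0):          # residual phase error (rad, RMS)
            effs = [combining_efficiency(rng.normal(0.0, rms_rad, n_fibers))
                    for _ in range(2000)]
            print(f"sigma_phi = {rms_rad:.1f} rad -> mean efficiency = {np.mean(effs):.2f}")

    For large N and small residual errors the mean efficiency is close to exp(-sigma^2), which is why residual phase errors well below a radian are targeted in such systems.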

  17. On the chromatic number of pentagon-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2007-01-01

    We prove that, for each fixed real number c > 0, the pentagon-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973. A similar result holds for any other fixed odd cycle, except the tria...

  18. Physicochemical characterization of smoke aerosol during large-scale wildfires: Extreme event of August 2010 in Moscow

    Science.gov (United States)

    Popovicheva, O.; Kistler, M.; Kireeva, E.; Persiantseva, N.; Timofeev, M.; Kopeikin, V.; Kasper-Giebl, A.

    2014-10-01

    Enhancement of biomass-burning-related research is essential for assessing the impact of large-scale wildfires on pollution at regional and global scales. Starting on 6 August 2010, Moscow was covered with thick smoke of unusually high PM10 and BC concentrations, considerably affected by the huge forest and peat fires around the megacity. This work presents the first comprehensive physico-chemical characterization of aerosols during the extreme smoke event in Moscow in August 2010. Sampling was performed in the Moscow center and a suburb, as well as one year later, in August 2011, during a period when no biomass burning was observed. Small-scale experimental fires of regional biomass were conducted in the Moscow region. Carbon content, functionalities of organic/inorganic compounds, tracers of biomass burning (anhydrosaccharides), ionic composition, and the structure of the smoke were analyzed by thermal-optical analysis, FTIR spectroscopy, liquid and ion chromatography, and electron microscopy. Carbonaceous aerosol in August 2010 was dominated by organic species, with elemental carbon (EC) as a minor component. A high average OC/EC ratio near 27.4 was found, comparable to smoke from regional smoldering biomass fires and more than 3 times the value observed in August 2011. The organic functionalities of the Moscow smoke aerosols were hydroxyl, aliphatic, aromatic, acid and non-acid carbonyl, and nitro compound groups; almost all of them indicate the wildfires around the city as the source of the smoke. The ratio of levoglucosan (LG) to mannosan near 5 confirms the origin of the smoke in coniferous forest fires around the megacity. A low LG/OC ratio near 0.8% indicates degradation of this major molecular tracer of biomass burning in the urban environment. The total concentration of inorganic ions, dominated by sulfate (SO4^2-) and ammonium (NH4^+), was found to be about 5 times higher during the large-scale wildfires than in August 2011. Together with the strong sulfate and ammonium absorbance in the smoke aerosols, these observations prove the formation of

  19. Investigating the evolution of Shared Socioeconomic Pathways with a large number of scenarios

    Science.gov (United States)

    Schweizer, V. J.; Guivarch, C.; Rozenberg, J.

    2013-12-01

    The new scenario framework for climate change research includes alternative possible trends for socioeconomic development called Shared Socioeconomic Pathways (SSPs). The SSPs bear some similarities to other scenarios used for global change research, but they also have important differences. Like the IPCC Special Report on Emissions Scenarios or the Millennium Ecosystem Assessment, SSPs are defined by a scenario logic consisting of two axes. However, these axes define SSPs with respect to their location in an outcome space for challenges to mitigation and to adaptation rather than by their drivers. Open questions for the SSPs include what their drivers are and how the time dimension could be interpreted within the outcome space. We present a new analytical approach for addressing both questions by studying large numbers of scenarios produced by an integrated assessment model, IMACLIM-R. We systematically generated 432 scenarios and used the SSP framework to classify them by typology. We then analyzed them dynamically, tracing their evolution through the SSP challenges space at annual time steps over the period 2010-2090. Through this approach, we found that many scenarios do not remain fixed in a particular SSP domain; they drift from one domain to another. In papers describing the framework for new scenarios, SSPs are envisioned as hypothetical (counter-factual) reference scenarios that remain fixed in one domain over some time period of interest. However, we conclude that it may be important to also research scenarios that shift across SSP domains. This is relevant for another open question, namely which scenarios are important to explore given their consequences. Through a data mining technique, we uncovered prominent drivers for scenarios that shift across SSP domains. Scenarios with different challenges for adaptation and mitigation (that is, mitigation and adaptation challenges that are not co-varying) were found to be the least stable, and the following
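
    A schematic sketch of the classification-and-drift bookkeeping described above (field names, thresholds and the SSP labels used here are simplifications; the intermediate SSP2 domain is omitted): each annual pair of challenge scores is mapped to a domain, and a trajectory that visits more than one domain is flagged as drifting.

        def ssp_domain(mitigation, adaptation, threshold=0.5):
            """Map (mitigation, adaptation) challenge scores in [0, 1] to an SSP-like domain."""
            high_m, high_a = mitigation >= threshold, adaptation >= threshold
            if high_m and high_a:
                return "SSP3-like (high/high)"
            if high_m:
                return "SSP5-like (high mitigation challenge)"
            if high_a:
                return "SSP4-like (high adaptation challenge)"
            return "SSP1-like (low/low)"

        def drifts(trajectory):
            """True if a scenario's annual challenge-score path crosses SSP-like domains."""
            return len({ssp_domain(m, a) for m, a in trajectory}) > 1

        # Hypothetical annual (mitigation, adaptation) scores for one scenario.
        path = [(0.30, 0.40), (0.45, 0.55), (0.70, 0.60)]
        print([ssp_domain(m, a) for m, a in path], "drifts:", drifts(path))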

  20. Nonclassical light from a large number of independent single-photon emitters

    CERN Document Server

    Lachman, Lukáš; Filip, Radim

    2016-01-01

    Nonclassical quantum effects gradually reach the domain of physics of large systems previously considered purely classical. We derive a hierarchy of operational criteria suitable for a reliable detection of nonclassicality of light from an arbitrarily large ensemble of independent single-photon emitters. We show that such a large ensemble can always emit nonclassical light without any phase reference and under realistic experimental conditions, including incoherent background noise. The nonclassical light from the large ensemble of emitters can be witnessed much better than light coming from a single or a few emitters.

  1. Large reptiles and cold temperatures: Do extreme cold spells set distributional limits for tropical reptiles in Florida?

    Science.gov (United States)

    Mazzotti, Frank J.; Cherkiss, Michael S.; Parry, Mark; Beauchamp, Jeff; Rochford, Mike; Smith, Brian J.; Hart, Kristen M.; Brandt, Laura A.

    2016-01-01

    Distributional limits of many tropical species in Florida are ultimately determined by tolerance to low temperature. An unprecedented cold spell during 2–11 January 2010, in South Florida provided an opportunity to compare the responses of tropical American crocodiles with warm-temperate American alligators and to compare the responses of nonnative Burmese pythons with native warm-temperate snakes exposed to prolonged cold temperatures. After the January 2010 cold spell, a record number of American crocodiles (n = 151) and Burmese pythons (n = 36) were found dead. In contrast, no American alligators and no native snakes were found dead. American alligators and American crocodiles behaved differently during the cold spell. American alligators stopped basking and retreated to warmer water. American crocodiles apparently continued to bask during extreme cold temperatures resulting in lethal body temperatures. The mortality of Burmese pythons compared to the absence of mortality for native snakes suggests that the current population of Burmese pythons in the Everglades is less tolerant of cold temperatures than native snakes. Burmese pythons introduced from other parts of their native range may be more tolerant of cold temperatures. We documented the direct effects of cold temperatures on crocodiles and pythons; however, evidence of long-term effects of cold temperature on their populations within their established ranges remains lacking. Mortality of crocodiles and pythons outside of their current established range may be more important in setting distributional limits.

  2. Stellar metallicities beyond the Local Group: the potential of J-band spectroscopy with extremely large telescopes

    CERN Document Server

    Evans, C J; Kudritzki, R -P; Puech, M; Yang, Y; Cuby, J -G; Figer, D F; Lehnert, M D; Morris, S L; Rousset, G

    2010-01-01

    We present simulated J-band spectroscopy of red giants and supergiants with a 42m European Extremely Large Telescope (E-ELT), using tools developed toward the EAGLE Phase A instrument study. The simulated spectra are used to demonstrate the validity of the 1.15-1.22 micron region to recover accurate stellar metallicities from Solar and metal-poor (one tenth Solar) spectral templates. From tests at spectral resolving powers of four and ten thousand, we require continuum signal-to-noise ratios in excess of 50 (per two-pixel resolution element) to recover the input metallicity to within 0.1 dex. We highlight the potential of direct estimates of stellar metallicites (over the range -1<[Fe/H]<0) of red giants with the E-ELT, reaching out to distances of ~5 Mpc for stars near the tip of the red giant branch. The same simulations are also used to illustrate the potential for quantitative spectroscopy of red supergiants beyond the Local Volume to tens of Mpc. Calcium triplet observations in the I-band are also ...

  3. An Extreme Metallicity, Large-Scale Outflow from a Star-Forming Galaxy at z ~ 0.4

    CERN Document Server

    Muzahid, Sowgat; Churchil, Christopher W; Charlton, Jane C; Nielsen, Nikole M; Mathes, Nigel L; Trujillo-Gomez, Sebastian

    2015-01-01

    We present a detailed analysis of a large-scale galactic outflow in the CGM of a massive (M_h ~ 10^12.5 Msun), star-forming (6.9 Msun/yr), sub-L* (0.5 L_B*) galaxy at z=0.39853 that exhibits a wealth of metal-line absorption in the spectra of the background quasar Q 0122-003 at an impact parameter of 163 kpc. The galaxy inclination angle (i=63 degrees) and the azimuthal angle (Phi=73 degrees) imply that the QSO sightline is passing through the projected minor axis of the galaxy. The absorption system shows a multiphase, multicomponent structure with ultra-strong, wide-velocity-spread O VI (log N = 15.16 ± 0.04, V_{90} = 419 km/s) and N V (log N = 14.69 ± 0.07, V_{90} = 285 km/s) lines that are extremely rare in the literature. The highly ionized absorption components are well explained as arising in a low-density (10^{-4.2} cm^{-3}), diffuse (10 kpc), cool (10^4 K) photoionized gas with a super-solar metallicity ([X/H] > 0.3). From the observed narrowness of the Ly-beta profile, the non-detection of S IV absorption, and...

  4. Charge compensation in extremely large magnetoresistance materials LaSb and LaBi revealed by first-principles calculations

    Science.gov (United States)

    Guo, Peng-Jie; Yang, Huan-Cheng; Zhang, Bing-Jing; Liu, Kai; Lu, Zhong-Yi

    2016-06-01

    By means of first-principles electronic structure calculations, we have systematically studied the electronic structures of the recently discovered extremely large magnetoresistance (XMR) materials LaSb and LaBi. We find that both LaSb and LaBi are semimetals with the electron and hole carriers in balance. The calculated carrier densities on the order of 10^20 cm^-3 are in good agreement with the experimental values, implying a long mean free time of the carriers at low temperatures and thus high carrier mobilities. With a semiclassical two-band model, the charge compensation and high carrier mobilities naturally explain: (i) the XMR observed in LaSb and LaBi, (ii) the nonsaturating quadratic dependence of the XMR on an external magnetic field, and (iii) the resistivity plateau in the turn-on temperature behavior at very low temperatures. The explanation of these features without resorting to topological effects indicates that they should be common characteristics of all electron-hole-compensated semimetals.
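
    For reference, the standard semiclassical two-band expression invoked above (a textbook form with conductivities $\sigma_{e,h}=n_{e,h}e\mu_{e,h}$, not quoted from the paper) reads

        $\mathrm{MR}(B) \equiv \frac{\rho_{xx}(B)-\rho_{xx}(0)}{\rho_{xx}(0)} = \frac{\sigma_e\sigma_h(\mu_e+\mu_h)^2 B^2}{(\sigma_e+\sigma_h)^2 + (\sigma_e\mu_h-\sigma_h\mu_e)^2 B^2},$

    and for exact compensation ($n_e=n_h$) the field-dependent term in the denominator vanishes, leaving the nonsaturating quadratic law $\mathrm{MR}=\mu_e\mu_h B^2$; a large product of mobilities then translates directly into an extremely large magnetoresistance.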

  5. Early Science with the Large Millimeter Telescope: Observations of Extremely Luminous High-z Sources Identified by Planck

    CERN Document Server

    Harrington, K C; Cybulski, R; Wilson, G W; Aretxaga, I; Chavez, M; De la Luz, V; Erickson, N; Ferrusca, D; Gallup, A D; Hughes, D H; Montaña, A; Narayanan, G; Sánchez-Argüelles, D; Schloerb, F P; Souccar, K; Terlevich, E; Terlevich, R; Zeballos, M; Zavala, J A

    2016-01-01

    We present 8.5 arcsec resolution 1.1mm continuum imaging and CO spectroscopic redshift measurements of eight extremely bright submillimetre galaxies identified from the Planck and Herschel surveys, taken with the Large Millimeter Telescope's AzTEC and Redshift Search Receiver instruments. We compiled a candidate list of high redshift galaxies by cross-correlating the Planck Surveyor mission's highest frequency channel (857 GHz, FWHM = 4.5 arcmin) with the archival Herschel Spectral and Photometric Imaging Receiver (SPIRE) imaging data, and requiring the presence of a unique, single Herschel counterpart within the 150 arcsec search radius of the Planck source positions with 350 micron flux density larger than 100 mJy, excluding known blazars and foreground galaxies. All eight candidate objects observed are detected in 1.1mm continuum by AzTEC bolometer camera, and at least one CO line is detected in all cases with a spectroscopic redshift between 1.3 < z(CO) < 3.3. Their infrared spectral energy distribu...

  6. Large-strain time-temperature equivalence in high density polyethylene for prediction of extreme deformation and damage

    Directory of Open Access Journals (Sweden)

    Gray G.T.

    2012-08-01

    Full Text Available Time-temperature equivalence is a widely recognized property of many time-dependent material systems, where there is a clear predictive link relating the deformation response at a nominal temperature and a high strain-rate to an equivalent response at a depressed temperature and nominal strain-rate. It has been found that high-density polyethylene (HDPE) obeys a linear empirical formulation relating test temperature and strain-rate. This observation was extended to continuous stress-strain curves, such that material response measured in a load frame at large strains and low strain-rates (at depressed temperatures) could be translated into a temperature-dependent response at high strain-rates and validated against Taylor impact results. Time-temperature equivalence was used in conjunction with jump-rate compression tests to investigate the isothermal response at high strain-rate while excluding adiabatic heating. The validated constitutive response was then applied to the analysis of Dynamic-Tensile-Extrusion of HDPE, a tensile analog to Taylor impact developed at LANL. The Dyn-Ten-Ext test results and FEA found that HDPE deformed smoothly after exiting the die, and after substantial drawing appeared to undergo a pressure-dependent shear damage mechanism at intermediate velocities, while it fragmented at high velocities. Dynamic-Tensile-Extrusion, properly coupled with a validated constitutive model, can successfully probe extreme tensile deformation and damage of polymers.
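
    One common way to write such a linear equivalence (a generic assumed form shown only for illustration, not necessarily the paper's calibrated fit) is as a rate-dependent shift of the effective test temperature,

        $T_{\mathrm{eff}} = T - C\,\log_{10}\!\left(\dot{\varepsilon}/\dot{\varepsilon}_{0}\right),$

    so that data measured at a depressed temperature $T$ and reference strain-rate $\dot{\varepsilon}_{0}$ stand in for the response at a higher rate $\dot{\varepsilon}$, with the slope $C$ (kelvin per decade of strain-rate) fitted from experiments.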

  7. Large woody debris mobility and accumulation by an extreme flood - an example from the Dyje River, Czech Republic

    Science.gov (United States)

    Macka, Zdenek; Krejci, Lukas

    2010-05-01

    Large woody debris (LWD) in the form of logs, branches and their fragments plays an important geomorphic and ecological role in forested watersheds. Especially when organized in accumulations and jams, LWD has been found to change the hydraulic, morphological, sedimentary and biological characteristics of fluvial ecosystems. Our study focuses on the distribution and properties of LWD jams within a 44 km long forested reach of the Dyje River in the south-eastern Czech Republic. The study reach is located between two large water reservoirs, and the flow is regulated, showing significant daily fluctuations in discharge due to water releases for power generation. The river flows in a deeply incised meandering valley with a narrow and patchy floodplain. In 2002, and especially in 2006, large volumes of LWD were transported by the river, and the water reservoir downstream was congested with wood. The peak discharge of the 2006 flood equalled 306 m^3/s, which was estimated as a 500-year flood. The flood caused significant mobility and redistribution of woody debris in both the aquatic and riparian segments of the river corridor. The high rate of LWD transport is favoured by the large bankfull channel width, which exceeds the average tree height. LWD jams were defined as aggregations of three or more wood pieces with diameter ≥ 0.1 m and length ≥ 1 m. We surveyed LWD jams in 62 river reaches, located at meander apexes, inflections and intermediate positions; the length of the reaches was 200 m. The overall number of registered LWD jams was 200. The majority of jams consist solely of allochthonous (transported) wood pieces (65%), some jams are combinations of large key trees and trapped transported pieces (29%), and only a small proportion are jams formed by locally uprooted trees (12.6%). The number of wood pieces per jam varies greatly, from 3 to 98, the most common being 5-10 pieces per jam. The spatial distribution of jams is longitudinally and transversally irregular within the river corridor.

  8. A new modification of combining vacuum therapy and brachytherapy in large subfascial soft-tissue sarcomas of the extremities

    Energy Technology Data Exchange (ETDEWEB)

    Rudert, Maximilian; Holzapfel, Boris Michael [Dept. of Orthopedics, Univ. of Wuerzburg (Germany); Dept. of Orthopedics and Traumatology, Klinikum Muenchen rechts der Isar, Technical Univ. of Munich (Germany); Winkler, Cornelia; Kneschaurek, Peter; Molls, Michael; Roeper, Barbara [Dept. of Radiation Oncology, Technical Univ. of Munich (Germany); Rechl, Hans; Gradinger, Reiner [Dept. of Orthopedics and Traumatology, Klinikum Muenchen rechts der Isar, Technical Univ. of Munich (Germany)

    2010-04-15

    Purpose: To present a modification of a technique combining the advantages of brachytherapy for local radiation treatment and vacuum therapy for wound conditioning after resection of subfascial soft-tissue sarcomas (STS) of the extremities. Patients and methods: Between January and May 2008, four patients with large (> 10 cm) subfascial STS of the thigh underwent marginal tumor excision followed by early postoperative HDR (high-dose-rate) brachytherapy (iridium-192) and vacuum therapy as part of their interdisciplinary treatment. The sponge of the vacuum system was used to stabilize brachytherapy applicators in parallel positions and to allow for a maximal wound contraction in the early postoperative phase, thus preventing seroma and deterioration of local dose distribution as optimized in computed tomography-(CT-)based three-dimensional conformal treatment planning. In three patients this was followed by external-beam radiotherapy. Acute wound complications and late effects according to LENT-SOMA after 4-8 months of follow-up were recorded. Results: the combination of vacuum and brachytherapy was applicable in all patients. CT scans from the 1st postoperative day showed the shrinkage of the sponge located in the tumor bed with the brachytherapy applicators in the intended position and easily visible. 15-18 Gy in fractions of 3 Gy bid prescribed to 5 mm tissue depth were applied over the next days with removal of the sponge and applicators on days 5-8. No early or late toxicity exceeding grade 2 was observed. The mean Enneking Score for functional outcome was 63% (perfect function = 100%). Conclusion: The combination of vacuum and brachytherapy is applicable and safe in the treatment of large subfascial STS. (orig.)

  9. Design of a prototype position actuator for the primary mirror segments of the European Extremely Large Telescope

    Science.gov (United States)

    Jiménez, A.; Morante, E.; Viera, T.; Núñez, M.; Reyes, M.

    2010-07-01

    The European Extremely Large Telescope (E-ELT) is based on a primary mirror of 984 segments; to achieve the required optical performance, each segment must be positioned relative to its adjacent segments with nanometer accuracy. CESA designed the M1 Position Actuators (PACT) to comply with the demanding performance requirements of the E-ELT. Three PACT are located under each segment, controlling three out-of-plane degrees of freedom (tip, tilt, piston). To achieve high linear accuracy over long operational displacements, PACT uses two stages in series. The first stage, based on a Voice Coil Actuator (VCA), achieves high accuracy over very short travel ranges, while the second stage, based on a Brushless DC Motor (BLDC), provides a large stroke range and positions the first stage close to the demanded position. A BLDC motor is used to achieve continuous, smooth movement compared with the sudden jumps of a stepper. A gearbox attached to the motor allows a large reduction of power consumption and poses a significant sizing challenge. The PACT space envelope was reduced by means of two flat springs fixed to the VCA, whose main characteristic is a low linear axial stiffness. To achieve the best performance for PACT, sensors have been included in both stages. A rotary encoder is included in the BLDC stage to close the position/velocity control loop. An incremental optical encoder measures the PACT travel range with nanometer accuracy and is used to close the position loop of the whole actuator movement. For this purpose, four different optical sensors with different gratings will be evaluated. The control strategy comprises different internal closed loops that work together to achieve the required performance.
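
    A toy sketch of the coarse/fine division of labour in such a two-stage actuator (the numbers and names are hypothetical and unrelated to the real PACT design): the coarse stage settles near the demand, and the fine stage absorbs the residual provided it fits within its short stroke.

        def two_stage_position(demand_nm, coarse_step_nm=500.0, fine_stroke_nm=1000.0):
            """Split a position demand between a coarse (motor) and a fine (voice-coil) stage.

            The coarse stage is assumed to settle only on multiples of coarse_step_nm;
            the fine stage corrects the residual as long as it stays within +/- half
            of fine_stroke_nm.  Returns (coarse_position_nm, fine_offset_nm).
            """
            coarse = round(demand_nm / coarse_step_nm) * coarse_step_nm
            fine = demand_nm - coarse
            if abs(fine) > fine_stroke_nm / 2:
                raise ValueError("residual exceeds the fine-stage stroke")
            return coarse, fine

        print(two_stage_position(12345.6))   # coarse 12500.0 nm, fine offset ~ -154.4 nm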

  10. APE: the Active Phasing Experiment to test new control system and phasing technology for a European Extremely Large Optical Telescope

    Science.gov (United States)

    Gonte, F.; Yaitskova, N.; Derie, F.; Constanza, A.; Brast, R.; Buzzoni, B.; Delabre, B.; Dierickx, P.; Dupuy, C.; Esteves, R.; Frank, C.; Guisard, S.; Karban, R.; Koenig, E.; Kolb, J.; Nylund, M.; Noethe, L.; Surdej, I.; Courteville, A.; Wilhelm, R.; Montoya, L.; Reyes, M.; Esposito, S.; Pinna, E.; Dohlen, K.; Ferrari, M.; Langlois, M.

    2005-08-01

    The future European Extremely Large Telescope will be composed of one or two giant segmented mirrors (up to 100 m in diameter) and of several large monolithic mirrors (up to 8 m in diameter). To limit the aberrations due to misalignments and defective surface quality it is necessary to have a proper active optics system. This active optics system must include a phasing system to limit the degradation of the PSF due to misphasing of the segmented mirrors. We present the latest design and development of the Active Phasing Experiment, which will be tested in the laboratory and on-sky, connected to a VLT at Paranal in Chile. It includes an active segmented mirror, a static piston plate to simulate a secondary segmented mirror, and four phasing wavefront sensors to measure the piston, tip and tilt of the segments and the aberrations of the VLT. The four phasing sensors are the Diffraction Image Phase Sensing Instrument developed by the Instituto de Astrofisica de Canarias, the Pyramid Phasing Sensor developed by the Arcetri Astrophysical Observatory, the Shack-Hartmann Phasing Sensor developed by the European Southern Observatory and the Zernike Unit for Segment phasing developed by the Laboratoire d'Astrophysique de Marseille. A reference measurement of the segmented mirror is made by an internal metrology system developed by Fogale Nanotech. The control system of the Active Phasing Experiment will perform the phasing of the segments, the guiding of the VLT and the active optics of the VLT. These activities are included in the Framework Programme 6 of the European Union.

  11. Arbitrarily large numbers of kink internal modes in inhomogeneous sine-Gordon equations

    Energy Technology Data Exchange (ETDEWEB)

    González, J.A., E-mail: jalbertgonz@yahoo.es [Department of Physics, Florida International University, Miami, FL 33199 (United States); Department of Natural Sciences, Miami Dade College, 627 SW 27th Ave., Miami, FL 33135 (United States); Bellorín, A., E-mail: alberto.bellorin@ucv.ve [Escuela de Física, Facultad de Ciencias, Universidad Central de Venezuela, Apartado Postal 47586, Caracas 1041-A (Venezuela, Bolivarian Republic of); García-Ñustes, M.A., E-mail: monica.garcia@pucv.cl [Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059 (Chile); Guerrero, L.E., E-mail: lguerre@usb.ve [Departamento de Física, Universidad Simón Bolívar, Apartado Postal 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Jiménez, S., E-mail: s.jimenez@upm.es [Departamento de Matemática Aplicada a las TT.II., E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040-Madrid (Spain); Vázquez, L., E-mail: lvazquez@fdi.ucm.es [Departamento de Matemática Aplicada, Facultad de Informática, Universidad Complutense de Madrid, 28040-Madrid (Spain)

    2017-06-28

    We prove analytically the existence of an infinite number of internal (shape) modes of sine-Gordon solitons in the presence of some inhomogeneous long-range forces, provided some conditions are satisfied. - Highlights: • We have found exact kink solutions to the perturbed sine-Gordon equation. • We have been able to study analytically the kink stability problem. • A kink equilibrated by an exponentially-localized perturbation has a finite number of oscillation modes. • A sufficiently broad equilibrating perturbation supports an infinite number of soliton internal modes.
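
    For context, the generic setting behind such results (illustrative; the specific long-range inhomogeneous force studied in the paper is not reproduced here) is a perturbed sine-Gordon equation, where linearization about a static kink $\phi_K$ turns the internal-mode question into a Schrödinger-type eigenvalue problem:

```latex
% Perturbed sine-Gordon equation and its kink linearization (generic form).
\[
  \phi_{tt} - \phi_{xx} + \sin\phi = F(x),
  \qquad
  \phi(x,t) = \phi_K(x) + f(x)\,e^{\lambda t}
  \;\Longrightarrow\;
  \bigl[-\partial_{xx} + \cos\phi_K(x)\bigr] f = -\lambda^{2} f .
\]
```

    Discrete eigenvalues with $0 < -\lambda^{2} < 1$, i.e. below the phonon band edge, correspond to the kink internal (shape) modes; the highlighted result is that a sufficiently broad equilibrating perturbation makes this discrete spectrum infinite.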

  12. Exact two-dimensionalization of rapidly rotating large-Reynolds-number flows

    CERN Document Server

    Gallet, Basile

    2015-01-01

    We consider the flow of a Newtonian fluid in a three-dimensional domain, rotating about a vertical axis and driven by a vertically invariant horizontal body-force. This system admits vertically invariant solutions that satisfy the 2D Navier-Stokes equation. At high Reynolds number and without global rotation, such solutions are usually unstable to three-dimensional perturbations. By contrast, for strong enough global rotation, we prove rigorously that the 2D (and possibly turbulent) solutions are stable to vertically dependent perturbations: the flow becomes 2D in the long-time limit. These results shed some light on several fundamental questions of rotating turbulence: for arbitrary Reynolds number and small enough Rossby number, the system is attracted towards purely 2D flow solutions, which display no energy dissipation anomaly and no cyclone-anticyclone asymmetry. Finally, these results challenge the applicability of wave turbulence theory to describe stationary rotating turbulence in bounded domains.
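
    For orientation (a standard non-dimensionalization, not notation taken from the paper), the setting corresponds to the rotating Navier-Stokes equations with a vertically invariant horizontal forcing, studied at fixed Reynolds number Re and small Rossby number Ro:

```latex
% Rotating Navier-Stokes equations, non-dimensional form (illustrative).
\[
  \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  + \frac{1}{\mathrm{Ro}}\,\mathbf{e}_z \times \mathbf{u}
  = -\nabla p + \frac{1}{\mathrm{Re}}\,\nabla^{2}\mathbf{u} + \mathbf{f}(x,y),
  \qquad \nabla\cdot\mathbf{u} = 0 .
\]
```

    The result cited above states that, for any fixed Re and small enough Ro, solutions converge to vertically invariant ($z$-independent) flows in the long-time limit.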

  13. Pupil plane optimization for single-mode multiaxial optical interferometry with a large number of telescopes

    CERN Document Server

    Le Bouquin, J B; Bouquin, Jean-Baptiste Le; Tatulli, Eric

    2006-01-01

    Upcoming optical interferometers will allow spectro-imaging at high angular resolution. The non-homothetic Fizeau concept combines good sensitivity and high spectral resolution capabilities. However, one critical issue is the design of the beam recombination scheme, at the heart of the instrument. We tackle the possibility of reducing the number of pixels that code the fringes by compressing the pupil plane. Shrinking the number of pixels -- which increases drastically with the number of recombined telescopes -- is indeed a key issue that enables reaching a higher limiting magnitude, but also allows lowering the required spectral resolution and speeding up the fringe reading process. By means of numerical simulations, we study the performance of existing estimators of the visibility with respect to the compression process. We show that not only does the model-based estimator lead to better signal-to-noise ratio (SNR) performance than the Fourier ones, but above all it is the only one which prevents introducing ...

  14. Large wood recruitment processes and transported volumes in Swiss mountain streams during the extreme flood of August 2005

    Science.gov (United States)

    Steeb, Nicolas; Rickenmann, Dieter; Badoux, Alexandre; Rickli, Christian; Waldner, Peter

    2017-02-01

    The extreme flood event that occurred in August 2005 was the most costly (documented) natural hazard event in the history of Switzerland. The flood was accompanied by the mobilization of > 69,000 m³ of large wood (LW) throughout the affected area. As recognized afterward, wood played an important role in exacerbating the damages, mainly because of log jams at bridges and weirs. The present study aimed at assessing the risk posed by wood in various catchments by investigating the amount and spatial variability of recruited and transported LW. Data regarding LW quantities were obtained by field surveys, remote sensing techniques (LiDAR), and GIS analysis and were subsequently translated into a conceptual model of wood transport mass balance. Detailed wood budgets and transport diagrams were established for four study catchments of Swiss mountain streams, showing the spatial variability of LW recruitment and deposition. Despite some uncertainties with regard to parameter assumptions, the sum of reconstructed wood input and observed deposition volumes agree reasonably well. Mass wasting such as landslides and debris flows were the dominant recruitment processes in headwater streams. In contrast, LW recruitment from lateral bank erosion became significant in the lower part of mountain streams, where the catchment reached a size of about 100 km². According to our analysis, 88% of the reconstructed total wood input was fresh, i.e., coming from living trees that were recruited from adjacent areas during the event. This implies an average deadwood contribution of 12%, most of which was estimated to have been in-channel deadwood entrained during the flood event.

  15. Large velocity fluctuations in small-Reynolds-number pipe flow of polymer solutions

    NARCIS (Netherlands)

    Bonn, D.; Ingremeau, F.; Amarouchene, Y.; Kellay, H.

    2011-01-01

    The flow of polymer solutions is examined in a flow geometry where a jet is used to inject the viscoelastic solution into a cylindrical tube. We show that this geometry allows for the generation of a "turbulentlike" flow at very low Reynolds numbers with a fluctuation level which can be as high as 3

  16. Thin-disk laser pump schemes for large number of passes and moderate pump source quality

    Science.gov (United States)

    Schuhmann, Karsten; Hänsch, Theodor W.; Kirch, Klaus; Knecht, Andreas; Kottmann, Franz; Nez, Francois; Pohl, Randolf; Taqqu, David; Antognini, Aldo

    2015-11-01

    Novel thin-disk laser pump layouts are proposed, yielding an increased number of passes for a given pump module size and pump source quality. These novel layouts result from a general scheme based on merging two simpler pump optics arrangements. Some peculiar examples can be realized by adapting standard commercially available pump optics simply by introducing an additional mirror pair. More pump passes yield better efficiency, opening the way for the usage of active materials with low absorption. In a standard multi-pass pump design, scaling of the number of beam passes brings about an increase of the overall size of the optical arrangement or an increase of the pump source quality requirements. Such increases are minimized in our scheme, making them eligible for industrial applications.

  17. Thin-disk laser pump schemes for large number of passes and moderate pump source quality.

    Science.gov (United States)

    Schuhmann, Karsten; Hänsch, Theodor W; Kirch, Klaus; Knecht, Andreas; Kottmann, Franz; Nez, Francois; Pohl, Randolf; Taqqu, David; Antognini, Aldo

    2015-11-10

    Thin-disk laser pump layouts yielding an increased number of passes for a given pump module size and pump source quality are proposed. These layouts result from a general scheme based on merging two simpler pump optics arrangements. Some peculiar examples can be realized by adapting standard, commercially available pump optics with an additional mirror pair. More pump passes yield better efficiency, opening the way for the usage of active materials with low absorption. In a standard multipass pump design, scaling of the number of beam passes brings about an increase in the overall size of the optical arrangement or an increase in the pump source quality requirements. Such increases are minimized in our scheme, making them eligible for industrial applications.

  18. Thin-disk laser pump schemes for large number of passes and moderate pump source quality

    CERN Document Server

    Schuhmann, K; Kirch, K; Knecht, A; Kottmann, F; Nez, F; Pohl, R; Taqqu, D; Antognini, A

    2015-01-01

    Novel thin-disk laser pump layouts are proposed, yielding an increased number of passes for a given pump module size and pump source quality. These novel layouts result from a general scheme based on merging two simpler pump optics arrangements. Some peculiar examples can be realized by adapting standard commercially available pump optics simply by introducing an additional mirror pair. More pump passes yield better efficiency, opening the way for the usage of active materials with low absorption. In a standard multi-pass pump design, scaling of the number of beam passes brings about an increase of the overall size of the optical arrangement or an increase of the pump source quality requirements. Such increases are minimized in our scheme, making them eligible for industrial applications.

  19. A comment on "bats killed in large numbers at United States wind energy facilities"

    Science.gov (United States)

    Huso, Manuela M.P.; Dalthorp, Dan

    2014-01-01

    Widespread reports of bat fatalities caused by wind turbines have raised concerns about the impacts of wind power development. Reliable estimates of the total number killed and the potential effects on populations are needed, but it is crucial that they be based on sound data. In a recent BioScience article, Hayes (2013) estimated that over 600,000 bats were killed at wind turbines in the United States in 2012. The scientific errors in the analysis are numerous, with the two most serious being that the included sites constituted a convenience sample, not a representative sample, and that the individual site estimates are derived from such different methodologies that they are inherently not comparable. This estimate is almost certainly inaccurate, but whether the actual number is much smaller, much larger, or about the same is uncertain. An accurate estimate of total bat fatality is not currently possible, given the shortcomings of the available data.

  20. Q-factorial Gorenstein toric Fano varieties with large Picard number

    DEFF Research Database (Denmark)

    Nill, Benjamin; Øbro, Mikkel

    2010-01-01

    In dimension $d$, ${\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $\rho_X$ correspond to simplicial reflexive polytopes with $\rho_X + d$ vertices. Casagrande showed that any $d$-dimensional simplicial reflexive polytope has at most $3d$ and $3d-1$ vertices if $d$ is even and odd, respectively. Moreover, for $d$ even there is up to unimodular equivalence only one such polytope with $3d$ vertices, corresponding to the product of $d/2$ copies of a del Pezzo surface of degree six. In this paper we completely classify all $d$-dimensional simplicial reflexive polytopes having $3d-1$ vertices, corresponding to $d$-dimensional ${\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $2d-1$. For $d$ even, there exist three such varieties, with two being singular, while for $d > 1$ odd there exist precisely two, both being nonsingular toric fiber...

  1. Large scale Direct Numerical Simulation of premixed turbulent jet flames at high Reynolds number

    Science.gov (United States)

    Attili, Antonio; Luca, Stefano; Lo Schiavo, Ermanno; Bisetti, Fabrizio; Creta, Francesco

    2016-11-01

    A set of direct numerical simulations of turbulent premixed jet flames at different Reynolds and Karlovitz numbers is presented. The simulations feature finite rate chemistry with 16 species and 73 reactions and up to 22 billion grid points. The jet consists of a methane/air mixture with equivalence ratio ϕ = 0.7 and temperature varying between 500 and 800 K. The temperature and species concentrations in the coflow correspond to the equilibrium state of the burnt mixture. All the simulations are performed at 4 atm. The flame length, normalized by the jet width, decreases significantly as the Reynolds number increases. This is consistent with an increase of the turbulent flame speed due to the increased integral scale of turbulence. This behavior is typical of flames in the thin-reaction zone regime, which are affected by turbulent transport in the preheat layer. Fractal dimension and topology of the flame surface, statistics of temperature gradients, and flame structure are investigated and the dependence of these quantities on the Reynolds number is assessed.

  2. Efficient high speed communications over electrical powerlines for a large number of users

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.; Tripathi, K.; Latchman, H.A. [Florida Univ., Gainesville, FL (United States). Dept. of Electrical and Computer Engineering

    2007-07-01

    Affordable broadband Internet access is currently available for residential use via cable modems and digital subscriber lines (DSL). Powerline communication (PLC) systems were long not considered seriously for communications due to their low speed and high development cost. However, due to technological advances, PLC is now spreading to local area networks and broadband-over-powerline systems. This paper presented a newly proposed modification to the standard HomePlug 1.0 MAC protocol to make it a constant contention-window-based scheme. HomePlug 1.0 was developed based on orthogonal frequency division multiplexing (OFDM) and carrier sense multiple access with collision avoidance (CSMA/CA). It is currently the most commonly used power line communication technology, supporting a transmission rate of up to 14 Mbps on the power line. However, the throughput performance of the original scheme degrades critically as the number of users increases. For that reason, a constant contention-window-based medium access control protocol for HomePlug 1.0 was proposed under the assumption that the number of active stations is known. An analytical framework based on Markov chains was developed in order to model this modified protocol under saturation conditions. Modeling results accurately matched the actual performance of the system. This paper revealed that the performance can be improved significantly if the variables were parameterized in terms of the number of active stations. 15 refs., 1 tab., 6 figs.
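
    The sensitivity of a fixed contention window to the number of active stations can be illustrated with a toy slotted model. The sketch below is a simplified abstraction and not the actual HomePlug 1.0 MAC (it ignores priority resolution slots, deferral counters and frame timing); it only shows why the window size W must be matched to the station count.

```python
"""Toy slotted model of a constant contention window.

Each station draws a backoff uniformly from [0, W-1]; the smallest backoff
wins the contention, and a collision occurs when two or more stations draw
that same minimum. Simplified illustration, not the HomePlug 1.0 MAC.
"""
import random

def success_probability(n_stations, window, trials=20000):
    successes = 0
    for _ in range(trials):
        backoffs = [random.randrange(window) for _ in range(n_stations)]
        if backoffs.count(min(backoffs)) == 1:   # exactly one winner
            successes += 1
    return successes / trials

for n in (2, 5, 10, 20, 40):
    for w in (8, 64):
        print(f"stations={n:3d}  W={w:3d}  "
              f"P(successful contention)={success_probability(n, w):.2f}")
```

    With a small window the probability of a collision-free contention collapses as stations are added, while a large window keeps collisions rare at the cost of longer average backoff, which is the trade-off a scheme parameterized by the number of active stations exploits.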

  3. Distributed calculation method for large-pixel-number holograms by decomposition of object and hologram planes.

    Science.gov (United States)

    Jackin, Boaz Jessie; Miyata, Hiroaki; Ohkawa, Takeshi; Ootsu, Kanemitsu; Yokota, Takashi; Hayasaki, Yoshio; Yatagai, Toyohiko; Baba, Takanobu

    2014-12-15

    A method has been proposed to reduce the communication overhead in computer-generated hologram (CGH) calculations on parallel and distributed computing devices. The method uses the shifting property of the Fourier transform to decompose the calculation, thereby avoiding data dependency and communication. This enables the full potential of parallel and distributed computing devices to be exploited. The proposed method is verified by simulation and optical experiments and can achieve a 20 times speed improvement compared to conventional methods, while using large data sizes.
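
    A quick numerical check of the Fourier shift property that such decompositions rely on is shown below. It is illustrative only and is not the authors' distributed CGH code: a field shifted in the object plane has the same spectrum up to a linear phase ramp, so sub-objects can be transformed independently and recombined without exchanging intermediate data.

```python
"""Numerical check of the discrete Fourier shift property (1-D example)."""
import numpy as np

rng = np.random.default_rng(0)
N, shift = 256, 37
obj = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # toy "object" field

# Spectrum of the shifted object ...
direct = np.fft.fft(np.roll(obj, shift))
# ... equals the original spectrum multiplied by a linear phase ramp.
k = np.arange(N)
via_shift_theorem = np.fft.fft(obj) * np.exp(-2j * np.pi * k * shift / N)

print("max error:", np.max(np.abs(direct - via_shift_theorem)))   # ~1e-13
```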

  4. STRONG LAW OF LARGE NUMBERS AND SHANNON-MCMILLAN THEOREM FOR MARKOV CHAINS FIELD ON CAYLEY TREE

    Institute of Scientific and Technical Information of China (English)

    杨卫国; 刘文

    2001-01-01

    This paper studies the strong law of large numbers and the Shannon-McMillan theorem for Markov chains field on Cayley tree. The authors first prove the strong law of large numbers for the frequencies of states and ordered couples of states for Markov chains field on Cayley tree. Then they prove the Shannon-McMillan theorem with a.e. convergence for Markov chains field on Cayley tree. In the proof, a new technique for studying strong limit theorems in probability theory is applied.
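
    For orientation, the classical chain-indexed statement that such tree-indexed results generalize (the textbook form, not a formula from the paper) reads, for an irreducible positive-recurrent Markov chain $(X_k)$ with transition matrix $(p_{ij})$ and stationary distribution $\pi$:

```latex
% Classical strong law of large numbers for state and transition frequencies.
\[
  \frac{1}{n}\sum_{k=1}^{n}\mathbf{1}\{X_k=i\}\;\longrightarrow\;\pi_i
  \quad\text{a.s.},
  \qquad
  \frac{1}{n}\sum_{k=1}^{n}\mathbf{1}\{X_k=i,\;X_{k+1}=j\}\;\longrightarrow\;\pi_i\,p_{ij}
  \quad\text{a.s.}\qquad (n\to\infty).
\]
```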

  5. Tree Code for Collision Detection of Large Numbers of Particles Application for the Breit-Wheeler Process

    CERN Document Server

    Jansen, Oliver; Ribeyre, Xavier; Jequier, Sophie; Tikhonchuk, Vladimir

    2016-01-01

    Collision detection of a large number N of particles can be challenging. Directly testing N particles for collision among each other leads to N² queries. Especially in scenarios where fast, densely packed particles interact, challenges arise for classical methods like Particle-in-Cell or Monte-Carlo. Modern collision detection methods utilising bounding volume hierarchies are suitable to overcome these challenges and allow a detailed analysis of the interaction of large numbers of particles. This approach is applied to the analysis of the collision of two photon beams leading to the creation of electron-positron pairs.
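
    The pruning idea behind bounding-volume hierarchies can be sketched in a few lines. The code below is illustrative only and is not the authors' tree code: a simple median-split AABB tree over particle positions restricts pairwise tests to particles whose radius-inflated boxes can overlap, avoiding the full N² sweep when most particles are well separated.

```python
"""Sketch of AABB-tree pruning of pairwise collision candidates (illustrative)."""
import numpy as np

def build_tree(idx, pos, radius, leaf_size=8):
    """Median-split tree; every node stores a radius-inflated bounding box."""
    lo = pos[idx].min(axis=0) - radius
    hi = pos[idx].max(axis=0) + radius
    if len(idx) <= leaf_size:
        return {"lo": lo, "hi": hi, "leaf": idx}
    axis = int(np.argmax(hi - lo))                  # split along the longest box axis
    order = idx[np.argsort(pos[idx, axis])]
    mid = len(order) // 2
    return {"lo": lo, "hi": hi,
            "left": build_tree(order[:mid], pos, radius, leaf_size),
            "right": build_tree(order[mid:], pos, radius, leaf_size)}

def boxes_overlap(a, b):
    return np.all(a["lo"] <= b["hi"]) and np.all(b["lo"] <= a["hi"])

def candidate_pairs(a, b, out):
    """Collect particle pairs whose leaf boxes overlap; prune everything else."""
    if not boxes_overlap(a, b):
        return
    if "leaf" in a and "leaf" in b:
        for i in a["leaf"]:
            for j in b["leaf"]:
                if i < j:
                    out.add((int(i), int(j)))
        return
    if "leaf" in a:            # always recurse into a node that has children
        a, b = b, a
    candidate_pairs(a["left"], b, out)
    candidate_pairs(a["right"], b, out)

# Usage: detect colliding pairs among N random particles of radius r.
rng = np.random.default_rng(1)
N, r = 1000, 0.005
pos = rng.random((N, 3))
tree = build_tree(np.arange(N), pos, r)
pairs = set()
candidate_pairs(tree, tree, pairs)
hits = [(i, j) for i, j in pairs if np.linalg.norm(pos[i] - pos[j]) < 2 * r]
print(f"candidate pairs: {len(pairs)} (of {N*(N-1)//2}), actual collisions: {len(hits)}")
```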

  6. Detailed Measurements of Turbulent Rayleigh-Taylor Mixing at Large and Small Atwood Numbers

    Energy Technology Data Exchange (ETDEWEB)

    Malcolm J. Andrews, Ph.D.

    2004-12-14

    This project has two major tasks: Task 1. The construction of a new air/helium facility to collect detailed measurements of Rayleigh-Taylor (RT) mixing at high Atwood number, and the distribution of these data to LLNL, LANL, and Alliance members for code validation and design purposes. Task 2. The collection of initial condition data from the new air/helium facility, for use with validation of RT simulation codes at LLNL and LANL. Also, studies of multi-layer mixing with the existing water channel facility. Over the last twelve (12) months there has been excellent progress, detailed in this report, with both tasks. As of December 10, 2004, the air/helium facility is complete and extensive testing and validation of diagnostics has been performed. Currently, experiments with air/helium up to Atwood numbers of 0.25 (the maximum is 0.75, but the highest Reynolds numbers are at 0.25) are being performed. The progress matches the project plan, as does the budget, and we expect this to continue for 2005. With interest expressed from LLNL we have continued with initial condition studies using the water channel. This work has also progressed well, with one of the graduate research assistants (Mr. Nick Mueschke) visiting LLNL the past two summers to work with Dr. O. Schilling. Several journal papers that describe the work are in preparation. Two M.Sc. degrees have been completed (Mr. Nick Mueschke and Mr. Wayne Kraft, 12/1/03). Nick and Wayne are both pursuing Ph.D.s funded by this DOE Alliances project. Presently three (3) Ph.D. graduate research assistants and two (2) undergraduate research assistants are supported on the project. During the year two (2) journal papers and two (2) conference papers have been published, ten (10) presentations made at conferences, and three (3) invited presentations given.

  7. High School Timetabling: Modeling and solving a large number of cases in Denmark

    DEFF Research Database (Denmark)

    Sørensen, Matias; Stidsen, Thomas Riis

    2012-01-01

    A general model for the timetabling problem of high schools in Denmark is introduced, as seen from the perspective of the commercial system Lectio, and an Adaptive Large Neighborhood Search (ALNS) algorithm is proposed for producing solutions. Lectio is a general-purpose cloud-based system for high school administration (available only for Danish high schools), which includes an embedded application for creating a weekly timetable. Currently, 230 high schools are customers of Lectio, and 191 have bought access to the timetabling software. This constitutes the majority of high schools...
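
    For readers unfamiliar with the metaheuristic, a generic ALNS skeleton looks like the sketch below. It is not the Lectio timetabling implementation: the destroy/repair operators, scores and acceptance rule are placeholder choices.

```python
"""Generic Adaptive Large Neighborhood Search skeleton (illustrative)."""
import math
import random

def alns(initial, cost, destroyers, repairers, iters=10000, start_temp=1.0):
    """Destroy/repair operators must return new solutions (no in-place mutation)."""
    current = best = initial
    weights_d = [1.0] * len(destroyers)
    weights_r = [1.0] * len(repairers)
    temp = start_temp
    for _ in range(iters):
        d = random.choices(range(len(destroyers)), weights_d)[0]
        r = random.choices(range(len(repairers)), weights_r)[0]
        candidate = repairers[r](destroyers[d](current))
        delta = cost(candidate) - cost(current)
        # Simulated-annealing style acceptance of worsening moves.
        if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
            current = candidate
            score = 2.0 if delta < 0 else 0.5
            if cost(candidate) < cost(best):
                best, score = candidate, 4.0
        else:
            score = 0.1
        # Adaptive part: reward the operators that produced the candidate.
        weights_d[d] = 0.9 * weights_d[d] + 0.1 * score
        weights_r[r] = 0.9 * weights_r[r] + 0.1 * score
        temp *= 0.9995
    return best
```

    Plugging in timetable-specific destroy operators (for example, unassigning all lessons of a few randomly chosen classes) and a greedy repair heuristic turns this skeleton into a working solver.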

  8. Ecological specialization and rarity indices estimated for a large number of plant species in France

    Directory of Open Access Journals (Sweden)

    Samira Mobaied

    2015-06-01

    Here, we present a list of specialization and rarity values for more than 2800 plant species of continental France, which were computed from the large botanical and ecological dataset SOPHY. Three specialization indices were calculated using species co-occurrence data. All three indices are based on the (dis)similarity among plant communities containing a focal species, quantified either as beta diversity in an additive (Fridley et al., 2007 [6]) or multiplicative (Zeleny, 2008 [15]) partitioning of diversity, or as the multiple-site similarity of Baselga et al. (2007) [1]. Species rarity was calculated as the inverse of a species' occurrence.

  9. Analyzing the Large Number of Variables in Biomedical and Satellite Imagery

    CERN Document Server

    Good, Phillip I

    2011-01-01

    This book grew out of an online interactive course offered through statcourse.com, and it soon became apparent to the author that the course was too limited in terms of time and length in light of the broad backgrounds of the enrolled students. The statisticians who took the course needed to be brought up to speed both on the biological context as well as on the specialized statistical methods needed to handle large arrays. Biologists and physicians, even though fully knowledgeable concerning the procedures used to generate microarrays, EEGs, or MRIs, needed a full introduction to the resampling met

  10. Location, number and morphology of parathyroid glands: results from a large anatomical series.

    Science.gov (United States)

    Lappas, Dimitrios; Noussios, George; Anagnostis, Panagiotis; Adamidou, Fotini; Chatzigeorgiou, Antonios; Skandalakis, Panagiotis

    2012-09-01

    Surgical management of parathyroid gland disease may sometimes be difficult, due mainly to the surgeon's failure to successfully detect parathyroids in unusual locations. The records of 942 cadavers (574 men and 368 women) who underwent autopsy in the Department of Forensic Medicine in Athens during the period 1988-2009 were reviewed. In total, 3,796 parathyroid glands were resected and histologically verified. Parathyroid glands varied in number. In 47 cases (5 %), one supernumerary (fifth) parathyroid was found, while in 19 cases (2 %) only three parathyroid glands were found. Superior glands were larger than inferior ones. However, there was no significant difference between the genders with respect to gland size. In 324 (8.5 %) out of 3,796, the glands were detected in an ectopic location: 7 (0.2 %) in the thyroid parenchyma, 79 (2 %) in different sites in the neck and 238 (6.3 %) in the mediastinum, 152 (4.1 %) of which were found in the upper and 86 (2.2 %) in the lower mediastinum. Significant anatomical variations of normal parathyroid glands may exist regarding number and location, knowledge that is essential for their successful identification and surgical management.

  11. Jet Impingement Heat Transfer at High Reynolds Numbers and Large Density Variations

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2010-01-01

    Jet impingement heat transfer from a round gas jet to a flat wall has been investigated numerically in a configuration with H/D=2, where H is the distance from the jet inlet to the wall and D is the jet diameter. The jet Reynolds number was 361000 and the density ratio across the wall boundary layer ... The results also show a noticeable difference in the heat transfer predictions when applying different turbulence models. Furthermore, calculations were performed to study the effect of applying temperature-dependent thermophysical properties versus constant properties and the effect of calculating the gas density from the ideal gas law versus real gas data. In both cases the effect was found to be negligible.

  12. Ionic current correlations underlie the global tuning of large numbers of neuronal activity attributes.

    Science.gov (United States)

    Zhao, Shunbing; Golowasch, Jorge

    2012-09-26

    Ionic conductances in identified neurons are highly variable. This poses the crucial question of how such neurons can produce stable activity. Coexpression of ionic currents has been observed in an increasing number of neurons in different systems, suggesting that the coregulation of ionic channel expression, by thus linking their variability, may enable neurons to maintain relatively constant neuronal activity as suggested by a number of recent theoretical studies. We examine this hypothesis experimentally using the voltage- and dynamic-clamp techniques to first measure and then modify the ionic conductance levels of three currents in identified neurons of the crab pyloric network. We quantify activity by measuring 10 different attributes (oscillation period, spiking frequency, etc.), and find linear, positive and negative relationships between conductance pairs and triplets that can enable pyloric neurons to maintain activity attributes invariant. Consistent with experimental observations, some of the features most tightly regulated appear to be phase relationships of bursting activity. We conclude that covariation (and probably a tightly controlled coregulation) of ionic conductances can help neurons maintain certain attributes of neuronal activity invariant while at the same time allowing conductances to change over wide ranges in response to internal or environmental inputs and perturbations. Our results also show that neurons can tune neuronal activity globally via coordinate expression of ion currents.

  13. Jet Impingement Heat Transfer at High Reynolds Numbers and Large Density Variations

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2010-01-01

    Jet impingement heat transfer from a round gas jet to a flat wall has been investigated numerically in a configuration with H/D=2, where H is the distance from the jet inlet to the wall and D is the jet diameter. The jet Reynolds number was 361000 and the density ratio across the wall boundary layer ... The results also show a noticeable difference in the heat transfer predictions when applying different turbulence models. Furthermore, calculations were performed to study the effect of applying temperature-dependent thermophysical properties versus constant properties and the effect of calculating the gas density from the ideal gas law versus real gas data. In both cases the effect was found to be negligible.

  14. A comparison of three approaches to compute the effective Reynolds number of the implicit large-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ye [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thornber, Ben [The Univ. of Sydney, Sydney, NSW (Australia)

    2016-04-12

    Here, the implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer–Meshkov instability (RMI) induced flow can be viewed as a homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or on the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.
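
    As a point of reference (a standard construction, not necessarily the exact estimators compared in the paper), an effective viscosity and Reynolds number can be built from the resolved dissipation rate and enstrophy via the homogeneous-turbulence relation $\varepsilon = \nu\,\langle\omega_i\omega_i\rangle$:

```latex
% Effective viscosity and Reynolds number from resolved dissipation and enstrophy.
\[
  \nu_{\mathrm{eff}} = \frac{\varepsilon}{\langle \omega_i \omega_i \rangle},
  \qquad
  \mathrm{Re}_{\mathrm{eff}} = \frac{u'\,\ell}{\nu_{\mathrm{eff}}},
\]
```

    where $u'$ is an rms velocity fluctuation and $\ell$ an integral length scale of the resolved ILES field.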

  15. Galaxy Bias and Halo-Occupation Numbers from Large-Scale Clustering

    CERN Document Server

    Sefusatti, E; Sefusatti, Emiliano; Scoccimarro, Roman

    2005-01-01

    We show that current surveys have at least as much signal to noise in higher-order statistics as in the power spectrum at weakly nonlinear scales. We discuss how one can use this information to determine the mean of the galaxy halo occupation distribution (HOD) using only large-scale information, through galaxy bias parameters determined from the galaxy bispectrum and trispectrum. After introducing an averaged, reasonably fast to evaluate, trispectrum estimator, we show that the expected errors on linear and quadratic bias parameters can be reduced by at least 20-40%. Also, the inclusion of the trispectrum information, which is sensitive to "three-dimensionality" of structures, helps significantly in constraining the mass dependence of the HOD mean. Our approach depends only on adequate modeling of the abundance and large-scale clustering of halos and thus is independent of details of how galaxies are distributed within halos. This provides a consistency check on the traditional approach of using two-point st...

  16. Large Eddy Simulation of Airfoil Self-Noise at High Reynolds Number

    Science.gov (United States)

    Kocheemoolayil, Joseph; Lele, Sanjiva

    2015-11-01

    The trailing edge noise section (Category 1) of the Benchmark Problems for Airframe Noise Computations (BANC) workshop features five canonical problems. No first-principles based approach free of empiricism and tunable coefficients has successfully predicted trailing edge noise for the five configurations to date. Our simulations predict trailing edge noise accurately for all five configurations. The simulation database is described in detail, highlighting efforts undertaken to validate the results through systematic comparison with dedicated experiments and establish insensitivity to grid resolution, domain size, aleatory uncertainties such as the tripping mechanism used to force transition to turbulence, and epistemic uncertainties such as models for unresolved near-wall turbulence. Ongoing efforts to extend the predictive capability to non-canonical configurations featuring flow separation are summarized. A novel, large-span calculation that predicts the flow past a wind turbine airfoil in deep stall with unprecedented accuracy is presented. The simulations predict airfoil noise in the near-stall regime accurately. While the post-stall noise predictions leave room for improvement, significant uncertainties in the experiment might preclude a fair comparison in this regime. We thank Cascade Technologies Inc. for providing access to the CharLES toolkit - a massively-parallel, unstructured large eddy simulation framework.

  17. Numerical analysis of jet impingement heat transfer at high jet Reynolds number and large temperature difference

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2013-01-01

    Jet impingement heat transfer from a round gas jet to a flat wall was investigated numerically for a ratio of 2 between the jet inlet to wall distance and the jet inlet diameter. The influence of turbulence intensity at the jet inlet and choice of turbulence model on the wall heat transfer was investigated at a jet Reynolds number of 1.66 × 10⁵ and a temperature difference between jet inlet and wall of 1600 K. The focus was on the convective heat transfer contribution, as thermal radiation was not included in the investigation. A considerable influence of the turbulence intensity at the jet inlet was observed in the stagnation region, where the wall heat flux increased by a factor of almost 3 when increasing the turbulence intensity from 1.5% to 10%. The choice of turbulence model also influenced the heat transfer predictions significantly, especially in the stagnation region, where differences of up...

  18. Double large field stereoscopic PIV in a high Reynolds number turbulent boundary layer

    Science.gov (United States)

    Coudert, S.; Foucaut, J. M.; Kostas, J.; Stanislas, M.; Braud, P.; Fourment, C.; Delville, J.; Tutkun, M.; Mehdi, F.; Johansson, P.; George, W. K.

    2011-01-01

    An experiment on a flat plate turbulent boundary layer at high Reynolds number has been carried out in the Laboratoire de Mecanique de Lille (LML, UMR CNRS 8107) wind tunnel. This experiment was performed jointly with LEA (UMR CNRS 6609) in Poitiers (France) and Chalmers University of Technology (Sweden), in the framework of the WALLTURB European project. The simultaneous recording of 143 hot wires in one transverse plane and of two perpendicular stereoscopic PIV fields was performed successfully. The first SPIV plane is 1 cm upstream of the hot wire rake and the second is orthogonal both to the first one and to the wall. The first PIV results show a blockage effect which, based on both statistical results (i.e., mean, RMS and spatial correlation) and a potential flow model, does not seem to affect the turbulence organization.

  19. Driving large magnetic Reynolds number flow in highly ionized, unmagnetized plasmas

    Science.gov (United States)

    Weisberg, D. B.; Peterson, E.; Milhone, J.; Endrizzi, D.; Cooper, C.; Désangles, V.; Khalzov, I.; Siller, R.; Forest, C. B.

    2017-05-01

    Electrically driven, unmagnetized plasma flows have been generated in the Madison plasma dynamo experiment with magnetic Reynolds numbers exceeding the predicted Rm_crit = 200 threshold for flow-driven MHD instability excitation. The plasma flow is driven using ten thermally emissive lanthanum hexaboride cathodes which generate a J × B torque in helium and argon plasmas. Detailed Mach probe measurements of plasma velocity for two flow topologies are presented: edge-localized drive using the multi-cusp boundary field and volumetric drive using an axial Helmholtz field. Radial velocity profiles show that the edge-driven flow is established via ion viscosity but is limited by a volumetric neutral drag force, and measurements of velocity shear compare favorably to the Braginskii transport theory. Volumetric flow drive is shown to produce larger velocity shear and has the correct flow profile for studying the magnetorotational instability.

  20. A regular Strouhal number for large-scale instability in the far wake of a rotor

    DEFF Research Database (Denmark)

    Okulov, Valery; Naumov, Igor V.; Mikkelsen, Robert Flemming;

    2014-01-01

    The flow behind a model of a wind turbine rotor is investigated experimentally in a water flume using particle image velocimetry (PIV) and laser Doppler anemometry (LDA). The study performed involves a three-bladed wind turbine rotor designed using the optimization technique of Glauert (Aerodynamic Theory, vol. IV, 1935, pp. 169–360). The wake properties are studied for different tip speed ratios and free stream speeds. The data for the various rotor regimes show the existence of a regular Strouhal number associated with the development of an instability in the far wake of the rotor. From visualizations and a reconstruction of the flow field using LDA and PIV measurements it is found that the wake dynamics is associated with a precession (rotation) of the helical vortex core.

  1. Delayed entanglement echo for individual control of a large number of nuclear spins

    Science.gov (United States)

    Wang, Zhen-Yu; Casanova, Jorge; Plenio, Martin B.

    2017-03-01

    Methods to selectively detect and manipulate nuclear spins by single electrons of solid-state defects play a central role for quantum information processing and nanoscale nuclear magnetic resonance (NMR). However, with standard techniques, no more than eight nuclear spins have been resolved by a single defect centre. Here we develop a method that improves significantly the ability to detect, address and manipulate nuclear spins unambiguously and individually in a broad frequency band by using a nitrogen-vacancy (NV) centre as model system. On the basis of delayed entanglement control, a technique combining microwave and radio frequency fields, our method allows to selectively perform robust high-fidelity entangling gates between hardly resolved nuclear spins and the NV electron. Long-lived qubit memories can be naturally incorporated to our method for improved performance. The application of our ideas will increase the number of useful register qubits accessible to a defect centre and improve the signal of nanoscale NMR.

  2. Supersaturation calculation in large eddy simulation models for prediction of the droplet number concentration

    Directory of Open Access Journals (Sweden)

    O. Thouron

    2011-12-01

    A new parameterization scheme is described for the calculation of supersaturation in LES models that specifically aims at the simulation of cloud condensation nuclei (CCN) activation and prediction of the droplet number concentration. The scheme is tested against current parameterizations in the framework of the Meso-NH LES model. It is shown that the saturation adjustment scheme based on parameterizations of CCN activation in a convective updraft overestimates the droplet concentration in the cloud core, while it cannot simulate cloud-top supersaturation production due to mixing between cloudy and clear air. A supersaturation diagnostic scheme mitigates these artefacts by accounting for the presence of already condensed water in the cloud core, but it is too sensitive to supersaturation fluctuations at cloud top and produces spurious CCN activation during cloud-top mixing. The proposed pseudo-prognostic scheme shows performance similar to the diagnostic one in the cloud core but significantly mitigates CCN activation at cloud top.

  3. Activation process in excitable systems with multiple noise sources: Large number of units

    CERN Document Server

    Franović, Igor; Todorović, Kristina; Kostić, Srđan; Burić, Nikola

    2015-01-01

    We study the activation process in large assemblies of type II excitable units whose dynamics is influenced by two independent noise terms. The mean-field approach is applied to explicitly demonstrate that the assembly of excitable units can itself exhibit macroscopic excitable behavior. In order to facilitate the comparison between the excitable dynamics of a single unit and an assembly, we introduce three distinct formulations of the assembly activation event. Each formulation treats different aspects of the relevant phenomena, including the threshold-like behavior and the role of coherence of individual spikes. Statistical properties of the assembly activation process, such as the mean time-to-first pulse and the associated coefficient of variation, are found to be qualitatively analogous for all three formulations, as well as to resemble the results for a single unit. These analogies are shown to derive from the fact that global variables undergo a stochastic bifurcation from the stochastically stable fix...

  4. Large deviations of the limiting distribution in the Shanks-R\\'enyi prime number race

    CERN Document Server

    Lamzouri, Youness

    2011-01-01

    Let $q\geq 3$, $2\leq r\leq \phi(q)$ and $a_1,...,a_r$ be distinct residue classes modulo $q$ that are relatively prime to $q$. Assuming the Generalized Riemann Hypothesis and the Grand Simplicity Hypothesis, M. Rubinstein and P. Sarnak showed that the vector-valued function $E_{q;a_1,...,a_r}(x)=(E(x;q,a_1),..., E(x;q,a_r))$, where $E(x;q,a)= \frac{\log x}{\sqrt{x}}(\phi(q)\pi(x;q,a)-\pi(x))$, has a limiting distribution $\mu_{q;a_1,...,a_r}$ which is absolutely continuous on $\mathbb{R}^r$. Under the same assumptions, we determine the asymptotic behavior of the large deviations $\mu_{q;a_1,...,a_r}(\|\mathbf{x}\|>V)$ for different ranges of $V$, uniformly as $q\to\infty$.
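
    The quantity $E(x;q,a)$ defined above is easy to evaluate numerically for modest $x$. The snippet below is a toy illustration of the classical Chebyshev bias and plays no role in the paper's analysis; it assumes SymPy for prime generation.

```python
"""Toy evaluation of the prime-race quantity E(x; q, a) for modest x."""
from math import log, sqrt
from sympy import primerange, totient

def E(x, q, a):
    primes = list(primerange(2, x + 1))
    pi_x = len(primes)                                # pi(x)
    pi_x_q_a = sum(1 for p in primes if p % q == a)   # pi(x; q, a)
    return log(x) / sqrt(x) * (totient(q) * pi_x_q_a - pi_x)

# Chebyshev bias: primes congruent to 3 (mod 4) tend to lead those congruent to 1.
x = 10**6
print("E(x; 4, 3) =", round(E(x, 4, 3), 3))   # typically positive
print("E(x; 4, 1) =", round(E(x, 4, 1), 3))   # typically negative
```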

  5. Strong law of large numbers for Markov chains indexed by an infinite tree with uniformly bounded degree

    Institute of Scientific and Technical Information of China (English)

    HUANG HuiLin; YANG WeiGuo

    2008-01-01

    In this paper, we study the strong law of large numbers and Shannon-McMillan (S-M) theorem for Markov chains indexed by an infinite tree with uniformly bounded degree. The results generalize the analogous results on a homogeneous tree.

  6. Strong law of large numbers for Markov chains indexed by an infinite tree with uniformly bounded degree

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In this paper, we study the strong law of large numbers and Shannon-McMillan (S-M) theorem for Markov chains indexed by an infinite tree with uniformly bounded degree. The results generalize the analogous results on a homogeneous tree.

  7. [A Large Number of Circulating Tumor Cells(CTCs)Can Be Isolated from Samples Obtained by Using Leukapheresis Procedures].

    Science.gov (United States)

    Soya, Ryoko; Taguchi, Jyunichi; Nagakawa, Yuichi; Takahashi, Osamu; Sandoh, Norimasa; Hosokawa, Yuichi; Kasuya, Kazuhiko; Umeda, Naoki; Okamoto, Masato; Tsujitani, Shunichi; Tsuchida, Akihiko

    2015-09-01

    We hypothesized that a large number of circulating tumor cells (CTCs) may be isolated from samples obtained by using the leukapheresis procedures that are utilized to collect peripheral blood mononuclear cells for dendritic cell vaccine therapy. We utilized the CellSearch System to determine the number of CTCs in samples obtained by using leukapheresis in 7 patients with colorectal cancer, 5 patients with breast cancer, and 3 patients with gastric cancer. In all patients, a large number of CTCs were isolated. The mean number of CTCs per tumor was 17.1 (range 10-34) in colorectal cancer, 10.0 (range 2-27) in breast cancer, and 24.0 (range 2-42) in gastric cancer. We succeeded in culturing the isolated CTCs from 7 patients with colorectal cancer, 5 patients with breast cancer, and 3 patients with gastric cancer. In conclusion, compared to conventional methods, a large number of CTCs can be obtained by using leukapheresis procedures. The molecular analyses of the CTCs isolated by using this method should be promising in the development of personalized cancer treatments.

  8. Instability of the interface in two-layer flows with large viscosity contrast at small Reynolds numbers

    Institute of Scientific and Technical Information of China (English)

    Jiebin Liu; Jifu Zhou

    2016-01-01

    The Kelvin–Helmholtz instability is believed to be the dominant instability mechanism for free shear flows at large Reynolds numbers. At small Reynolds numbers, a new instability mode is identified when the temporal instability of parallel viscous two fluid mixing layers is extended to current-fluid mud systems by considering a composite error function velocity profile. The new mode is caused by the large viscosity difference between the two fluids. This interfacial mode exists when the fluid mud boundary layer is sufficiently thin. Its performance is different from that of the Kelvin–Helmholtz mode. This mode has not yet been reported for interface instability problems with large viscosity contrasts. These results are essential for further stability analysis of flows relevant to the breaking up of this type of interface.

  9. KITSCH AND THE SUSTAINABLE DEVELOPMENT OF REGIONS THAT HAVE A LARGE NUMBER OF RELIGIOUS SETTLEMENTS

    Directory of Open Access Journals (Sweden)

    ENEA CONSTANTA

    2016-06-01

    We live in a world of contemporary kitsch, a world that merges the authentic and the false, where good taste often meets bad taste. The phenomenon is found everywhere: in art, in cheap literature, in media productions, in shows, in street conversation, in homes, in politics, in other words, in everyday life. Kitsch has entered tourism directly and can be identified in all forms of tourism worldwide, but especially in religious tourism and pilgrimage, which have seen unexpected growth in recent years. This paper analyses the progressive evolution of religious tourist traffic and the ability of religious tourism destinations to remain competitive despite these problems: to attract visitors and build their loyalty, to remain culturally unique, and to stay in permanent balance with an environment in which the religious sphere has been invaded by kitsch, which mixes dangerously and disgracefully with authentic spirituality. How commerce, and in particular its kitsch components, affects this environment is examined from the perspective of the religious tourism offer, based on a survey of the major monastic ensembles of northern Oltenia. The research objectives were, on the one hand, to assess the contributions and effects of the high number of visitors on the regions that hold religious sites and, on the other hand, to assess the weight and effects of the commercial activity, whether authentic or kitsch, carried out in or near the monastic establishments of those regions. The study focused on the northern region of Oltenia, where tourism demand is predominantly oriented toward religious tourism.

  10. Secondary organic aerosol formation from a large number of reactive man-made organic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Derwent, Richard G., E-mail: r.derwent@btopenworld.com [rdscientific, Newbury, Berkshire (United Kingdom); Jenkin, Michael E. [Atmospheric Chemistry Services, Okehampton, Devon (United Kingdom); Utembe, Steven R.; Shallcross, Dudley E. [School of Chemistry, University of Bristol, Bristol (United Kingdom); Murrells, Tim P.; Passant, Neil R. [AEA Environment and Energy, Harwell International Business Centre, Oxon (United Kingdom)

    2010-07-15

    A photochemical trajectory model has been used to examine the relative propensities of a wide variety of volatile organic compounds (VOCs) emitted by human activities to form secondary organic aerosol (SOA) under one set of highly idealised conditions representing northwest Europe. This study applied a detailed speciated VOC emission inventory and the Master Chemical Mechanism version 3.1 (MCM v3.1) gas phase chemistry, coupled with an optimised representation of gas-aerosol absorptive partitioning of 365 oxygenated chemical reaction product species. In all, SOA formation was estimated from the atmospheric oxidation of 113 emitted VOCs. A number of aromatic compounds, together with some alkanes and terpenes, showed significant propensities to form SOA. When these propensities were folded into a detailed speciated emission inventory, 15 organic compounds together accounted for 97% of the SOA formation potential of UK man made VOC emissions and 30 emission source categories accounted for 87% of this potential. After road transport and the chemical industry, SOA formation was dominated by the solvents sector which accounted for 28% of the SOA formation potential.

  11. Correlation of number of tumor buds and tumor stage in large bowel carcinomas

    Directory of Open Access Journals (Sweden)

    Đerković Branislav

    2016-01-01

    Standardized tumor staging takes into account the depth of invasion of the intestinal wall and the presence of local or distant metastases, with the specific aim of estimating the length of patient survival as precisely as possible. This assessment system does not fully reflect the biological behaviour of an individual cancer, i.e. tumor aggressiveness and the ability of the tumor to recur after treatment. Furthermore, in some patients the cancer grows more aggressively than other carcinomas in the same clinical stage, because other parameters that determine the biological behaviour of colon cancer are not included in the standard classification of tumor stage. One recently debated parameter is 'tumor budding', defined as a single cell or a group of up to five undifferentiated tumor cells found in the stroma beyond the invasive front of the cancer. We processed 92 carcinomas of the colon and upper rectum collected at the General Hospital in Trebinje and the Medical Center in Kosovska Mitrovica. The aim was to determine whether there is a correlation between the number of tumor buds and tumor stage in colorectal cancer. Tumor stage was determined by the Astler-Coller classification. The investigation, based on the chi-squared test, leads to the conclusion that there is no statistically significant difference in the distribution of tumor budding in relation to tumor stage according to the Astler-Coller classification (p = 0.383; p > 0.05).

  12. Hadron Resonance Gas Model for An Arbitrarily Large Number of Different Hard-Core Radii

    CERN Document Server

    Oliinychenko, D R; Sagun, V V; Ivanytskyi, A I; Yakimenko, I P; Nikonov, E G; Taranenko, A V; Zinovjev, G M

    2016-01-01

    We develop a novel formulation of the hadron-resonance gas model which, besides a hard-core repulsion, explicitly accounts for the surface tension induced by the interaction between the particles. Such an equation of state allows us to go beyond the Van der Waals approximation for any number of different hard-core radii. A comparison with the Carnahan-Starling equation of state shows that the new model is valid for packing fractions up to 0.2-0.22, while the usual Van der Waals model is inapplicable at packing fractions above 0.11-0.12. Moreover, it is shown that the equation of state with induced surface tension is softer than that of hard spheres and remains causal at higher particle densities. The great advantage of our model is that there are only two equations to be solved and it does not depend on the various values of the hard-core radii used for different hadronic resonances. Using this novel equation of state we obtain a high-quality fit of the ALICE hadron multiplicities measured at center-of-mass ener...

  13. Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Munoz, J., E-mail: jrm@correo.azc.uam.mx [Departamento de Energia, Universidad Autonoma Metropolitana-Azcapotzalco, Av. San Pablo 180, Col. Reynosa Tamaulipas, 02200 Mexico D.F. (Mexico); Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico); Salinas-Rodriguez, E.; Soria, A. [Departamento de IPH, Universidad Autonoma Metropolitana-Iztapalapa, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, 09340 Mexico D.F. (Mexico); Gama-Goicochea, A. [Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico)

    2011-07-15

    Highlights: • The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. • The leading bubble wake decreases the drag on the trailing bubble. • A new semi-analytical model for the trailing bubble's drag is presented. • The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimate the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively, up to small distances between bubbles, with reported data for 50 ≤ Re ≤ 200. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., it is of the same order as the analytical predictions in the literature.

  14. Beating the numbers through strategic intervention materials (SIMs): Innovative science teaching for large classes

    Science.gov (United States)

    Alboruto, Venus M.

    2017-05-01

    The study aimed to find out the effectiveness of using Strategic Intervention Materials (SIMs) as an innovative teaching practice in managing large Grade Eight Science classes to raise the performance of the students in terms of science process skills development and mastery of science concepts. Utilizing experimental research design with two groups of participants, which were purposefully chosen, it was obtained that there existed a significant difference in the performance of the experimental and control groups based on actual class observation and written tests on science process skills with a p-value of 0.0360 in favor of the experimental class. Further, results of written pre-test and post-test on science concepts showed that the experimental group with the mean of 24.325 (SD =3.82) performed better than the control group with the mean of 20.58 (SD =4.94), with a registered p-value of 0.00039. Therefore, the use of SIMs significantly contributed to the mastery of science concepts and the development of science process skills. Based on the findings, the following recommendations are offered: 1. that grade eight science teachers should use or adopt the SIMs used in this study to improve their students' performance; 2. training-workshop on developing SIMs must be conducted to help teachers develop SIMs to be used in their classes; 3. school administrators must allocate funds for the development and reproduction of SIMs to be used by the students in their school; and 4. every division should have a repository of SIMs for easy access of the teachers in the entire division.

  15. Retrieval of very large numbers of items in the Web of Science: an exercise to develop accurate search strategies

    CERN Document Server

    Arencibia-Jorge, Ricardo; Chinchilla-Rodriguez, Zaida; Rousseau, Ronald; Paris, Soren W

    2009-01-01

    The current communication presents a simple exercise with the aim of solving a singular problem: the retrieval of extremely large amounts of items in the Web of Science interface. As it is known, Web of Science interface allows a user to obtain at most 100,000 items from a single query. But what about queries that achieve a result of more than 100,000 items? The exercise developed one possible way to achieve this objective. The case study is the retrieval of the entire scientific production from the United States in a specific year. Different sections of items were retrieved using the field Source of the database. Then, a simple Boolean statement was created with the aim of eliminating overlapping and to improve the accuracy of the search strategy. The importance of team work in the development of advanced search strategies was noted.

  16. Study on lattice Boltzmann method/large eddy simulation and its application at high Reynolds number flow

    Directory of Open Access Journals (Sweden)

    Haiqing Si

    2015-03-01

    The lattice Boltzmann method combined with large eddy simulation is developed in this article to simulate fluid flow at high Reynolds numbers. A subgrid model is used as the large eddy simulation model in the numerical simulation of high-Reynolds-number flow. The idea of the subgrid model is based on an assumption that accounts for the physical effects that the unresolved motion has on the resolved fluid motion. It takes the simple form of an eddy-viscosity model for the Reynolds stress. Lift and drag evaluation in the lattice Boltzmann equation uses the momentum-exchange method on the curved body surface. First, the present numerical method is validated at low Reynolds numbers. Second, the developed lattice Boltzmann method/large eddy simulation method is applied to flow problems at high Reynolds numbers. Some detailed quantitative comparisons are included to show the effectiveness of the present method. It is demonstrated that the lattice Boltzmann method combined with the large eddy simulation model can efficiently simulate high-Reynolds-number flows.
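
    For reference, the Smagorinsky closure commonly used as the subgrid model in such LBM/LES couplings (shown here in its standard textbook form; the paper's exact constants are not reproduced) defines the eddy viscosity as

```latex
% Smagorinsky eddy viscosity from the resolved strain-rate tensor.
\[
  \nu_t = (C_s \Delta)^{2}\,\lvert \bar{S} \rvert,
  \qquad
  \lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
  \qquad
  \bar{S}_{ij} = \tfrac{1}{2}\bigl(\partial_j \bar{u}_i + \partial_i \bar{u}_j\bigr),
\]
```

    where $C_s$ is the Smagorinsky constant and $\Delta$ the filter (grid) width; in the lattice Boltzmann framework $\nu_t$ is added to the molecular viscosity through an increased local relaxation time.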

  17. Precursors of extreme increments

    CERN Document Server

    Hallerberg, S; Holstein, D; Kantz, H; Hallerberg, Sarah; Altmann, Eduardo G.; Holstein, Detlef; Kantz, Holger

    2006-01-01

    We investigate precursors and the predictability of extreme events in time series, where the events consist of large increments between successive time steps. In order to understand the predictability of this class of extreme events, we study analytically the prediction of extreme increments in AR(1) processes. The resulting strategies are then applied to predict sudden increases in wind speed recordings. In both cases we evaluate the success of the predictions by constructing receiver operating characteristic (ROC) plots. Surprisingly, we obtain better ROC plots for completely uncorrelated Gaussian random numbers than for AR(1)-correlated data. Furthermore, we observe an increase of predictability with increasing event size. Both effects can be understood by using the likelihood ratio as a summary index for smooth ROC curves.
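
    A minimal numerical sketch of this kind of experiment, assuming a simple threshold precursor (an alarm whenever the current value is low), which is not necessarily the optimal precursor derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, a):
    """AR(1) process x_{t+1} = a*x_t + xi_t (a = 0 gives uncorrelated Gaussians)."""
    noise = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(n - 1):
        x[t + 1] = a * x[t] + noise[t + 1]
    return x

def roc(x, d, n_thresholds=200):
    """Issue an alarm whenever x_t <= threshold (a low value tends to precede a large
    upward increment); sweep the threshold to trace hit rate vs false-alarm rate."""
    inc = x[1:] - x[:-1]
    event = inc >= d
    hit, fa = [], []
    for thr in np.quantile(x[:-1], np.linspace(0.0, 1.0, n_thresholds)):
        alarm = x[:-1] <= thr
        hit.append(np.mean(alarm[event]))
        fa.append(np.mean(alarm[~event]))
    return np.array(fa), np.array(hit)

if __name__ == "__main__":
    for a, label in [(0.0, "uncorrelated"), (0.75, "AR(1), a = 0.75")]:
        fa, hit = roc(ar1(200_000, a), d=3.0)
        print(f"{label:18s}  area under ROC curve ~ {np.trapz(hit, fa):.3f}")
```

    The uncorrelated case yields the larger area under the curve, reproducing qualitatively the counterintuitive result described above.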

  18. Introducing a semi-automatic method to simulate large numbers of forensic fingermarks for research on fingerprint identification.

    Science.gov (United States)

    Rodriguez, Crystal M; de Jongh, Arent; Meuwly, Didier

    2012-03-01

    Statistical research on fingerprint identification and the testing of automated fingerprint identification system (AFIS) performances require large numbers of forensic fingermarks. These fingermarks are rarely available. This study presents a semi-automatic method to create simulated fingermarks in large quantities that model minutiae features or images of forensic fingermarks. This method takes into account several aspects contributing to the variability of forensic fingermarks such as the number of minutiae, the finger region, and the elastic deformation of the skin. To investigate the applicability of the simulated fingermarks, fingermarks have been simulated with 5-12 minutiae originating from different finger regions for six fingers. An AFIS matching algorithm was used to obtain similarity scores for comparisons between the minutiae configurations of fingerprints and the minutiae configurations of simulated and forensic fingermarks. The results showed similar scores for both types of fingermarks suggesting that the simulated fingermarks are good substitutes for forensic fingermarks.

  19. Aerodynamic Effects of High Turbulence Intensity on a Variable-Speed Power-Turbine Blade With Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie B.; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of high inlet turbulence intensity on the aerodynamic performance of a variable speed power turbine blade are examined over large incidence and Reynolds number ranges. These results are compared to previous measurements made in a low turbulence environment. Both high and low turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The current study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Assessing the effects of turbulence at these large incidence and Reynolds number variations complements the existing database. Downstream total pressure and exit angle data were acquired for 10 incidence angles ranging from +15.8° to -51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial-chord upstream of the blade row. The inlet turbulence levels ranged from 8 to 15 percent for the current study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated. At

  20. STRONG LAW OF LARGE NUMBERS AND ASYMPTOTIC EQUIPARTITION PROPERTY FOR NONSYMMETRIC MARKOV CHAIN FIELDS ON CAYLEY TREES

    Institute of Scientific and Technical Information of China (English)

    Bao Zhenhua; Ye Zhongxing

    2007-01-01

    Some strong laws of large numbers for the frequencies of occurrence of states and ordered couples of states for nonsymmetric Markov chain fields (NSMC) on Cayley trees are studied. In the proof, a new technique for the study of strong limit theorems of Markov chains is extended to the case of Markov chain fields. The asymptotic equipartition properties with almost everywhere (a.e.) convergence for NSMC on Cayley trees are obtained.
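
    For orientation only, the classical single-chain analogue of such frequency laws (the tree-indexed statements proved in the paper are more general) has the schematic form:

```latex
% Classical ergodic strong law for an irreducible, positive-recurrent Markov chain
% with stationary distribution pi and transition matrix P (single-index analogue):
\frac{1}{n}\sum_{t=1}^{n}\mathbf{1}_{\{X_t = k\}} \xrightarrow[n\to\infty]{\mathrm{a.e.}} \pi(k),
\qquad
\frac{1}{n}\sum_{t=1}^{n}\mathbf{1}_{\{X_t = k,\;X_{t+1} = l\}} \xrightarrow[n\to\infty]{\mathrm{a.e.}} \pi(k)\,P(k,l)
```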

  1. Strong Law of Large Numbers for Countable Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degree

    Directory of Open Access Journals (Sweden)

    Bao Wang

    2014-01-01

    Full Text Available We study the strong law of large numbers for the frequencies of occurrence of states and ordered couples of states for countable Markov chains indexed by an infinite tree with uniformly bounded degree, which extends the corresponding results for countable Markov chains indexed by a Cayley tree and generalizes the related results for finite Markov chains indexed by a uniformly bounded tree.

  2. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz

    2015-11-12

    In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  3. Lack of topoisomerase copy number changes in patients with de novo and relapsed diffuse large B-cell lymphoma

    DEFF Research Database (Denmark)

    Pedersen, Mette Ø; Poulsen, Tim S; Gang, Anne O

    2015-01-01

    Topoisomerase (TOP) gene copy number changes may predict response to treatment with TOP-targeting drugs in cancer treatment. This was first described in patients with breast cancer and is currently being investigated in other malignant diseases. TOP-targeting drugs may induce TOP gene copy number...... changes at relapse, with possible implications for relapse therapy efficacy. TOP gene alterations in lymphoma are poorly investigated. In this study, TOP1 and TOP2A gene alterations were investigated in patients with de novo diffuse large B-cell lymphoma (DLBCL) (n = 33) and relapsed DLBCL treated...... with chemotherapy regimens including TOP2-targeting drugs (n = 16). No TOP1 or TOP2A copy number changes were found. Polysomy of chromosomes 20 and 17 was seen in 3 of 25 patients (12%) and 2 of 32 patients (6%) with de novo DLBCL. Among relapsed patients, chromosome polysomy was more frequently observed in 5 of 13...

  4. Statistical Model of Extreme Shear

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, Kurt Schaldemose

    2004-01-01

    In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge of wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of high-sampled full-scale time series measurements...... are consistent, given the inevitable uncertainties associated with the model as well as with the extreme value data analysis. Keywords: Statistical model, extreme wind conditions, statistical analysis, turbulence, wind loading, wind shear, wind turbines.

  5. Changes of Frequency of Summer Precipitation Extremes over the Yangtze River in Association with Large-scale Oceanic-atmospheric Conditions

    Institute of Scientific and Technical Information of China (English)

    WANG Yi; YAN Zhongwei

    2011-01-01

    Changes of the frequency of precipitation extremes (the number of days with daily precipitation exceeding the 90th percentile of a daily climatology, referred to as R90N) in summer (June-August) over the mid-lower reaches of the Yangtze River are analyzed based on daily observations during 1961-2007. The first singular value decomposition (SVD) mode of R90N is linked to an ENSO-like mode of the sea surface temperature anomalies (SSTA) in the previous winter. Responses of different grades of precipitation events to the climatic mode are compared. It is notable that the frequency of summer precipitation extremes is significantly related to the SSTA in the Pacific, while those of light and moderate precipitation are not. It is suggested that the previously well-recognized impact of ENSO on summer rainfall along the Yangtze River is essentially due to a response in summer precipitation extremes in the region, in association with the East Asia-Pacific (EAP) teleconnection pattern. A negative relationship is found between the East Asian Summer Monsoon (EASM) and precipitation extremes over the mid-lower reaches of the Yangtze River. In contrast, light rainfall processes are independent of the SST and EASM variations.

  6. Reynolds Number Versus Roughness Effects in the Princeton “Super-Pipe” Re-examined in the Context of Large Reynolds Number Asymptotics*

    Science.gov (United States)

    Nagib, Hassan; Monkewitz, Peter; Österlund, Jens; Christensen, Kenneth; Adrian, Ronald

    2001-11-01

    Tony Perry, et al. (J. Fluid Mech., v. 439, 2001) have recently contributed to the discussion concerning the reasons for systematic deviations with Re's (Reynolds numbers) in the Princeton “Super-Pipe” data. Perry et al. demonstrate that the deviation of the constant within the “log-law” is compatible with the “Colebrook formula” for transitionally rough pipes. Since the experiments were completed, Lex Smits and the Princeton Group have argued that the pipe is smooth for at least the majority of the Re range. Here we show that the observed deviations are equally compatible with the finite-Re effects obtained from a methodology based on matched asymptotic expansion techniques proposed by us (see abstract at this meeting), in which the infinite-Re limit of the “log-law”, as well as its correction for large but finite Re's, are derived in a systematic manner. As argued by Perry et al., in these cases one cannot rely on the variation of the centerline velocity with Re to extract the log-law coefficients. The value of the “Karman constant” extracted using either interpretation is significantly lower than the 0.436 value originally proposed and is closer to the value of 0.38 based on our recent work on boundary layers; see two publications by Österlund et al. (Phys. of Fluids, v. 12 no. 1 and no. 9, 2001). *Supported by NSF, AFOSR & ERCOFTAC.
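
    For reference, the infinite-Reynolds-number overlap law discussed above has the standard inner-scaled form (the finite-Re corrections of the matched-asymptotics approach add Re-dependent terms not reproduced here):

```latex
% Inner-scaled overlap ("log") law in the infinite-Reynolds-number limit:
U^{+} = \frac{1}{\kappa}\,\ln y^{+} + B,
\qquad U^{+} = \frac{U}{u_\tau}, \quad y^{+} = \frac{y\,u_\tau}{\nu}
% with the Karman constant \kappa debated in the text above between ~0.436 and ~0.38.
```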

  7. Simple and Fast Continuous Estimation Method of Respiratory Frequency During Sleep using the Number of Extreme Points of Heart Rate Time Series

    Science.gov (United States)

    Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro

    It is reported that the frequency component of the heart rate time series at approximately 0.25 Hz (respiratory sinus arrhythmia, RSA) corresponds to the respiratory frequency. In this paper, we propose a method for continuously estimating the respiratory frequency during sleep in real time, using the number of extreme points of the heart rate time series. The equation used by the method is very simple, and the method can continuously estimate the frequency from a window width of about 18 beats. To evaluate the accuracy of the proposed method, the RSA frequency was calculated from heart rate time series recorded during supine rest. As a result, the minimum error was observed when the RSA had a time lag of about 11 s, with an error rate of about 13.8%. When the RSA frequency time series was estimated during sleep, it varied regularly during non-REM sleep and irregularly during REM sleep. This result is consistent with previous reports on respiratory variability during sleep. Therefore, the proposed method may be applicable to respiratory monitoring systems during sleep.
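
    A plausible reading of the method, sketched below under the assumption that RSA contributes roughly one heart-rate maximum and one minimum per breath; the paper's exact equation is not reproduced in the record.

```python
import numpy as np

def respiratory_frequency(rr_intervals_s, window_beats=18):
    """Running respiratory-frequency estimate (Hz) from beat-to-beat RR intervals,
    obtained by counting local extrema of the heart-rate series in a sliding window."""
    rr = np.asarray(rr_intervals_s, dtype=float)
    hr = 60.0 / rr                                    # instantaneous heart rate (bpm)
    # A beat is a local extremum if it lies above (or below) both of its neighbours.
    is_ext = (hr[1:-1] - hr[:-2]) * (hr[1:-1] - hr[2:]) > 0
    freqs = []
    for i in range(len(hr) - window_beats):
        n_ext = np.count_nonzero(is_ext[i:i + window_beats - 2])   # interior beats only
        duration = np.sum(rr[i + 1:i + window_beats - 1])          # seconds spanned by them
        freqs.append(n_ext / (2.0 * duration))                     # ~2 extrema per breath
    return np.array(freqs)

if __name__ == "__main__":
    # Synthetic series: 0.8 s beats whose length is modulated by a 0.25 Hz breathing rhythm.
    t = np.cumsum(np.full(600, 0.8))
    rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.25 * t)
    print(f"mean estimate: {respiratory_frequency(rr).mean():.2f} Hz (true value 0.25 Hz)")
```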

  8. A short-term extremely low frequency electromagnetic field exposure increases circulating leukocyte numbers and affects HPA-axis signaling in mice.

    Science.gov (United States)

    de Kleijn, Stan; Ferwerda, Gerben; Wiese, Michelle; Trentelman, Jos; Cuppen, Jan; Kozicz, Tamas; de Jager, Linda; Hermans, Peter W M; Verburg-van Kemenade, B M Lidy

    2016-10-01

    There is still uncertainty whether extremely low frequency electromagnetic fields (ELF-EMF) can induce health effects like immunomodulation. Despite evidence obtained in vitro, an unambiguous association has not yet been established in vivo. Here, mice were exposed to ELF-EMF for 1, 4, and 24 h/day in a short-term (1 week) and long-term (15 weeks) set-up to investigate whole body effects on the level of stress regulation and immune response. ELF-EMF signal contained multiple frequencies (20-5000 Hz) and a magnetic flux density of 10 μT. After exposure, blood was analyzed for leukocyte numbers (short-term and long-term) and adrenocorticotropic hormone concentration (short-term only). Furthermore, in the short-term experiment, stress-related parameters, corticotropin-releasing hormone, proopiomelanocortin (POMC) and CYP11A1 gene-expression, respectively, were determined in the hypothalamic paraventricular nucleus, pituitary, and adrenal glands. In the short-term but not long-term experiment, leukocyte counts were significantly higher in the 24 h-exposed group compared with controls, mainly represented by increased neutrophils and CD4+ lymphocytes. POMC expression and plasma adrenocorticotropic hormone were significantly lower compared with unexposed control mice. In conclusion, short-term ELF-EMF exposure may affect hypothalamic-pituitary-adrenal axis activation in mice. Changes in stress hormone release may explain changes in circulating leukocyte numbers and composition. Bioelectromagnetics. 37:433-443, 2016. © 2016 The Authors. Bioelectromagnetics Published by Wiley Periodicals, Inc.

  9. Ultrabright fluorescent silica particles with a large number of complex spectra excited with a single wavelength for multiplex applications.

    Science.gov (United States)

    Palantavida, S; Peng, B; Sokolov, I

    2017-02-08

    We report on a novel approach to synthesize ultrabright fluorescent silica particles capable of producing a large number of complex spectra. The spectra can be excited using a single wavelength, which is paramount in quantitative fluorescence imaging, flow cytometry and sensing applications. The approach employs the physical encapsulation of organic fluorescent molecules inside a nanoporous silica matrix with no dye leakage. As was recently demonstrated, such encapsulation allows very high concentrations of organic dyes to be incorporated without quenching their fluorescence efficiency. As a result, dye molecules are spaced within ∼5 nm of each other, which theoretically allows for efficient exchange of excitation energy via Förster resonance energy transfer (FRET). Here we present the first experimental demonstration of the encapsulation of fluorescent dyes in a FRET sequence; FRET sequences of up to five different dyes are attained. The number of distinguishable spectra can be further increased by using different relative concentrations of the encapsulated dyes. Combining these approaches allows a large number of ultrabright fluorescent particles with substantially different fluorescence spectra to be created. We also demonstrate the utilization of these particles for potential multiplexing applications. Though the fluorescence spectra of the obtained multiplex probes typically overlap, they can be distinguished using standard linear decomposition algorithms.

  10. A Multilayer Secure Biomedical Data Management System for Remotely Managing a Very Large Number of Diverse Personal Healthcare Devices.

    Science.gov (United States)

    Park, KeeHyun; Lim, SeungHyeon

    2015-01-01

    In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal health devices is proposed. The system has the following characteristics: the system supports international standard communication protocols to achieve interoperability. The system is integrated in the sense that both a PHD communication system and a remote PHD management system work together as a single system. Finally, the system proposed in this paper provides user/message authentication processes to securely transmit biomedical data measured by PHDs based on the concept of a biomedical signature. Some experiments, including the stress test, have been conducted to show that the system proposed/constructed in this study performs very well even when a very large number of PHDs are used. For a stress test, up to 1,200 threads are made to represent the same number of PHD agents. The loss ratio of the ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. On the other hand, no message loss occurs in the multilayered system proposed in this study, which demonstrates the superiority of the multilayered system to the normal system with regard to heavy traffic.

  11. A Multilayer Secure Biomedical Data Management System for Remotely Managing a Very Large Number of Diverse Personal Healthcare Devices

    Directory of Open Access Journals (Sweden)

    KeeHyun Park

    2015-01-01

    Full Text Available In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal health devices is proposed. The system has the following characteristics: the system supports international standard communication protocols to achieve interoperability. The system is integrated in the sense that both a PHD communication system and a remote PHD management system work together as a single system. Finally, the system proposed in this paper provides user/message authentication processes to securely transmit biomedical data measured by PHDs based on the concept of a biomedical signature. Some experiments, including the stress test, have been conducted to show that the system proposed/constructed in this study performs very well even when a very large number of PHDs are used. For a stress test, up to 1,200 threads are made to represent the same number of PHD agents. The loss ratio of the ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. On the other hand, no message loss occurs in the multilayered system proposed in this study, which demonstrates the superiority of the multilayered system to the normal system with regard to heavy traffic.

  12. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  13. Highly sensitive visible-blind extreme ultraviolet Ni/4H-SiC Schottky photodiodes with large detection area.

    Science.gov (United States)

    Hu, Jun; Xin, Xiaobin; Zhao, Jian H; Yan, Feng; Guan, Bing; Seely, John; Kjornrattanawanich, Benjawan

    2006-06-01

    Ni/4H-SiC Schottky photodiodes of 5 mm x 5 mm area have been fabricated and characterized. The photodiodes show less than 0.1 pA dark current at -4 V and an ideality factor of 1.06. A quantum efficiency (QE) between 3 and 400 nm has been calibrated and compared with Si photodiodes optimized for extreme ultraviolet (EUV) detection. In the EUV region, the QE of SiC detectors increases from 0.14 electrons/photon at 120 nm to 30 electrons/photon at 3 nm. The mean energy of electron-hole pair generation of 4H-SiC estimated from the spectral QE is found to be 7.9 eV.

  14. The 3D MHD code GOEMHD3 for astrophysical plasmas with large Reynolds numbers. Code description, verification, and computational performance

    Science.gov (United States)

    Skála, J.; Baruffa, F.; Büchner, J.; Rampp, M.

    2015-08-01

    Context. The numerical simulation of turbulence and flows in almost ideal astrophysical plasmas with large Reynolds numbers motivates the implementation of magnetohydrodynamical (MHD) computer codes with low resistivity. They need to be computationally efficient and scale well with large numbers of CPU cores, allow obtaining a high grid resolution over large simulation domains, and be easily and modularly extensible, for instance, to new initial and boundary conditions. Aims: Our aims are the implementation, optimization, and verification of a computationally efficient, highly scalable, and easily extensible low-dissipative MHD simulation code for the numerical investigation of the dynamics of astrophysical plasmas with large Reynolds numbers in three dimensions (3D). Methods: The new GOEMHD3 code discretizes the ideal part of the MHD equations using a fast and efficient leap-frog scheme that is second-order accurate in space and time and whose initial and boundary conditions can easily be modified. For the investigation of diffusive and dissipative processes the corresponding terms are discretized by a DuFort-Frankel scheme. To always fulfill the Courant-Friedrichs-Lewy stability criterion, the time step of the code is adapted dynamically. Numerically induced local oscillations are suppressed by explicit, externally controlled diffusion terms. Non-equidistant grids are implemented, which enhance the spatial resolution, where needed. GOEMHD3 is parallelized based on the hybrid MPI-OpenMP programming paradigm, adopting a standard two-dimensional domain-decomposition approach. Results: The ideal part of the equation solver is verified by performing numerical tests of the evolution of the well-understood Kelvin-Helmholtz instability and of Orszag-Tang vortices. The accuracy of solving the (resistive) induction equation is tested by simulating the decay of a cylindrical current column. Furthermore, we show that the computational performance of the code scales very
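
    As a generic illustration of the dynamically adapted, CFL-limited time step mentioned above (not GOEMHD3's actual implementation; the fast-speed estimate and the safety factor are assumptions):

```python
import numpy as np

def adaptive_dt(rho, v, b, p, dx, gamma=5.0 / 3.0, cfl=0.4):
    """Largest stable time step on a uniform grid with spacing dx.
    v and b are arrays of shape (3, ...) holding velocity and magnetic field."""
    speed = np.sqrt(np.sum(v**2, axis=0))
    c_sound2 = gamma * p / rho
    c_alfven2 = np.sum(b**2, axis=0) / rho      # mu_0 = 1 units
    c_fast = np.sqrt(c_sound2 + c_alfven2)      # upper bound on the fast magnetosonic speed
    return cfl * dx / np.max(speed + c_fast)
```

    In a time-stepping loop the solver would recompute this bound from the current fields before every leap-frog update, so the stability criterion is never violated.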

  15. The halophilic alkalithermophile Natranaerobius thermophilus adapts to multiple environmental extremes using a large repertoire of Na(K)/H antiporters.

    Science.gov (United States)

    Mesbah, Noha M; Cook, Gregory M; Wiegel, Juergen

    2009-10-01

    Natranaerobius thermophilus is an unusual extremophile because it is halophilic, alkaliphilic and thermophilic, growing optimally at 3.5 M Na(+), pH(55 degrees C) 9.5 and 53 degrees C. Mechanisms enabling this tripartite lifestyle are essential for understanding how microorganisms grow under inhospitable conditions, but remain unknown, particularly in extremophiles growing under multiple extremes. We report on the response of N. thermophilus to external pH at high salt and elevated temperature and identify mechanisms responsible for this adaptation. N. thermophilus exhibited cytoplasm acidification, maintaining an unanticipated transmembrane pH gradient of 1 unit over the entire extracellular pH range for growth. N. thermophilus uses two distinct mechanisms for cytoplasm acidification. At extracellular pH values at and below the optimum, N. thermophilus utilizes at least eight electrogenic Na(+)(K(+))/H(+) antiporters for cytoplasm acidification. Characterization of these antiporters in antiporter-deficient Escherichia coli KNabc showed overlapping pH profiles (pH 7.8-10.0) and Na(+) concentrations for activity (K(0.5) values 1.0-4.4 mM), properties that correlate with intracellular conditions of N. thermophilus. As the extracellular pH increases beyond the optimum, electrogenic antiport activity ceases, and cytoplasm acidification is achieved by energy-independent physiochemical effects (cytoplasmic buffering) potentially mediated by an acidic proteome. The combination of these strategies allows N. thermophilus to grow over a range of extracellular pH and Na(+) concentrations and protect biomolecules under multiple extreme conditions.

  16. Statistical Model of Extreme Shear

    DEFF Research Database (Denmark)

    Hansen, Kurt Schaldemose; Larsen, Gunner Chr.

    2005-01-01

    In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge of wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of full-scale measurements recorded with a high sampling rate...

  17. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence

    Science.gov (United States)

    Dogan, Eda; Hearst, R. Jason; Ganapathisubramani, Bharathram

    2017-03-01

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to `simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales.

  18. Atomic Number Dependence of Hadron Production at Large Transverse Momentum in 300 GeV Proton--Nucleus Collisions

    Science.gov (United States)

    Cronin, J. W.; Frisch, H. J.; Shochet, M. J.; Boymond, J. P.; Mermod, R.; Piroue, P. A.; Sumner, R. L.

    1974-07-15

    In an experiment at the Fermi National Accelerator Laboratory we have compared the production of large transverse momentum hadrons from targets of W, Ti, and Be bombarded by 300 GeV protons. The hadron yields were measured at 90 degrees in the proton-nucleon c.m. system with a magnetic spectrometer equipped with 2 Cerenkov counters and a hadron calorimeter. The production cross-sections have a dependence on the atomic number A that grows with P⊥, eventually leveling off proportional to A^1.1.

  19. Arrangement of scale-interaction and large-scale modulation in high Reynolds number turbulent boundary layers

    Science.gov (United States)

    Baars, Woutijn J.; Hutchins, Nicholas; Marusic, Ivan

    2015-11-01

    Interactions between small- and large-scale motions are inherent in the near-wall dynamics of wall-bounded flows. We here examine the scale-interaction embedded within the streamwise velocity component. Data were acquired using hot-wire anemometry in ZPG turbulent boundary layers, for Reynolds numbers ranging from Reτ ≡ δUτ / ν ~ 2800 to 22800. After first decomposing velocity signals into contributions from small- and large-scales, we then represent the time-varying small-scale energy with time series of its instantaneous amplitude and instantaneous frequency, via a wavelet-based method. Features of the scale-interaction are inferred from isocorrelation maps, formed by correlating the large-scale velocity with its concurrent small-scale amplitude and frequency. Below the onset of the log-region, the physics constitutes aspects of amplitude modulation and frequency modulation. Time shifts, associated with the correlation extrema--representing the lead/lag of the small-scale signatures relative to the large-scales--are shown to be governed by inner-scaling. Wall-normal trends of time shifts are explained by considering the arrangement of scales in the log- and intermittent-regions, and how they relate to stochastic top-down and bottom-up processes.
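
    A hedged sketch of an amplitude-modulation diagnostic of this kind; the paper uses a wavelet-based instantaneous amplitude and frequency, whereas the stand-in below uses a Hilbert envelope, and the cutoff frequency separating "large" from "small" scales is a user choice.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def am_correlation(u, fs, f_cut):
    """Correlate the large-scale velocity with the slow part of the small-scale envelope."""
    b, a = butter(4, f_cut / (fs / 2), btype="low")
    u_large = filtfilt(b, a, u - u.mean())                     # large-scale component
    u_small = (u - u.mean()) - u_large                         # small-scale residual
    envelope = np.abs(hilbert(u_small))                        # instantaneous amplitude
    env_large = filtfilt(b, a, envelope - envelope.mean())     # keep only its slow part
    return np.corrcoef(u_large, env_large)[0, 1]
```

    With u a single hot-wire velocity record sampled at fs, a positive coefficient indicates that large-scale excursions carry enhanced small-scale activity, i.e. amplitude modulation.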

  20. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae).

    Science.gov (United States)

    Krahulcová, Anna; Trávnícek, Pavel; Krahulec, František; Rejmánek, Marcel

    2017-04-01

    Aesculus L. (horse chestnut, buckeye) is a genus of 12-19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and analyse the relationship between genome size and seed mass in this genus. Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to just one reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 0.955 pg 2C⁻¹ in A. parviflora to 1.275 pg 2C⁻¹ in A. glabra var. glabra. The chromosome number of 2n = 40 seems to be conclusively the universal 2n number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also within woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide an extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between the 2C values and seed masses in examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is consistent positive association between larger genome size and larger seed mass within individual lineages.

  1. Large scale motions of multiple limit-cycle high Reynolds number annular and toroidal rotor/stator cavities

    Science.gov (United States)

    Bridel-Bertomeu, Thibault; Gicquel, L. Y. M.; Staffelbach, G.

    2017-06-01

    Rotating cavity flows are essential components of industrial applications but their dynamics are still not fully understood when it comes to the relation between the fluid organization and monitored pressure fluctuations. From computer hard-drives to turbo-pumps of space launchers, designed devices often produce flow oscillations that can either destroy the component prematurely or produce too much noise. In such a context, the large scale dynamics of high Reynolds number rotor/stator cavities need to be better understood, especially at the flow limit-cycle or the associated statistically stationary state. In particular, the influence of curvature as well as cavity aspect ratio on the large scale organization and flow stability at a fixed rotating disc Reynolds number is fundamental. To probe such flows, wall-resolved large eddy simulation is applied to two different rotor/stator cylindrical cavities and one annular cavity. Validation of the predictions proves the method to be suited and to capture the disc boundary layer patterns reported in the literature. It is then shown that, in complement to these disc boundary layer analyses, at the limit-cycle the rotating flows exhibit characteristic patterns at mid-height in the homogeneous core, pointing to the importance of large scale features. Indeed, dynamic modal decomposition reveals that the entire flow dynamics are driven by only a handful of atomic modes whose combination links the oscillatory patterns observed in the boundary layers as well as in the core of the cavity. These fluctuations form macro-structures, born in the unstable stator boundary layer and extending through the homogeneous inviscid core to the rotating disc boundary layer, causing its instability under some conditions. More importantly, the macro-structures significantly differ depending on the configuration, pointing to the need for deeper understanding of the influence of geometrical parameters as well as operating conditions.

  2. The relationship between purely stochastic sampling error and the number of technical replicates used to estimate concentration at an extreme dilution.

    Science.gov (United States)

    Irwin, Peter L; Nguyen, Ly-Huong T; Chen, Chin-Yi

    2010-09-01

    For any analytical system the population mean (μ) number of entities (e.g., cells or molecules) per tested volume, surface area, or mass also defines the population standard deviation (σ = √μ). For a preponderance of analytical methods, σ is very small relative to μ due to their large limit of detection (>10^2 per volume). However, in theory at least, DNA-based detection methods (real-time, quantitative or qPCR) can detect ≈ 1 DNA molecule per tested volume (i.e., μ ≈ 1) whereupon errors of random sampling can cause sample means (x̄) to substantially deviate from μ if the number of samplings (n), or "technical replicates", per observation is too small. In this work the behaviors of two measures of sampling error (each replicated fivefold) are examined under the influence of n. For all data (μ = 1.25, 2.5, 5, 7.5, 10, and 20) a large sample of individual analytical counts (x) were created and randomly assigned into N integral-valued sub-samples each containing between 2 and 50 repeats (n) whereupon N × n = 322 to 361. From these data the average μ-normalized deviation of σ from each sub-sample's standard deviation estimate (s_j, j = 1 to N; N = 7 [n = 50] to 180 [n = 2]) was calculated (Δ). Alternatively, the average μ-normalized deviation of μ from each sub-sample's mean estimate (x̄_j) was also evaluated (Δ'). It was found that both of these empirical measures of sampling error were proportional to (n·μ)^(-1/2). Derivative (∂/∂n · Δ or Δ') analyses of our results indicate that a large number of samplings (n ≈ 33 ± 3.1) are requisite to achieve a nominal sampling error for samples with a μ ≈ 1. This result argues that pathogen detection is most economically performed, even using highly sensitive techniques such as qPCR, when some form of organism cultural enrichment is utilized and which results in a binomial response. Thus, using a specific gene PCR-based (+ or -) most probable number (MPN
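
    A small simulation in the spirit of the study (the sub-sample sizes and replicate counts below are illustrative, not the original data):

```python
import numpy as np

rng = np.random.default_rng(1)

def normalized_errors(mu, n, n_subsamples=200):
    """mu-normalised mean absolute deviations of the sub-sample SD and mean
    from sigma = sqrt(mu) and mu, for sub-samples of n Poisson counts."""
    counts = rng.poisson(mu, size=(n_subsamples, n))
    sigma = np.sqrt(mu)
    delta = np.mean(np.abs(counts.std(axis=1, ddof=1) - sigma)) / mu
    delta_prime = np.mean(np.abs(counts.mean(axis=1) - mu)) / mu
    return delta, delta_prime

if __name__ == "__main__":
    mu = 1.25
    for n in (2, 5, 10, 33, 50):
        d, dp = normalized_errors(mu, n)
        print(f"n={n:3d}  Delta={d:.3f}  Delta'={dp:.3f}  1/sqrt(n*mu)={1/np.sqrt(n*mu):.3f}")
```

    Both printed measures shrink roughly in proportion to 1/sqrt(n·μ), illustrating why on the order of thirty technical replicates are needed near μ ≈ 1.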

  3. A genome-wide association study in large white and landrace pig populations for number piglets born alive.

    Science.gov (United States)

    Bergfelder-Drüing, Sarah; Grosse-Brinkhaus, Christine; Lind, Bianca; Erbe, Malena; Schellander, Karl; Simianer, Henner; Tholen, Ernst

    2015-01-01

    The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between Large White and Landrace breed were detected.

  4. Large-System Analysis of Multiuser Detection with an Unknown Number of Users: A High-SNR Approach

    CERN Document Server

    Campo, Adrià Tauste; Biglieri, Ezio

    2010-01-01

    We analyze multiuser detection under the assumption that the number of users accessing the channel is unknown by the receiver. In this environment, users' activity must be estimated along with any other parameters such as data, power, and location. Our main goal is to determine the performance loss caused by the need for estimating the identities of active users, which are not known a priori. To prevent a loss of optimality, we assume that identities and data are estimated jointly, rather than in two separate steps. We examine the performance of multiuser detectors when the number of potential users is large. Statistical-physics methodologies are used to determine the macroscopic performance of the detector in terms of its multiuser efficiency. Special attention is paid to the fixed-point equation whose solution yields the multiuser efficiency of the optimal (maximum a posteriori) detector in the large signal-to-noise ratio regime. Our analysis yields closed-form approximate bounds to the minimum mean-squared...

  5. A genome-wide association study in large white and landrace pig populations for number piglets born alive.

    Directory of Open Access Journals (Sweden)

    Sarah Bergfelder-Drüing

    Full Text Available The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between Large White and Landrace breed were detected.

  6. Research on beam characteristics in a large-Fresnel-number unstable-waveguide hybrid resonator with parabolic mirrors.

    Science.gov (United States)

    Wang, Wei; Qin, Yingxiong; Xiao, Yu; Zhong, Lijing; Wu, Chao; Wang, Zhen; Wan, Wen; Tang, Xiahui

    2016-07-20

    Large-Fresnel-number unstable-waveguide hybrid resonators employing spherical resonator mirrors suffer from spherical aberration, which adversely affects beam quality and alignment sensitivity. In this paper, we present experimental results and numerical wave-optics simulations of the beam characteristics of a negative-branch hybrid resonator having parabolic mirrors with a large equivalent Fresnel number in the unstable direction. These results are compared with a resonator using spherical mirrors. Using parabolic mirrors, the output beam has a smaller beam spot size and higher power density at the focal plane. We found that the power extraction efficiency is 3.5% higher when compared with a resonator using spherical mirrors as the cavity length was varied between -1 and 1 mm from the ideal confocal resonator. In addition, the power extraction efficiency is 5.6% higher for mirror tilt angles varied between -6 and 6 mrad. Furthermore, the output propagating field is similar to a converging wave for a spherical mirror resonator and the output beam direction deviates 3.5 mrad from the optical axis. The simulation results are in good agreement with the experimental results.

  7. Aerodynamic Effects of Turbulence Intensity on a Variable-Speed Power-Turbine Blade with Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie Brynn; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of inlet turbulence intensity on the aerodynamic performance of a variable speed power turbine blade are examined over large incidence and Reynolds number ranges. Both high and low turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The high turbulence study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Downstream total pressure and exit angle data were acquired for ten incidence angles ranging from +15.8° to -51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial-chord upstream of the blade row. The inlet turbulence levels ranged from 0.25 to 0.4 percent for the low Tu tests and 8 to 15 percent for the high Tu study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated. At the extreme positive and negative incidence angles, the data show substantial differences in the exit flow field. These differences are attributable to both the higher inlet Tu directly and to the thinner inlet endwall

  8. Pinpointing Needles in Giant Haystacks: Use of Text Mining to Reduce Impractical Screening Workload in Extremely Large Scoping Reviews

    Science.gov (United States)

    Shemilt, Ian; Simon, Antonia; Hollands, Gareth J.; Marteau, Theresa M.; Ogilvie, David; O'Mara-Eves, Alison; Kelly, Michael P.; Thomas, James

    2014-01-01

    In scoping reviews, boundaries of relevant evidence may be initially fuzzy, with refined conceptual understanding of interventions and their proposed mechanisms of action an intended output of the scoping process rather than its starting point. Electronic searches are therefore sensitive, often retrieving very large record sets that are…

  10. Factors associated with self-reported number of teeth in a large national cohort of Thai adults

    Directory of Open Access Journals (Sweden)

    Yiengprugsawan Vasoontara

    2011-11-01

    Full Text Available Abstract Background Oral health in later life results from an individual's lifelong accumulation of experiences at the personal, community and societal levels. There is little information relating the oral health outcomes to risk factors in Asian middle-income settings such as Thailand today. Methods Data derived from a cohort of 87,134 adults enrolled in Sukhothai Thammathirat Open University who completed self-administered questionnaires in 2005. Cohort members are aged between 15 and 87 years and resided throughout Thailand. This is a large study of self-reported number of teeth among Thai adults. Bivariate and multivariate logistic regressions were used to analyse factors associated with self-reported number of teeth. Results After adjusting for covariates, being female (OR = 1.28), older age (OR = 10.6), having low income (OR = 1.45), having lower education (OR = 1.33), and being a lifetime urban resident (OR = 1.37) were statistically associated (p Conclusions This study addresses the gap in knowledge on factors associated with self-reported number of teeth. The promotion of healthy childhoods and adult lifestyles is an important public health intervention to increase tooth retention in middle and older age.

  11. Investigating the Randomness of Numbers

    Science.gov (United States)

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  13. How extreme are extremes?

    Science.gov (United States)

    Cucchi, Marco; Petitta, Marcello; Calmanti, Sandro

    2016-04-01

    High temperatures have an impact on the energy balance of any living organism and on the operational capabilities of critical infrastructures. Heat-wave indicators have mainly been developed with the aim of capturing the potential impacts on specific sectors (agriculture, health, wildfires, transport, power generation and distribution). However, the ability to capture the occurrence of extreme temperature events is an essential property of a multi-hazard extreme climate indicator. The aim of this study is to develop a standardized heat-wave indicator that can be combined with other indices in order to describe multiple hazards in a single indicator. The proposed approach can be used to obtain a quantified indicator of the strength of a given extreme. As a matter of fact, extremes are usually distributed according to exponential or exponential-exponential functions, and it is difficult to quickly assess how strong an extreme event was by considering only its magnitude. The proposed approach simplifies the quantitative and qualitative communication of extreme magnitudes.
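
    Illustrative sketch only, since the record does not reproduce the indicator's definition: one simple way to standardize the strength of an extreme is to map its magnitude to an empirical exceedance probability (or return period) relative to a reference climatology.

```python
import numpy as np

def standardized_strength(value, reference):
    """Fraction of the reference record that the observed value exceeds (0..1)."""
    reference = np.asarray(reference, dtype=float)
    return np.mean(reference <= value)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    climatology = rng.gumbel(loc=30.0, scale=2.0, size=10_000)   # toy annual maxima, degC
    for t in (32.0, 36.0, 40.0):
        p = standardized_strength(t, climatology)
        print(f"{t:.1f} degC -> percentile {100 * p:.1f}, return period ~{1 / max(1e-4, 1 - p):.0f} yr")
```

    A fitted extreme-value distribution could replace the empirical climatology when return periods beyond the length of the record are needed.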

  14. Lack of topoisomerase copy number changes in patients with de novo and relapsed diffuse large B-cell lymphoma.

    Science.gov (United States)

    Pedersen, Mette Ø; Poulsen, Tim S; Gang, Anne O; Knudsen, Helle; Lauritzen, Anne F; Pedersen, Michael; Nielsen, Signe L; Brown, Peter; Høgdall, Estrid; Nørgaard, Peter

    2015-07-01

    Topoisomerase (TOP) gene copy number changes may predict response to treatment with TOP-targeting drugs in cancer treatment. This was first described in patients with breast cancer and is currently being investigated in other malignant diseases. TOP-targeting drugs may induce TOP gene copy number changes at relapse, with possible implications for relapse therapy efficacy. TOP gene alterations in lymphoma are poorly investigated. In this study, TOP1 and TOP2A gene alterations were investigated in patients with de novo diffuse large B-cell lymphoma (DLBCL) (n = 33) and relapsed DLBCL treated with chemotherapy regimens including TOP2-targeting drugs (n = 16). No TOP1 or TOP2A copy number changes were found. Polysomy of chromosomes 20 and 17 was seen in 3 of 25 patients (12%) and 2 of 32 patients (6%) with de novo DLBCL. Among relapsed patients, chromosome polysomy was more frequently observed in 5 of 13 patients (38%) and 4 of 16 patients (25%) harboring chromosome 20 and 17 polysomy, respectively; however, these differences only tended to be significant (p = 0.09 and p = 0.09, respectively). The results suggest that TOP gene copy number changes are very infrequent in DLBCL and not likely induced by TOP2-targeting drugs. Increased polyploidy of chromosomes 17 and 20 among patients with relapsed DLBCL may reflect genetic compensation in the tumor cells after TOP2 inhibition, but is more likely due to the increased genetic instability often seen in progressed cancers. Therefore, it is unlikely that TOP1 and TOP2A gene alterations can be used as predictive markers for response to treatment with TOP2-targeting drugs in patients with DLBCL.

  15. Large-scale flow and Reynolds numbers in the presence of boiling in locally heated turbulent convection

    Science.gov (United States)

    Hoefnagels, Paul B. J.; Wei, Ping; Narezo Guzman, Daniela; Sun, Chao; Lohse, Detlef; Ahlers, Guenter

    2017-07-01

    We report on an experimental study of the large-scale flow (LSF) and Reynolds numbers in turbulent convection in a cylindrical sample with height equal to its diameter and heated locally around the center of its bottom plate (locally heated convection). The sample size and shape are the same as those of Narezo Guzman et al. [D. Narezo Guzman et al., J. Fluid Mech. 787, 331 (2015), 10.1017/jfm.2015.701; D. Narezo Guzman et al., J. Fluid Mech. 795, 60 (2016), 10.1017/jfm.2016.178]. Measurements are made at a nearly constant Rayleigh number as a function of the mean temperature, both in the presence of controlled boiling (two-phase flow) and for the superheated fluid (one-phase flow). Superheat values Tb-Ton up to about 11 K (Tb is the bottom-plate temperature and Ton is the lowest Tb at which boiling is observed) are used. The LSF is less organized than it is in (uniformly heated) Rayleigh-Bénard convection (RBC), where it takes the form of a single convection roll. Large-scale-flow-induced sinusoidal azimuthal temperature variations (like those found for RBC) could be detected only in the lower portion of the sample, indicating a less organized flow in the upper portions. Reynolds numbers are determined using the elliptic model (EM) of He and Zhang [G.-W. He and J.-B. Zhang, Phys. Rev. E 73, 055303(R) (2006), 10.1103/PhysRevE.73.055303]. We found that for our system the EM is applicable over a wide range of space and time displacements, as long as these displacements are within the inertial range of the temporal and spatial spectrum. At three locations in the sample the results show that the vertical mean-flow velocity component is reduced while the fluctuation velocity is enhanced by the bubbles of the two-phase flow. Enhancements of velocity fluctuations up to about 60% are found at the largest superheat values. Local temperature measurements within the sample reveal temperature oscillations that are also used to determine a Reynolds number. These results are
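
    For orientation, the elliptic approximation underlying the elliptic model is usually written in the schematic form below (notation assumed here, not quoted from the paper):

```latex
% Elliptic approximation: space-time correlations collapse onto equal-time ones,
C(r,\tau) \approx C\!\left(r_E,\,0\right),
\qquad r_E = \sqrt{(r - U\tau)^2 + V^2\tau^2}
% where U is a convection velocity and V a sweeping (fluctuation) velocity scale;
% a Reynolds number then follows from the velocity scale extracted in this way.
```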

  16. SWAP OBSERVATIONS OF THE LONG-TERM, LARGE-SCALE EVOLUTION OF THE EXTREME-ULTRAVIOLET SOLAR CORONA

    Energy Technology Data Exchange (ETDEWEB)

    Seaton, Daniel B.; De Groof, Anik; Berghmans, David; Nicula, Bogdan [Royal Observatory of Belgium-SIDC, Avenue Circulaire 3, B-1180 Brussels (Belgium); Shearer, Paul [Department of Mathematics, 2074 East Hall, University of Michigan, 530 Church Street, Ann Arbor, MI 48109-1043 (United States)

    2013-11-01

    The Sun Watcher with Active Pixels and Image Processing (SWAP) EUV solar telescope on board the Project for On-Board Autonomy 2 spacecraft has been regularly observing the solar corona in a bandpass near 17.4 nm since 2010 February. With a field of view of 54 × 54 arcmin, SWAP provides the widest-field images of the EUV corona available from the perspective of the Earth. By carefully processing and combining multiple SWAP images, it is possible to produce low-noise composites that reveal the structure of the EUV corona to relatively large heights. A particularly important step in this processing was to remove instrumental stray light from the images by determining and deconvolving SWAP's point-spread function from the observations. In this paper, we use the resulting images to conduct the first-ever study of the evolution of the large-scale structure of the corona observed in the EUV over a three year period that includes the complete rise phase of solar cycle 24. Of particular note is the persistence over many solar rotations of bright, diffuse features composed of open magnetic fields that overlie polar crown filaments and extend to large heights above the solar surface. These features appear to be related to coronal fans, which have previously been observed in white-light coronagraph images and, at low heights, in the EUV. We also discuss the evolution of the corona at different heights above the solar surface and the evolution of the corona over the course of the solar cycle by hemisphere.

  17. A theoretical model of a wake of a body towed in a stratified fluid at large Reynolds and Froude numbers

    Directory of Open Access Journals (Sweden)

    Y. I. Troitskaya

    2006-01-01

    Full Text Available The objective of the present paper is to develop a theoretical model describing the evolution of a turbulent wake behind a towed sphere in a stably stratified fluid at large Froude and Reynolds numbers. The wake flow is considered as a quasi two-dimensional (2-D) turbulent jet flow whose dynamics is governed by the momentum transfer from the mean flow to a quasi-2-D sinuous mode growing due to hydrodynamic instability. The model employs a quasi-linear approximation to describe this momentum transfer. The model scaling coefficients are defined with the use of available experimental data, and the performance of the model is verified by comparison with the results of a direct numerical simulation of a 2-D turbulent jet flow. The model prediction for the temporal development of the wake axis mean velocity is found to be in good agreement with the experimental data obtained by Spedding (1997).

  18. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastic within the yield limits and ideally plastic outside these without accumulating eigenstresses. Within the elastic domain the frame is modeled as a linearly damped oscillator. The white noise excitation acts on the mass of the first floor making the movement of the elastic bottom floors simulate a ground motion that interacts with the structure above the bottom floors. As in a recent work by the authors the paper is about application of so-called Slepian model simulation, but in this paper supplemented by a simplification principle that allows a manageable calculation for the considered type of elasto...
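
    The kind of system described above can be illustrated with a brute-force Monte Carlo sketch of a single elasto-ideally-plastic oscillator driven by Gaussian white noise; this is a toy reference of the sort Slepian model simulations are benchmarked against, not the paper's multistory frame or its simplification principle, and all parameter values are hypothetical:

```python
import numpy as np

def simulate_plastic_drift(omega0=2*np.pi, zeta=0.05, fy=1.0, S0=0.05,
                           T=30.0, dt=1e-3, seed=0):
    """Elasto-ideally-plastic SDOF oscillator under Gaussian white noise.

    Returns the accumulated plastic displacement over [0, T]. All parameter
    values are hypothetical; the paper's frame model is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = v = 0.0            # elastic deformation and velocity
    plastic = 0.0          # accumulated plastic displacement
    xy = fy / omega0**2    # yield deformation for unit mass
    for _ in range(n):
        # discretized white-noise acceleration with two-sided spectral density S0
        w = rng.normal(0.0, np.sqrt(2 * np.pi * S0 / dt))
        a = w - 2 * zeta * omega0 * v - omega0**2 * x
        v += a * dt
        x += v * dt
        # ideal plasticity: any excess over the yield deformation becomes plastic drift
        if x > xy:
            plastic += x - xy
            x = xy
        elif x < -xy:
            plastic += x + xy
            x = -xy
    return plastic

if __name__ == "__main__":
    drifts = [simulate_plastic_drift(seed=s) for s in range(10)]
    print("mean |plastic drift|:", np.mean(np.abs(drifts)))
```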

  19. Evaluation of two sweeping methods for estimating the number of immature Aedes aegypti (Diptera: Culicidae) in large containers

    Directory of Open Access Journals (Sweden)

    Margareth Regina Dibo

    2013-07-01

    Full Text Available Introduction Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods III/IV instars and pupae at a 9:1 ratio were placed in three types of containers, each with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results The five-sweep netting technique was more suitable for drums and water-tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.

  20. CrossRef Large numbers of cold positronium atoms created in laser-selected Rydberg states using resonant charge exchange

    CERN Document Server

    McConnell, R; Kolthammer, WS; Richerme, P; Müllers, A; Walz, J; Grzonka, D; Zielinski, M; Fitzakerley, D; George, MC; Hessels, EA; Storry, CH; Weel, M

    2016-01-01

    Lasers are used to control the production of highly excited positronium atoms (Ps*). The laser light excites Cs atoms to Rydberg states that have a large cross section for resonant charge-exchange collisions with cold trapped positrons. For each trial with 30 million trapped positrons, more than 700 000 of the created Ps* have trajectories near the axis of the apparatus, and are detected using Stark ionization. This number of Ps* is 500 times higher than realized in an earlier proof-of-principle demonstration (2004 Phys. Lett. B 597 257). A second charge exchange of these near-axis Ps* with trapped antiprotons could be used to produce cold antihydrogen, and this antihydrogen production is expected to be increased by a similar factor.

  1. Large number of rebounding/founder HIV variants emerge from multifocal infection in lymphatic tissues after treatment interruption.

    Science.gov (United States)

    Rothenberger, Meghan K; Keele, Brandon F; Wietgrefe, Stephen W; Fletcher, Courtney V; Beilman, Gregory J; Chipman, Jeffrey G; Khoruts, Alexander; Estes, Jacob D; Anderson, Jodi; Callisto, Samuel P; Schmidt, Thomas E; Thorkelson, Ann; Reilly, Cavan; Perkey, Katherine; Reimann, Thomas G; Utay, Netanya S; Nganou Makamdop, Krystelle; Stevenson, Mario; Douek, Daniel C; Haase, Ashley T; Schacker, Timothy W

    2015-03-10

    Antiretroviral therapy (ART) suppresses HIV replication in most individuals but cannot eradicate latently infected cells established before ART was initiated. Thus, infection rebounds when treatment is interrupted by reactivation of virus production from this reservoir. Currently, one or a few latently infected resting memory CD4 T cells are thought to be the principal source of recrudescent infection, but this estimate is based on peripheral blood rather than lymphoid tissues (LTs), the principal sites of virus production and persistence before initiating ART. We, therefore, examined lymph node (LN) and gut-associated lymphoid tissue (GALT) biopsies from fully suppressed subjects, interrupted therapy, monitored plasma viral load (pVL), and repeated biopsies on 12 individuals as soon as pVL became detectable. Isolated HIV RNA-positive (vRNA+) cells were detected by in situ hybridization in LTs obtained before interruption in several patients. After interruption, multiple foci of vRNA+ cells were detected in 6 of 12 individuals as soon as pVL was measurable and, in some subjects, in more than one anatomic site. Minimal estimates of the number of rebounding/founder (R/F) variants were determined by single-gene amplification and sequencing of viral RNA or DNA from peripheral blood mononuclear cells and plasma obtained at or just before viral recrudescence. Sequence analysis revealed a large number of R/F viruses representing recrudescent viremia from multiple sources. Together, these findings are consistent with the origins of recrudescent infection by reactivation from many latently infected cells at multiple sites. The inferred large pool of cells and sites to rekindle recrudescent infection highlights the challenges in eradicating HIV.

  2. A Theory of Evolving Natural Constants Based on the Unification of General Theory of Relativity and Dirac's Large Number Hypothesis

    Institute of Scientific and Technical Information of China (English)

    PENG Huan-Wu

    2005-01-01

    Taking Dirac's large number hypothesis as true, we have shown [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703] the inconsistency of applying Einstein's theory of general relativity with fixed gravitation constant G to cosmology, and a modified theory for varying G is found, which reduces to Einstein's theory outside the gravitating body for phenomena of short duration in small distances, thereby agreeing with all the crucial tests formerly supporting Einstein's theory. The modified theory, when applied to the usual homogeneous cosmological model, gives rise to a variable cosmological tensor term determined by the derivatives of G, in place of the cosmological constant term usually introduced ad hoc. Without any free parameter, the theoretical Hubble relation obtained from the modified theory appears not to be in contradiction with observations, as Dr. Wang's preliminary analysis of the recent data indicates [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703]. As a complement to Commun. Theor. Phys. (Beijing, China) 42 (2004) 703, we shall study in this paper the modification of electromagnetism due to Dirac's large number hypothesis in more detail to show that the approximation of geometric optics still leads to null geodesics for the path of light, and that the general relation between the luminosity distance and the proper geometric distance is still valid in our theory as in Einstein's theory, and give the equations for a homogeneous cosmological model involving matter plus electromagnetic radiation. Finally we consider the impact of the modification on quantum mechanics and statistical mechanics, and arrive at a systematic theory of evolving natural constants, including Planck's h as well as Boltzmann's kB, by finding their cosmologically combined counterparts with factors of appropriate powers of G that may remain truly constant over cosmologically long times.

  3. A very large number of GABAergic neurons are activated in the tuberal hypothalamus during paradoxical (REM) sleep hypersomnia.

    Directory of Open Access Journals (Sweden)

    Emilie Sapin

    Full Text Available We recently discovered, using Fos immunostaining, that the tuberal and mammillary hypothalamus contain a massive population of neurons specifically activated during paradoxical sleep (PS) hypersomnia. We further showed that some of the activated neurons of the tuberal hypothalamus express the melanin concentrating hormone (MCH) neuropeptide and that icv injection of MCH induces a strong increase in PS quantity. However, the chemical nature of the majority of the neurons activated during PS had not been characterized. To determine whether these neurons are GABAergic, we combined in situ hybridization of GAD(67) mRNA with immunohistochemical detection of Fos in control, PS deprived and PS hypersomniac rats. We found that 74% of the very large population of Fos-labeled neurons located in the tuberal hypothalamus after PS hypersomnia were GAD-positive. We further demonstrated, combining MCH immunohistochemistry and GAD(67) in situ hybridization, that 85% of the MCH neurons were also GAD-positive. Finally, based on the number of Fos-ir/GAD(+), Fos-ir/MCH(+), and GAD(+)/MCH(+) double-labeled neurons counted from three sets of double-staining, we uncovered that around 80% of the large number of the Fos-ir/GAD(+) neurons located in the tuberal hypothalamus after PS hypersomnia do not contain MCH. Based on these and previous results, we propose that the non-MCH Fos/GABAergic neuronal population could be involved in PS induction and maintenance while the Fos/MCH/GABAergic neurons could be involved in the homeostatic regulation of PS. Further investigations will be needed to corroborate this original hypothesis.

  4. Modification of the large-scale features of high Reynolds number wall turbulence by passive surface obtrusions

    Energy Technology Data Exchange (ETDEWEB)

    Monty, J.P.; Lien, K.; Chong, M.S. [University of Melbourne, Department of Mechanical Engineering, Parkville, VIC (Australia); Allen, J.J. [New Mexico State University, Department of Mechanical Engineering, Las Cruces, NM (United States)

    2011-12-15

    A high Reynolds number boundary-layer wind-tunnel facility at New Mexico State University was fitted with a regularly distributed braille surface. The surface was such that braille dots were closely packed in the streamwise direction and sparsely spaced in the spanwise direction. This novel surface had an unexpected influence on the flow: the energy of the very large-scale features of wall turbulence (approximately six-times the boundary-layer thickness in length) became significantly attenuated, even into the logarithmic region. To the author's knowledge, this is the first experimental study to report a modification of 'superstructures' in a rough-wall turbulent boundary layer. The result gives rise to the possibility that flow control through very small, passive surface roughness may be possible at high Reynolds numbers, without the prohibitive drag penalty anticipated heretofore. Evidence was also found for the uninhibited existence of the near-wall cycle, well known to smooth-wall-turbulence researchers, in the spanwise space between roughness elements. (orig.)

  5. Drug testing and flow cytometry analysis on a large number of uniform sized tumor spheroids using a microfluidic device

    Science.gov (United States)

    Patra, Bishnubrata; Peng, Chien-Chung; Liao, Wei-Hao; Lee, Chau-Hwang; Tung, Yi-Chung

    2016-02-01

    Three-dimensional (3D) tumor spheroid possesses great potential as an in vitro model to improve predictive capacity for pre-clinical drug testing. In this paper, we combine advantages of flow cytometry and microfluidics to perform drug testing and analysis on a large number (5000) of uniform sized tumor spheroids. The spheroids are formed, cultured, and treated with drugs inside a microfluidic device. The spheroids can then be harvested from the device without tedious operation. Due to the ample cell numbers, the spheroids can be dissociated into single cells for flow cytometry analysis. Flow cytometry provides statistical information in single cell resolution that makes it feasible to better investigate drug functions on the cells in more in vivo-like 3D formation. In the experiments, human hepatocellular carcinoma cells (HepG2) are exploited to form tumor spheroids within the microfluidic device, and three anti-cancer drugs: Cisplatin, Resveratrol, and Tirapazamine (TPZ), and their combinations are tested on the tumor spheroids with two different sizes. The experimental results suggest the cell culture format (2D monolayer vs. 3D spheroid) and spheroid size play critical roles in drug responses, and also demonstrate the advantages of bridging the two techniques in pharmaceutical drug screening applications.

  6. Strength in numbers: large and permanent colonies have higher queen oviposition rates in the invasive Argentine ant (Linepithema humile, Mayr).

    Science.gov (United States)

    Abril, Sílvia; Gómez, Crisanto

    2014-03-01

    Polydomy associated with unicoloniality is a common trait of invasive species. In the invasive Argentine ant, colonies are seasonally polydomous. Most follow a seasonal fission-fusion pattern: they disperse in the spring and summer and aggregate in the fall and winter. However, a small proportion of colonies do not migrate; instead, they inhabit permanent nesting sites. These colonies are large and highly polydomous. The aim of this study was to (1) search for differences in the fecundity of queens between mother colonies (large and permanent) and satellite colonies (small and temporary), (2) determine if queens in mother and satellite colonies have different diets to clarify if colony size influences social organization and queen feeding, and (3) examine if colony location relative to the invasion front results in differences in the queen's diet. Our results indicate that queens from mother nests are more fertile than queens from satellite nests and that colony location does not affect queen oviposition rate. Ovarian dissections suggest that differences in ovarian morphology are not responsible for the higher queen oviposition rate in mother vs. satellite nests, since there were no differences in the number and length of ovarioles in queens from the two types of colonies. In contrast, the higher δ(15)N values of queens from mother nests imply that a greater intake of carnivorous food sources accounts for the higher oviposition rates. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Development and application of an optogenetic platform for controlling and imaging a large number of individual neurons

    Science.gov (United States)

    Mohammed, Ali Ibrahim Ali

    The understanding and treatment of brain disorders, as well as the development of intelligent machines, is hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have given neuroscientists unprecedented precision to excite, inhibit and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators has now made it possible to analyze a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge optogenetic molecular sensors, which are ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and a spatial resolution approaching the Abbe diffraction limit of the fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while the mouse was performing a memory task to investigate how those individual neurons related to behavior. In addition, we tested our optical platform by investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blasted mice showed a consistent change in their neural networks: a small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity. Finally, using an optogenetic silencer to control selected motor cortex neurons, we examined their contributions to the network pathology of basal ganglia related to

  8. An extreme breaching of a barrier spit: insights on large breach formation and its impact on barrier dynamics

    Science.gov (United States)

    Iulian Zăinescu, Florin; Vespremeanu-Stroe, Alfred; Tătui, Florin

    2017-04-01

    In this study, we document a case of exceptionally large natural breaching of a sandy spit (Sacalin barrier, Danube delta) using Lidar data and satellite imagery, annual (and seasonal) surveys of topography and bathymetry on successive cross-barrier profiles, and hourly datasets of wind and waves. The breach morphology and dynamics were monitored and described from inception to closure, together with the impact on the adjoining features (upper shoreface, back-barrier lagoon, downdrift coast) and on the local sediment budgets. Breaching was first observed to occur over a beach length of 0.5 km in April 2012 and two years later had reached 3.5 km (May 2014). The barrier then translated to a recovery stage dominated by continuous back-barrier deposition through subaqueous cross-breach sediment transport. Soon, the barrier widening triggered a negative feedback which limited the back-barrier sediment transfer. As a result, back-barrier deposition decreased whilst barrier aggradation through overwash became more frequent. The event was found to be a natural experiment which switched the barrier's decadal evolution from low cross-shore transport to high cross-shore transport over the barrier. Although previously considered constant, the cross-shore transport recorded during the large breach lifespan is an order of magnitude larger than in the non-breach period. 3 × 10⁶ m³ of sediment were deposited in three years, which is equivalent to the modelled longshore transport in the region. Nevertheless, the sediment circuits are more complex, involving exchanges with the upper shoreface, as indicated by the extensive erosion down to −4 m. In the absence of tides, the Sacalin breach closed naturally in 3 years and provides a valuable contribution on how breaches may evolve, as only limited data have been reported internationally until now. The very high deposition rate of sediment in the breach is a testimony to the high sediment volumes supplied by the longshore transport and the high

  9. Amorphous InGaMgO Ultraviolet Photo-TFT with Ultrahigh Photosensitivity and Extremely Large Responsivity

    Directory of Open Access Journals (Sweden)

    Yiyu Zhang

    2017-02-01

    Full Text Available Recently, amorphous InGaZnO ultraviolet photo thin-film transistors have exhibited great potential for application in future display technologies. Nevertheless, the transmittance of amorphous InGaZnO (~80%) is still not high enough, resulting in a relatively large sacrifice of aperture ratio for each sensor pixel. In this work, an ultraviolet photo thin-film transistor based on amorphous InGaMgO, which possesses a larger bandgap and higher transmission compared to amorphous InGaZnO, was proposed and investigated. Furthermore, the effects of post-deposition annealing in oxygen on both the material and ultraviolet detection characteristics of amorphous InGaMgO were also comprehensively studied. It was found that oxygen post-deposition annealing can effectively reduce oxygen vacancies, leading to an optimized device performance, including lower dark current, higher sensitivity, and larger responsivity. We attributed this to the combined effect of the reduction in donor states and recombination centers, both of which are related to oxygen vacancies. As a result, the 240-min annealed device exhibited the lowest dark current of 1.7 × 10⁻¹⁰ A, the highest photosensitivity of 3.9 × 10⁶, and the largest responsivity of 1.5 × 10⁴ A/W. Therefore, our findings have revealed that amorphous InGaMgO photo thin-film transistors are a very promising alternative for UV detection, especially for application in touch-free interactive displays.

  10. A short-term extremely low frequency electromagnetic field exposure increases circulating leukocyte numbers and affects HPA-axis signaling in mice

    NARCIS (Netherlands)

    Kleijn, de Stan; Ferwerda, Gerben; Wiese, Michelle; Trentelman, Jos; Cuppen, Jan; Kozicz, Tamas; Jager, de Linda; Hermans, Peter W.M.; Kemenade, van Lidy

    2016-01-01

    There is still uncertainty whether extremely low frequency electromagnetic fields (ELF-EMF) can induce health effects like immunomodulation. Despite evidence obtained in vitro, an unambiguous association has not yet been established in vivo. Here, mice were exposed to ELF-EMF for 1, 4, and 24 h/d

  11. [Treatment of gunshot fractures of the lower extremity: Part 1: Incidence, importance, case numbers, pathophysiology, contamination, principles of emergency and first responder treatment].

    Science.gov (United States)

    Franke, A; Bieler, D; Wilms, A; Hentsch, S; Johann, M; Kollig, E

    2014-11-01

    Gunshot wounds are rare in Germany and are mostly the result of suicide attempts or improper handling of weapons. The resulting injuries involve extensive tissue damage and complications which are thus unique and require a differentiated approach. As trauma centers may be confronted with gunshot wounds at any time, treatment principles must be understood and regularly reevaluated. Due to Bundeswehr operations abroad and the treatment of patients from other crisis regions a total of 85 gunshot wounds in 64 patients were treated between 2005 and 2011. In the majority of cases the lower extremities were affected and we were able to carry out treatment to preserve the extremities. In this article we report on our experiences and the results of treatment of gunshot wounds to the lower extremities. This part of the article deals with the epidemiology and pathophysiology of gunshot wounds to the lower extremities. By means of an evaluation of microbiological findings in a subgroup of patients involved in a civil war (n=10), the problem of multidrug resistant pathogen contamination, colonization and infection is discussed. In addition to a description of initial and emergency treatment of gunshot wounds, measures required for further treatment and decontamination are presented. Finally, the results are discussed with reference to the literature in this field.

  12. Extremely large anthropogenic-aerosol contribution to total aerosol load over the Bay of Bengal during winter season

    Directory of Open Access Journals (Sweden)

    D. G. Kaskaoutis

    2011-07-01

    Full Text Available Ship-borne observations of spectral aerosol optical depth (AOD have been carried out over the entire Bay of Bengal (BoB as part of the W-ICARB cruise campaign during the period 27 December 2008–30 January 2009. The results reveal a pronounced temporal and spatial variability in the optical characteristics of aerosols mainly due to anthropogenic emissions and their dispersion controlled by local meteorology. The highest aerosol amount, with mean AOD500>0.4, being even above 1.0 on specific days, is found close to the coastal regions in the western and northern parts of BoB. In these regions the Ångström exponent is also found to be high (~1.2–1.25 indicating transport of strong anthropogenic emissions from continental regions, while very high AOD500 (0.39±0.07 and α380–870 values (1.27±0.09 are found over the eastern BoB. Except from the large α380–870 values, an indication of strong fine-mode dominance is also observed from the AOD curvature, which is negative in the vast majority of the cases, suggesting dominance of an anthropogenic-pollution aerosol type. On the other hand, clean maritime conditions are rather rare over the region, while the aerosol types are further examined through a classification scheme based on the relationship between α and dα. It was found that even for the same α values the fine-mode dominance is larger for higher AODs showing the strong continental influence over the marine environment of BoB. Furthermore, there is also an evidence of aerosol-size growth under more turbid conditions indicative of coagulation and/or humidification over specific BoB regions. The results obtained using OPAC model show significant fraction of soot aerosols (~6 %–8 % over the eastern and northwestern BoB, while coarse-mode sea salt particles are found to dominate in the southern parts of BoB.

  13. Extreme fluctuations and the finite lifetime of the turbulent state.

    Science.gov (United States)

    Goldenfeld, Nigel; Guttenberg, Nicholas; Gioia, Gustavo

    2010-03-01

    We argue that the transition to turbulence is controlled by large amplitude events that follow extreme distribution theory. The theory suggests an explanation for recent observations of the turbulent state lifetime which exhibit superexponential scaling behavior with Reynolds number.

  14. Spitzer SAGE-Spec: Near Infrared Spectroscopy, Dust Shells, and Cool Envelopes in Extreme Large Magellanic Cloud Asymptotic Giant Branch Stars

    Science.gov (United States)

    Blum, R. D.; Srinivasan, S.; Kemper, F.; Ling, B.; Volk, K.

    2014-11-01

    K-band spectra are presented for a sample of 39 Spitzer Infrared Spectrograph (IRS) SAGE-Spec sources in the Large Magellanic Cloud. The spectra exhibit characteristics in very good agreement with their positions in the near-infrared—Spitzer color-magnitude diagrams and their properties as deduced from the Spitzer IRS spectra. Specifically, the near-infrared spectra show strong atomic and molecular features representative of oxygen-rich and carbon-rich asymptotic giant branch stars, respectively. A small subset of stars was chosen from the luminous and red extreme "tip" of the color-magnitude diagram. These objects have properties consistent with dusty envelopes but also cool, carbon-rich "stellar" cores. Modest amounts of dust mass loss combine with the stellar spectral energy distribution to make these objects appear extreme in their near-infrared and mid-infrared colors. One object in our sample, HV 915, a known post-asymptotic giant branch star of the RV Tau type, exhibits CO 2.3 μm band head emission consistent with previous work that demonstrates that the object has a circumstellar disk. Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).

  15. Spitzer SAGE-Spec: Near infrared spectroscopy, dust shells, and cool envelopes in extreme Large Magellanic Cloud asymptotic giant branch stars

    Energy Technology Data Exchange (ETDEWEB)

    Blum, R. D. [NOAO, 950 North Cherry Avenue, Tucson, AZ 85719 (United States); Srinivasan, S.; Kemper, F.; Ling, B. [Academia Sinica, Institute of Astronomy and Astrophysics, 11F of Astronomy-Mathematics Building, NTU/AS, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan (China); Volk, K. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)

    2014-11-01

    K-band spectra are presented for a sample of 39 Spitzer Infrared Spectrograph (IRS) SAGE-Spec sources in the Large Magellanic Cloud. The spectra exhibit characteristics in very good agreement with their positions in the near-infrared—Spitzer color-magnitude diagrams and their properties as deduced from the Spitzer IRS spectra. Specifically, the near-infrared spectra show strong atomic and molecular features representative of oxygen-rich and carbon-rich asymptotic giant branch stars, respectively. A small subset of stars was chosen from the luminous and red extreme "tip" of the color-magnitude diagram. These objects have properties consistent with dusty envelopes but also cool, carbon-rich "stellar" cores. Modest amounts of dust mass loss combine with the stellar spectral energy distribution to make these objects appear extreme in their near-infrared and mid-infrared colors. One object in our sample, HV 915, a known post-asymptotic giant branch star of the RV Tau type, exhibits CO 2.3 μm band head emission consistent with previous work that demonstrates that the object has a circumstellar disk.

  16. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies.

    Science.gov (United States)

    Toye, Francine; Seers, Kate; Allcock, Nick; Briggs, Michelle; Carr, Eloise; Barker, Karen

    2014-06-21

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients' experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team's reflexive statements to illustrate the development of our methods.

  17. Soft sensor of chemical processes with large numbers of input parameters using auto-associative hierarchical neural network

    Institute of Scientific and Technical Information of China (English)

    Yanlin He; Yuan Xu; Zhiqiang Geng; Qunxiong Zhu

    2015-01-01

    To explore the problems of monitoring chemical processes with large numbers of input parameters, a method based on an Auto-associative Hierarchical Neural Network (AHNN) is proposed. AHNN focuses on dealing with high-dimensional datasets. AHNNs consist of two parts: groups of subnets based on well-trained Auto-associative Neural Networks (AANNs) and a main net. The subnets play an important role in the performance of the AHNN. A simple but effective method of designing the subnets is developed in this paper, in which the subnets are designed according to the classification of the data attributes. To obtain this classification, an effective method called Extension Data Attributes Classification (EDAC) is adopted. A soft sensor using AHNN based on EDAC (EDAC-AHNN) is introduced. As a case study, production data from a Purified Terephthalic Acid (PTA) solvent system are selected to examine the proposed model. The results of the EDAC-AHNN model are compared with experimental data extracted from the literature, which shows the efficiency of the proposed model.
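
    A minimal sketch of the auto-associative hierarchical idea follows, assuming nothing about the paper's actual EDAC grouping, network sizes, or data: input attributes are split into groups, each group is compressed by an auto-associative (bottleneck) network trained to reconstruct itself, and the compressed features feed a main regression net. Here scikit-learn's MLPRegressor stands in for the AANN subnets:

```python
# Hedged sketch of an auto-associative hierarchical soft sensor.
# Groups, dimensions, and the synthetic data are all assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                           # 12 hypothetical process inputs
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=500)    # hypothetical soft-sensor target

# stand-in for the EDAC attribute classes
groups = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]

def bottleneck_features(Xg, n_hidden=2):
    """Train an auto-associative net (inputs -> bottleneck -> inputs) and
    return the bottleneck activations as compressed features."""
    ae = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="tanh",
                      max_iter=5000, random_state=0)
    ae.fit(Xg, Xg)                                       # auto-associative: reconstruct the inputs
    # bottleneck activations = tanh(Xg @ W0 + b0)
    return np.tanh(Xg @ ae.coefs_[0] + ae.intercepts_[0])

Z = np.hstack([bottleneck_features(X[:, g]) for g in groups])

main_net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
main_net.fit(Z, y)
print("main-net R^2 on training data:", round(main_net.score(Z, y), 3))
```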

  18. Enhancement in evaluating small group work in courses with large number of students. Machine theory at industrial engineering degrees

    Directory of Open Access Journals (Sweden)

    Lluïsa Jordi Nebot

    2013-03-01

    Full Text Available This article examines new tutoring evaluation methods to be adopted in the course, Machine Theory, in the Escola Tècnica Superior d’Enginyeria Industrial de Barcelona (ETSEIB, Universitat Politècnica de Catalunya). These new methods have been developed in order to facilitate teaching staff work and to include students in the evaluation process. Machine Theory is a required course with a large number of students. These students are divided into groups of three and required to carry out a piece of supervised work constituting 20% of their final mark. These new evaluation methods were proposed in response to the significant increase in student numbers in the spring semester of 2010-2011, and were pilot tested during the fall semester of academic year 2011-2012, in the previous Industrial Engineering degree program. Pilot test results were highly satisfactory for students and teachers alike, and met the proposed educational objectives. For this reason, the new evaluation methodology was adopted in the spring semester of 2011-2012, in the current bachelor's degree program in Industrial Technology (Grau en Enginyeria en Tecnologies Industrials, GETI), where it has also achieved highly satisfactory results.

  19. International Airport Impacts to Air Quality: Size and Related Properties of Large Increases in Ultrafine Particle Number Concentrations.

    Science.gov (United States)

    Hudda, N; Fruin, S A

    2016-04-05

    We measured particle size distributions and spatial patterns of particle number (PN) and particle surface area concentrations downwind from the Los Angeles International Airport (LAX), where large increases (over local background) in PN concentrations routinely extended 18 km downwind. These elevations were mostly composed of ultrafine particles smaller than 40 nm. For a given downwind distance, the greatest increases in PN concentrations, along with the smallest mean sizes, were detected at locations under the landing jet trajectories. The smaller size of particles in the impacted area, as compared to the ambient urban aerosol, increased calculated lung deposition fractions to 0.7-0.8 from 0.5-0.7. A diffusion charging instrument (DiSCMini), which simulates alveolar lung deposition, measured a fivefold increase in alveolar lung-deposited surface area concentrations 2-3 km downwind from the airport (over local background), decreasing steadily to a twofold increase 18 km downwind. These ratios (elevated lung-deposited surface area over background) were lower than the corresponding ratios for elevated PN concentrations, which decreased from tenfold to twofold over the same distance, but the spatial patterns of elevated concentrations were similar. It appears that PN concentration can serve as a nonlinear proxy for lung-deposited surface area downwind of major airports.

  20. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies

    Science.gov (United States)

    2014-01-01

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients’ experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team’s reflexive statements to illustrate the development of our methods. PMID:24951054

  1. Catering for large numbers of tourists: the McDonaldization of casual dining in Kruger National Park

    Directory of Open Access Journals (Sweden)

    Ferreira Sanette L.A.

    2016-09-01

    Full Text Available Since 2002 Kruger National Park (KNP) has been subject to a commercialisation strategy. Regarding income generation, SANParks (1) sees KNP as the goose that lays the golden eggs. As part of SANParks’ commercialisation strategy and in response to providing services that are efficient, predictable and calculable for a large number of tourists, SANParks has allowed well-known branded restaurants to be established in certain rest camps in KNP. This innovation has raised a range of different concerns and opinions among the public. This paper investigates the what and the where of casual dining experiences in KNP; describes how the catering services have evolved over the last 70 years; and evaluates current visitor perceptions of the introduction of franchised restaurants in the park. The main research instrument was a questionnaire survey. Survey findings confirmed that restaurant managers, park managers and visitors recognise franchised restaurants as positive contributors to the unique KNP experience. Park managers appraised the franchised restaurants as mechanisms for funding conservation.

  2. Identification of the non-stationarity of extreme precipitation events and correlations with large-scale ocean-atmospheric circulation patterns: A case study in the Wei River Basin, China

    Science.gov (United States)

    Liu, Saiyan; Huang, Shengzhi; Huang, Qiang; Xie, Yangyang; Leng, Guoyong; Luan, Jinkai; Song, Xiaoyu; Wei, Xiu; Li, Xiangyang

    2017-05-01

    The investigation of extreme precipitation events in terms of variation characteristics, stationarity, and their underlying causes is of great significance to better understand the regional response of the precipitation variability to global climate change. In this study, the Wei River Basin (WRB), a typical eco-environmentally vulnerable region of the Loess Plateau in China was selected as the study region. A set of precipitation indices was adopted to study the changing patterns of precipitation extremes and the stationarity of extreme precipitation events. Furthermore, the correlations between the Pacific Decadal Oscillation (PDO)/El Niño-Southern Oscillation (ENSO) events and precipitation extremes were explored using the cross wavelet technique. The results indicate that: (1) extreme precipitation events in the WRB are characterized by a significant decrease of consecutive wet days (CWD) at the 95% confidence level; (2) compared with annual precipitation, daily precipitation extremes are much more sensitive to changing environments, and the assumption of stationarity of extreme precipitation in the WRB is invalid, especially in the upstream, thereby introducing large uncertainty to the design and management of water conservancy engineering; (3) both PDO and ENSO events have a strong influence on precipitation extremes in the WRB. These findings highlight the importance of examining the validity of the stationarity assumption in extreme hydrological frequency analysis, which has great implications for the prediction of extreme hydrological events.

  3. Real-time turbulence profiling with a pair of laser guide star Shack-Hartmann wavefront sensors for wide-field adaptive optics systems on large to extremely large telescopes.

    Science.gov (United States)

    Gilles, L; Ellerbroek, B L

    2010-11-01

    Real-time turbulence profiling is necessary to tune tomographic wavefront reconstruction algorithms for wide-field adaptive optics (AO) systems on large to extremely large telescopes, and to perform a variety of image post-processing tasks involving point-spread function reconstruction. This paper describes a computationally efficient and accurate numerical technique inspired by the slope detection and ranging (SLODAR) method to perform this task in real time from properly selected Shack-Hartmann wavefront sensor measurements accumulated over a few hundred frames from a pair of laser guide stars, thus eliminating the need for an additional instrument. The algorithm is introduced, followed by a theoretical influence function analysis illustrating its impulse response to high-resolution turbulence profiles. Finally, its performance is assessed in the context of the Thirty Meter Telescope multi-conjugate adaptive optics system via end-to-end wave optics Monte Carlo simulations.
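
    The SLODAR principle that the technique above builds on can be sketched in a few lines: turbulence at altitude h correlates the slopes seen by the two wavefront sensors at a pupil offset h·θ, where θ is the guide-star separation, so peaks in the time-averaged slope cross-covariance versus subaperture offset map to altitudes. The sketch below is a toy illustration of that idea only (synthetic data, hypothetical numbers), not the paper's real-time algorithm:

```python
import numpy as np

def slodar_profile(slopes_a, slopes_b, subap_size, theta):
    """slopes_a, slopes_b: arrays of shape (n_frames, n_sub) holding one slope
    component per subaperture column (already averaged along the other axis).
    Returns (altitudes, covariance) for non-negative integer offsets."""
    n_frames, n_sub = slopes_a.shape
    a = slopes_a - slopes_a.mean(axis=0)
    b = slopes_b - slopes_b.mean(axis=0)
    cov = np.empty(n_sub)
    for k in range(n_sub):
        # time-averaged covariance between sensor A and sensor B shifted by k subapertures
        cov[k] = np.mean(a[:, : n_sub - k] * b[:, k:])
    altitudes = np.arange(n_sub) * subap_size / theta   # offset = h * theta  =>  h = offset / theta
    return altitudes, cov

# toy usage with synthetic, partially shared slope signals
rng = np.random.default_rng(1)
common = rng.normal(size=(2000, 30))
sa = common + 0.5 * rng.normal(size=(2000, 30))
sb = np.roll(common, shift=3, axis=1) + 0.5 * rng.normal(size=(2000, 30))  # fake layer at offset 3
alt, cov = slodar_profile(sa, sb, subap_size=0.5, theta=30 * 4.85e-6)      # 30 arcsec separation
k_peak = int(np.argmax(cov))
print("peak offset:", k_peak, "-> altitude ~", round(float(alt[k_peak])), "m")
```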

  4. ADF95: Tool for automatic differentiation of a FORTRAN code designed for large numbers of independent variables

    Science.gov (United States)

    Straka, Christian W.

    2005-06-01

    ADF95 is a tool to automatically calculate numerical first derivatives for any mathematical expression as a function of user-defined independent variables. Accuracy of derivatives is achieved within machine precision. ADF95 may be applied to any FORTRAN 77/90/95 conforming code and requires minimal changes by the user. It provides a new derived data type that holds the value and derivatives and applies forward differencing by overloading all FORTRAN operators and intrinsic functions. An efficient indexing technique leads to a reduced memory usage and a substantially increased performance gain over other available tools with operator overloading. This gain is especially pronounced for sparse systems with a large number of independent variables. A wide class of numerical simulations, e.g., those employing implicit solvers, can profit from ADF95. Program summary - Title of program: ADF95. Catalogue identifier: ADVI. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVI. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer for which the program is designed: all platforms with a FORTRAN 95 compiler. Programming language used: FORTRAN 95. No. of lines in distributed program, including test data, etc.: 3103. No. of bytes in distributed program, including test data, etc.: 9862. Distribution format: tar.gz. Nature of problem: In many areas in the computational sciences first order partial derivatives for large and complex sets of equations are needed with machine precision accuracy. For example, any implicit or semi-implicit solver requires the computation of the Jacobian matrix, which contains the first derivatives with respect to the independent variables. ADF95 is a software module to facilitate the automatic computation of the first partial derivatives of any arbitrarily complex mathematical FORTRAN expression. The program exploits the sparsity inherited by many sets of equations, thereby enabling faster computations compared to alternate
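
    The operator-overloading, forward-mode approach that ADF95 implements in FORTRAN 95 can be illustrated generically with a small dual-number class carrying a sparse map of partial derivatives (an illustrative sketch of the technique, not ADF95's derived type or its indexing scheme):

```python
# Generic forward-mode automatic differentiation via operator overloading.
# Names and structure are illustrative only; this is not ADF95 itself.
import math

class Dual:
    """Value plus a dict of partial derivatives w.r.t. named independents."""
    def __init__(self, value, deriv=None):
        self.value = value
        self.deriv = deriv or {}

    @staticmethod
    def independent(name, value):
        return Dual(value, {name: 1.0})

    def _combine(self, other, dself, dother):
        # chain rule for a binary operation with local partials dself, dother
        out = {}
        for k in set(self.deriv) | set(other.deriv):
            out[k] = dself * self.deriv.get(k, 0.0) + dother * other.deriv.get(k, 0.0)
        return out

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self._combine(other, 1.0, 1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self._combine(other, other.value, self.value))

    __radd__ = __add__
    __rmul__ = __mul__

def sin(x):
    # overloaded intrinsic: d/dx sin(x) = cos(x)
    return Dual(math.sin(x.value), {k: math.cos(x.value) * d for k, d in x.deriv.items()})

# f(x, y) = x*y + sin(x);  df/dx = y + cos(x),  df/dy = x
x = Dual.independent("x", 1.5)
y = Dual.independent("y", 2.0)
f = x * y + sin(x)
print(f.value, f.deriv)
```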

  5. Email-Based Informed Consent: Innovative Method for Reaching Large Numbers of Subjects for Data Mining Research

    Science.gov (United States)

    Lee, Lesley R.; Mason, Sara S.; Babiak-Vazquez, Adriana; Ray, Stacie L.; Van Baalen, Mary

    2015-01-01

    Since the 2010 NASA authorization to make the Life Sciences Data Archive (LSDA) and Lifetime Surveillance of Astronaut Health (LSAH) data archives more accessible by the research and operational communities, demand for data has greatly increased. Correspondingly, both the number and scope of requests have increased, from 142 requests fulfilled in 2011 to 224 in 2014, and with some datasets comprising up to 1 million data points. To meet the demand, the LSAH and LSDA Repositories project was launched, which allows active and retired astronauts to authorize full, partial, or no access to their data for research without individual, study-specific informed consent. A one-on-one personal informed consent briefing is required to fully communicate the implications of the several tiers of consent. Due to the need for personal contact to conduct Repositories consent meetings, the rate of consenting has not kept up with demand for individualized, possibly attributable data. As a result, other methods had to be implemented to allow the release of large datasets, such as release of only de-identified data. However the compilation of large, de-identified data sets places a significant resource burden on LSAH and LSDA and may result in diminished scientific usefulness of the dataset. As a result, LSAH and LSDA worked with the JSC Institutional Review Board Chair, Astronaut Office physicians, and NASA Office of General Counsel personnel to develop a "Remote Consenting" process for retrospective data mining studies. This is particularly useful since the majority of the astronaut cohort is retired from the agency and living outside the Houston area. Originally planned as a method to send informed consent briefing slides and consent forms only by mail, Remote Consenting has evolved into a means to accept crewmember decisions on individual studies via their method of choice: email or paper copy by mail. To date, 100 emails have been sent to request participation in eight HRP

  6. Large Extremity Peripheral Nerve Repair

    Science.gov (United States)

    2015-10-01

    factors and misguided axons into adjacent tissues further compromises outcome and likely contributes to neuroma formation. These effects are... effects on neurite outgrowth and can support axonal regeneration in the absence of SCs. Whilst this may be sufficient over short lengths of ANA... effective for nerve regeneration than autograft in clinical implementation using microsurgical attachment, we hypothesized that the photosealing benefit may

  7. Large Extremity Peripheral Nerve Repair

    Science.gov (United States)

    2014-10-01

    Nerve wrap biomaterials Human amniotic membrane was obtained from elective caesarean section patients who had been screened serologically for human...80°C until the day of surgery. Human amnion (HAM) harvest and processing Amniotic membrane was obtained from elective caesarean section patients

  8. Large Extremity Peripheral Nerve Repair

    Science.gov (United States)

    2016-12-01

    Photochemical bonding required clear access 5 mm proximal and distal to coaptation sites. As a result, the maximum achievable nerve gap before...rodents for nerve gap reconstruction. Induction and maintenance anesthesia was achieved using isoflurane (Baxter Healthcare Corp., Deerfield, Ill...injury, nerve gap, nerve wrap, PTB, photosealing, Rose Bengal, amnion, nerve conduit, crosslinking, allograft, photochemistry. 3. Accomplishments

  9. THE ESO EXTREMELY LARGE TELESCOPE

    Directory of Open Access Journals (Sweden)

    J. Melnick

    2011-01-01

    Full Text Available The robotic instruments used in site-testing campaigns generate a large amount of information, essentially about all the relevant parameters of the atmosphere. Starting from relatively generic assumptions, it is possible to capture this wealth of information in a single figure of merit for each site, which simplifies some of the stages of the site-evaluation process. This contribution presents two different formalisms that were used to evaluate the site-selection merit function for the E-ELT. Both formalisms rely on assumptions about the ways in which the telescope will be used (the scientific modes of operation), but while one algorithm computes figures of merit averaged over the whole duration of the site-testing campaign (typically 2 years), the other explores the variability of the observing conditions during the night, and from night to night over the campaign. It was found that, in general, the two methods yield different results, underlining the importance of including variability as a fundamental parameter for characterizing astronomical sites for large telescopes operated in queue-scheduling mode. Nevertheless, the two best potential sites for the E-ELT are ranked best by both methods.

  10. Large Extremity Peripheral Nerve Repair

    Science.gov (United States)

    2016-12-01

    71. Burman S, Tejwani S, Vemuganti GK. Ophthalmic applications of preserved human amniotic membrane: a review of current indications. Cell Tissue Bank...segmental nerve deficit repair using isograft show the best performing wrap/fixation method to be sutureless photochemical tissue bonding with the...crosslinked amnion wrap. Autograft is often unavailable in wounded warriors, due to extensive tissue damage and amputation and, importantly, we also

  11. Small on the Left, Large on the Right: Numbers Orient Visual Attention onto Space in Preverbal Infants

    Science.gov (United States)

    Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola

    2016-01-01

    Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical…

  12. An extremely large magnitude eruption close to the Plio-Pleistocene boundary: reconstruction of eruptive style and history of the Ebisutoge-Fukuda tephra, central Japan

    Science.gov (United States)

    Kataoka, K.; Nagahashi, Y.; Yoshikawa, S.

    2001-06-01

    An extremely large magnitude eruption of the Ebisutoge-Fukuda tephra, close to the Plio-Pleistocene boundary, central Japan, spread volcanic materials widely over more than 290,000 km², reaching more than 300 km from the probable source. Characteristics of the distal air-fall ash (>150 km away from the vent) and proximal pyroclastic deposits are clarified to constrain the eruptive style, history, and magnitude of the Ebisutoge-Fukuda eruption. The eruptive history had five phases. Phase 1 is a phreatoplinian eruption producing >105 km³ of volcanic materials. Phases 2 and 3 are a plinian eruption and a transition to pyroclastic flow. Plinian activity also occurred in phase 4, which ejected conspicuous obsidian fragments to the distal locations. In phase 5, collapse of the eruption column triggered by phase 4 generated large pyroclastic flows in all directions and resulted in more than 250-350 km³ of deposits. Thus, the total volume of this tephra amounts to over 380-490 km³. This indicates that the Volcanic Explosivity Index (VEI) of the Ebisutoge-Fukuda tephra is greater than 7. The huge thickness of reworked volcaniclastic deposits overlying the fall units also attests to the tremendous volume of eruptive materials of this tephra. Numerous ancient tephra layers with large volume have been reported worldwide, but sources and eruptive history are often unknown and difficult to determine. Comparison of distal air-fall ashes with proximal pyroclastic deposits revealed the eruption style, history, and magnitude of the Ebisutoge-Fukuda tephra. Hence, recognition of the Ebisutoge-Fukuda tephra is useful for understanding volcanic activity during the Pliocene to Pleistocene, is important as a boundary marker bed, and can be used to interpret the global environmental and climatic impact of large-magnitude eruptions in the past.

  13. Convergence in the r-th Mean and the Marcinkiewicz Type Weak Law of Large Numbers for Weighted Sums of Lq-mixingale Arrays

    Institute of Scientific and Technical Information of China (English)

    Gan Shi-xin

    2003-01-01

    Lr convergence and convergence in probability for weighted sums of Lq-mixingale arrays have been discussed and the Marcinkiewicz type weak law of large numbers for Lq-mixingale arrays has been obtained.
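
    For orientation, the classical i.i.d. form of the Marcinkiewicz-Zygmund law that results of this kind generalize (to weighted sums of dependent, mixingale-type arrays) is the standard textbook statement below; it is not quoted from this record:

```latex
% Classical Marcinkiewicz-Zygmund law of large numbers (i.i.d. case), the
% baseline that mixingale-array results generalize. Standard statement,
% not taken from this record.
\[
  X_1, X_2, \dots \text{ i.i.d.}, \quad 0 < p < 2, \quad
  \mathbb{E}\lvert X_1 \rvert^{p} < \infty
  \;\Longrightarrow\;
  \frac{S_n - n c}{n^{1/p}} \longrightarrow 0
  \quad \text{a.s. (and hence in probability)},
\]
\[
  \text{where } S_n = X_1 + \cdots + X_n, \qquad
  c = \mathbb{E} X_1 \ \text{for } 1 \le p < 2, \qquad c = 0 \ \text{for } 0 < p < 1 .
\]
```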

  14. Are rheumatoid arthritis patients discernible from other early arthritis patients using 1.5T extremity magnetic resonance imaging? a large cross-sectional study.

    Science.gov (United States)

    Stomp, Wouter; Krabben, Annemarie; van der Heijde, Désirée; Huizinga, Tom W J; Bloem, Johan L; van der Helm-van Mil, Annette H M; Reijnierse, Monique

    2014-08-01

    Magnetic resonance imaging (MRI) is increasingly used in rheumatoid arthritis (RA) research. A European League Against Rheumatism (EULAR) task force recently suggested that MRI can improve the certainty of RA diagnosis. Because this recommendation may reflect a tendency to use MRI in daily practice, thorough studies on the value of MRI are required. Thus far no large studies have evaluated the accuracy of MRI to differentiate early RA from other patients with early arthritis. We performed a large cross-sectional study to determine whether patients who are clinically classified with RA differ in MRI features compared to patients with other diagnoses. In our study, 179 patients presenting with early arthritis (median symptom duration 15.4 weeks) underwent 1.5T extremity MRI of unilateral wrist, metacarpophalangeal, and metatarsophalangeal joints according to our arthritis protocol, the foot without contrast. Images were scored according to OMERACT Rheumatoid Arthritis Magnetic Resonance Imaging Scoring (RAMRIS) by 2 independent readers. Tenosynovitis was also assessed. The main outcome was fulfilling the 1987 American College of Rheumatology (ACR) criteria for RA. Test characteristics and areas under the receiver-operator-characteristic curves (AUC) were evaluated. In subanalyses, the 2010 ACR/EULAR criteria were used as outcome, and analyses were stratified for anticitrullinated protein antibodies (ACPA). The ACR 1987 criteria were fulfilled in 43 patients (24.0%). Patients with RA had higher scores for synovitis, tenosynovitis, and bone marrow edema (BME) than patients without RA (p arthritis patients.

  15. Large scale copy number variation (CNV) at 14q12 is associated with the presence of genomic abnormalities in neoplasia

    Directory of Open Access Journals (Sweden)

    Turley Stefanie

    2006-06-01

    Full Text Available Abstract Background Advances made in the area of microarray comparative genomic hybridization (aCGH) have enabled the interrogation of the entire genome at a previously unattainable resolution. This has led to the discovery of a novel class of alternative entities called large-scale copy number variations (CNVs). These CNVs are often found in regions of closely linked sequence homology called duplicons that are thought to facilitate genomic rearrangements in some classes of neoplasia. Recently, it was proposed that duplicons located near the recurrent translocation break points on chromosomes 9 and 22 in chronic myeloid leukemia (CML) may facilitate this tumor-specific translocation. Furthermore, ~15–20% of CML patients also carry a microdeletion on the derivative 9 chromosome (der(9)) and these patients have a poor prognosis. It has been hypothesised that der(9) deletion patients have increased levels of chromosomal instability. Results In this study aCGH was performed and identified a CNV (RP11-125A5, hereafter called CNV14q12) that was present as a genomic gain or loss in 10% of control DNA samples derived from cytogenetically normal individuals. CNV14q12 was the same clone identified by Iafrate et al. as a CNV. Real-time polymerase chain reaction (Q-PCR) was used to determine the relative frequency of this CNV in DNA from a series of 16 CML patients (both with and without a der(9) deletion) together with DNA derived from 36 paediatric solid tumors in comparison to the incidence of CNV in control DNA. CNV14q12 was present in ~50% of both tumor and CML DNA, but was found in 72% of CML bearing a der(9) microdeletion. Chi square analysis found a statistically significant difference (p ≤ 0.001) between the incidence of this CNV in cancer and normal DNA and a slightly increased incidence in CML with deletions in comparison to those CML without a detectable deletion. Conclusion The increased incidence of CNV14q12 in tumor samples suggests that either

  16. Retrieval of very large numbers of items in the Web of Science: an exercise to develop accurate search strategies

    NARCIS (Netherlands)

    Arencibia-Jorge, R.; Leydesdorff, L.; Chinchilla-Rodríguez, Z.; Rousseau, R.; Paris, S.W.

    2009-01-01

    The Web of Science interface counts at most 100,000 retrieved items from a single query. If the query results in a dataset containing more than 100,000 items the number of retrieved items is indicated as >100,000. The problem studied here is how to find the exact number of items in a query that lead
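
    One generic workaround for such an interface cap, shown purely as an illustration of the counting problem (a synthetic stand-in, not necessarily the search strategy developed in this paper and not a real Web of Science API), is to partition the query into disjoint year ranges until every part falls below the cap and then sum the parts:

```python
# Hedged sketch: recover an exact total when an interface caps reported hit
# counts at 100,000, by recursively splitting the query into year ranges.
# The "database" below is synthetic; no real Web of Science call is made.
import random

CAP = 100_000
random.seed(0)
HITS_PER_YEAR = {year: random.randint(10_000, 90_000) for year in range(1990, 2010)}

def reported_count(year_from, year_to):
    """What a capped interface would show for a year-limited query."""
    true = sum(HITS_PER_YEAR[y] for y in range(year_from, year_to + 1))
    return true if true <= CAP else CAP     # ">100,000" collapses to the cap

def exact_count(year_from, year_to):
    n = reported_count(year_from, year_to)
    if n < CAP or year_from == year_to:
        return n                            # below the cap: this count is exact
    mid = (year_from + year_to) // 2        # otherwise split the range and recurse
    return exact_count(year_from, mid) + exact_count(mid + 1, year_to)

print("reported:", reported_count(1990, 2009))   # capped at 100,000
print("exact   :", exact_count(1990, 2009))      # true total from the parts
print("check   :", sum(HITS_PER_YEAR.values()))
```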

  17. Extremely large electronic anisotropy caused by electronic phase separation in Ca3(Ru0.97Ti0.03)2O7 single crystal

    Science.gov (United States)

    Peng, Jing; Wu, Xiaoshan; Mao, Zhiqiang

    2015-03-01

    Bilayered ruthenate Ca3Ru2O7 exhibits rich electronic and magnetic properties. It orders at 56 K, with ferromagnetic bilayers antiferromagnetically coupled along the c-axis (AFM-a). The AFM transition is closely followed by a first-order metal-insulator (MI) transition at 48 K, where the spin directions switch to the b-axis (AFM-b). While this MI transition is accompanied by the opening of an anisotropic charge gap, small Fermi pockets survive the MI transition, resulting in quasi-2D metallic transport behavior below the transition; this state can be driven into an insulating state with nearest-neighbor AFM (G-AFM) order via Ti doping. Ca3(Ru0.97Ti0.03)2O7 is close to the critical composition for the AFM-b-to-G-AFM phase transition. Our recent studies show that a sample with this composition is characterized by an electronic phase separation between the insulating G-AFM phase (major) and the localized AFM-b phase (minor). The minor AFM-b phase forms a conducting path through electronic percolation within the ab-plane, but not along the c-axis, thus resulting in extremely large electronic anisotropy with ρab/ρc ~ 10^9, which may be the largest among bulk materials.

  18. Analysis of the safety profile of treatment with a large number of shock waves per session in extracorporeal lithotripsy.

    Science.gov (United States)

    Budía Alba, A; López Acón, J D; Polo-Rodrigo, A; Bahílo-Mateu, P; Trassierra-Villa, M; Boronat-Tormo, F

    2015-06-01

    To assess the safety of increasing the number of waves per session in the treatment of urolithiasis using extracorporeal lithotripsy. Prospective, comparative, nonrandomized parallel study of patients with renoureteral lithiasis and an indication for extracorporeal lithotripsy who were consecutively enrolled between 2009 and 2010. We compared group I (160 patients), treated on schedule with a standard number of waves/session (mean 2858.3±302.8) using a Dornier U/15/50 lithotripter, against group II (172 patients), treated with an expanded number of waves/session (mean 6728.9±889.6) using a Siemens Modularis lithotripter. The study variables were age, sex, location, stone size, number of waves/session and total number of waves to resolution, stone-free rate (SFR) and rate of complications (Clavien-Dindo classification). Student's t-test and the chi-squared test were employed for the statistical analysis. The total rate of complications was 11.9% and 10.46% for groups I and II, respectively (P=.39). All complications were minor (Clavien-Dindo grade I). The most common complications were colic pain and hematuria in groups I and II, respectively, with a similar treatment intolerance rate (P>.05). The total number of waves necessary was lower in group II than in group I (P=.001), with SFRs of 96.5% and 71.5%, respectively (P=.001). Treatment with an expanded number of waves per session in extracorporeal lithotripsy does not increase the rate of complications or their severity. However, it could increase the overall effectiveness of the treatment. Copyright © 2014 AEU. Published by Elsevier España, S.L.U. All rights reserved.

  19. Extreme Heat

    Science.gov (United States)

  20. Extreme winter warming events more negatively impact small rather than large soil fauna: shift in community composition explained by traits not taxa

    NARCIS (Netherlands)

    Bokhorst, S.; Phoenix, G.K.; Berke, J.W.; Callaghan, T.V.; Huyer-Brugman, F.; Berg, M.P.

    2012-01-01

    Extreme weather events can have negative impacts on species survival and community structure when surpassing lethal thresholds. Extreme winter warming events in the Arctic rapidly melt snow and expose ecosystems to unseasonably warm air (2–10 °C for 2–14 days) before the return of cold winter conditions.

  1. Distraction osteogenesis for large bone defect of the lower extremity

    Institute of Scientific and Technical Information of China (English)

    王富明; 陈鸿奋; 陈滨; 任高宏; 杨运平; 秦煜; 王钢

    2011-01-01

    Objective To discuss the therapeutic effect of distraction osteogenesis for large bone defects of the lower extremity. Methods From August 2002 to August 2010, 11 patients with large bone defects of the lower extremity were treated with distraction osteogenesis. They were 10 men and one woman, aged from 14 to 53 years (average, 34.5 years). The defect was at the right tibia in 7 cases, the left tibia in 3 cases and the right femur in one case. The lengths of the bone defects ranged from 5 to 15 cm (average, 8.6 cm). Results The patients were followed up for 7 to 48 months, with a mean period of 27.3 months. Treatment was completed in 9 cases, with a mean healing index of 1.99 months/cm. According to the Paley evaluation system, the bony results were excellent in 6 and good in 3 patients; the functional results were excellent in 4, good in 4, and fair in one patient. Two cases were still in the mineralization period. Conclusion Treatment of large bone defects with distraction osteogenesis is simple and can achieve satisfactory therapeutic effects, especially when a monolateral external fixator is used for a simple shaft bone defect.

  2. A few large roads or many small ones? How to accommodate growth in vehicle numbers to minimise impacts on wildlife.

    Directory of Open Access Journals (Sweden)

    Jonathan R Rhodes

    Full Text Available Roads and vehicular traffic are among the most pervasive threats to biodiversity because they fragment habitat, increase mortality and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity.

  3. Types and Numbers of Sensilla on Antennae and Maxillary Palps of Small and Large Houseflies, Musca domestica (Diptera, Muscidae)

    NARCIS (Netherlands)

    Smallegange, Renate C.; Kelling, Frits J.; Den Otter, Cornelis J.

    2008-01-01

    Houseflies, Musca domestica, obtained from a high-larval-density culture were significantly (ca. 1.5 times) smaller than those from a low-larval-density culture. The same held true for their antennae and maxillary palps. Structure, number, and distribution of sensilla on antennae and palps of small

  4. A few large roads or many small ones? How to accommodate growth in vehicle numbers to minimise impacts on wildlife.

    Science.gov (United States)

    Rhodes, Jonathan R; Lunney, Daniel; Callaghan, John; McAlpine, Clive A

    2014-01-01

    Roads and vehicular traffic are among the most pervasive threats to biodiversity because they fragment habitat, increase mortality and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity.

  5. A Study of Low-Reynolds Number Effects in Backward-Facing Step Flow Using Large Eddy Simulations

    DEFF Research Database (Denmark)

    Davidson, Lars; Nielsen, Peter V.

    The flow in ventilated rooms is often not fully turbulent, but in some regions the flow can be laminar. Problems have been encountered when simulating this type of flow using RANS (Reynolds Averaged Navier-Stokes) methods. Restivo carried out experiments on the flow after a backward-facing step ... with a large step.

  6. Mandelbrot's Extremism

    NARCIS (Netherlands)

    Beirlant, J.; Schoutens, W.; Segers, J.J.J.

    2004-01-01

    In the sixties Mandelbrot already showed that extreme price swings are more likely than some of us think or incorporate in our models. A modern toolbox for analyzing such rare events can be found in the field of extreme value theory. At the core of extreme value theory lies the modelling of maxima

  7. Wall-modeled large eddy simulation of turbulent channel flow at high Reynolds number using the von Karman length scale

    Science.gov (United States)

    Xu, Jinglei; Li, Meng; Zhang, Yang; Chen, Longfei

    2016-12-01

    The von Karman length scale is able to reflect the size of the local turbulence structure. However, it is not suitable for the near wall region of wall-bounded flows, for its value is almost infinite there. In the present study, a simple and novel length scale combining the wall distance and the von Karman length scale is proposed by introducing a structural function. The new length scale becomes the von Karman length scale once local unsteady structures are detected. The proposed method is adopted in a series of turbulent channel flows at different Reynolds numbers. The results show that the proposed length scale with the structural function can precisely simulate turbulence at high Reynolds numbers, even with a coarse grid resolution.
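
    For orientation, the von Karman length scale referred to above is commonly defined from the first and second derivatives of the resolved velocity field, and a wall-limited blend with the wall distance can be written schematically as below. This is a generic illustration only; the exact structural function and blending used by the authors may differ.

        $$ L_{vK} = \kappa \left| \frac{\partial U/\partial y}{\partial^2 U/\partial y^2} \right|, \qquad L = \min\left( \kappa\, d_w,\; L_{vK} \right), $$

    where d_w is the wall distance and kappa ≈ 0.41 is the von Karman constant.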

  8. Generalization and Application for a Class of Strong Laws of Large Numbers

    Institute of Scientific and Technical Information of China (English)

    邱育锋

    2012-01-01

    A summability method for a class of random variable sequences is introduced, and necessary and sufficient conditions for a class of strong laws of large numbers are proved. In a certain sense, these results generalize the classical strong laws of large numbers of Kolmogorov and of Marcinkiewicz.
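
    For reference, the two classical results mentioned can be stated in their standard textbook forms (general facts, not taken from the cited paper). For i.i.d. random variables X_1, X_2, ... with partial sums S_n = X_1 + ... + X_n:

        $$ \text{Kolmogorov:} \quad \mathbb{E}|X_1| < \infty \iff \frac{S_n}{n} \xrightarrow{\text{a.s.}} \mathbb{E}X_1, $$
        $$ \text{Marcinkiewicz–Zygmund } (0 < p < 2): \quad \mathbb{E}|X_1|^p < \infty \iff \frac{S_n - nc}{n^{1/p}} \xrightarrow{\text{a.s.}} 0, $$

    where c = E X_1 for 1 <= p < 2 and c = 0 for 0 < p < 1.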

  9. Striving to Build Large Numbers of Long-lasting Railway Passenger Stations with Innovative Concept of Construction

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    New railway passenger stations should be built by adhering to a human-oriented concept and embodying functional, systematic, advanced, cultural and economical features. In accordance with the general requirements of the Railway Eleventh Five-Year Development Plan and of building a harmonious railway set by MOR, the overall arrangement for the planning of railway passenger stations during this period is clearly defined. The design of Beijing South Railway Station fully embodies the principles of these five features. It is a landmark and exemplary project of the Railway Eleventh Five-Year Plan and a classic example of a large integrated traffic junction.

  10. Antiserum against Raoultella terrigena ATCC 33257 identifies a large number of Raoultella and Klebsiella clinical isolates as serotype O12.

    Science.gov (United States)

    Mertens, Katja; Müller-Loennies, Sven; Stengel, Petra; Podschun, Rainer; Hansen, Dennis S; Mamat, Uwe

    2010-12-01

    Raoultella terrigena ATCC 33257, recently reclassified from the genus Klebsiella, is a drinking water isolate and belongs to a large group of non-typeable Klebsiella and Raoultella strains. Using an O-antiserum against a capsule-deficient mutant of this strain, we could show a high prevalence (10.5%) of the R. terrigena O-serotype among non-typeable, clinical Klebsiella and Raoultella isolates. We observed a strong serological cross-reaction with the K. pneumoniae O12 reference strain, indicating that a large percentage of these non-typeable strains may belong to the O12 serotype, although these are currently not detectable by the K. pneumoniae O12 reference antiserum in use. Therefore, we analyzed the O-polysaccharide (O-PS) structure and genetic organization of the wb gene cluster of R. terrigena ATCC 33257, and both confirmed a close relation of R. terrigena and K. pneumoniae O12. The two strains possess an identical O-PS, lipopolysaccharide core structure, and genetic organization of the wb gene cluster. Heterologous expression of the R. terrigena wb gene cluster in Escherichia coli K-12 resulted in the WecA-dependent synthesis of an O-PS reactive with the K. pneumoniae O12 antiserum. The serological data presented here suggest a higher prevalence of the O12-serotype among Klebsiella and Raoultella isolates than generally assumed.

  11. How to implement a quantum algorithm on a large number of qubits by controlling one central qubit

    Science.gov (United States)

    Zagoskin, Alexander; Ashhab, Sahel; Johansson, J. R.; Nori, Franco

    2010-03-01

    It is desirable to minimize the number of control parameters needed to perform a quantum algorithm. We show that, under certain conditions, an entire quantum algorithm can be efficiently implemented by controlling a single central qubit in a quantum computer. We also show that the different system parameters do not need to be designed accurately during fabrication. They can be determined through the response of the central qubit to external driving. Our proposal is well suited for hybrid architectures that combine microscopic and macroscopic qubits. More details can be found in: A.M. Zagoskin, S. Ashhab, J.R. Johansson, F. Nori, Quantum two-level systems in Josephson junctions as naturally formed qubits, Phys. Rev. Lett. 97, 077001 (2006); and S. Ashhab, J.R. Johansson, F. Nori, Rabi oscillations in a qubit coupled to a quantum two-level system, New J. Phys. 8, 103 (2006).

  12. DNA Bipedal Motor Achieves a Large Number of Steps Due to Operation Using Microfluidics-Based Interface.

    Science.gov (United States)

    Tomov, Toma E; Tsukanov, Roman; Glick, Yair; Berger, Yaron; Liber, Miran; Avrahami, Dorit; Gerber, Doron; Nir, Eyal

    2017-04-25

    Realization of bioinspired molecular machines that can perform many and diverse operations in response to external chemical commands is a major goal in nanotechnology, but current molecular machines respond to only a few sequential commands. Lack of effective methods for introduction and removal of command compounds and low efficiencies of the reactions involved are major reasons for the limited performance. We introduce here a user interface based on a microfluidics device and single-molecule fluorescence spectroscopy that allows efficient introduction and removal of chemical commands and enables detailed study of the reaction mechanisms involved in the operation of synthetic molecular machines. The microfluidics provided 64 consecutive DNA strand commands to a DNA-based motor system immobilized inside the microfluidics, driving a bipedal walker to perform 32 steps on a DNA origami track. The microfluidics enabled removal of redundant strands, resulting in a 6-fold increase in processivity relative to an identical motor operated without strand removal and significantly more operations than previously reported for user-controlled DNA nanomachines. In the motor operated without strand removal, redundant strands interfere with motor operation and reduce its performance. The microfluidics also enabled computer control of motor direction and speed. Furthermore, analysis of the reaction kinetics and motor performance in the absence of redundant strands, made possible by the microfluidics, enabled accurate modeling of the walker processivity. This enabled identification of dynamic boundaries and provided an explanation, based on the "trap state" mechanism, for why the motor did not perform an even larger number of steps. This understanding is very important for the development of future motors with significantly improved performance. Our universal interface enables two-way communication between user and molecular machine and, relying on concepts similar to that of solid

  13. Estimating Latent Variable Interactions with the Unconstrained Approach: A Comparison of Methods to Form Product Indicators for Large, Unequal Numbers of Items

    Science.gov (United States)

    Jackman, M. Grace-Anne; Leite, Walter L.; Cochrane, David J.

    2011-01-01

    This Monte Carlo simulation study investigated methods of forming product indicators for the unconstrained approach for latent variable interaction estimation when the exogenous factors are measured by large and unequal numbers of indicators. Product indicators were created based on multiplying parcels of the larger scale by indicators of the…

  15. A software application for comparing large numbers of high resolution MALDI-FTICR MS spectra demonstrated by searching candidate biomarkers for glioma blood vessel formation

    NARCIS (Netherlands)

    M.K. Titulaer (Mark); D.A.M. Mustafa (Dana); I. Siccama (Ivar); M. Konijnenburg (Marco); P.C. Burgers (Peter); A.C. Andeweg (Arno); P.A. Smitt (Peter); J.M. Kros (Johan); T.M. Luider (Theo)

    2008-01-01

    Background: A Java™ application is presented, which compares large numbers (n > 100) of raw FTICR mass spectra from patients and controls. Two peptide profile matrices can be produced simultaneously, one with occurrences of peptide masses in samples and another with the intensity of comm

  16. A dynamic response model for pressure sensors in continuum and high Knudsen number flows with large temperature gradients

    Science.gov (United States)

    Whitmore, Stephen A.; Petersen, Brian J.; Scott, David D.

    1996-01-01

    This paper develops a dynamic model for pressure sensors in continuum and rarefied flows with longitudinal temperature gradients. The model was developed from the unsteady Navier-Stokes momentum, energy, and continuity equations and was linearized using small perturbations. The energy equation was decoupled from momentum and continuity assuming a polytropic flow process. Rarefied flow conditions were accounted for using a slip flow boundary condition at the tubing wall. The equations were radially averaged and solved assuming gas properties remain constant along a small tubing element. This fundamental solution was used as a building block for arbitrary geometries where fluid properties may also vary longitudinally in the tube. The problem was solved recursively starting at the transducer and working upstream in the tube. Dynamic frequency response tests were performed for continuum flow conditions in the presence of temperature gradients. These tests validated the recursive formulation of the model. Model steady-state behavior was analyzed using the final value theorem. Tests were performed for rarefied flow conditions and compared to the model steady-state response to evaluate the regime of applicability. Model comparisons were excellent for Knudsen numbers up to 0.6. Beyond this point, molecular effects caused model analyses to become inaccurate.

  17. Spatial periodicity of galaxy number counts from Fourier analysis of the large scale surveys of galaxies in the Universe

    CERN Document Server

    Hartnett, John G

    2007-01-01

    A Fourier analysis has been carried out on the galaxy number count as a function of redshift, the N-z relation, calculated from redshift data of both the Sloan Digital Sky Survey (SDSS) and the 2dF Galaxy Redshift Survey (2dF GRS). Regardless of the interpretation of those redshifts, the results indicate that galaxies have preferred periodic redshifts. This is the picket-fence structure observed by some. Application of the Hubble law, at low redshift, results in galaxies preferentially located on concentric shells with periodic spacings. This analysis finds significant redshift spacings of Δz = 0.0102, 0.0246, and 0.0448 in the SDSS and strong agreement with the results from the 2dF GRS. The combined results from both surveys indicate regular real-space spacings of 44.0 ± 2.5 Mpc, 102 ± 8 Mpc and 176 ± 29 Mpc, for an assumed Hubble constant H_0 = 72 km s^-1 Mpc^-1. These results indicate that it is a real effect and not some observational artifact. The effect is sign...
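
    The kind of analysis described here, looking for periodicity in the galaxy number count versus redshift, can be sketched as a discrete Fourier transform of a binned N(z) histogram. The snippet below is a generic illustration using NumPy with a placeholder redshift file and an arbitrary low-redshift window; it is not the author's code and it omits the survey-specific selection corrections a real analysis would need.

        # Hedged sketch: search for periodic structure in a binned N(z) histogram.
        # `redshifts.txt` is a hypothetical input; real analyses must correct for
        # the survey selection function before interpreting any peak.
        import numpy as np

        redshifts = np.loadtxt("redshifts.txt")          # hypothetical input file
        counts, edges = np.histogram(redshifts, bins=2048, range=(0.0, 0.35))
        dz = edges[1] - edges[0]

        spectrum = np.abs(np.fft.rfft(counts - counts.mean()))**2
        freqs = np.fft.rfftfreq(counts.size, d=dz)       # cycles per unit redshift

        peak = freqs[np.argmax(spectrum[1:]) + 1]        # skip the zero-frequency bin
        print(f"dominant redshift spacing ~ {1.0 / peak:.4f}")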

  18. Drug discovery using very large numbers of patents. General strategy with extensive use of match and edit operations

    Science.gov (United States)

    Robson, Barry; Li, Jin; Dettinger, Richard; Peters, Amanda; Boyer, Stephen K.

    2011-05-01

    A patent database of 6.7 million compounds generated by a very high performance computer (Blue Gene) requires new techniques for exploitation when extensive use of chemical similarity is involved. Such exploitation includes the taxonomic classification of chemical themes and data mining to assess mutual information between themes and companies. Importantly, we also launch candidates that evolve by "natural selection", based on failure of partial matches against the patent database and on their ability to bind to the protein target appropriately, assessed by simulation on Blue Gene. An unusual feature of our method is that algorithms and workflows rely on dynamic interaction between match-and-edit instructions, which in practice are regular expressions. Similarity testing by these uses SMILES strings and, less frequently, graph or connectivity representations. Examining how this performs at high throughput, we note that chemical similarity and novelty are human concepts that largely have meaning through utility in specific contexts. For some purposes, mutual information involving chemical themes might be a better concept.

  19. The mitochondrial genome of the leaf-cutter ant Atta laevigata: a mitogenome with a large number of intergenic spacers.

    Directory of Open Access Journals (Sweden)

    Cynara de Melo Rodovalho

    Full Text Available In this paper we describe the nearly complete mitochondrial genome of the leaf-cutter ant Atta laevigata, assembled using transcriptomic libraries from Sanger and Illumina next generation sequencing (NGS), and PCR products. This mitogenome was found to be very large (18,729 bp), given the presence of 30 non-coding intergenic spacers (IGS) spanning 3,808 bp. A portion of the putative control region remained unsequenced. The gene content and organization correspond to that inferred for the ancestral pancrustacea, except for two tRNA gene rearrangements that have been described previously in other ants. The IGS were highly variable in length and dispersed through the mitogenome. This pattern was also found for the other hymenopterans, in particular for the monophyletic Apocrita. These spacers with unknown function may be valuable for characterizing genome evolution and distinguishing closely related species and individuals. NGS provided better coverage than Sanger sequencing, especially for tRNA and ribosomal subunit genes, thus facilitating efforts to fill in sequence gaps. The results obtained showed that data from transcriptomic libraries contain valuable information for assembling mitogenomes. The present data also provide a source of molecular markers that will be very important for improving our understanding of genomic evolutionary processes and phylogenetic relationships among hymenopterans.

  20. Impact of an extremely large magnitude volcanic eruption on the global climate and carbon cycle estimated from ensemble Earth System Model simulations

    Directory of Open Access Journals (Sweden)

    J. Segschneider

    2012-07-01

    Full Text Available The response of the global climate-carbon cycle system to an extremely large Northern Hemisphere mid-latitude volcanic eruption is investigated using ensemble integrations with the comprehensive Earth System Model MPI-ESM. The model includes dynamical compartments of the atmosphere and ocean and interactive modules of the terrestrial biosphere as well as ocean biogeochemistry. The MPI-ESM was forced with anomalies of aerosol optical depth and effective radius of aerosol particles corresponding to a super eruption of the Yellowstone volcanic system. The model experiment consists of an ensemble of fifteen model integrations that are started at different pre-ENSO states of a control experiment and run for 200 yr after the volcanic eruption. The climate response to the volcanic eruption is a maximum global monthly mean surface air temperature cooling of 3.8 K for the ensemble mean and from 3.3 K to 4.3 K for individual ensemble members. Atmospheric pCO2 decreases by a maximum of 5 ppm for the ensemble mean and by 3 ppm to 7 ppm for individual ensemble members approximately 6 yr after the eruption. The atmospheric carbon content only very slowly returns to near pre-eruption level at year 200 after the eruption. The ocean takes up carbon shortly after the eruption in response to the cooling, changed wind fields, and ice cover. This physics-driven uptake is weakly counteracted by a reduction of the biological export production mainly in the tropical Pacific. The land vegetation pool shows a distinct loss of carbon in the initial years after the eruption, which has not been present in simulations of smaller scale eruptions. The gain of the soil carbon pool determines the amplitude of the CO2 perturbation and the long-term behaviour of the overall system: an initial gain caused by reduced soil respiration is followed by a rather slow return towards pre-eruption levels. During this phase, the ocean compensates partly for the

  1. Impact of an extremely large magnitude volcanic eruption on the global climate and carbon cycle estimated from ensemble Earth System Model simulations

    Directory of Open Access Journals (Sweden)

    J. Segschneider

    2013-02-01

    Full Text Available The response of the global climate-carbon cycle system to an extremely large Northern Hemisphere mid-latitude volcanic eruption is investigated using ensemble integrations with the comprehensive Earth System Model MPI-ESM. The model includes dynamical compartments of the atmosphere and ocean and interactive modules of the terrestrial biosphere as well as ocean biogeochemistry. The MPI-ESM was forced with anomalies of aerosol optical depth and effective radius of aerosol particles corresponding to a super eruption of the Yellowstone volcanic system. The model experiment consists of an ensemble of fifteen model integrations that are started at different pre-ENSO states of a control experiment and run for 200 years after the volcanic eruption. The climate response to the volcanic eruption is a maximum global monthly mean surface air temperature cooling of 3.8 K for the ensemble mean and from 3.3 K to 4.3 K for individual ensemble members. Atmospheric pCO2 decreases by a maximum of 5 ppm for the ensemble mean and by 3 ppm to 7 ppm for individual ensemble members approximately 6 years after the eruption. The atmospheric carbon content only very slowly returns to near pre-eruption level at year 200 after the eruption. The ocean takes up carbon shortly after the eruption in response to the cooling, changed wind fields and ice cover. This physics-driven uptake is weakly counteracted by a reduction of the biological export production mainly in the tropical Pacific. The land vegetation pool shows a decrease by 4 GtC due to reduced short-wave radiation that has not been present in a smaller scale eruption. The gain of the soil carbon pool determines the amplitude of the CO2 perturbation and the long-term behaviour of the overall system: an initial gain caused by reduced soil respiration is followed by a rather slow return towards pre-eruption levels. During this phase, the ocean compensates partly for the reduced atmospheric

  2. Combined large field-of-view MRA and time-resolved MRA of the lower extremities: Impact of acquisition order on image quality

    Energy Technology Data Exchange (ETDEWEB)

    Riffel, Philipp, E-mail: Philipp.Riffel@umm.de [Institute of Clinical Radiology and Nuclear Medicine, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University (Germany); Haneder, Stefan; Attenberger, Ulrike I. [Institute of Clinical Radiology and Nuclear Medicine, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University (Germany); Brade, Joachim [Department of Medical Statistics and Biomathematics, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University (Germany); Schoenberg, Stefan O.; Michaely, Henrik J. [Institute of Clinical Radiology and Nuclear Medicine, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University (Germany)

    2012-10-15

    Purpose: Different approaches exist for hybrid MRA of the calf station. So far, the order of acquisition of the focused calf MRA and the large field-of-view MRA has not been scientifically evaluated. Therefore, the aim of this study was to evaluate whether the quality of the combined large field-of-view MRA (CTM MR angiography) and time-resolved MRA with stochastic interleaved trajectories (TWIST MRA) depends on the order of acquisition of the two contrast-enhanced studies. Methods: In this retrospective study, 40 consecutive patients (mean age 68.1 ± 8.7 years, 29 male/11 female) who had undergone an MR angiographic protocol that consisted of CTM-MRA (TR/TE, 2.4/1.0 ms; 21° flip angle; isotropic resolution 1.2 mm; gadolinium dose, 0.07 mmol/kg) and TWIST-MRA (TR/TE 2.8/1.1; 20° flip angle; isotropic resolution 1.1 mm; temporal resolution 5.5 s; gadolinium dose, 0.03 mmol/kg) were included. In the first group (group 1) TWIST-MRA of the calf station was performed 1–2 min after CTM-MRA. In the second group (group 2) CTM-MRA was performed 1–2 min after TWIST-MRA of the calf station. The image quality of CTM-MRA and TWIST-MRA was evaluated by two independent radiologists in consensus according to a 4-point Likert-like rating scale assessing overall image quality on a segmental basis. Venous overlay was assessed per examination. Results: In the CTM-MRA, 1360 segments were included in the assessment of image quality. CTM-MRA was diagnostic in 95% (1289/1360) of segments. There was a significant difference (p < 0.0001) between both groups with regard to the number of segments rated as excellent and moderate. The image quality was rated as excellent in 80% (514/640 segments) in group 1 and in 67% (432/649) in group 2, respectively (p < 0.0001). In contrast, the image quality was rated as moderate in 5% (33/640) in group 1 and in 19% (121/649) in group 2, respectively (p < 0.0001). The venous overlay was disturbing in 10% in group 1 and 20% in group 2.

  3. The application of the boundary element method in BEM++ to small extreme Chebyshev ice particles and the remote detection of the ice crystal number concentration of small atmospheric ice particles

    Science.gov (United States)

    Baran, Anthony J.; Groth, Samuel P.

    2017-09-01

    The measurement of the shape and size distributions of small atmospheric ice particles (i.e. less than about 100 μm in size) is still an unresolved problem in atmospheric physics. This paper is composed of two parts, each addressing one of these measurements. In the first part, we report on an application of a new open-source electromagnetic boundary element method (BEM) called "BEM++" to characterise the shape of small ice particles through the simulation of the two-dimensional (2D) light scattering patterns of extreme Chebyshev ice particles. Previous electromagnetic studies of Chebyshev particles have concentrated upon high Chebyshev orders, but with low Chebyshev deformation parameters. Here, we extend such studies by concentrating on the 2D light scattering properties of Chebyshev particles with extreme deformation parameters, up to 0.5, and with Chebyshev orders up to 16, at a size parameter of 15, in a fixed orientation. The results demonstrate the applicability of BEM++ to the study of the electromagnetic scattering properties of extreme particles and the usefulness of measuring the light scattering patterns of particles in 2D to mimic the scattering behaviours of highly irregular particles, such as dendritic atmospheric ice or hazardous biological and/or aerosol particles. In the second part, we demonstrate the potential application of remotely sensed very-high-resolution brightness temperature measurements of optically thin cirrus between wavelengths of about 8.0 and 12.0 μm to resolve the current atmospheric physics issue of determining the number concentration of small ice particles with size less than about 100 μm.

  4. Equivalent conditions of complete convergence for m-dimensional products of iid random variables and application to strong law of large numbers

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Under the very weak condition 0 < r(t) ↑ ∞ as t → ∞, we obtain a series of equivalent conditions of complete convergence for maxima of m-dimensional products of iid random variables, which provide a useful tool for studying this class of questions. Some results on the strong law of large numbers are given; our results are much stronger than the corresponding result of Gadidov.

  5. Equivalent conditions of complete convergence for m-dimensional products of iid random variables and application to strong law of large numbers

    Institute of Scientific and Technical Information of China (English)

    王岳宝; 苏淳; 梁汉营; 成凤旸

    2000-01-01

    Under the very weak condition 0 < r(t) ↑ ∞ as t → ∞, we obtain a series of equivalent conditions of complete convergence for maxima of m-dimensional products of iid random variables, which provide a useful tool for studying this class of questions. Some results on the strong law of large numbers are given; our results are much stronger than the corresponding result of Gadidov.

  6. Extreme cosmos

    CERN Document Server

    Gaensler, Bryan

    2011-01-01

    The universe is all about extremes. Space has a temperature 270°C below freezing. Stars die in catastrophic supernova explosions a billion times brighter than the Sun. A black hole can generate 10 million trillion volts of electricity. And hypergiants are stars 2 billion kilometres across, larger than the orbit of Jupiter. Extreme Cosmos provides a stunning new view of the way the Universe works, seen through the lens of extremes: the fastest, hottest, heaviest, brightest, oldest, densest and even the loudest. This is an astronomy book that not only offers amazing facts and figures but also re

  7. Low frequency of broadly neutralizing HIV antibodies during chronic infection even in quaternary epitope targeting antibodies containing large numbers of somatic mutations.

    Science.gov (United States)

    Hicar, Mark D; Chen, Xuemin; Kalams, Spyros A; Sojar, Hakimuddin; Landucci, Gary; Forthal, Donald N; Spearman, Paul; Crowe, James E

    2016-02-01

    Neutralizing antibodies (Abs) are thought to be a critical component of an appropriate HIV vaccine response. It has been proposed that Abs recognizing conformationally dependent quaternary epitopes on the HIV envelope (Env) trimer may be necessary to neutralize diverse HIV strains. A number of recently described broadly neutralizing monoclonal Abs (mAbs) recognize complex and quaternary epitopes. Generally, many such Abs exhibit extensive numbers of somatic mutations and unique structural characteristics. We sought to characterize the native antibody (Ab) response against circulating HIV focusing on such conformational responses, without a prior selection based on neutralization. Using a capture system based on VLPs incorporating cleaved envelope protein, we identified a selection of B cells that produce quaternary epitope targeting Abs (QtAbs). Similar to a number of broadly neutralizing Abs, the Ab genes encoding these QtAbs showed extensive numbers of somatic mutations. However, when expressed as recombinant molecules, these Abs failed to neutralize virus or mediate ADCVI activity. Molecular analysis showed unusually high numbers of mutations in the Ab heavy chain framework 3 region of the variable genes. The analysis suggests that large numbers of somatic mutations occur in Ab genes encoding HIV Abs in chronically infected individuals in a non-directed, stochastic, manner.

  8. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory

    Science.gov (United States)

    Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.

    2017-09-01

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.
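
    The solver described above is a custom block Jacobi-Davidson variant and is not reproduced here. As a rough, generic illustration of extracting many extremal eigenpairs of a large sparse symmetric matrix with a blocked iterative method, the sketch below uses SciPy's LOBPCG solver purely as a stand-in; the matrix, its size, and the number of requested eigenpairs are hypothetical.

        # Hedged sketch: compute many extremal eigenpairs of a sparse symmetric
        # matrix with a blocked iterative solver. LOBPCG is used here only as a
        # generic stand-in for the block Jacobi-Davidson variant in the abstract.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import lobpcg

        n, nev = 20000, 200                       # hypothetical problem size
        diag = np.linspace(0.1, 100.0, n)
        A = sp.diags([diag, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csr")

        rng = np.random.default_rng(0)
        X = rng.standard_normal((n, nev))         # block of starting vectors
        vals, vecs = lobpcg(A, X, largest=False, tol=1e-6, maxiter=500)
        print(vals[:5])                           # a few of the smallest eigenvalues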

  9. Ground pressure law of fully mechanized large cutting height face in extremely-soft thick seam and stability control in tip-to-face area

    Institute of Scientific and Technical Information of China (English)

    LIU Chang-you; CHANG Xing-min; HUANG Bing-xiang; WEI Min-tao; WANG Jun; WANG Jian-shu

    2007-01-01

    When stepped coal-getting technology was applied to a large-cutting-height working face, field observations were used to analyze, under the inherent conditions of the extremely soft thick seam mined at Liangbei Mine, the breakage and activity law of the roof strata in the working face, the load-bearing law of its supports, and the instability characteristics of coal or rock in the tip-to-face area. The major findings are as follows. The roof pressure intensity in large-cutting-height mining of an extremely soft thick seam is, as a whole, greater than in slicing or sublevel-caving mining. However, the greater crushing deformation of the coal side makes the roof pressure intensity in the middle of the working face comparable to that in sublevel caving. Roof breakage in the middle of the working face has a smaller dynamic load effect than roof breakage at the two ends of the face. The breakage and instability of the roof at distinct intervals produce successive loading episodes on the supports. Under extremely soft thick seam conditions, the ratio of resistance increment of the supports at the two ends of the working face is obviously greater than that of the supports in the middle. Most sloughing of the coal side is triangular slope sloughing caused by shear slipping in large-cutting-height mining of an extremely soft thick seam. Ultra-high mining is the major cause of roof falls. Instability of coal or rock in the tip-to-face area can be controlled effectively with methods such as increasing the setting load of the supports, mining along the roof while reinforcing the floor, and protecting the immediate roof in time.

  10. Multiple-relaxation-time lattice Boltzmann model for incompressible miscible flow with large viscosity ratio and high Péclet number

    Science.gov (United States)

    Meng, Xuhui; Guo, Zhaoli

    2015-10-01

    A lattice Boltzmann model with a multiple-relaxation-time (MRT) collision operator is proposed for incompressible miscible flow with a large viscosity ratio as well as a high Péclet number in this paper. The equilibria in the present model are motivated by the lattice kinetic scheme previously developed by Inamuro et al. [Philos. Trans. R. Soc. London, Ser. A 360, 477 (2002), 10.1098/rsta.2001.0942]. The fluid viscosity and diffusion coefficient depend on both the corresponding relaxation times and additional adjustable parameters in this model. As a result, the corresponding relaxation times can be adjusted in proper ranges to enhance the performance of the model. Numerical validations of the Poiseuille flow and a diffusion-reaction problem demonstrate that the proposed model has second-order accuracy in space. Thereafter, the model is used to simulate flow through a porous medium, and the results show that the proposed model has the advantage to obtain a viscosity-independent permeability, which makes it a robust method for simulating flow in porous media. Finally, a set of simulations are conducted on the viscous miscible displacement between two parallel plates. The results reveal that the present model can be used to simulate, to a high level of accuracy, flows with large viscosity ratios and/or high Péclet numbers. Moreover, the present model is shown to provide superior stability in the limit of high kinematic viscosity. In summary, the numerical results indicate that the present lattice Boltzmann model is an ideal numerical tool for simulating flow with a large viscosity ratio and/or a high Péclet number.
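
    For context, a multiple-relaxation-time (MRT) lattice Boltzmann update is usually written in the generic form below; this is the textbook structure of an MRT collision step, not the specific equilibria, relaxation rates, or forcing of the model proposed in this work:

        $$ f_i(\mathbf{x} + \mathbf{e}_i \Delta t,\; t + \Delta t) = f_i(\mathbf{x}, t) - \left[ \mathbf{M}^{-1} \mathbf{S} \left( \mathbf{m} - \mathbf{m}^{eq} \right) \right]_i + F_i, $$

    where m = M f are the moments of the distribution functions f_i, M is the moment-transform matrix, S is the diagonal matrix of relaxation rates (whose entries set the viscosity and the diffusion coefficient), and F_i is a forcing term.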

  11. Targeted array comparative genomic hybridization--a new diagnostic tool for the detection of large copy number variations in nemaline myopathy-causing genes.

    Science.gov (United States)

    Kiiski, K; Laari, L; Lehtokari, V-L; Lunkka-Hytönen, M; Angelini, C; Petty, R; Hackman, P; Wallgren-Pettersson, C; Pelin, K

    2013-01-01

    Nemaline myopathy (NM) constitutes a heterogeneous group of congenital myopathies. Mutations in the nebulin gene (NEB) are the main cause of recessively inherited NM. NEB is one of the largest genes in humans. To date, 68 NEB mutations, mainly small deletions or point mutations, have been published. The only large mutation characterized is the 2.5 kb deletion of exon 55 in the Ashkenazi Jewish population. To investigate any copy number variations in this enormous gene, we designed a novel custom comparative genomic hybridization microarray, NM-CGH, targeted towards the seven genes known to cause NM. During the validation of the NM-CGH array we identified two novel deletions in two different families. The first is the largest deletion characterized in NEB to date (∼53 kb), encompassing 24 exons. The second deletion (1 kb) covers two exons. In both families, the copy number change was the second mutation to be characterized and was shown to have been inherited from one of the healthy carrier parents. In addition to these novel mutations, copy number variation was identified in four samples from three families in the triplicate region of NEB. We conclude that this method appears promising for the detection of copy number variations in NEB. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. ProReg XL Tool: an easy-to-use computer tool suite for rapidly regrouping a large number of identical electrophoretic profiles.

    Science.gov (United States)

    Massias, Bastien; Urdaci, Maria C

    2009-05-01

    The ProReg XL Tool (Profile Regrouping Excel Tool) is a new tool suite designed to rapidly regroup a large number of identical electrophoretic profiles. The tool suite is coded in Visual Basic for Applications for Microsoft Excel, and thus requires this spreadsheet software to operate. It was designed for use with a new strategy for screening clones from an rrs (16S rDNA) clone library, but it may also be helpful in other electrophoretic applications. ProReg XL Tool is organized in different steps in which the user can, in addition to regrouping electrophoretic profiles, control gel quality, determine signal attenuation, and draw pie charts.
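
    The core regrouping operation can be illustrated independently of Excel/VBA: identical profiles are collected together by using the profile pattern itself as a key. The sketch below is a generic Python illustration, with profiles represented as tuples of hypothetical band positions rather than the gel data the tool actually handles.

        # Hedged sketch: regroup identical electrophoretic profiles by using the
        # band pattern itself as a dictionary key. Band positions are hypothetical.
        from collections import defaultdict

        profiles = {
            "clone_01": (120, 340, 510),     # hypothetical band positions
            "clone_02": (120, 340, 510),
            "clone_03": (95, 340, 700),
        }

        groups = defaultdict(list)
        for clone, pattern in profiles.items():
            groups[pattern].append(clone)

        for pattern, clones in groups.items():
            print(pattern, "->", clones)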

  13. Influence of Extrinsic Information Scaling Coefficient on Double-Iterative Decoding Algorithm for Space-Time Turbo Codes with Large Number of Antennas

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2011-02-01

    Full Text Available This paper analyzes the influence of the extrinsic information scaling coefficient on a double-iterative decoding algorithm for space-time turbo codes with a large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and the extrinsic information used at the input of the interference-canceling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, when one or more iterations are used to cancel the spatial interference.
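
    For orientation, extrinsic-information scaling in an iterative (turbo) decoder simply multiplies the extrinsic log-likelihood ratios exchanged between the component decoders by a constant factor; the notation below is generic, with s standing for the scaling coefficient studied in the paper (0.7 or 0.75):

        $$ L_e^{scaled}(u_k) = s \cdot L_e(u_k), \qquad 0 < s \le 1, $$

    where L_e(u_k) is the extrinsic log-likelihood ratio for information bit u_k produced by one component max-log-APP decoder and passed, after interleaving, to the other decoder (and, in the scheme described above, also to the interference-canceling block).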

  14. Lack of significant skin inflammation during elimination by apoptosis of large numbers of mouse cutaneous mast cells after cessation of treatment with stem cell factor.

    Science.gov (United States)

    Maurer, Marcus; Galli, Stephen J

    2004-12-01

    We previously reported that subcutaneous (s.c.) administration of stem cell factor (SCF), the ligand for the c-Kit receptor, to the back skin of mice promotes marked local increases in the numbers of cutaneous mast cells (MCs), and that cessation of SCF treatment results in the rapid reduction of cutaneous MC populations by apoptosis. In the present study, we used the 125I-fibrin deposition assay, a very sensitive method for quantifying increased vascular permeability, to assess whether the clearance of large numbers of apoptotic MCs is associated with significant cutaneous inflammation. The s.c. injection of rrSCF164 (30 or 100 microg/kg/day) or rrSCF164-peg (polyethylene glycol-treated SCF, 30 or 100 microg/kg/day) for 23 days increased the numbers of dermal MCs at skin injection sites from 5.1 ± 0.7 MCs/mm2 to 36.4 ± 4.1, 34.7 ± 9.7, 52.5 ± 5.8, and 545 ± 97 MCs/mm2, respectively. In contrast, MC numbers were markedly lower in mice that had been treated with SCF for 21 days, followed by 2 days of injection with the vehicle alone. Notably, when tested during the period of rapid reduction of skin MCs, 125I-fibrin deposition in the skin was very similar to that in mice receiving continuous treatment with SCF or vehicle. We conclude that the rapid elimination of even very large populations of MCs by apoptosis, which also results in the clearance of the considerable quantities of proinflammatory products stored by these cells, does not lead to significant local cutaneous inflammatory responses.

  15. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For a Compton camera, especially one with a large number of readout channels, image reconstruction presents a big challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources, and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
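
    As background, the list-mode OSEM update underlying this kind of reconstruction is commonly written as follows; this is the generic form of the algorithm with notation chosen here, not an excerpt from the paper:

        $$ \lambda_j^{(n,b+1)} = \frac{\lambda_j^{(n,b)}}{s_j} \sum_{k \in S_b} \frac{a_{kj}}{\sum_{j'} a_{kj'}\, \lambda_{j'}^{(n,b)}}, $$

    where lambda_j is the estimated activity in voxel j, a_kj is the probability that an emission from voxel j produces list-mode event k, s_j = sum_k a_kj is the sensitivity of voxel j, and S_b is the b-th subset of list-mode events.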

  16. Use of Two-Body Correlated Basis Functions with van der Waals Interaction to Study the Shape-Independent Approximation for a Large Number of Trapped Interacting Bosons

    Science.gov (United States)

    Lekala, M. L.; Chakrabarti, B.; Das, T. K.; Rampho, G. J.; Sofianos, S. A.; Adam, R. M.; Haldar, S. K.

    2017-01-01

    We study the ground-state and the low-lying excitations of a trapped Bose gas in an isotropic harmonic potential for very small (~3) to very large (~10^7) particle numbers. We use the two-body correlated basis functions and the shape-dependent van der Waals interaction in our many-body calculations. We present an exhaustive study of the effect of inter-atomic correlations and the accuracy of the mean-field equations considering a wide range of particle numbers. We calculate the ground-state energy and the one-body density for different values of the van der Waals parameter C6. We compare our results with those of the modified Gross-Pitaevskii equation, the correlated Hartree hypernetted-chain equations (which also utilize the two-body correlated basis functions), as well as of the diffusion Monte Carlo for hard sphere interactions. We observe the effect of the attractive tail of the van der Waals potential in the calculations of the one-body density over the truly repulsive zero-range potential as used in the Gross-Pitaevskii equation and discuss the finite-size effects. We also present the low-lying collective excitations, which are well described by a hydrodynamic model in the large particle limit.

  17. Global repeat discovery and estimation of genomic copy number in a large, complex genome using a high-throughput 454 sequence survey

    Directory of Open Access Journals (Sweden)

    Varala Kranthi

    2007-05-01

    Full Text Available Abstract Background Extensive computational and database tools are available to mine genomic and genetic databases for model organisms, but little genomic data is available for many species of ecological or agricultural significance, especially those with large genomes. Genome surveys using conventional sequencing techniques are powerful, particularly for detecting sequences present in many copies per genome. However these methods are time-consuming and have potential drawbacks. High throughput 454 sequencing provides an alternative method by which much information can be gained quickly and cheaply from high-coverage surveys of genomic DNA. Results We sequenced 78 million base-pairs of randomly sheared soybean DNA which passed our quality criteria. Computational analysis of the survey sequences provided global information on the abundant repetitive sequences in soybean. The sequence was used to determine the copy number across regions of large genomic clones or contigs and discover higher-order structures within satellite repeats. We have created an annotated, online database of sequences present in multiple copies in the soybean genome. The low bias of pyrosequencing against repeat sequences is demonstrated by the overall composition of the survey data, which matches well with past estimates of repetitive DNA content obtained by DNA re-association kinetics (Cot analysis). Conclusion This approach provides a potential aid to conventional or shotgun genome assembly, by allowing rapid assessment of copy number in any clone or clone-end sequence. In addition, we show that partial sequencing can provide access to partial protein-coding sequences.
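
    The copy-number estimate described here is essentially a depth-of-coverage calculation: the amount of survey sequence matching a given clone, normalized by what a single-copy region of the same length would be expected to attract. The sketch below illustrates that arithmetic with hypothetical numbers (the genome size is an assumed round value for soybean); it is not the pipeline used in the study.

        # Hedged sketch: estimate genomic copy number of a clone from the depth
        # of a random survey, using hypothetical numbers (not the study's data).
        survey_bp = 78_000_000          # total survey sequence, base pairs
        genome_bp = 1_100_000_000       # approximate soybean genome size (assumed)
        clone_bp = 100_000              # length of the genomic clone of interest
        hits_bp = 35_000                # survey bases matching the clone

        expected_single_copy = survey_bp * clone_bp / genome_bp
        copy_number = hits_bp / expected_single_copy
        print(f"estimated copy number ~ {copy_number:.1f}")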

  18. Comparative efficacy of tulathromycin versus a combination of florfenicol-oxytetracycline in the treatment of undifferentiated respiratory disease in large numbers of sheep

    Directory of Open Access Journals (Sweden)

    Mohsen Champour

    2015-09-01

    Full Text Available The objective of this study was to compare the efficacy of tulathromycin (TUL) with a combination of florfenicol (FFC) and long-acting oxytetracycline (LAOTC) in the treatment of naturally occurring undifferentiated respiratory diseases in large numbers of sheep. In this study, seven natural outbreaks of sheep pneumonia in Garmsar, Iran were considered. From these outbreaks, 400 sheep exhibiting the signs of respiratory diseases were selected, and the sheep were randomly divided into two equal groups. The first group was treated with a single injection of TUL (dosed at 2.5 mg/kg body weight), and the second group was treated with concurrent injections of FFC (dosed at 40 mg/kg bwt) and LAOTC (dosed at 20 mg/kg bwt). In the first group, 186 (93%) sheep were found to be cured 5 days after the injection, and 14 (7%) sheep needed further treatment, of which 6 (3%) were cured, and 8 (4%) died. In the second group, 172 (86%) sheep were cured after the injections, but 28 (14%) sheep needed further treatment, of which 10 (5%) were cured, and 18 (9%) died. This study revealed that TUL was more efficacious as compared to the combined treatment using FFC and LAOTC. As the first report, this field trial describes the successful treatment of undifferentiated respiratory diseases in large numbers of sheep. Thus, TUL can be used for the treatment of undifferentiated respiratory diseases in sheep. [J Adv Vet Anim Res 2015; 2(3): 279-284]

  19. Study of 3-D Dynamic Roughness Effects on Flow Over a NACA 0012 Airfoil Using Large Eddy Simulations at Low Reynolds Numbers

    Science.gov (United States)

    Guda, Venkata Subba Sai Satish

    There have been several advancements in the aerospace industry in areas such as aerodynamics, design, controls, and propulsion, all aimed at one common goal: increasing efficiency, i.e., greater range and scope of operation with lower fuel consumption. Several methods of flow control have been tried; some were successful, some failed, and many were deemed impractical. The low Reynolds number regime of 10^4 - 10^5 is a very interesting range. Flow physics in this range differ considerably from those at higher Reynolds numbers. Mid- and high-altitude UAVs, MAVs, sailplanes, jet engine fan blades, inboard helicopter rotor blades and wind turbine rotors are some of the aerodynamic applications that fall in this range. The current study uses dynamic roughness as a means of flow control over a NACA 0012 airfoil at low Reynolds numbers. Dynamic 3-D surface roughness elements placed near the leading edge of the airfoil aim at increasing efficiency by suppressing the effects of leading edge separation, such as leading edge stall, by delaying or entirely eliminating flow separation. A numerical study of this method has been carried out by means of large eddy simulation (LES), a mathematical model for turbulence in computational fluid dynamics, owing to the highly unsteady nature of the flow. A user-defined function has been developed for the 3-D dynamic roughness element motion. Results from the simulations have been compared to experimental PIV data. The large eddy simulations capture the leading edge stall relatively well. For the clean cases, i.e. with the DR not actuated, the LES was able to reproduce experimental results in a reasonable fashion. However, the DR simulations fail to reattach the flow and suppress flow separation, in contrast to experiments. Several novel techniques of grid design and hump creation are introduced through this study.
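
    The user-defined function itself is not given in the abstract; as a purely hypothetical analogue, a dynamic roughness element could be prescribed as a sinusoidally oscillating Gaussian bump near the leading edge, as in the Python sketch below (amplitude, frequency and placement are invented values, not those of the study).

```python
import numpy as np

def hump_height(x, z, t, x0=0.05, z0=0.0, sigma=0.01, amplitude=5e-4, freq=100.0):
    """Wall-normal displacement of one dynamic-roughness element (illustrative only).

    A Gaussian bump centred at (x0, z0) on the surface oscillates in time:
        y(x, z, t) = A * sin(2*pi*f*t) * exp(-((x-x0)^2 + (z-z0)^2) / (2*sigma^2))
    All lengths are in chord units.
    """
    r2 = (x - x0) ** 2 + (z - z0) ** 2
    return amplitude * np.sin(2.0 * np.pi * freq * t) * np.exp(-r2 / (2.0 * sigma ** 2))

# Example: displacement at the bump centre a quarter period into the cycle.
print(hump_height(0.05, 0.0, t=0.0025))  # ~ +5e-4 (peak amplitude)
```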

  20. Cascaded lattice Boltzmann method with improved forcing scheme for large-density-ratio multiphase flow at high Reynolds and Weber numbers.

    Science.gov (United States)

    Lycett-Brown, Daniel; Luo, Kai H

    2016-11-01

    A recently developed forcing scheme has allowed the pseudopotential multiphase lattice Boltzmann method to correctly reproduce coexistence curves, while expanding its range to lower surface tensions and arbitrarily high density ratios [Lycett-Brown and Luo, Phys. Rev. E 91, 023305 (2015), 10.1103/PhysRevE.91.023305]. Here, a third-order Chapman-Enskog analysis is used to extend this result from the single-relaxation-time collision operator, to a multiple-relaxation-time cascaded collision operator, whose additional relaxation rates allow a significant increase in stability. Numerical results confirm that the proposed scheme enables almost independent control of density ratio, surface tension, interface width, viscosity, and the additional relaxation rates of the cascaded collision operator. This allows simulation of large density ratio flows at simultaneously high Reynolds and Weber numbers, which is demonstrated through binary collisions of water droplets in air (with density ratio up to 1000, Reynolds number 6200 and Weber number 440). This model represents a significant improvement in multiphase flow simulation by the pseudopotential lattice Boltzmann method in which real-world parameters are finally achievable.
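
    The authors' improved forcing scheme and cascaded collision operator are not reproduced here, but the underlying Shan-Chen-style pseudopotential interaction force that this class of models builds on can be sketched for a D2Q9 lattice as follows (a generic textbook form with an assumed coupling constant `G` and pseudopotential `psi`, not the paper's scheme).

```python
import numpy as np

# D2Q9 lattice velocities and weights
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])

def pseudopotential_force(rho, G=-5.0, rho0=1.0):
    """Shan-Chen interaction force F = -G * psi(x) * sum_i w_i * psi(x + e_i) * e_i.

    rho: 2-D density field (periodic boundaries assumed).
    psi(rho) = rho0 * (1 - exp(-rho/rho0)) is one common pseudopotential choice.
    """
    psi = rho0 * (1.0 - np.exp(-rho / rho0))
    fx = np.zeros_like(rho)
    fy = np.zeros_like(rho)
    for (ex, ey), w in zip(E, W):
        shifted = np.roll(np.roll(psi, -ex, axis=0), -ey, axis=1)  # psi at x + e_i
        fx += w * shifted * ex
        fy += w * shifted * ey
    return -G * psi * fx, -G * psi * fy
```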

  2. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System: Outage-Limited Scenario

    KAUST Repository

    Makki, Behrooz

    2016-03-22

    This paper investigates the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. Our results are obtained for different fading conditions, and the effect of the power amplifier efficiency/feedback error probability on the performance of the MIMO-HARQ systems is analyzed. Then, we use recent results on the achievable rates of finite block-length codes to analyze the effect of codeword length on the system performance. Moreover, we derive closed-form expressions for the asymptotic performance of MIMO-HARQ systems when the number of antennas increases. Our analytical and numerical results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 1972-2012 IEEE.
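
    For intuition only, the minimum symmetric antenna count meeting an outage target can be estimated by Monte Carlo simulation of an open-loop MIMO link over i.i.d. Rayleigh fading; the sketch below uses assumed values for the SNR, rate target and search range, and is not the paper's analytical method.

```python
import numpy as np

def outage_probability(nt, nr, snr_db=10.0, rate_bps_hz=4.0, trials=20000, rng=None):
    """Estimate P(log2 det(I + (SNR/nt) * H H^H) < rate) for i.i.d. Rayleigh fading."""
    if rng is None:
        rng = np.random.default_rng(0)
    snr = 10 ** (snr_db / 10)
    outages = 0
    for _ in range(trials):
        h = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        capacity = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * h @ h.conj().T).real)
        outages += capacity < rate_bps_hz
    return outages / trials

def min_antennas(target_outage=1e-2, max_n=16):
    """Smallest symmetric antenna count n = nt = nr meeting the outage target."""
    for n in range(1, max_n + 1):
        if outage_probability(n, n) <= target_outage:
            return n
    return None

print(min_antennas())
```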

  3. Emission rate estimates determined for a large number of volatile organic compounds using airborne measurements for the oil sands facilities in Alberta, Canada

    Science.gov (United States)

    Li, S. M.; Leithead, A.; Moussa, S.; Liggio, J.; Moran, M. D.; Wang, D. K.; Hayden, K. L.; Darlington, A.; Gordon, M.; Staebler, R. M.; Makar, P.; Stroud, C.; McLaren, R.; Liu, P.; O'brien, J.; Mittermeier, R. L.; Zhang, J.; Marson, G.; Cober, S.; Wolde, M.; Wentzell, J.

    2016-12-01

    In August and September of 2013, aircraft-based measurements of air pollutants were made during a field campaign in support of the Joint Canada-Alberta Implementation Plan on Oil Sands Monitoring in Alberta, Canada. Volatile organic compounds (VOCs) were measured continuously at 2-5 second resolution during the flights with a high resolution proton-transfer-reaction time-of-flight mass spectrometer (PTR-ToF-MS), and in 680 stainless steel canisters sampled discretely during flights over four large oil sands surface mining facilities and analyzed offline by GC-MS and GC-FID. The Top-down Emission Rate Retrieval Algorithm (TERRA), developed at Environment and Climate Change Canada (ECCC), was applied to the aromatic and oxygenated VOC results from the PTR-ToF-MS to determine their emission rates. Additional VOC species determined in the canisters were compared with the PTR-ToF-MS VOC species to determine their emission ratios. Using these emission ratios and the emission rates for the aromatics and oxygenated VOCs, individual emission rates for 73-90 VOCs were determined for each of the four major oil sands facilities. These are the first independently determined emission rates for such a large number of VOCs obtained at the same time for large industrial complexes such as the oil sands mining facilities. These measurement-based emission data will be important for strengthening VOC emission reporting.
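
    The scaling step described above is simple arithmetic: the emission rate of a canister-only species equals its emission ratio to a PTR-ToF-MS reference species multiplied by that reference's TERRA-derived emission rate. A minimal sketch with placeholder species and numbers (not campaign values):

```python
# Emission rate of a reference VOC retrieved with TERRA (hypothetical value, kg/h)
reference_rates = {"toluene": 120.0}

# Emission ratios (canister species : reference species), hypothetical values
emission_ratios = {
    ("n-pentane", "toluene"): 2.4,
    ("isoprene", "toluene"): 0.05,
}

def scaled_emission_rate(species, reference, ratios=emission_ratios, rates=reference_rates):
    """Emission rate of `species` = ratio(species/reference) * rate(reference)."""
    return ratios[(species, reference)] * rates[reference]

print(scaled_emission_rate("n-pentane", "toluene"))  # 288.0 kg/h with the numbers above
```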

  4. A probable vacuum state containing a large number of hydrogen atom of excited state or ground state K, Rb or Cs atom

    CERN Document Server

    You, Pei-Lin

    2008-01-01

    The linear Stark effect shows that the first excited state of the hydrogen atom has a large permanent electric dipole moment (EDM), d(H)=3ea0 (a0 is the Bohr radius). Using special capacitors, our experiments found that the ground-state K, Rb or Cs atom is a polar atom with a large EDM of the order of ea0, like the hydrogen atom in its excited state. Their capacitance (C) at different voltages (V) was measured. The C-V curve shows that saturation polarization of K, Rb or Cs vapor is observed when the field E exceeds 10^5 V/m. When saturation polarization appears, nearly all K, Rb or Cs atoms (more than 98 percent) turn toward the direction of the field, and C is approximately equal to Co (Co is the vacuum capacitance), i.e. their dielectric constant is nearly the same as that of vacuum! K, Rb or Cs vapor then exists in the lowest energy state, so we see a vacuum state containing a large number of atoms! Because saturation polarization of excited-state hydrogen vapor appears easily, we conjecture that ...

  5. Long-term changes in nutrients and mussel stocks are related to numbers of breeding eiders Somateria mollissima at a large Baltic colony.

    Directory of Open Access Journals (Sweden)

    Karsten Laursen

    Full Text Available BACKGROUND: The Baltic/Wadden Sea eider Somateria mollissima flyway population is decreasing, and this trend is also reflected in the large eider colony at Christiansø, situated in the Baltic Sea. This colony showed a 15-fold increase from 1925 until the mid-1990s, followed by a rapid decline in recent years, although the causes of this trend remain unknown. Most birds from the colony winter in the Wadden Sea, for which environmental data and information on the stock size of their main diet, the mussel Mytilus edulis, exist. We hypothesised that changes in nutrients and water temperature in the Wadden Sea affected the ecosystem and hence the size of the mussel stocks, the principal food item for eiders, thereby influencing the number of breeding eiders in the Christiansø colony. METHODOLOGY/PRINCIPAL FINDINGS: A positive relationship between the amount of fertilizer used by farmers and the concentration of phosphorus in the Wadden Sea (with a time lag of one year) allowed analysis of the predictions concerning effects of nutrients for the period 1925-2010. (1) Increasing amounts of fertilizer used in agriculture increased the amount of nutrients in the marine environment, thereby increasing the mussel stocks in the Wadden Sea. (2) The number of eiders at Christiansø increased when the amount of fertilizer increased. Finally, (3) the number of eiders in the colony at Christiansø increased with the size of the mussel stocks in the Wadden Sea. CONCLUSIONS/SIGNIFICANCE: The trend in the number of eiders at Christiansø is representative of the entire flyway population, and since nutrient reduction in the marine environment occurs in most parts of Northwest Europe, we hypothesize that this environmental candidate parameter is involved in the overall regulation of the Baltic/Wadden Sea eider population during recent decades.

  6. Lower extremity injury in female basketball players is related to a large difference in peak eversion torque between barefoot and shod conditions

    Directory of Open Access Journals (Sweden)

    Jennifer M. Yentes

    2014-09-01

    Conclusion: It is possible that a large discrepancy between strength in barefoot and shod conditions can predispose an athlete to injury. Narrowing the difference in peak eversion torque between barefoot and shod could decrease propensity to injury. Future work should investigate the effect of restoration of muscular strength during barefoot and shod exercise on injury rates.

  7. A rapid and robust sequence-based genotyping method for BoLA-DRB3 alleles in large numbers of heterozygous cattle.

    Science.gov (United States)

    Baxter, R; Hastings, N; Law, A; Glass, E J

    2008-10-01

    The BoLA-DRB3 gene is a highly polymorphic major histocompatibility complex class II gene of cattle with over one hundred alleles reported. Most of the polymorphisms are located in exon 2, which encodes the peptide-binding cleft, and these sequence differences play a role in variability of immune responsiveness and disease resistance. However, the high degree of polymorphism in exon 2 leads to difficulty in accurately genotyping cattle, especially heterozygous animals. In this study, we have improved and simplified an earlier sequence-based typing method to easily and reliably genotype cattle for BoLA-DRB3. In contrast to the earlier method, which used a nested primer set to amplify exon 2 followed by sequencing with internal primers, the new method uses only internal primers for both amplification and sequencing, which results in high-quality sequence across the entire exon. The haplofinder software, which assigns alleles from the heterozygous sequence, now has a pre-processing step that uses a consensus of all known alleles and checks for errors in base calling, thus improving the ability to process large numbers of samples. In addition, advances in sequencing technology have reduced the requirement for manual editing and improved the clarity of heterozygous base calls, resulting in longer and clearer sequence reads. Taken together, this has resulted in a rapid and robust method for genotyping large numbers of heterozygous samples for BoLA-DRB3 polymorphisms. Over 400 Holstein-Charolais cattle have now been genotyped for BoLA-DRB3 using this approach.
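
    The heart of sequence-based typing of heterozygotes is finding the pair of known alleles whose superimposed sequences reproduce the observed ambiguous base calls. The Python sketch below illustrates that matching step with IUPAC ambiguity codes; the allele sequences are invented placeholders, not real BoLA-DRB3 alleles, and this is not the haplofinder implementation.

```python
from itertools import combinations_with_replacement

# IUPAC codes for the base combinations encountered in heterozygous calls
IUPAC = {frozenset("A"): "A", frozenset("C"): "C", frozenset("G"): "G", frozenset("T"): "T",
         frozenset("AC"): "M", frozenset("AG"): "R", frozenset("AT"): "W",
         frozenset("CG"): "S", frozenset("CT"): "Y", frozenset("GT"): "K"}

def superimpose(a, b):
    """Expected heterozygous trace of an allele pair, as IUPAC codes."""
    return "".join(IUPAC[frozenset(x + y)] for x, y in zip(a, b))

def resolve_genotype(observed, known_alleles):
    """Return all allele pairs consistent with the observed heterozygous sequence."""
    return [pair for pair in combinations_with_replacement(sorted(known_alleles), 2)
            if superimpose(known_alleles[pair[0]], known_alleles[pair[1]]) == observed]

# Invented 8-bp 'alleles' purely for illustration
alleles = {"DRB3*A": "ACGTACGT", "DRB3*B": "ACGTGCGT", "DRB3*C": "ATGTACGA"}
print(resolve_genotype("ACGTRCGT", alleles))   # -> [('DRB3*A', 'DRB3*B')]
```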

  8. A study of the effectiveness of machine learning methods for classification of clinical interview fragments into a large number of categories.

    Science.gov (United States)

    Hasan, Mehedi; Kotov, Alexander; Idalski Carcone, April; Dong, Ming; Naar, Sylvie; Brogan Hartlieb, Kathryn

    2016-08-01

    This study examines the effectiveness of state-of-the-art supervised machine learning methods in conjunction with different feature types for the task of automatic annotation of fragments of clinical text based on codebooks with a large number of categories. We used a collection of motivational interview transcripts consisting of 11,353 utterances, which were manually annotated by two human coders as the gold standard, and experimented with state-of-the-art classifiers, including Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), Random Forest (RF), AdaBoost, DiscLDA, Conditional Random Fields (CRF) and Convolutional Neural Network (CNN) in conjunction with lexical, contextual (label of the previous utterance) and semantic (distribution of words in the utterance across the Linguistic Inquiry and Word Count dictionaries) features. We found that, when the number of classes is large, the performance of CNN and CRF is inferior to SVM. When only lexical features were used, interview transcripts were automatically annotated by SVM with the highest classification accuracy among all classifiers of 70.8%, 61% and 53.7% based on the codebooks consisting of 17, 20 and 41 codes, respectively. Using contextual and semantic features, as well as their combination, in addition to lexical ones, improved the accuracy of SVM for annotation of utterances in motivational interview transcripts with a codebook consisting of 17 classes to 71.5%, 74.2%, and 75.1%, respectively. Our results demonstrate the potential of using machine learning methods in conjunction with lexical, semantic and contextual features for automatic annotation of clinical interview transcripts with near-human accuracy.
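
    A stripped-down analogue of the best-performing configuration (lexical features plus the previous utterance's label fed to a linear SVM) could look like the scikit-learn sketch below; the toy utterances and code labels are invented, and this is not the authors' pipeline.

```python
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import LinearSVC

# Toy transcript: (utterance, previous_code, code) -- invented examples
data = [
    ("I really want to lose weight", "QUO", "CHT"),   # change talk
    ("I don't think I can do it",    "CHT", "SUT"),   # sustain talk
    ("Tell me more about that",      "SUT", "QUO"),   # open question
    ("I could try walking more",     "QUO", "CHT"),
    ("It never works for me",        "CHT", "SUT"),
    ("What gets in the way?",        "SUT", "QUO"),
]
texts, prev_labels, labels = zip(*data)

lexical = TfidfVectorizer(ngram_range=(1, 2))
X_lex = lexical.fit_transform(texts)                                   # lexical features
context = OneHotEncoder(handle_unknown="ignore")
X_ctx = context.fit_transform(np.array(prev_labels).reshape(-1, 1))    # contextual feature

clf = LinearSVC().fit(hstack([X_lex, X_ctx]), labels)

# Classify a new utterance given the previous code
x_new = hstack([lexical.transform(["maybe I can cut down on soda"]),
                context.transform(np.array([["QUO"]]))])
print(clf.predict(x_new))
```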

  9. How do OSS projects change in number and size? A large-scale analysis to test a model of project growth

    CERN Document Server

    Schweitzer, Frank; Tessone, Claudio J; Xia, Xi

    2015-01-01

    Established Open Source Software (OSS) projects can grow in size if new developers join, but also the number of OSS projects can grow if developers choose to found new projects. We discuss to what extent an established model for firm growth can be applied to the dynamics of OSS projects. Our analysis is based on a large-scale data set from SourceForge (SF) consisting of monthly data for 10 years, for up to 360'000 OSS projects and up to 340'000 developers. Over this time period, we find an exponential growth both in the number of projects and developers, with a remarkable increase of single-developer projects after 2009. We analyze the monthly entry and exit rates for both projects and developers, the growth rate of established projects and the monthly project size distribution. To derive a prediction for the latter, we use modeling assumptions of how newly entering developers choose to either found a new project or to join existing ones. Our model applies only to collaborative projects that are deemed to gro...
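
    The modeling assumption described above -- each newly arriving developer either founds a new project or joins an existing one with probability proportional to its size -- is a Simon/Yule-type growth process whose size distribution can be explored with a short simulation. The sketch below is a generic toy version of such a model with a made-up founding probability, not the authors' calibrated model.

```python
import random
from collections import Counter

def simulate_project_sizes(n_developers=20_000, p_found=0.3, seed=42):
    """Simon-style growth: each new developer founds a project with probability
    p_found, otherwise joins an existing project chosen proportional to its size."""
    rng = random.Random(seed)
    memberships = [0]              # one entry per developer, holding a project id
    n_projects = 1
    for _ in range(n_developers - 1):
        if rng.random() < p_found:
            memberships.append(n_projects)           # found a new project
            n_projects += 1
        else:
            # picking a uniform random membership == size-proportional project choice
            memberships.append(rng.choice(memberships))
    return Counter(memberships)                      # project id -> project size

sizes = simulate_project_sizes()
single = sum(1 for s in sizes.values() if s == 1)
print(f"{len(sizes)} projects, {single / len(sizes):.0%} single-developer")
```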

  10. Quiescent Galaxies in the 3D-HST Survey: Spectroscopic Confirmation of a Large Number of Galaxies With Relatively Old Stellar Populations at z Approx. 2

    Science.gov (United States)

    Tease, Katherine Whitaker; vanDokkum, Pieter G.; Brammer, Gabriel; Momcheva, Ivelina; Skelton, Rosalind; Franx, Marijin; Kriek, Mariska; Labbe, Ivo; Fumagalli, Mattia; Lundgren, Britt F.; Nelson, Erica J.; Patel, Shannon G.; Rix, Hans-Walter

    2013-01-01

    Quiescent galaxies at z approx. 2 have been identified in large numbers based on rest-frame colors, but only a small number of these galaxies have been spectroscopically confirmed to show that their rest-frame optical spectra show either strong Balmer or metal absorption lines. Here, we median stack the rest-frame optical spectra for 171 photometrically quiescent galaxies at 1.4 < z < 2.2 from the 3D-HST grism survey. In addition to Hbeta (4861 A), we unambiguously identify metal absorption lines in the stacked spectrum, including the G-band (4304 A), Mg I (5175 A), and Na I (5894 A). This finding demonstrates that galaxies with relatively old stellar populations already existed when the universe was approx. 3 Gyr old, and that rest-frame color selection techniques can efficiently select them. We find an average age of 1.3 (+0.1/-0.3) Gyr when fitting a simple stellar population to the entire stack. We confirm our previous result from medium-band photometry that the stellar age varies with the colors of quiescent galaxies: the reddest 80% of galaxies are dominated by metal lines and have a relatively old mean age of 1.6 (+0.5/-0.4) Gyr, whereas the bluest (and brightest) galaxies have strong Balmer lines and a spectroscopic age of 0.9 (+0.2/-0.1) Gyr. Although the spectrum is dominated by an evolved stellar population, we also find [O III] and Hbeta emission. Interestingly, this emission is more centrally concentrated than the continuum, with L_[O III] = 1.7 +/- 0.3 x 10^40 erg/s, indicating residual central star formation or nuclear activity.
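
    Median stacking of many low signal-to-noise spectra amounts to shifting each spectrum to its rest frame, interpolating onto a common wavelength grid, and taking the median at each grid point. A generic numpy sketch of that procedure is shown below; the normalization, wavelength grid and synthetic inputs are assumptions, not the survey's exact recipe.

```python
import numpy as np

def median_stack(spectra, redshifts, rest_grid):
    """Median-stack observed-frame spectra on a common rest-frame wavelength grid.

    spectra   -- list of (wavelength_obs [A], flux) tuples, one per galaxy
    redshifts -- list of redshifts matching `spectra`
    rest_grid -- 1-D array of rest-frame wavelengths to interpolate onto
    """
    stacked = []
    for (wave_obs, flux), z in zip(spectra, redshifts):
        wave_rest = wave_obs / (1.0 + z)                             # shift to rest frame
        interp = np.interp(rest_grid, wave_rest, flux, left=np.nan, right=np.nan)
        interp /= np.nanmedian(interp)                               # crude normalization
        stacked.append(interp)
    return np.nanmedian(np.vstack(stacked), axis=0)                  # median per wavelength

# Example with synthetic flat spectra at z ~ 1.4-2.2
rng = np.random.default_rng(1)
rest_grid = np.linspace(3800, 6000, 500)
zs = rng.uniform(1.4, 2.2, size=50)
spectra = [(np.linspace(11000, 16500, 300), 1 + 0.1 * rng.standard_normal(300)) for _ in zs]
print(median_stack(spectra, zs, rest_grid)[:5])
```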

  11. A large scale survey reveals that chromosomal copy-number alterations significantly affect gene modules involved in cancer initiation and progression

    Directory of Open Access Journals (Sweden)

    Cigudosa Juan C

    2011-05-01

    Full Text Available Abstract Background Recent observations point towards the existence of a large number of neighborhoods composed of functionally-related gene modules that lie together in the genome. This local component in the distribution of functionality across chromosomes probably affects the chromosomal architecture itself by limiting the possibilities in which genes can be arranged and distributed across the genome. As a direct consequence, it is presumable that diseases such as cancer, harboring DNA copy number alterations (CNAs), will have a symptomatology strongly dependent on modules of functionally-related genes rather than on a single "important" gene. Methods We carried out a systematic analysis of more than 140,000 observations of CNAs in cancers and searched for enrichments in gene functional modules associated with high frequencies of loss or gain. Results The analysis of CNAs in cancers clearly demonstrates the existence of a significant pattern of loss of gene modules functionally related to cancer initiation and progression, along with amplification of modules of genes related to unspecific defense against xenobiotics (probably chemotherapeutic agents). With the extension of this analysis to an Array-CGH dataset (glioblastomas from The Cancer Genome Atlas) we demonstrate the validity of this approach for investigating the functional impact of CNAs. Conclusions The presented results indicate promising clinical and therapeutic implications. Our findings also point directly to the necessity of adopting a function-centric, rather than gene-centric, view in the understanding of phenotypes or diseases harboring CNAs.
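
    In its simplest form, the enrichment search described in the Methods asks whether genes of a functional module are over-represented among frequently lost (or gained) genes, which is a one-sided hypergeometric test. A generic sketch with made-up counts (not the study's data):

```python
from scipy.stats import hypergeom

def module_enrichment_pvalue(n_genome, n_altered, n_module, n_overlap):
    """P(overlap >= n_overlap) for a module of size n_module when n_altered genes
    out of n_genome are frequently lost/gained (one-sided hypergeometric test)."""
    return hypergeom.sf(n_overlap - 1, n_genome, n_altered, n_module)

# Hypothetical example: 20,000 genes, 1,500 frequently lost, a 120-gene
# cancer-progression module of which 30 fall in the frequently-lost set.
p = module_enrichment_pvalue(20_000, 1_500, 120, 30)
print(f"enrichment p-value = {p:.2e}")
```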

  12. Quiescent Galaxies in the 3D-HST Survey: Spectroscopic Confirmation of a Large Number of Galaxies with Relatively Old Stellar Populations at z~2

    CERN Document Server

    Whitaker, Katherine E; Brammer, Gabriel; Momcheva, Ivelina G; Skelton, Rosalind; Franx, Marijn; Kriek, Mariska; Labbe, Ivo; Fumagalli, Mattia; Lundgren, Britt F; Nelson, Erica J; Patel, Shannon G; Rix, Hans-Walter

    2013-01-01

    Quiescent galaxies at z~2 have been identified in large numbers based on rest-frame colors, but only a small number of these galaxies have been spectroscopically confirmed to show that their rest-frame optical spectra show either strong Balmer or metal absorption lines. Here, we median stack the rest-frame optical spectra for 171 photometrically-quiescent galaxies at 1.4 < z < 2.2 from the 3D-HST grism survey. In addition to Hbeta (4861A), we unambiguously identify metal absorption lines in the stacked spectrum, including the G-band (4304A), Mg I (5175A), and Na I (5894A). This finding demonstrates that galaxies with relatively old stellar populations already existed when the universe was ~3 Gyr old, and that rest-frame color selection techniques can efficiently select them. We find an average age of 1.3^{+0.1}_{-0.3} Gyr when fitting a simple stellar population to the entire stack. We confirm our previous result from medium-band photometry that the stellar age varies with the colors of quiescent galaxies: th...

  13. Three-Dimensional Interaction of a Large Number of Dense DEP Particles on a Plane Perpendicular to an AC Electrical Field

    Directory of Open Access Journals (Sweden)

    Chuanchuan Xie

    2017-01-01

    Full Text Available The interaction of dielectrophoresis (DEP) particles in an electric field has been observed in many experiments, known as the "particle chains phenomenon". However, studies of 3D models (spherical particles) are rarely reported due to their complexity and significant computational cost. In this paper, we employed the iterative dipole moment (IDM) method to study the 3D interaction of a large number of dense DEP particles randomly distributed on a plane perpendicular to a uniform alternating current (AC) electric field in a bounded or unbounded space. The numerical results indicated that the particles cannot move out of the initial plane. Similar particles (either all positive or all negative DEP particles) always repelled each other and did not form a chain. Dissimilar particles (a mixture of positive and negative DEP particles) always attracted each other and formed particle chains consisting of alternately arranged positive and negative DEP particles. The particle chain patterns can be randomly multitudinous depending on the initial particle distribution, the electric properties of the particles/fluid, the particle sizes and the number of particles. It is also found that the particle chain patterns can be effectively manipulated by tuning the frequency of the AC field, and an almost uniform distribution of particles in a bounded plane chip can be achieved when all of the particles are similar, which may have potential applications in particle manipulation in microfluidics.
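
    The attraction/repulsion pattern reported above follows from the point-dipole picture: in a plane perpendicular to the field, two field-aligned induced dipoles repel when their Clausius-Mossotti factors have the same sign and attract when the signs differ. The sketch below uses that simplified point-dipole force, not the full iterative dipole moment method, and all material properties are illustrative.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def clausius_mossotti(eps_p, eps_m):
    """Real Clausius-Mossotti factor for a sphere in a medium (low-frequency limit)."""
    return (eps_p - eps_m) / (eps_p + 2 * eps_m)

def in_plane_dipole_force(r, radius1, radius2, eps_p1, eps_p2, eps_m, E0):
    """Radial force between two field-aligned point dipoles separated by r in the
    plane perpendicular to a uniform field E0 (positive = repulsive).

    p_i = 4*pi*eps_m*R_i^3*K_i*E0 ;  F_r = 3*p1*p2 / (4*pi*eps_m*r^4)  at theta = 90 deg
    """
    em = eps_m * EPS0
    p1 = 4 * np.pi * em * radius1**3 * clausius_mossotti(eps_p1 * EPS0, em) * E0
    p2 = 4 * np.pi * em * radius2**3 * clausius_mossotti(eps_p2 * EPS0, em) * E0
    return 3 * p1 * p2 / (4 * np.pi * em * r**4)

# 5-micron particles, 20 um apart, in a medium of relative permittivity 78, E0 = 1e5 V/m
same = in_plane_dipole_force(20e-6, 5e-6, 5e-6, 2.5, 2.5, 78, 1e5)     # both negative DEP
mixed = in_plane_dipole_force(20e-6, 5e-6, 5e-6, 200.0, 2.5, 78, 1e5)  # positive + negative
print(f"like pair: {same:+.2e} N (repulsive), unlike pair: {mixed:+.2e} N (attractive)")
```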

  14. Management of penetrating injuries of the upper extremities

    NARCIS (Netherlands)

    O.J.F. van Waes (Oscar); P.H. Navsaria; R.C. Verschuren (Renske Cm); L.C. Vroon (Laurens); E.M.M. van Lieshout (Esther); J.A. Halm (Jens); A.J. Nicol; J. Vermeulen (Jefrey)

    2013-01-01

    textabstractBackground: Routine surgical exploration after penetrating upper extremity trauma (PUET) to exclude arterial injury leads to a large number of negative explorations and iatrogenic injuries. Selective non-operative management (SNOM) is gaining in favor for patients with PUET. The present

  15. An Extreme Analogue of ɛ Aurigae: An M-giant Eclipsed Every 69 Years by a Large Opaque Disk Surrounding a Small Hot Source

    Science.gov (United States)

    Rodriguez, Joseph E.; Stassun, Keivan G.; Lund, Michael B.; Siverd, Robert J.; Pepper, Joshua; Tang, Sumin; Kafka, Stella; Gaudi, B. Scott; Conroy, Kyle E.; Beatty, Thomas G.; Stevens, Daniel J.; Shappee, Benjamin J.; Kochanek, Christopher S.

    2016-05-01

    , this system is poised to become an exemplar of a very rare class of systems, even more extreme in several respects than the well studied archetype ɛ Aurigae.

  16. Extremely large and hot multilayer Keplerian disk around the O-type protostar W51N: The precursors of the HCHII regions?

    CERN Document Server

    Zapata, Luis A; Leurini, Silvia

    2010-01-01

    We present sensitive high angular resolution (0.57$''$-0.78$''$) SO, SO$_2$, CO, C$_2$H$_5$OH, HC$_3$N, and HCOCH$_2$OH line observations at millimeter and submillimeter wavelengths of the young O-type protostar W51 North made with the Submillimeter Array (SMA). We report the presence of a large (of about 8000 AU) and hot molecular circumstellar disk around this object, which connects the inner dusty disk with the molecular ring or toroid reported recently, and confirms the existence of a single bipolar outflow emanating from this object. The molecular emission from the large disk is observed in layers with the transitions characterized by high excitation temperatures in their lower energy states (up to 1512 K) being concentrated closer to the central massive protostar. The molecular emission from those transitions with low or moderate excitation temperatures are found in the outermost parts of the disk and exhibits an inner cavity with an angular size of around 0.7$''$. We modeled all lines with a Local Ther...

  17. Technique challenges in coupling of high resolution spectrograph with extremely large telescope

    Institute of Scientific and Technical Information of China (English)

    张弛; 朱永田; 张凯

    2014-01-01

    We review the designs of several international ground-based extremely large optical/infrared telescopes and introduce the problems faced in coupling a high resolution spectrograph with a telescope of extremely large aperture, which would require an echelle grating of very large area and an ultrafast focal-ratio camera. According to the matching rule between spectrograph and telescope, the collimated beam diameter for a 30 m telescope would exceed 70 cm and the area of the main-dispersion echelle grating would exceed 2 m2; such large and costly optics cannot be fabricated with current techniques, and a large-aperture camera with a focal ratio of F/0.5 is also hard to design and manufacture. Image slicers, mosaic gratings and white pupil designs therefore become the major solutions for designing a high resolution spectrograph for an extremely large aperture telescope.
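
    The quoted beam diameter follows from the standard resolution-slit product of a grating spectrograph, R * phi = 2 * d * tan(theta_B) / D_T, so the collimated beam diameter d grows linearly with telescope aperture D_T. The sketch below evaluates this relation for assumed inputs (an R4 echelle, a 0.4 arcsec slit and R = 100,000), which are illustrative rather than the article's exact parameters.

```python
import math

ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

def collimated_beam_diameter(R, slit_arcsec, D_tel_m, tan_blaze=4.0):
    """Beam diameter d from R * phi = 2 * d * tan(theta_B) / D_T (no image slicing)."""
    phi = slit_arcsec * ARCSEC
    return R * phi * D_tel_m / (2.0 * tan_blaze)

for D in (8.0, 30.0):  # a current 8 m telescope vs. a 30 m class telescope
    d = collimated_beam_diameter(R=100_000, slit_arcsec=0.4, D_tel_m=D)
    print(f"D_tel = {D:4.0f} m -> collimated beam ~ {d * 100:.0f} cm")
```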

  18. Interplanetary shocks and solar wind extremes

    Science.gov (United States)

    Vats, Hari

    Interplanetary shocks have a very high correlation with annual sunspot numbers over the solar cycle; however, the correlation falls very low on shorter time scales, which poses questions and difficulties for predictability. Space weather is largely controlled by these interplanetary shocks, solar energetic events and the extremes of the solar wind. In fact, most solar wind extremes are related to solar energetic phenomena. It is quite well understood that energetic events such as flares and filament eruptions occurring on the Sun produce extremes in both density and speed. There are also high speed solar wind streams associated with coronal holes, mainly because the magnetic field lines are open there and the solar plasma escapes easily. These are relatively tenuous high speed streams and hence create low intensity geomagnetic storms of longer duration. Solar flares and/or filament eruptions usually release excess coronal mass into the interplanetary medium, sending out high density and high speed solar wind which is statistically found to produce more intense storms. The other extremes of the solar wind are those in which density and speed are much lower than normal values. Several such events have been observed and are found to produce space weather consequences of a different kind. Such extremes are more common around the maxima of solar cycles 20 and 23. Most of these have a significantly low Alfven Mach number. This article outlines the interplanetary and geomagnetic consequences of solar wind extremes as observed by ground-based and satellite systems.
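
    The Alfven Mach number mentioned above is the ratio of the solar wind bulk speed to the local Alfven speed, M_A = v_sw / v_A with v_A = B / sqrt(mu0 * rho). A quick sketch with typical assumed near-Earth values:

```python
import math

MU0 = 4e-7 * math.pi        # vacuum permeability, H/m
M_PROTON = 1.6726e-27       # proton mass, kg

def alfven_mach_number(v_sw_km_s, n_p_cm3, B_nT):
    """M_A = v_sw / v_A, with v_A = B / sqrt(mu0 * rho) for a proton plasma."""
    rho = n_p_cm3 * 1e6 * M_PROTON                   # mass density, kg/m^3
    v_alfven = (B_nT * 1e-9) / math.sqrt(MU0 * rho)  # Alfven speed, m/s
    return (v_sw_km_s * 1e3) / v_alfven

# Typical solar wind (~400 km/s, 5 cm^-3, 5 nT) vs. a tenuous low-density extreme
print(round(alfven_mach_number(400, 5.0, 5.0), 1))   # ~ 8
print(round(alfven_mach_number(350, 0.1, 5.0), 1))   # low density -> low M_A
```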

  19. Regional anesthesia for an upper extremity amputation for palliative care in a patient with end-stage osteosarcoma complicated by a large anterior mediastinal mass.

    Science.gov (United States)

    Hakim, Mumin; Burrier, Candice; Bhalla, Tarun; Raman, Vidya T; Martin, David P; Dairo, Olamide; Mayerson, Joel L; Tobias, Joseph D

    2015-01-01

    Tumor progression during end-of-life care can lead to significant pain, which at times may be refractory to routine analgesic techniques. Although regional anesthesia is commonly used for postoperative pain care, there is limited experience with its use during home hospice care. We present a 24-year-old male with end-stage metastatic osteosarcoma who required anesthetic care for a right-sided above-the-elbow amputation. The anesthetic management was complicated by the presence of a large mediastinal mass, limited pulmonary reserve, and severe chronic pain with a high preoperative opioid requirement. Intraoperative anesthesia and postoperative pain management were provided by regional anesthesia using an interscalene catheter. He was discharged home with the interscalene catheter in place with a continuous local anesthetic infusion that allowed weaning of his chronic opioid medications and the provision of effective pain control. The perioperative applications of regional anesthesia in palliative and home hospice care are discussed.

  20. Is the intensification of precipitation extremes with global warming better detected at hourly than daily resolutions?

    Science.gov (United States)

    Barbero, R.; Fowler, H. J.; Lenderink, G.; Blenkinsop, S.

    2017-01-01

    Although it has been documented that daily precipitation extremes are increasing worldwide, faster increases may be expected for subdaily extremes. Here, after a careful quality control procedure, we compared trends in hourly and daily precipitation extremes using a large network of stations across the United States (U.S.) within the 1950-2011 period. A greater number of significant increasing trends in annual and seasonal maximum precipitation were detected from daily extremes, with the primary exception of wintertime. Our results also show that the mean percentage change in annual maximum daily precipitation across the U.S. per degree of global warming is 6.9% °C^-1 (in agreement with the Clausius-Clapeyron rate) while lower sensitivities were observed for hourly extremes, suggesting that changes in the magnitude of subdaily extremes in response to global warming emerge more slowly than those for daily extremes in the climate record.
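
    The quoted sensitivity is the fractional change of extreme precipitation per degree of warming, commonly estimated from the slope of a regression of log-transformed annual maxima on temperature, with 100*(exp(slope) - 1) giving the percentage per deg C (the Clausius-Clapeyron reference rate is about 7% per deg C). A generic sketch with synthetic data, not the station records used in the study:

```python
import numpy as np

def scaling_rate_pct_per_degC(annual_max_precip, temperature):
    """Exponential (Clausius-Clapeyron-style) scaling rate in % per deg C,
    from a linear fit of log(precip) against temperature."""
    slope, _ = np.polyfit(temperature, np.log(annual_max_precip), 1)
    return 100.0 * (np.exp(slope) - 1.0)

# Synthetic example: 60 years of data with a built-in 7 %/degC sensitivity plus noise
rng = np.random.default_rng(0)
temp = np.linspace(14.0, 15.2, 60) + 0.2 * rng.standard_normal(60)
precip = 30.0 * 1.07 ** (temp - 14.0) * np.exp(0.05 * rng.standard_normal(60))
print(f"{scaling_rate_pct_per_degC(precip, temp):.1f} % per degC")  # ~ 7
```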