WorldWideScience

Sample records for extremely large number

  1. Large Extremity Peripheral Nerve Repair

    Science.gov (United States)

    2016-12-01

In current war trauma, 20-30% of all extremity injuries and >80% of penetrating injuries are associated with peripheral nerve...through both axonal advance and revascularization of the graft following placement. We are confident that this technology may allow us to

  2. Large Extremity Peripheral Nerve Repair

    Science.gov (United States)

    2016-12-01

These antimicrobial peptides are implicated in the resistance of epithelial surfaces to microbial colonisation and have been shown to be upregulated...be equivalent to standard autograft repair in rodent models. Outcomes have now been validated in a large animal (swine) model with 5 cm ulnar nerve...Goals of the Project Task 1: Determine mechanical properties, seal strength and resistance to biodegradation of candidate photochemical nerve wrap

  3. Large number discrimination by mosquitofish.

    Directory of Open Access Journals (Sweden)

    Christian Agrillo

Full Text Available BACKGROUND: Recent studies have demonstrated that fish display rudimentary numerical abilities similar to those observed in mammals and birds. The mechanisms underlying the discrimination of small quantities (<4) were recently investigated while, to date, no study has examined the discrimination of large numerosities in fish. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained to discriminate between two sets of small geometric figures using social reinforcement. In the first experiment mosquitofish were required to discriminate 4 from 8 objects with or without experimental control of the continuous variables that co-vary with number (area, space, density, total luminance). Results showed that fish can use the sole numerical information to compare quantities but that they preferentially use cumulative surface area as a proxy for number when this information is available. A second experiment investigated the influence of the total number of elements on the discrimination of large quantities. Fish proved able to discriminate up to 100 vs. 200 objects, without showing any significant decrease in accuracy compared with the 4 vs. 8 discrimination. The third experiment investigated the influence of the ratio between the numerosities. Performance was found to decrease with decreasing numerical distance. Fish were able to discriminate numbers when ratios were 1:2 or 2:3 but not when the ratio was 3:4. The performance of a sample of undergraduate students, tested non-verbally using the same sets of stimuli, largely overlapped that of fish. CONCLUSIONS/SIGNIFICANCE: Fish are able to use pure numerical information when discriminating between quantities larger than 4 units. As observed in human and non-human primates, the numerical system of fish appears to have virtually no upper limit, while the numerical ratio has a clear effect on performance. These similarities further reinforce the view of a common origin of non-verbal numerical systems in all

  4. European Extremely Large Telescope: progress report

    Science.gov (United States)

    Tamai, R.; Spyromilio, J.

    2014-07-01

    The European Extremely Large Telescope is a project of the European Southern Observatory to build and operate a 40-m class optical near-infrared telescope. The telescope design effort is largely concluded and construction contracts are being placed with industry and academic/research institutes for the various components. The siting of the telescope in Northern Chile close to the Paranal site allows for an integrated operation of the facility providing significant economies. The progress of the project in various areas is presented in this paper and references to other papers at this SPIE meeting are made.

  5. Baryon number nonconservation in extreme conditions

    International Nuclear Information System (INIS)

    Matveev, V.A.; Rubakov, V.A.; Tavkhelidze, A.N.; Shaposhnikov, M.E.

    1988-01-01

In gauge theories with left-right asymmetric fermionic content (e.g. in the standard electroweak theory) the fermion number F is not conserved, due to the anomaly. It is shown that anomalous processes, while being exponentially suppressed under normal conditions, are in fact rapid under extreme conditions. The mechanism of fermion number nonconservation connected with a level-crossing phenomenon in external gauge fields is described. The theory and experimental consequences of monopole catalysis of proton decay are reviewed. It is shown that cold dense fermionic matter is stable only up to some limiting density. It is demonstrated that there is no exponential suppression of the rate of F nonconservation at high temperatures. The cosmological implications of this fact are discussed. The strong anomalous fermion number violation in decays of superheavy fermions (technibaryons) is considered

  6. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Haiganoush K. Preisler; Jeff Eidenshink; Stephen Howard; Robert E. Burgan

    2015-01-01

Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the...

  7. Thermal convection for large Prandtl numbers

    NARCIS (Netherlands)

    Grossmann, Siegfried; Lohse, Detlef

    2001-01-01

    The Rayleigh-Bénard theory by Grossmann and Lohse [J. Fluid Mech. 407, 27 (2000)] is extended towards very large Prandtl numbers Pr. The Nusselt number Nu is found here to be independent of Pr. However, for fixed Rayleigh numbers Ra a maximum in the Nu(Pr) dependence is predicted. We moreover offer

  8. Hierarchies in Quantum Gravity: Large Numbers, Small Numbers, and Axions

    Science.gov (United States)

    Stout, John Eldon

Our knowledge of the physical world is mediated by relatively simple, effective descriptions of complex processes. By their very nature, these effective theories obscure any phenomena outside their finite range of validity, discarding information crucial to understanding the full, quantum gravitational theory. However, we may gain enormous insight into the full theory by understanding how effective theories with extreme characteristics--for example, those which realize large-field inflation or have disparate hierarchies of scales--can be naturally realized in consistent theories of quantum gravity. The work in this dissertation focuses on understanding the quantum gravitational constraints on these "extreme" theories in well-controlled corners of string theory. Axion monodromy provides one mechanism for realizing large-field inflation in quantum gravity. These models spontaneously break an axion's discrete shift symmetry and, assuming that the corrections induced by this breaking remain small throughout the excursion, create a long, quasi-flat direction in field space. This weakly-broken shift symmetry has been used to construct a dynamical solution to the Higgs hierarchy problem, dubbed the "relaxion." We study this relaxion mechanism and show that--without major modifications--it cannot be naturally embedded within string theory. In particular, we find corrections to the relaxion potential--due to the ten-dimensional backreaction of monodromy charge--that conflict with naive notions of technical naturalness and render the mechanism ineffective. The super-Planckian field displacements necessary for large-field inflation may also be realized via the collective motion of many aligned axions. However, it is not clear that string theory provides the structures necessary for this to occur. 
We search for these structures by explicitly constructing the leading order potential for C4 axions and computing the maximum possible field displacement in all compactifications of

  9. Large number discrimination in newborn fish.

    Directory of Open Access Journals (Sweden)

    Laura Piffer

Full Text Available Quantitative abilities have been reported in a wide range of species, including fish. Recent studies have shown that adult guppies (Poecilia reticulata) can spontaneously select the larger number of conspecifics. In particular, the evidence collected in the literature suggests the existence of two distinct systems of number representation: a precise system up to 4 units, and an approximate system for larger numbers. Spontaneous numerical abilities, however, seem to be limited to 4 units at birth, and it is currently unclear whether or not the large number system is absent during the first days of life. In the present study, we investigated whether newborn guppies can be trained to discriminate between large quantities. Subjects were required to discriminate between groups of dots with a 0.50 ratio (e.g., 7 vs. 14) in order to obtain a food reward. To dissociate the roles of number and of the continuous quantities that co-vary with numerical information (such as cumulative surface area, space and density), three different experiments were set up: in Exp. 1 number and continuous quantities were simultaneously available; in Exp. 2 we controlled for continuous quantities and only numerical information was available; in Exp. 3 numerical information was made irrelevant and only continuous quantities were available. Subjects successfully solved the tasks in Exp. 1 and 2, providing the first evidence of large number discrimination in newborn fish. No discrimination was found in Exp. 3, meaning that number acuity is better than spatial acuity. A comparison with the onset of numerical abilities observed in shoal-choice tests suggests that training procedures can promote the development of numerical abilities in guppies.

  10. Extreme Value Theory Applied to the Millennial Sunspot Number Series

    Science.gov (United States)

    Acero, F. J.; Gallego, M. C.; García, J. A.; Usoskin, I. G.; Vaquero, J. M.

    2018-01-01

In this work, we use two decadal sunspot number series reconstructed from cosmogenic radionuclide data (14C in tree trunks, SN 14C, and 10Be in polar ice, SN 10Be) and the extreme value theory to study the variability of solar activity during the last nine millennia. The peaks-over-threshold technique was used to compute, in particular, the shape parameter of the generalized Pareto distribution for different thresholds. Its negative value implies an upper bound of the extreme SN 10Be and SN 14C time series. The return levels for 1000 and 10,000 years were estimated; for both series they are lower than the maximum observed values, which is expected for the 1000 year return level but not for the 10,000 year one. A comparison of these results with those obtained using the observed sunspot numbers from telescopic observations during the last four centuries suggests that the main characteristics of solar activity have already been recorded in the telescopic period (from 1610 to nowadays), which covers the full range of solar variability from a Grand minimum to a Grand maximum.
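The peaks-over-threshold analysis described in this record can be sketched numerically. The following is a minimal illustration on synthetic data (not the reconstructed sunspot series), using `scipy.stats.genpareto` and the standard GPD return-level formula; the threshold choice and series are assumptions for demonstration only.

```python
# Peaks-over-threshold sketch on synthetic data (a stand-in, not real SN data).
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=30.0, size=9000)   # hypothetical "decadal" series

u = np.quantile(x, 0.95)                  # choose a high threshold
exc = x[x > u] - u                        # exceedances over u
c, loc, scale = genpareto.fit(exc, floc=0.0)  # fit GPD; c is the shape parameter

zeta = exc.size / x.size                  # empirical P(X > u)

def return_level(m):
    """Level exceeded on average once every m observations (standard POT formula)."""
    return u + scale / c * ((m * zeta) ** c - 1.0)

# A negative shape parameter implies a finite upper bound at u - scale / c.
print(f"fitted shape = {c:.3f}")
print(f"1000-observation return level = {return_level(1000):.1f}")
```

As in the abstract, a fitted shape parameter below zero would indicate a bounded distribution, and return levels can be compared against the observed maximum.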

  11. Large numbers hypothesis. II - Electromagnetic radiation

    Science.gov (United States)

    Adams, P. J.

    1983-01-01

This paper develops the theory of electromagnetic radiation in the units covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation, which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t^(1/4), precisely in accord with LNH. The cosmological redshift law is also derived, and it is shown to differ considerably from the standard form ν R = const.

  12. Estimation of the number of extreme pathways for metabolic networks

    Directory of Open Access Journals (Sweden)

    Thiele Ines

    2007-09-01

Full Text Available Abstract Background The set of extreme pathways (ExPa), {p_i}, defines the convex basis vectors used for the mathematical characterization of the null space of the stoichiometric matrix for biochemical reaction networks. ExPa analysis has been used in a number of studies to determine properties of metabolic networks as well as to obtain insight into their physiological and functional states in silico. However, the number of ExPas, p = |{p_i}|, grows with the size and complexity of the network being studied, and this poses a computational challenge. For this study, we investigated the relationship between the number of extreme pathways and simple network properties. Results We established an estimating function for the number of ExPas using these easily obtainable network measurements. In particular, it was found that log[p] had an exponential relationship with log[∑_{i=1}^{R} d−i d+i ci], where R = |Reff| is the number of active reactions in a network, and d−i and d+i
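As a rough illustration of the predictor appearing in the estimating function, the sketch below computes log[∑ d−i d+i ci] for a toy stoichiometric matrix. The interpretations used here (d−i and d+i as each reaction's input and output connectivities, ci as a per-reaction weight set to 1) are illustrative assumptions, not necessarily the paper's exact definitions.

```python
# Toy computation of the predictor log[sum_i d-_i * d+_i * c_i].
import numpy as np

# Hypothetical stoichiometric matrix S (metabolites x reactions):
# S[m, r] < 0 means metabolite m is consumed by reaction r, > 0 means produced.
S = np.array([
    [-1,  0,  1,  0],
    [ 1, -1,  0,  0],
    [ 0,  1, -1, -1],
])

d_minus = (S < 0).sum(axis=0)   # metabolites consumed by each reaction
d_plus  = (S > 0).sum(axis=0)   # metabolites produced by each reaction
c = np.ones(S.shape[1])         # per-reaction weight (assumed 1 here)

predictor = np.log((d_minus * d_plus * c).sum())
print(predictor)                # ln(3) ~ 1.099 for this toy network
```

In the paper's result, log[p] (the log of the ExPa count) is then regressed against this quantity across networks.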

  13. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.

    2014-01-01

Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, and satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1 acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.
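The "at least 1, 2, 3, or 4 large fires" probabilities have a simple closed form once a count model is assumed. The sketch below uses a Poisson rate purely for illustration; the rate value is hypothetical, and the paper's actual model is estimated from fire-occurrence and Fire Potential Index data rather than assumed.

```python
# If weekly large-fire counts in an area are modeled as Poisson with rate lam
# (a modeling assumption for illustration), then P(at least k) = 1 - CDF(k - 1).
from scipy.stats import poisson

lam = 0.8  # hypothetical expected number of large fires in the coming week
for k in (1, 2, 3, 4):
    p = 1.0 - poisson.cdf(k - 1, lam)
    print(f"P(>= {k} large fires) = {p:.3f}")
```

For k = 1 this reduces to 1 - exp(-lam), the probability of any large fire at all.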

  14. Extremes in Otolaryngology Resident Surgical Case Numbers: An Update.

    Science.gov (United States)

    Baugh, Tiffany P; Franzese, Christine B

    2017-06-01

Objectives The purpose of this study is to examine the effect of minimum case numbers on otolaryngology resident case log data and to understand differences in minimum, mean, and maximum numbers among certain procedures, as a follow-up to a prior study. Study Design Cross-sectional survey using a national database. Setting Academic otolaryngology residency programs. Subjects and Methods Review of otolaryngology resident national data reports from the Accreditation Council for Graduate Medical Education (ACGME) resident case log system performed from 2004 to 2015. Minimum, mean, standard deviation, and maximum values for the total number of supervisor and resident surgeon cases and for specific surgical procedures were compared. Results The mean total number of resident surgeon cases for residents graduating from 2011 to 2015 ranged from 1833.3 ± 484 in 2011 to 2072.3 ± 548 in 2014. The minimum total number of cases ranged from 826 in 2014 to 1004 in 2015. The maximum total number of cases increased from 3545 in 2011 to 4580 in 2015. Multiple key indicator procedures had fewer than the required minimum number reported in 2015. Conclusion Despite the ACGME instituting required minimums for key indicator procedures, residents have graduated without meeting them. Furthermore, there continue to be large variations in the minimum, mean, and maximum numbers for many procedures. Variation among resident case numbers is likely multifactorial. Ensuring proper instruction on coding and case role, as well as emphasizing frequent logging by residents, will ensure programs have the most accurate data with which to evaluate their case volume.

  15. Report from the 4th Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Jacek Becla

    2011-02-01

Full Text Available Academic and industrial users are increasingly facing the challenge of petabytes of data, but managing and analyzing such large data sets still remains a daunting task. The 4th Extremely Large Databases workshop was organized to examine the needs of communities under-represented at the past workshops facing these issues. Approaches to big-data statistical analytics, as well as opportunities related to emerging hardware technologies, were also debated. Writable extreme scale databases and the science benchmark were discussed. This paper is the final report of the discussions and activities at this workshop.

  16. Lepton number violation in theories with a large number of standard model copies

    International Nuclear Information System (INIS)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-01-01

We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, the violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between the contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.

  17. Modified large number theory with constant G

    International Nuclear Information System (INIS)

    Recami, E.

    1983-01-01

The inspiring ''numerology'' uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the ''gravitational world'' (cosmos) with the ''strong world'' (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the ''Large Number Theory,'' cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R-bar/r-bar of the cosmos typical length R-bar to the hadron typical length r-bar is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the ''cyclical big-bang'' hypothesis, then R-bar and r-bar can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavsic

  18. Imaging extrasolar planets with the European Extremely Large Telescope

    Directory of Open Access Journals (Sweden)

    Jolissaint L.

    2011-07-01

Full Text Available The European Extremely Large Telescope (E-ELT) is the most ambitious of the ELTs being planned. With a diameter of 42 m, and being fully adaptive from the start, the E-ELT will be more than one hundred times more sensitive than the present-day largest optical telescopes. Discovering and characterising planets around other stars will be one of the most important aspects of the E-ELT science programme. We model an extreme adaptive optics instrument on the E-ELT. The resulting contrast curves translate to the detectability of exoplanets.

  19. Laws of small numbers extremes and rare events

    CERN Document Server

    Falk, Michael; Hüsler, Jürg

    2004-01-01

    Since the publication of the first edition of this seminar book in 1994, the theory and applications of extremes and rare events have enjoyed an enormous and still increasing interest. The intention of the book is to give a mathematically oriented development of the theory of rare events underlying various applications. This characteristic of the book was strengthened in the second edition by incorporating various new results on about 130 additional pages. Part II, which has been added in the second edition, discusses recent developments in multivariate extreme value theory. Particularly notable is a new spectral decomposition of multivariate distributions in univariate ones which makes multivariate questions more accessible in theory and practice. One of the most innovative and fruitful topics during the last decades was the introduction of generalized Pareto distributions in the univariate extreme value theory. Such a statistical modelling of extremes is now systematically developed in the multivariate fram...

  20. Report from the 3rd Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Jacek Becla

    2010-02-01

Full Text Available Academic and industrial users are increasingly facing the challenge of petabytes of data, but managing and analyzing such large data sets still remains a daunting task. Both the database and the map/reduce communities worldwide are working on addressing these issues. The 3rd Extremely Large Databases workshop was organized to examine the needs of scientific communities beginning to face these issues, to reach out to European communities working on extremely large scale data challenges, and to brainstorm possible solutions. The science benchmark that emerged from the 2nd workshop in this series was also debated. This paper is the final report of the discussions and activities at this workshop.

  1. Characterization of General TCP Traffic under a Large Number of Flows Regime

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; La, Richard J; Makowski, Armand M

    2002-01-01

...Accurate traffic modeling of a large number of short-lived TCP flows is extremely difficult due to the interaction between session, transport, and network layers, and the explosion of the size...

  2. Laws of small numbers extremes and rare events

    CERN Document Server

    Falk, Michael; Reiss, Rolf-Dieter

    2011-01-01

    Since the publication of the first edition of this seminar book in 1994, the theory and applications of extremes and rare events have enjoyed an enormous and still increasing interest. The intention of the book is to give a mathematically oriented development of the theory of rare events underlying various applications. This characteristic of the book was strengthened in the second edition by incorporating various new results. In this third edition, the dramatic change of focus of extreme value theory has been taken into account: from concentrating on maxima of observations it has shifted to l

  3. Report from the 2nd Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Jacek Becla

    2009-03-01

Full Text Available The complexity and sophistication of large scale analytics in science and industry have advanced dramatically in recent years. Analysts are struggling to use complex techniques such as time series analysis and classification algorithms because their familiar, powerful tools are not scalable and cannot effectively use scalable database systems. The 2nd Extremely Large Databases (XLDB) workshop was organized to understand these issues, examine their implications, and brainstorm possible solutions. The design of SciDB, a new open source science database that emerged from the first workshop in this series, was also debated. This paper is the final report of the discussions and activities at this workshop.

  4. New feature for an old large number

    International Nuclear Information System (INIS)

    Novello, M.; Oliveira, L.R.A.

    1986-01-01

A new context for the appearance of the Eddington number (10^39), which arises from the examination of elastic scattering of scalar particles (πK → πK) non-minimally coupled to gravity, is presented. (author) [pt
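For context, the order of magnitude of the Eddington number can be reproduced from a classic dimensionless ratio: the electrostatic-to-gravitational force ratio for a proton-electron pair. This is a standard illustration of the ~10^39 scale, not the scattering calculation of the paper above.

```python
# Ratio of Coulomb attraction to gravitational attraction between a proton
# and an electron; distance cancels, leaving a pure number of order 1e39.
import math

e   = 1.602176634e-19    # elementary charge, C
eps = 8.8541878128e-12   # vacuum permittivity, F/m
G   = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.67262192e-27     # proton mass, kg
m_e = 9.1093837e-31      # electron mass, kg

ratio = e**2 / (4 * math.pi * eps * G * m_p * m_e)
print(f"{ratio:.3e}")    # ~2.3e39, the Eddington-number scale
```

Dirac's large numbers hypothesis (referenced in several records here) starts from the observation that several such dimensionless ratios cluster near this value.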

  5. Large Scale Meteorological Pattern of Extreme Rainfall in Indonesia

    Science.gov (United States)

    Kuswanto, Heri; Grotjahn, Richard; Rachmi, Arinda; Suhermi, Novri; Oktania, Erma; Wijaya, Yosep

    2014-05-01

Extreme Weather Events (EWEs) cause negative impacts socially, economically, and environmentally. Considering these facts, forecasting EWEs is crucial work. Indonesia has been identified as being among the countries most vulnerable to the risk of natural disasters, such as floods, heat waves, and droughts. Current forecasting of extreme events in Indonesia is carried out by interpreting synoptic maps for several fields without taking into account the link between the observed events in the 'target' area and remote conditions. This situation may cause misidentification of the event, leading to an inaccurate prediction. Grotjahn and Faure (2008) compute composite maps from extreme events (including heat waves and intense rainfall) to help forecasters identify such events in model output. The composite maps show large scale meteorological patterns (LSMP) that occurred during historical EWEs. Some vital information about the EWEs can be acquired from studying such maps, in addition to providing forecaster guidance. Such maps have robust mid-latitude meteorological patterns (for Sacramento and California Central Valley, USA EWEs). We study the performance of the composite approach for tropical weather conditions such as Indonesia's. Initially, the composite maps are developed to identify and forecast the extreme weather events in Indramayu district, West Java, the main rice-producing district in Indonesia, which contributes about 60% of national rice production. Studying extreme weather events happening in Indramayu is important since EWEs there affect national agricultural and fisheries activities. During a recent EWE more than a thousand houses in Indramayu suffered serious flooding, with each home more than one meter underwater. The flood also destroyed a thousand hectares of rice plantings in 5 regencies. Identifying the dates of extreme events is one of the most important steps and has to be carried out carefully. An approach has been applied to identify the

  6. Report from the 5th Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Jacek Becla

    2012-03-01

    Full Text Available The 5th XLDB workshop brought together scientific and industrial users, developers, and researchers of extremely large data and focused on emerging challenges in the healthcare and genomics communities, spreadsheet-based large scale analysis, and challenges in applying statistics to large scale analysis, including machine learning. Major problems discussed were the lack of scalable applications, the lack of expertise in developing solutions, the lack of respect for or attention to big data problems, data volume growth exceeding Moore's Law, poorly scaling algorithms, and poor data quality and integration. More communication between users, developers, and researchers is sorely needed. A variety of future work to help all three groups was discussed, ranging from collecting challenge problems to connecting with particular industrial or academic sectors.

  7. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

Persistent large scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data, and machine learning methods. The dynamics of the large scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activity. For individual sites, we investigate the exceedance probability at which large scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs. large scale).

  8. Report from the 6th Workshop on Extremely Large Databases

    Directory of Open Access Journals (Sweden)

    Daniel Liwei Wang

    2013-05-01

    Full Text Available Petascale data management and analysis remain one of the main unresolved challenges in today's computing. The 6th Extremely Large Databases workshop was convened alongside the XLDB conference to discuss the challenges in the health care, biology, and natural resources communities. The role of cloud computing, the dominance of file-based solutions in science applications, in-situ and predictive analysis, and commercial software use in academic environments were discussed in depth as well. This paper summarizes the discussions of this workshop.

  9. Extreme value statistics and thermodynamics of earthquakes: large earthquakes

    Directory of Open Access Journals (Sweden)

    B. H. Lavenda

    2000-06-01

Full Text Available A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Fréchet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into nonscaling exponential distributions, so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Fréchet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same Catalogue of Chinese Earthquakes. An analogy is drawn between large earthquakes and high energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Fréchet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.
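The Fréchet-to-Gumbel statement under the energy-magnitude (logarithmic) transformation can be checked numerically: if P(X <= x) = exp(-x^-a), then Y = a·log(X) satisfies P(Y <= y) = exp(-exp(-y)). The sketch below uses `scipy.stats.invweibull` (SciPy's name for the Fréchet distribution) and a Kolmogorov-Smirnov comparison; the shape value a = 1.5 is an arbitrary choice for demonstration.

```python
# Verify numerically that a log transform maps Fréchet samples to Gumbel.
import numpy as np
from scipy.stats import invweibull, gumbel_r, kstest

a = 1.5
x = invweibull.rvs(a, size=20000, random_state=np.random.default_rng(1))  # Fréchet(a)
y = a * np.log(x)  # should follow the standard Gumbel distribution

stat, pvalue = kstest(y, gumbel_r.cdf)
print(f"KS statistic vs standard Gumbel: {stat:.4f}")  # small = consistent
```

The same change of variables underlies the abstract's remark that the energy-magnitude transformation converts the Fréchet distribution into the Gumbel distribution.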

  10. Extreme value statistics and thermodynamics of earthquakes. Large earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Lavenda, B. [Camerino Univ., Camerino, MC (Italy); Cipollone, E. [ENEA, Centro Ricerche Casaccia, S. Maria di Galeria, RM (Italy). National Centre for Research on Thermodynamics

    2000-06-01

    A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Frechet distribution, while the sample distribution of aftershocks are fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into nonscaling exponential distributions so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Frechet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same catalogue of Chinese earthquakes. An analogy is drawn between large earthquakes and high energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Frechet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.

  11. Extremely large breast abscess in a breastfeeding mother.

    Science.gov (United States)

    Martic, Krešimir; Vasilj, Oliver

    2012-11-01

    Puerperal mastitis often occurs in younger primiparous women. Most cases occur between 3 and 8 weeks postpartum. If mastitis results in the formation of a breast abscess, surgical drainage or needle aspiration is most commonly performed. We report a case of an extremely large breast abscess in a primiparous 20-year-old woman, which presented 6 weeks postpartum. Surgical incision and evacuation of 2 liters of exudate were performed, and intravenous antibiotic therapy was administered. On the sixth day after incision, we secondarily closed the wound. Examination after 3 months showed symmetrical breasts with a small scar in the incision area of the right breast. A high degree of suspicion and adequate diagnostic procedures are essential to avoid delay in the treatment of mastitis and breast abscess and thereby prevent unnecessary surgical treatment.

  12. [Parallel virtual reality visualization of extremely large medical datasets].

    Science.gov (United States)

    Tang, Min

    2010-04-01

    On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the Intranet and commonly configured computers of hospitals. Several kernel techniques are introduced, including the hardware structure, software framework, load balancing and virtual reality visualization. The Maximum Intensity Projection algorithm is parallelized on a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated and cut interactively and conveniently through the control panel built on the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising, real-time results and can serve as a good assistant in making clinical diagnoses.
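    The Maximum Intensity Projection named above is simple to state: each output pixel keeps the brightest voxel along its viewing ray. A minimal serial sketch on a toy volume (the paper parallelizes this over a PC cluster; this version shows only the projection itself):

```python
def mip_along_z(volume):
    """Maximum Intensity Projection: for each (x, y) ray, keep the brightest
    voxel along z. `volume` is indexed as volume[z][y][x]."""
    depth = len(volume)
    height = len(volume[0])
    width = len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(depth))
             for x in range(width)]
            for y in range(height)]

# Toy 2x2x2 volume: the projection keeps the maximum of each z-column.
vol = [[[1, 7], [3, 0]],
       [[5, 2], [4, 9]]]
print(mip_along_z(vol))  # [[5, 7], [4, 9]]
```

    Parallelizing this amounts to partitioning the (x, y) rays, or the z-slabs, across cluster nodes and merging partial maxima.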

  13. Extremely large magnetoresistance and electronic structure of TmSb

    Science.gov (United States)

    Wang, Yi-Yan; Zhang, Hongyun; Lu, Xiao-Qin; Sun, Lin-Lin; Xu, Sheng; Lu, Zhong-Yi; Liu, Kai; Zhou, Shuyun; Xia, Tian-Long

    2018-02-01

    We report the magnetotransport properties and the electronic structure of TmSb. TmSb exhibits an extremely large transverse magnetoresistance (XMR) and Shubnikov-de Haas (SdH) oscillations at low temperature and high magnetic field. Interestingly, the splitting of the Fermi surfaces induced by the nonsymmetric spin-orbit interaction has been observed in the SdH oscillations. The analysis of the angle-dependent SdH oscillations illustrates the contribution of each Fermi surface to the conductivity. The electronic structure revealed by angle-resolved photoemission spectroscopy (ARPES) and first-principles calculations demonstrates a gap at the X point and the absence of band inversion. Combined with the trivial Berry phase extracted from the SdH oscillations and the nearly equal concentrations of electrons and holes from Hall measurements, these results suggest that TmSb is a topologically trivial semimetal and that the observed XMR originates from electron-hole compensation and high mobility.
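    The SdH analysis referenced above rests on the oscillations being periodic in 1/B, so oscillation frequencies (and hence Fermi-surface cross-sections) are read off from a Fourier transform over a uniform 1/B grid. A self-contained sketch on a synthetic signal (the frequency value and field range are hypothetical, not TmSb data):

```python
import cmath
import math

# Hypothetical SdH signal, periodic in 1/B with frequency F_true (in tesla).
F_true = 500.0
n = 256
d = (0.10 - 0.02) / n                          # uniform spacing in 1/B
inv_b = [0.02 + j * d for j in range(n)]
signal = [math.cos(2 * math.pi * F_true * x) for x in inv_b]

# Plain DFT; peak bin k maps to frequency k / (n * d), exactly as in
# FFT-based SdH frequency extraction.
spectrum = [abs(sum(signal[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))) for k in range(1, n // 2)]
k_peak = spectrum.index(max(spectrum)) + 1
F_est = k_peak / (n * d)
print(round(F_est))  # 500
```

    The frequency resolution is set by the total 1/B span (here 1/0.08 = 12.5 T), which is why real measurements extend to high field.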

  14. Parallel multiple instance learning for extremely large histopathology image analysis.

    Science.gov (United States)

    Xu, Yan; Li, Yeshu; Shen, Zhengyang; Wu, Ziwei; Gao, Teng; Fan, Yubo; Lai, Maode; Chang, Eric I-Chao

    2017-08-03

    Histopathology images are critical for medical diagnosis, e.g., cancer and its treatment. A standard histopathology slice can be easily scanned at a high resolution of, say, 200,000×200,000 pixels. These high resolution images can make most existing imaging processing tools infeasible or less effective when operated on a single machine with limited memory, disk space and computing power. In this paper, we propose an algorithm tackling this new emerging "big data" problem utilizing parallel computing on High-Performance-Computing (HPC) clusters. Experimental results on a large-scale data set (1318 images at a scale of 10 billion pixels each) demonstrate the efficiency and effectiveness of the proposed algorithm for low-latency real-time applications. The framework proposed an effective and efficient system for extremely large histopathology image analysis. It is based on the multiple instance learning formulation for weakly-supervised learning for image classification, segmentation and clustering. When a max-margin concept is adopted for different clusters, we obtain further improvement in clustering performance.

  15. Extreme Wildlife Declines and Concurrent Increase in Livestock Numbers in Kenya: What Are the Causes?

    Directory of Open Access Journals (Sweden)

    Joseph O Ogutu

    Full Text Available There is growing evidence of escalating wildlife losses worldwide. Extreme wildlife losses have recently been documented for large parts of Africa, including western, Central and Eastern Africa. Here, we report extreme declines in wildlife and a contemporaneous increase in livestock numbers in Kenya rangelands between 1977 and 2016. Our analysis uses systematic aerial monitoring survey data collected in rangelands that collectively cover 88% of Kenya's land surface. Our results show that wildlife numbers declined on average by 68% between 1977 and 2016. The magnitude of decline varied among species but was most extreme (72-88%) and now severely threatens the population viability and persistence of warthog, lesser kudu, Thomson's gazelle, eland, oryx, topi, hartebeest, impala, Grevy's zebra and waterbuck in Kenya's rangelands. The declines were widespread and occurred in most of the 21 rangeland counties. Like wildlife, cattle numbers decreased (25.2%), but numbers of sheep and goats (76.3%), camels (13.1%) and donkeys (6.7%) evidently increased in the same period. As a result, livestock biomass was 8.1 times greater than that of wildlife in 2011-2013 compared to 3.5 times in 1977-1980. Most of Kenya's wildlife (ca. 30%) occurred in Narok County alone. The proportion of the total "national" wildlife population found in each county increased between 1977 and 2016 substantially only in Taita Taveta and Laikipia but marginally in Garissa and Wajir counties, largely reflecting greater wildlife losses elsewhere. The declines raise very grave concerns about the future of wildlife and the effectiveness of wildlife conservation policies, strategies and practices in Kenya. Causes of the wildlife declines include exponential human population growth, increasing livestock numbers, declining rainfall and a striking rise in temperatures, but the fundamental cause seems to be policy, institutional and market failures. Accordingly, we thoroughly evaluate wildlife

  16. Extreme Wildlife Declines and Concurrent Increase in Livestock Numbers in Kenya: What Are the Causes?

    Science.gov (United States)

    Ogutu, Joseph O.; Piepho, Hans-Peter; Said, Mohamed Y.; Ojwang, Gordon O.; Njino, Lucy W.; Kifugo, Shem C.; Wargute, Patrick W.

    2016-01-01

    There is growing evidence of escalating wildlife losses worldwide. Extreme wildlife losses have recently been documented for large parts of Africa, including western, Central and Eastern Africa. Here, we report extreme declines in wildlife and a contemporaneous increase in livestock numbers in Kenya rangelands between 1977 and 2016. Our analysis uses systematic aerial monitoring survey data collected in rangelands that collectively cover 88% of Kenya’s land surface. Our results show that wildlife numbers declined on average by 68% between 1977 and 2016. The magnitude of decline varied among species but was most extreme (72–88%) and now severely threatens the population viability and persistence of warthog, lesser kudu, Thomson’s gazelle, eland, oryx, topi, hartebeest, impala, Grevy’s zebra and waterbuck in Kenya’s rangelands. The declines were widespread and occurred in most of the 21 rangeland counties. Like wildlife, cattle numbers decreased (25.2%), but numbers of sheep and goats (76.3%), camels (13.1%) and donkeys (6.7%) evidently increased in the same period. As a result, livestock biomass was 8.1 times greater than that of wildlife in 2011–2013 compared to 3.5 times in 1977–1980. Most of Kenya’s wildlife (ca. 30%) occurred in Narok County alone. The proportion of the total “national” wildlife population found in each county increased between 1977 and 2016 substantially only in Taita Taveta and Laikipia but marginally in Garissa and Wajir counties, largely reflecting greater wildlife losses elsewhere. The declines raise very grave concerns about the future of wildlife and the effectiveness of wildlife conservation policies, strategies and practices in Kenya. Causes of the wildlife declines include exponential human population growth, increasing livestock numbers, declining rainfall and a striking rise in temperatures, but the fundamental cause seems to be policy, institutional and market failures. Accordingly, we thoroughly evaluate

  17. Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers

    Science.gov (United States)

    Balasubramaniam, R.; Subramanian, R. S.

    1996-01-01

    The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.
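    For reference, the dimensionless groups invoked here are conventionally defined in the thermocapillary-migration literature as follows (a conventional scaling, not quoted from the paper, which may use slightly different definitions):

```latex
v_0 = \frac{\left|\dfrac{d\sigma}{dT}\right|\,|\nabla T_\infty|\,R}{\mu},
\qquad
\mathrm{Re} = \frac{v_0 R}{\nu},
\qquad
\mathrm{Ma} = \frac{v_0 R}{\kappa} = \mathrm{Re}\,\mathrm{Pr},
```

    where $R$ is the bubble radius, $\mu$ and $\nu$ the dynamic and kinematic viscosities, $\kappa$ the thermal diffusivity, and $\nabla T_\infty$ the imposed temperature gradient. The large-Ma limit analyzed above corresponds to convection-dominated energy transfer around the bubble.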

  18. On a strong law of large numbers for monotone measures

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Ouyang, Y.

    2013-01-01

    Roč. 83, č. 4 (2013), s. 1213-1218 ISSN 0167-7152 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords : capacity * Choquet integral * strong law of large numbers Subject RIV: BA - General Mathematics Impact factor: 0.531, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-on a strong law of large numbers for monotone measures.pdf

  19. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  20. Magnetic and velocity fields in a dynamo operating at extremely small Ekman and magnetic Prandtl numbers

    Science.gov (United States)

    Šimkanin, Ján; Kyselica, Juraj

    2017-12-01

    Numerical simulations of the geodynamo are becoming more realistic because of advances in computer technology. Here, the geodynamo model is investigated numerically at extremely low Ekman and magnetic Prandtl numbers using the PARODY dynamo code. These parameters are more realistic than those used in previous numerical studies of the geodynamo. Our model is based on the Boussinesq approximation, and the temperature gradient between the upper and lower boundaries is the source of convection. This study attempts to answer the question of how realistic geodynamo models are. Numerical results show that our dynamo belongs to the strong-field dynamos. The generated magnetic field is dipolar and large-scale, while convection is small-scale and sheet-like flows (plumes) are preferred to columnar convection. The scales of the magnetic and velocity fields are separated, which enables hydromagnetic dynamos to maintain the magnetic field at low magnetic Prandtl numbers. The inner core rotation rate is lower than that in previous geodynamo models. On the other hand, the dimensional magnitudes of the velocity and magnetic fields, and those of the magnetic and viscous dissipation, are larger than those expected in the Earth's core, owing to the parameter range chosen.

  1. Spatial extreme value analysis to project extremes of large-scale indicators for severe weather.

    Science.gov (United States)

    Gilleland, Eric; Brown, Barbara G; Ammann, Caspar M

    2013-09-01

    Concurrently high values of the maximum potential wind speed of updrafts (Wmax) and 0-6 km wind shear (Shear) have been found to represent conducive environments for severe weather, which subsequently provides a way to study severe weather in future climates. Here, we employ a model for the product of these variables (WmSh) from the National Center for Atmospheric Research/United States National Center for Environmental Prediction reanalysis over North America conditioned on their having extreme energy in the spatial field in order to project the predominant spatial patterns of WmSh. The approach is based on the Heffernan and Tawn conditional extreme value model. Results suggest that this technique estimates the spatial behavior of WmSh well, which allows for exploring possible changes in the patterns over time. While the model enables a method for inferring the uncertainty in the patterns, such analysis is difficult with the currently available inference approach. A variation of the method is also explored to investigate how this type of model might be used to qualitatively understand how the spatial patterns of WmSh correspond to extreme river flow events. A case study for river flows from three rivers in northwestern Tennessee is studied, and it is found that advection of WmSh from the Gulf of Mexico prevails while elsewhere, WmSh is generally very low during such extreme events. © 2013 The Authors. Environmetrics published by John Wiley & Sons, Ltd.

  2. The large numbers hypothesis and a relativistic theory of gravitation

    International Nuclear Information System (INIS)

    Lau, Y.K.; Prokhovnik, S.J.

    1986-01-01

    A way to reconcile Dirac's large numbers hypothesis and Einstein's theory of gravitation was recently suggested by Lau (1985). It is characterized by the conjecture of a time-dependent cosmological term and gravitational term in Einstein's field equations. Motivated by this conjecture and the large numbers hypothesis, we formulate here a scalar-tensor theory in terms of an action principle. The cosmological term is required to be spatially dependent as well as time dependent in general. The theory developed is applied to a cosmological model compatible with the large numbers hypothesis. The time-dependent form of the cosmological term and the scalar potential are then deduced. A possible explanation of the smallness of the cosmological term is also given, and the possible significance of the scalar field is speculated upon.

  3. Fatal crashes involving large numbers of vehicles and weather.

    Science.gov (United States)

    Wang, Ying; Liang, Liming; Evans, Leonard

    2017-12-01

    Adverse weather has been recognized as a significant threat to traffic safety. However, relationships between fatal crashes involving large numbers of vehicles and weather are rarely studied, owing to the low occurrence of crashes involving large numbers of vehicles. By using all 1,513,792 fatal crashes in the Fatality Analysis Reporting System (FARS) data, 1975-2014, we successfully described these relationships. We found: (a) fatal crashes involving more than 35 vehicles are most likely to occur in snow or fog; (b) fatal crashes in rain are three times as likely to involve 10 or more vehicles as fatal crashes in good weather; (c) fatal crashes in snow [or fog] are 24 times [35 times] as likely to involve 10 or more vehicles as fatal crashes in good weather. If the example had used 20 vehicles, the risk ratios would be 6 for rain, 158 for snow, and 171 for fog. To reduce the risk of involvement in fatal crashes with large numbers of vehicles, drivers should slow down more than they currently do under adverse weather conditions. Driver deaths per fatal crash increase slowly with increasing numbers of involved vehicles when it is snowing or raining, but more steeply when clear or foggy. We conclude that in order to reduce the risk of involvement in crashes involving large numbers of vehicles, drivers must reduce speed in fog, and in snow or rain, reduce speed by even more than they already do. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
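    The risk ratios quoted above compare the share of multi-vehicle crashes among fatal crashes in one weather condition against the same share in good weather. A minimal sketch of that calculation with explicitly hypothetical counts (these numbers are illustrative, not FARS figures):

```python
def risk_ratio(multi_adverse, total_adverse, multi_clear, total_clear):
    """Ratio of the share of fatal crashes involving many vehicles under
    adverse weather to the same share in good weather."""
    return (multi_adverse / total_adverse) / (multi_clear / total_clear)

# Hypothetical counts, NOT figures from FARS: of 10,000 fatal crashes in
# snow, 24 involved 10+ vehicles; of 1,000,000 in good weather, 100 did.
rr_snow = risk_ratio(24, 10_000, 100, 1_000_000)
print(round(rr_snow, 6))  # 24.0
```

    A ratio of 24 would mean a fatal crash in snow is 24 times as likely to involve 10 or more vehicles as one in good weather, mirroring the form of the paper's comparisons.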

  4. Fake Superpotential for Large and Small Extremal Black Holes

    CERN Document Server

    Andrianopoli, L; Ferrara, S; Trigiante, M

    2010-01-01

    We consider the fist order, gradient-flow, description of the scalar fields coupled to spherically symmetric, asymptotically flat black holes in extended supergravities. Using the identification of the fake superpotential with Hamilton's characteristic function we clarify some of its general properties, showing in particular (besides reviewing the issue of its duality invariance) that W has the properties of a Liapunov's function, which implies that its extrema (associated with the horizon of extremal black holes) are asymptotically stable equilibrium points of the corresponding first order dynamical system (in the sense of Liapunov). Moreover, we show that the fake superpotential W has, along the entire radial flow, the same flat directions which exist at the attractor point. This allows to study properties of the ADM mass also for small black holes where in fact W has no critical points at finite distance in moduli space. In particular the W function for small non-BPS black holes can always be computed anal...

  5. Using GRACE Satellite Gravimetry for Assessing Large-Scale Hydrologic Extremes

    Directory of Open Access Journals (Sweden)

    Alexander Y. Sun

    2017-12-01

    Full Text Available Global assessment of the spatiotemporal variability in terrestrial total water storage anomalies (TWSA) in response to hydrologic extremes is critical for water resources management. Using TWSA derived from the Gravity Recovery and Climate Experiment (GRACE) satellites, this study systematically assessed the skill of the TWSA-climatology (TC) approach and the breakpoint (BP) detection method for identifying large-scale hydrologic extremes. The TC approach calculates standardized anomalies by using the mean and standard deviation of the GRACE TWSA corresponding to each month. In the BP detection method, the empirical mode decomposition (EMD) is first applied to identify the mean return period of TWSA extremes, and then a statistical procedure is used to identify the actual occurrence times of abrupt changes (i.e., BPs) in TWSA. Both detection methods were demonstrated on basin-averaged TWSA time series for the world’s 35 largest river basins. A nonlinear event coincidence analysis measure was applied to cross-examine abrupt changes detected by these methods with those detected by the Standardized Precipitation Index (SPI). Results show that our EMD-assisted BP procedure is a promising tool for identifying hydrologic extremes using GRACE TWSA data. Abrupt changes detected by the BP method coincide well with those of the SPI anomalies and with documented hydrologic extreme events. Event timings obtained by the TC method were ambiguous for a number of the river basins studied, probably because the GRACE data record is too short to derive a long-term climatology at this time. The BP approach demonstrates a robust wet-dry anomaly detection capability, which will be important for applications with the upcoming GRACE Follow-On mission.
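    The TC approach described above reduces to a per-calendar-month z-score: each value is standardized against the mean and standard deviation of all values from the same month. A minimal sketch on a toy series (synthetic numbers, not GRACE data; a real climatology needs many years per month):

```python
from statistics import mean, stdev

def monthly_standardized_anomalies(series):
    """TC-approach sketch: standardize each value against the mean and
    standard deviation of all values sharing its calendar month.
    `series` is a list of (month, value) pairs; each month needs >= 2 values."""
    by_month = {}
    for month, value in series:
        by_month.setdefault(month, []).append(value)
    stats = {m: (mean(v), stdev(v)) for m, v in by_month.items()}
    return [(value - stats[month][0]) / stats[month][1]
            for month, value in series]

# Toy storage-anomaly series (cm): two Januaries and two Julys.
data = [(1, 10.0), (7, -4.0), (1, 14.0), (7, -8.0)]
print([round(z, 3) for z in monthly_standardized_anomalies(data)])
# [-0.707, 0.707, 0.707, -0.707]
```

    Standardizing within each calendar month removes the seasonal cycle, so the resulting anomalies flag unusually wet or dry conditions for that time of year.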

  6. On Independence for Capacities with Law of Large Numbers

    OpenAIRE

    Huang, Weihuan

    2017-01-01

    This paper introduces new notions of Fubini independence and Exponential independence of random variables under capacities to fit Ellsberg's model, and establishes the relationships among Fubini independence, Exponential independence, Maccheroni and Marinacci's independence and Peng's independence. As an application, we give a weak law of large numbers for capacities under Exponential independence.

  7. Teaching Multiplication of Large Positive Whole Numbers Using ...

    African Journals Online (AJOL)

    This study investigated the teaching of multiplication of large positive whole numbers using the grating method and the effect of this method on students' performance in junior secondary schools. The study was conducted in Obio Akpor Local Government Area of Rivers State. It was quasi-experimental. Two research ...
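    The grating method (also known as lattice multiplication) fills a grid with one-digit products and then sums along the diagonals with carries. A sketch of that paper-and-pencil arithmetic in code, to make the procedure concrete:

```python
def grating_multiply(a, b):
    """Multiply two positive whole numbers by the grating (lattice) method:
    fill a grid with one-digit products, then sum the diagonals, carrying
    as in the paper-and-pencil procedure."""
    xs = [int(d) for d in str(a)]
    ys = [int(d) for d in str(b)]
    # Grid cell (i, j) holds xs[i] * ys[j]; cells on one diagonal share i + j.
    diagonals = [0] * (len(xs) + len(ys) - 1)
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            diagonals[i + j] += x * y
    # Resolve diagonal sums from the least significant end, carrying over.
    result, carry = 0, 0
    for place, total in enumerate(reversed(diagonals)):
        total += carry
        result += (total % 10) * 10 ** place
        carry = total // 10
    return result + carry * 10 ** len(diagonals)

print(grating_multiply(1234, 5678))  # 7006652
```

    Each diagonal collects the digit products that share a place value, which is exactly what the drawn lattice organizes visually for students.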

  8. Lovelock inflation and the number of large dimensions

    CERN Document Server

    Ferrer, Francesc

    2007-01-01

    We discuss an inflationary scenario based on Lovelock terms. These higher order curvature terms can lead to inflation when there are more than three spatial dimensions. Inflation will end if the extra dimensions are stabilised, so that at most three dimensions are free to expand. This relates graceful exit to the number of large dimensions.

  9. Identification of large-scale meteorological patterns associated with extreme precipitation in the US northeast

    Science.gov (United States)

    Agel, Laurie; Barlow, Mathew; Feldstein, Steven B.; Gutowski, William J.

    2018-03-01

    Patterns of daily large-scale circulation associated with Northeast US extreme precipitation are identified using both k-means clustering (KMC) and Self-Organizing Maps (SOM) applied to tropopause height. The tropopause height provides a compact representation of the upper-tropospheric potential vorticity, which is closely related to the overall evolution and intensity of weather systems. Extreme precipitation is defined as the top 1% of daily wet-day observations at 35 Northeast stations, 1979-2008. KMC is applied on extreme precipitation days only, while the SOM algorithm is applied to all days in order to place the extreme results into the overall context of patterns for all days. Six tropopause patterns are identified through KMC for extreme day precipitation: a summertime tropopause ridge, a summertime shallow trough/ridge, a summertime shallow eastern US trough, a deeper wintertime eastern US trough, and two versions of a deep cold-weather trough located across the east-central US. Thirty SOM patterns for all days are identified. Results for all days show that 6 SOM patterns account for almost half of the extreme days, although extreme precipitation occurs in all SOM patterns. The same SOM patterns associated with extreme precipitation also routinely produce non-extreme precipitation; however, on extreme precipitation days the troughs, on average, are deeper and the downstream ridges more pronounced. Analysis of other fields associated with the large-scale patterns show various degrees of anomalously strong moisture transport preceding, and upward motion during, extreme precipitation events.
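    The KMC step above is standard k-means: assign each day's field to its nearest centroid, then move each centroid to the mean of its members. A generic sketch on toy 2-D features (not the paper's tropopause-height fields; the first-k initialization is a simplification of the usual seeding):

```python
def kmeans(points, k, iters=50):
    """Minimal k-means: alternate nearest-centroid assignment and centroid
    update. Deterministic: the first k points serve as initial centroids."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        for i, members in enumerate(clusters):
            if members:  # leave an empty cluster's centroid in place
                centroids[i] = [sum(col) / len(members)
                                for col in zip(*members)]
    return centroids

# Two well-separated toy "circulation patterns" in a 2-D feature space.
pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
print([[round(c, 3) for c in cen] for cen in sorted(kmeans(pts, 2))])
# [[0.1, 0.1], [5.1, 5.1]]
```

    In the paper's setting each "point" is a gridded daily tropopause-height field flattened into a vector, and the returned centroids are the composite circulation patterns.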

  10. A large number of stepping motor network construction by PLC

    Science.gov (United States)

    Mei, Lin; Zhang, Kai; Hongqiang, Guo

    2017-11-01

    In a flexible automatic line, the equipment is complex and the control modes are flexible, so realizing orderly control and information interaction among a large number of stepping and servo motors becomes difficult. Based on an existing flexible production line, this paper makes a comparative study of its network strategy. Following this research, an Ethernet-plus-PROFIBUS communication configuration built on PROFINET IO and PROFIBUS is proposed, which can effectively improve the efficiency and stability of data interaction among the equipment.

  11. Fluid Mechanics of Aquatic Locomotion at Large Reynolds Numbers

    OpenAIRE

    Govardhan, RN; Arakeri, JH

    2011-01-01

    There exists a huge range of fish species, besides other aquatic organisms like squids and salps, that locomote in water at large Reynolds numbers, a regime of flow where inertial forces dominate viscous forces. In the present review, we discuss the fluid mechanics governing the locomotion of such organisms. Most fishes propel themselves by periodic undulatory motions of the body and tail, and the typical classification of their swimming modes is based on the fraction of their body...

  12. Rotating thermal convection at very large Rayleigh numbers

    Science.gov (United States)

    Weiss, Stephan; van Gils, Dennis; Ahlers, Guenter; Bodenschatz, Eberhard

    2016-11-01

    The large-scale thermal convection systems in geo- and astrophysics are usually influenced by Coriolis forces caused by the rotation of their celestial bodies. To better understand the influence of rotation on the convective flow field and the heat transport under these conditions, we study Rayleigh-Bénard convection, using pressurized sulfur hexafluoride (SF6) at up to 19 bars in a cylinder of diameter D = 1.12 m and height L = 2.24 m. The gas is heated from below and cooled from above, and the convection cell sits on a rotating table inside a large pressure vessel (the "Uboot of Göttingen"). With this setup, Rayleigh numbers of up to Ra = 10^15 can be reached, while Ekman numbers as low as Ek = 10^-8 are possible. The Prandtl number in these experiments is kept constant at Pr = 0.8. We report on heat flux measurements (expressed by the Nusselt number Nu) as well as measurements from more than 150 temperature probes inside the flow. We thank the Deutsche Forschungsgemeinschaft (DFG) for financial support through SFB963: "Astrophysical Flow Instabilities and Turbulence". The work of GA was supported in part by the US National Science Foundation through Grant DMR11-58514.
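    The control parameters quoted above follow the standard definitions Ra = gαΔT L³/(νκ), Ek = ν/(2ΩL²) and Pr = ν/κ. A small sketch evaluating them (the property values below are illustrative only, not the actual pressurized-SF6 cell parameters):

```python
def rayleigh(g, alpha, dT, L, nu, kappa):
    """Rayleigh number Ra = g*alpha*dT*L**3 / (nu*kappa): thermal driving
    versus viscous and thermal diffusion."""
    return g * alpha * dT * L ** 3 / (nu * kappa)

def ekman(nu, omega, L):
    """Ekman number Ek = nu / (2*Omega*L**2): viscous versus Coriolis forces."""
    return nu / (2 * omega * L ** 2)

def prandtl(nu, kappa):
    """Prandtl number Pr = nu / kappa: momentum versus thermal diffusivity."""
    return nu / kappa

# Illustrative property values only -- not the experiment's SF6 data.
Ra = rayleigh(g=9.81, alpha=3e-3, dT=10.0, L=2.0, nu=1e-7, kappa=1e-7)
Ek = ekman(nu=1e-7, omega=0.1, L=2.0)
Pr = prandtl(1e-7, 1e-7)
print(f"Ra = {Ra:.2e}, Ek = {Ek:.2e}, Pr = {Pr:.1f}")
```

    Pressurizing the gas lowers the kinematic viscosity and thermal diffusivity, which is how such extreme Ra and Ek values become reachable in a cell of fixed size.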

  13. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    International Nuclear Information System (INIS)

    Ruebel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat

    2008-01-01

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system

  14. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat,

    2008-08-22

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.

  15. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers, which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of the memory resources for models with more than 1000 electrons because of the need to store a large amount of electronic-orbital-related data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data done with MPI or Unix inter-process communication tools, (2) second-level parallelism for configuration computation.

  16. [Dual process in large number estimation under uncertainty].

    Science.gov (United States)

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  17. The large number hypothesis and Einstein's theory of gravitation

    International Nuclear Information System (INIS)

    Yun-Kau Lau

    1985-01-01

    In an attempt to reconcile the large number hypothesis (LNH) with Einstein's theory of gravitation, a tentative generalization of Einstein's field equations with time-dependent cosmological and gravitational constants is proposed. A cosmological model consistent with the LNH is deduced. The coupling formula of the cosmological constant with matter is found, and as a consequence, the time-dependent formulae of the cosmological constant and the mean matter density of the Universe at the present epoch are then found. Einstein's theory of gravitation, whether with a zero or nonzero cosmological constant, becomes a limiting case of the new generalized field equations after the early epoch

  18. Combining large number of weak biomarkers based on AUC.

    Science.gov (United States)

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
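The AUC objective that these combination methods maximize has a simple empirical form: for a linear score w·x it equals the fraction of (case, control) pairs ranked correctly, i.e. the Mann–Whitney statistic. A minimal sketch with simulated markers (illustrative only; this is the objective, not the paper's pairwise method):

```python
import numpy as np

# Minimal sketch (not the paper's code): the empirical AUC of a linear score
# w·x is the fraction of (case, control) pairs the score ranks correctly,
# with ties counted half (the Mann-Whitney statistic).

def auc_of_combination(w, x_cases, x_controls):
    s_case = x_cases @ w      # scores for diseased subjects
    s_ctrl = x_controls @ w   # scores for healthy subjects
    diff = s_case[:, None] - s_ctrl[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

rng = np.random.default_rng(1)
# Two hypothetical weak markers: a small mean shift between groups.
x_cases = rng.normal(0.3, 1.0, size=(200, 2))
x_controls = rng.normal(0.0, 1.0, size=(200, 2))
auc = auc_of_combination(np.array([0.5, 0.5]), x_cases, x_controls)
print(auc)
```

Maximizing this quantity over w is what makes the problem hard when markers are numerous and weak: the objective is a non-smooth function of w built from O(n²) pairwise comparisons.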

  19. Extreme value prediction of the wave-induced vertical bending moment in large container ships

    DEFF Research Database (Denmark)

    Andersen, Ingrid Marie Vincent; Jensen, Jørgen Juncher

    2015-01-01

increase the extreme hull girder response significantly. Focus in the present paper is on the influence of the hull girder flexibility on the extreme response amidships, namely the wave-induced vertical bending moment (VBM) in hogging, and the prediction of the extreme value of the same. The analysis in the present paper is based on time series of full scale measurements from three large container ships of 8600, 9400 and 14000 TEU. When carrying out the extreme value estimation the peak-over-threshold (POT) method combined with an appropriate extreme value distribution is applied. The choice of a proper threshold level as well as the statistical correlation between clustered peaks influence the extreme value prediction and are taken into consideration in the present paper.
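The peak-over-threshold step, including the declustering of correlated peaks the abstract mentions, can be sketched as follows. This is a minimal illustration under assumed inputs (a toy series standing in for the measured VBM signal), not the paper's implementation; a full analysis would go on to fit an extreme value distribution to the retained peaks.

```python
# Minimal POT sketch (not the paper's implementation): keep one maximum per
# cluster of threshold exceedances, so statistically correlated clustered
# peaks are not counted as independent extremes.

def pot_cluster_peaks(series, threshold, min_gap):
    """Return indices of cluster maxima among exceedances of `threshold`.

    Exceedances closer than `min_gap` samples are treated as one cluster.
    """
    peaks, cluster = [], []
    for i, v in enumerate(series):
        if v > threshold:
            if cluster and i - cluster[-1][0] >= min_gap:
                peaks.append(max(cluster, key=lambda p: p[1])[0])
                cluster = []
            cluster.append((i, v))
    if cluster:
        peaks.append(max(cluster, key=lambda p: p[1])[0])
    return peaks

# Hypothetical series with two bursts of exceedances above threshold 4:
series = [0, 1, 5, 7, 6, 0, 0, 0, 0, 8, 9, 2, 0]
print(pot_cluster_peaks(series, threshold=4, min_gap=3))  # → [3, 10]
```

The threshold and the minimum cluster gap are exactly the tuning choices the abstract says influence the extreme value prediction.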

  20. Quasi-isodynamic configuration with large number of periods

    International Nuclear Information System (INIS)

    Shafranov, V.D.; Isaev, M.Yu.; Mikhailov, M.I.; Subbotin, A.A.; Cooper, W.A.; Kalyuzhnyj, V.N.; Kasilov, S.V.; Nemov, V.V.; Kernbichler, W.; Nuehrenberg, C.; Nuehrenberg, J.; Zille, R.

    2005-01-01

It has been previously reported that quasi-isodynamic (qi) stellarators with poloidal direction of the contours of B on a magnetic surface can exhibit very good fast-particle collisionless confinement. In addition, approaching the quasi-isodynamicity condition leads to diminished neoclassical transport and small bootstrap current. The calculations of local-mode stability show that there is a tendency toward an increasing beta limit with increasing number of periods. The consideration of the quasi-helically symmetric systems has demonstrated that with increasing aspect ratio (and number of periods) the optimized configuration approaches the straight symmetric counterpart, for which the optimal parameters and highest beta values were found by optimization of the boundary magnetic surface cross-section. The qi system considered here with zero net toroidal current does not have a symmetric analogue in the limit of large aspect ratio and finite rotational transform. Thus, it is not clear whether some invariant structure of the configuration period exists in the limit of negligible toroidal effect and what the best possible parameters for it are. In the present paper the results of an optimization of the configuration with N = 12 periods are presented. Such properties as fast-particle confinement, effective ripple, structural factor of bootstrap current and MHD stability are considered. It is shown that the MHD stability limit here is higher than in the configurations with smaller numbers of periods considered earlier. Nevertheless, the toroidal effect in this configuration is still significant, so that a simple increase of the number of periods and proportional growth of aspect ratio do not conserve favourable neoclassical transport and ideal local-mode stability properties. (author)

  1. Automatic trajectory measurement of large numbers of crowded objects

    Science.gov (United States)

    Li, Hui; Liu, Ye; Chen, Yan Qiu

    2013-06-01

Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare, mainly due to the challenges of detecting and tracking large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain the optimal segmentation results. For tracking, the cost matrix for assignment between consecutive frames is learned via a random forest classifier using spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
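The frame-to-frame linking step reduces to a linear assignment problem over a cost matrix. A toy sketch (brute force over permutations, with plain Euclidean distance standing in for the learned random-forest cost; the positions are invented, and equal detection counts in both frames are assumed):

```python
from itertools import permutations

# Toy sketch of the linking step (not the paper's trainable cost): assign
# detections in frame t to detections in frame t+1 by minimizing total cost,
# here plain Euclidean distance. Real trackers solve the same linear
# assignment problem with a Hungarian/LAP solver at scale.

def link_frames(pts_a, pts_b):
    """Brute-force minimum-cost assignment (fine for tiny examples only)."""
    def cost(i, j):
        (xa, ya), (xb, yb) = pts_a[i], pts_b[j]
        return ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
    best = min(permutations(range(len(pts_b))),
               key=lambda perm: sum(cost(i, j) for i, j in enumerate(perm)))
    return list(enumerate(best))

frame_t  = [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)]
frame_t1 = [(5.2, 5.1), (0.3, 0.1), (8.8, 1.4)]  # same objects, shuffled order
print(link_frames(frame_t, frame_t1))  # → [(0, 1), (1, 0), (2, 2)]
```

Replacing the distance with a learned classifier score changes only the cost function; the assignment machinery is unchanged.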

  2. Extremal values on Zagreb indices of trees with given distance k-domination number.

    Science.gov (United States)

    Pei, Lidan; Pan, Xiangfeng

    2018-01-01

Let [Formula: see text] be a graph. A set [Formula: see text] is a distance k-dominating set of G if for every vertex [Formula: see text], [Formula: see text] for some vertex [Formula: see text], where k is a positive integer. The distance k-domination number [Formula: see text] of G is the minimum cardinality among all distance k-dominating sets of G. The first Zagreb index of G is defined as [Formula: see text] and the second Zagreb index of G is [Formula: see text]. In this paper, we obtain the upper bounds for the Zagreb indices of n-vertex trees with given distance k-domination number and characterize the extremal trees, which generalizes the results of Borovićanin and Furtula (Appl. Math. Comput. 276:208-218, 2016). It is worth mentioning that, for an n-vertex tree T, a sharp upper bound on the distance k-domination number [Formula: see text] is also determined.
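The two Zagreb indices, M1(G) = Σ_v deg(v)² and M2(G) = Σ_{uv∈E} deg(u)·deg(v), are straightforward to compute for a concrete tree, which helps make the bounds tangible. A small sketch under an edge-list representation (illustrative only):

```python
# Minimal sketch of the two indices discussed above:
#   M1(G) = sum over vertices v of deg(v)^2
#   M2(G) = sum over edges uv of deg(u) * deg(v)
# computed here for a small tree given as an edge list.

def zagreb_indices(n_vertices, edges):
    deg = [0] * n_vertices
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    m1 = sum(d * d for d in deg)
    m2 = sum(deg[u] * deg[v] for u, v in edges)
    return m1, m2

# Path on 4 vertices (a tree): degrees are 1, 2, 2, 1.
print(zagreb_indices(4, [(0, 1), (1, 2), (2, 3)]))  # → (10, 8)
```

For comparison, the star K1,3 (degrees 3, 1, 1, 1) gives M1 = 12 and M2 = 9, already hinting at how degree concentration drives both indices up.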

  3. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number.

    Science.gov (United States)

    Klewicki, J C; Chini, G P; Gibson, J F

    2017-03-13

Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  4. The large numbers hypothesis and the Einstein theory of gravitation

    International Nuclear Information System (INIS)

    Dirac, P.A.M.

    1979-01-01

    A study of the relations between large dimensionless numbers leads to the belief that G, expressed in atomic units, varies with the epoch while the Einstein theory requires G to be constant. These two requirements can be reconciled by supposing that the Einstein theory applies with a metric that differs from the atomic metric. The theory can be developed with conservation of mass by supposing that the continual increase in the mass of the observable universe arises from a continual slowing down of the velocity of recession of the galaxies. This leads to a model of the Universe that was first proposed by Einstein and de Sitter (the E.S. model). The observations of the microwave radiation fit in with this model. The static Schwarzchild metric has to be modified to fit in with the E.S. model for large r. The modification is worked out, and also the motion of planets with the new metric. It is found that there is a difference between ephemeris time and atomic time, and also that there should be an inward spiralling of the planets, referred to atomic units, superposed on the motion given by ordinary gravitational theory. These are effects that can be checked by observation, but there is no conclusive evidence up to the present. (author)
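The dimensionless coincidences motivating the large numbers hypothesis can be stated compactly. The following is the standard textbook form (orders of magnitude only), not taken from the paper itself:

```latex
% Ratio of electric to gravitational attraction between a proton and an electron:
\frac{e^2}{4\pi\varepsilon_0\, G\, m_p m_e} \;\sim\; 10^{39},
% Age of the Universe measured in an atomic unit of time (r_e / c):
\frac{t_0}{e^2 / (4\pi\varepsilon_0\, m_e c^3)} \;\sim\; 10^{39}.
% Dirac's hypothesis is that such equalities hold at every epoch, so that in
% atomic units G \propto 1/t, whereas the Einstein theory requires G = const;
% the paper reconciles the two by using distinct gravitational and atomic metrics.
```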

  5. A Characterization of Hypergraphs with Large Domination Number

    Directory of Open Access Journals (Sweden)

    Henning Michael A.

    2016-05-01

Let H = (V, E) be a hypergraph with vertex set V and edge set E. A dominating set in H is a subset of vertices D ⊆ V such that for every vertex v ∈ V \ D there exists an edge e ∈ E for which v ∈ e and e ∩ D ≠ ∅. The domination number γ(H) is the minimum cardinality of a dominating set in H. It is known [Cs. Bujtás, M.A. Henning and Zs. Tuza, Transversals and domination in uniform hypergraphs, European J. Combin. 33 (2012) 62-71] that for k ≥ 5, if H is a hypergraph of order n and size m with all edges of size at least k and with no isolated vertex, then γ(H) ≤ (n + ⌊(k − 3)/2⌋m)/⌊3(k − 1)/2⌋. In this paper, we apply a recent result of the authors on hypergraphs with large transversal number [M.A. Henning and C. Löwenstein, A characterization of hypergraphs that achieve equality in the Chvátal-McDiarmid Theorem, Discrete Math. 323 (2014) 69-75] to characterize the hypergraphs achieving equality in this bound.
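The domination number defined above can be computed by brute force on tiny instances, which makes the definition concrete. An exponential-time sketch for illustration only (the hypergraph is invented):

```python
from itertools import combinations

# Brute-force check of the definition above (illustration only, exponential):
# D ⊆ V dominates H if every vertex outside D lies in some edge meeting D.

def domination_number(vertices, edges):
    for size in range(len(vertices) + 1):
        for d in combinations(vertices, size):
            ds = set(d)
            if all(any(v in e and ds & e for e in edges)
                   for v in vertices if v not in ds):
                return size
    return len(vertices)

# Hypothetical hypergraph: V = {1..6}, three 3-element edges sharing vertex 1.
V = [1, 2, 3, 4, 5, 6]
E = [{1, 2, 3}, {1, 4, 5}, {1, 5, 6}]
print(domination_number(V, E))  # → 1 ({1} meets every edge, and edges cover V)
```

The characterization results in the paper concern exactly the hypergraphs for which no smaller dominating set than the stated bound exists.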

  6. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  7. Particle creation and Dirac's large number hypothesis; and Reply

    International Nuclear Information System (INIS)

    Canuto, V.; Adams, P.J.; Hsieh, S.H.; Tsiang, E.; Steigman, G.

    1976-01-01

    The claim made by Steigman (Nature; 261:479 (1976)), that the creation of matter as postulated by Dirac (Proc. R. Soc.; A338:439 (1974)) is unnecessary, is here shown to be incorrect. It is stated that Steigman's claim that Dirac's large Number Hypothesis (LNH) does not require particle creation is wrong because he has assumed that which he was seeking to prove, that is that rho does not contain matter creation. Steigman's claim that Dirac's LNH leads to nonsensical results in the very early Universe is superficially correct, but this only supports Dirac's contention that the LNH may not be valid in the very early Universe. In a reply Steigman points out that in Dirac's original cosmology R approximately tsup(1/3) and using this model the results and conclusions of the present author's paper do apply but using a variation chosen by Canuto et al (T approximately t) Dirac's LNH cannot apply. Additionally it is observed that a cosmological theory which only predicts the present epoch is of questionable value. (U.K.)

  8. A modified large number theory with constant G

    Science.gov (United States)

    Recami, Erasmo

    1983-03-01

The inspiring “numerology” uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the “gravitational world” (cosmos) with the “strong world” (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the “Large Number Theory,” cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the cosmos typical length R̄ to the hadron typical length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle—according to the “cyclical big-bang” hypothesis—then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavšič.

  9. Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR

    Science.gov (United States)

    Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.

    2017-12-01

    Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional- to local-scales. While global climate models are generally capable of simulating mean climate at global-to-regional scales with reasonable skill, resiliency and adaptation decisions are made at local-scales where most state-of-the-art climate models are limited by coarse resolution. Characterization of large-scale meteorological patterns associated with extreme precipitation events at local-scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research will use synoptic climatology as a tool by which to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of meteorological patterns associated with extreme precipitation days, and associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of the large-scale meteorological patterns associated with extreme precipitation events, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluation of the ability of climate models to capture key patterns associated with extreme precipitation over Portland and to better interpret projections of future climate at impact-relevant scales.

  10. Extreme temperatures and out-of-hospital coronary deaths in six large Chinese cities.

    Science.gov (United States)

    Chen, Renjie; Li, Tiantian; Cai, Jing; Yan, Meilin; Zhao, Zhuohui; Kan, Haidong

    2014-12-01

The seasonal trend of out-of-hospital coronary death (OHCD) and sudden cardiac death has been observed, but whether extreme temperature serves as a risk factor is rarely investigated. We therefore aimed to evaluate the impact of extreme temperatures on OHCDs in China. We obtained death records of 126,925 OHCDs from six large Chinese cities (Harbin, Beijing, Tianjin, Nanjing, Shanghai and Guangzhou) during the period 2009-2011. The short-term associations between extreme temperature and OHCDs were analysed with time-series methods in each city, using generalised additive Poisson regression models. We specified distributed lag non-linear models in studying the delayed effects of extreme temperature. We then applied Bayesian hierarchical models to combine the city-specific effect estimates. The associations between extreme temperature and OHCDs were almost U-shaped or J-shaped. The pooled relative risks (RRs) of extreme cold temperatures over the lags 0-14 days comparing the 1st and 25th centile temperatures were 1.49 (95% posterior interval (PI) 1.26-1.76); the pooled RRs of extreme hot temperatures comparing the 99th and 75th centile temperatures were 1.53 (95% PI 1.27-1.84) for OHCDs. The RRs of extreme temperature on OHCD were higher if the patients with coronary heart disease were old, male and less educated. This multicity epidemiological study suggested that both extreme cold and hot temperatures posed significant risks on OHCDs, and might have important public health implications for the prevention of OHCD or sudden cardiac death. Published by the BMJ Publishing Group Limited.

  11. The large lungs of elite swimmers: an increased alveolar number?

    Science.gov (United States)

    Armour, J; Donnelly, P M; Bye, P T

    1993-02-01

    In order to obtain further insight into the mechanisms relating to the large lung volumes of swimmers, tests of mechanical lung function, including lung distensibility (K) and elastic recoil, pulmonary diffusion capacity, and respiratory mouth pressures, together with anthropometric data (height, weight, body surface area, chest width, depth and surface area), were compared in eight elite male swimmers, eight elite male long distance athletes and eight control subjects. The differences in training profiles of each group were also examined. There was no significant difference in height between the subjects, but the swimmers were younger than both the runners and controls, and both the swimmers and controls were heavier than the runners. Of all the training variables, only the mean total distance in kilometers covered per week was significantly greater in the runners. Whether based on: (a) adolescent predicted values; or (b) adult male predicted values, swimmers had significantly increased total lung capacity ((a) 145 +/- 22%, (mean +/- SD) (b) 128 +/- 15%); vital capacity ((a) 146 +/- 24%, (b) 124 +/- 15%); and inspiratory capacity ((a) 155 +/- 33%, (b) 138 +/- 29%), but this was not found in the other two groups. Swimmers also had the largest chest surface area and chest width. Forced expiratory volume in one second (FEV1) was largest in the swimmers ((b) 122 +/- 17%) and FEV1 as a percentage of forced vital capacity (FEV1/FVC)% was similar for the three groups. Pulmonary diffusing capacity (DLCO) was also highest in the swimmers (117 +/- 18%). All of the other indices of lung function, including pulmonary distensibility (K), elastic recoil and diffusion coefficient (KCO), were similar. These findings suggest that swimmers may have achieved greater lung volumes than either runners or control subjects, not because of greater inspiratory muscle strength, or differences in height, fat free mass, alveolar distensibility, age at start of training or sternal length or

  12. A large, benign prostatic cyst presented with an extremely high serum prostate-specific antigen level.

    Science.gov (United States)

    Chen, Han-Kuang; Pemberton, Richard

    2016-01-08

    We report a case of a patient who presented with an extremely high serum prostate specific antigen (PSA) level and underwent radical prostatectomy for presumed prostate cancer. Surprisingly, the whole mount prostatectomy specimen showed only small volume, organ-confined prostate adenocarcinoma and a large, benign intraprostatic cyst, which was thought to be responsible for the PSA elevation. 2016 BMJ Publishing Group Ltd.

  13. MAD about the Large Magellanic Cloud Preparing for the era of Extremely Large Telescopes

    NARCIS (Netherlands)

    Fiorentino, G.; Tolstoy, E.; Diolaiti, E.; Valenti, E.; Cignoni, M.; Mackey, A. D.

We present J, H, K-s photometry from the Multi conjugate Adaptive optics Demonstrator (MAD), a visitor instrument at the VLT, of a resolved stellar population in a small crowded field in the bar of the Large Magellanic Cloud near the globular cluster NGC 1928. In a total exposure time of 6, 36

  14. A NICE approach to managing large numbers of desktop PC's

    International Nuclear Information System (INIS)

    Foster, David

    1996-01-01

The problems of managing desktop systems are far from resolved as we deploy increasing numbers of systems: PCs, Macintoshes and UN*X workstations. This paper will concentrate on the solution adopted at CERN for the management of the rapidly increasing numbers of desktop PCs in use in all parts of the laboratory. (author)

  15. The Ramsey numbers of large cycles versus small wheels

    NARCIS (Netherlands)

    Surahmat,; Baskoro, E.T.; Broersma, H.J.

    2004-01-01

    For two given graphs G and H, the Ramsey number R(G;H) is the smallest positive integer N such that for every graph F of order N the following holds: either F contains G as a subgraph or the complement of F contains H as a subgraph. In this paper, we determine the Ramsey number R(Cn;Wm) for m = 4

  16. Contribution of large-scale circulation anomalies to changes in extreme precipitation frequency in the United States

    International Nuclear Information System (INIS)

    Yu, Lejiang; Zhong, Shiyuan; Pei, Lisi; Bian, Xindi; Heilman, Warren E

    2016-01-01

    The mean global climate has warmed as a result of the increasing emission of greenhouse gases induced by human activities. This warming is considered the main reason for the increasing number of extreme precipitation events in the US. While much attention has been given to extreme precipitation events occurring over several days, which are usually responsible for severe flooding over a large region, little is known about how extreme precipitation events that cause flash flooding and occur at sub-daily time scales have changed over time. Here we use the observed hourly precipitation from the North American Land Data Assimilation System Phase 2 forcing datasets to determine trends in the frequency of extreme precipitation events of short (1 h, 3 h, 6 h, 12 h and 24 h) duration for the period 1979–2013. The results indicate an increasing trend in the central and eastern US. Over most of the western US, especially the Southwest and the Intermountain West, the trends are generally negative. These trends can be largely explained by the interdecadal variability of the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation (AMO), with the AMO making a greater contribution to the trends in both warm and cold seasons. (letter)

  17. An Efficient Pipeline Wavefront Phase Recovery for the CAFADIS Camera for Extremely Large Telescopes

    Directory of Open Access Journals (Sweden)

    Eduardo Magdaleno

    2009-12-01

In this paper we show a fast, specialized hardware implementation of the wavefront phase recovery algorithm using the CAFADIS camera. The CAFADIS camera is a new plenoptic sensor patented by the Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can simultaneously measure the wavefront phase and the distance to the light source in a real-time process. The pipeline algorithm is implemented using Field Programmable Gate Arrays (FPGA). These devices present an architecture capable of handling the sensor output stream using a massively parallel approach, and they are efficient enough to resolve several Adaptive Optics (AO) problems in Extremely Large Telescopes (ELTs) in terms of processing time requirements. The FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera is based on the very fast computation of two-dimensional fast Fourier transforms (FFTs). Thus we have carried out a comparison between our FPGA 2D-FFT and other implementations.
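The core numerical operation, least-squares recovery of the wavefront phase from measured slopes via 2D FFTs, has a compact software analogue. The numpy sketch below is illustrative only (it assumes periodic boundaries and an invented band-limited test phase), not the FPGA pipeline:

```python
import numpy as np

# Illustrative numpy analogue (not the FPGA pipeline): least-squares wavefront
# phase recovery from slope measurements via 2D FFTs. Assumes periodic
# boundaries; the grid and test phase are hypothetical.

def recover_phase(gx, gy):
    n = gx.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="xy")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                      # avoid 0/0; piston is unobservable
    phi_hat = -1j * (kx * np.fft.fft2(gx) + ky * np.fft.fft2(gy)) / k2
    phi_hat[0, 0] = 0.0                 # set the mean (piston) to zero
    return np.fft.ifft2(phi_hat).real

n = 64
x = np.arange(n)
X, Y = np.meshgrid(x, x, indexing="xy")
phase = np.sin(2 * np.pi * X / n) + np.cos(4 * np.pi * Y / n)
# Analytic slopes of the test phase:
gx = (2 * np.pi / n) * np.cos(2 * np.pi * X / n)
gy = -(4 * np.pi / n) * np.sin(4 * np.pi * Y / n)
rec = recover_phase(gx, gy)
print(np.max(np.abs((rec - rec.mean()) - (phase - phase.mean()))))  # ~ machine precision
```

Because the whole reconstruction is two forward FFTs, a pointwise division and one inverse FFT, its cost is dominated by the 2D FFT, which is why the abstract's comparison focuses on that kernel.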

  18. United States Temperature and Precipitation Extremes: Phenomenology, Large-Scale Organization, Physical Mechanisms and Model Representation

    Science.gov (United States)

    Black, R. X.

    2017-12-01

    We summarize results from a project focusing on regional temperature and precipitation extremes over the continental United States. Our project introduces a new framework for evaluating these extremes emphasizing their (a) large-scale organization, (b) underlying physical sources (including remote-excitation and scale-interaction) and (c) representation in climate models. Results to be reported include the synoptic-dynamic behavior, seasonality and secular variability of cold waves, dry spells and heavy rainfall events in the observational record. We also study how the characteristics of such extremes are systematically related to Northern Hemisphere planetary wave structures and thus planetary- and hemispheric-scale forcing (e.g., those associated with major El Nino events and Arctic sea ice change). The underlying physics of event onset are diagnostically quantified for different categories of events. Finally, the representation of these extremes in historical coupled climate model simulations is studied and the origins of model biases are traced using new metrics designed to assess the large-scale atmospheric forcing of local extremes.

  19. Large-solid-angle illuminators for extreme ultraviolet lithography with laser plasmas

    International Nuclear Information System (INIS)

    Kubiak, G.D.; Tichenor, D.A.; Sweatt, W.C.; Chow, W.W.

    1995-06-01

    Laser Plasma Sources (LPSS) of extreme ultraviolet radiation are an attractive alternative to synchrotron radiation sources for extreme ultraviolet lithography (EUVL) due to their modularity, brightness, and modest size and cost. To fully exploit the extreme ultraviolet power emitted by such sources, it is necessary to capture the largest possible fraction of the source emission half-sphere while simultaneously optimizing the illumination stationarity and uniformity on the object mask. In this LDRD project, laser plasma source illumination systems for EUVL have been designed and then theoretically and experimentally characterized. Ellipsoidal condensers have been found to be simple yet extremely efficient condensers for small-field EUVL imaging systems. The effects of aberrations in such condensers on extreme ultraviolet (EUV) imaging have been studied with physical optics modeling. Lastly, the design of an efficient large-solid-angle condenser has been completed. It collects 50% of the available laser plasma source power at 14 nm and delivers it properly to the object mask in a wide-arc-field camera

  20. Turbulent flows at very large Reynolds numbers: new lessons learned

    International Nuclear Information System (INIS)

    Barenblatt, G I; Prostokishin, V M; Chorin, A J

    2014-01-01

The universal (Reynolds-number-independent) von Kármán–Prandtl logarithmic law for the velocity distribution in the basic intermediate region of a turbulent shear flow is generally considered to be one of the fundamental laws of engineering science and is taught universally in fluid mechanics and hydraulics courses. We show here that this law is based on an assumption that cannot be considered to be correct and which does not correspond to experiment. Nor is Landau's derivation of this law quite correct. In this paper, an alternative scaling law explicitly incorporating the influence of the Reynolds number is discussed, as is the corresponding drag law. The study uses the concept of intermediate asymptotics and that of incomplete similarity in the similarity parameter. Yakov Borisovich Zeldovich played an outstanding role in the development of these ideas. This work is a tribute to his glowing memory. (100th anniversary of the birth of Ya. B. Zeldovich)
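The two competing descriptions discussed in the abstract can be written side by side. The power-law constants below follow the form published elsewhere by Barenblatt and Chorin and should be checked against the paper itself:

```latex
% Classical universal log law (\kappa \approx 0.41, B \approx 5.1 in the literature):
u^+ = \frac{1}{\kappa}\,\ln y^+ + B,
% versus the Reynolds-number-dependent scaling law advocated by the authors
% (Barenblatt--Chorin form):
u^+ = \left(\frac{\ln \mathrm{Re}}{\sqrt{3}} + \frac{5}{2}\right)
      \left(y^+\right)^{3/(2\ln \mathrm{Re})}.
% Both the prefactor and the exponent retain an explicit Re dependence
% (incomplete similarity) instead of becoming universal as Re -> infinity.
```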

  1. Chaotic scattering: the supersymmetry method for large number of channels

    International Nuclear Information System (INIS)

    Lehmann, N.; Saher, D.; Sokolov, V.V.; Sommers, H.J.

    1995-01-01

    We investigate a model of chaotic resonance scattering based on the random matrix approach. The Hermitian part of the effective Hamiltonian of resonance states is taken from the GOE, whereas the amplitudes of coupling to decay channels are considered either random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane, as well as the mean value and two-point correlation function of its elements, when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough, and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g, which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))

  2. Chaotic scattering: the supersymmetry method for large number of channels

    Energy Technology Data Exchange (ETDEWEB)

    Lehmann, N. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Saher, D. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Sokolov, V.V. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Sommers, H.J. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik)

    1995-01-23

    We investigate a model of chaotic resonance scattering based on the random matrix approach. The Hermitian part of the effective Hamiltonian of resonance states is taken from the GOE, whereas the amplitudes of coupling to decay channels are considered either random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane, as well as the mean value and two-point correlation function of its elements, when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough, and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g, which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))

  3. Gentile statistics with a large maximum occupation number

    International Nuclear Information System (INIS)

    Dai Wusheng; Xie Mi

    2004-01-01

    In Gentile statistics the maximum occupation number n can take on unrestricted integers, 1 < n < ∞. We show that for n > 1 the Bose-Einstein case is not recovered from Gentile statistics as n goes to N. Attention is also concentrated on the contribution of the ground state, which was ignored in the related literature. The thermodynamic behavior of a ν-dimensional Gentile ideal gas of particles with dispersion E = p^s/2m, where ν and s are arbitrary, is analyzed in detail. Moreover, we provide an alternative derivation of the partition function for Gentile statistics

  4. Extremely large and significantly anisotropic magnetoresistance in ZrSiS single crystals

    Energy Technology Data Exchange (ETDEWEB)

    Lv, Yang-Yang; Zhang, Bin-Bin; Yao, Shu-Hua, E-mail: shyao@nju.edu.cn, E-mail: ybchen@nju.edu.cn, E-mail: zhoujian@nju.edu.cn; Zhou, Jian, E-mail: shyao@nju.edu.cn, E-mail: ybchen@nju.edu.cn, E-mail: zhoujian@nju.edu.cn; Zhang, Shan-Tao; Lu, Ming-Hui [National Laboratory of Solid State Microstructures and Department of Materials Science and Engineering, Nanjing University, Nanjing 210093 (China); Li, Xiao; Chen, Y. B., E-mail: shyao@nju.edu.cn, E-mail: ybchen@nju.edu.cn, E-mail: zhoujian@nju.edu.cn [National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093 (China); Chen, Yan-Feng [National Laboratory of Solid State Microstructures and Department of Materials Science and Engineering, Nanjing University, Nanjing 210093 (China); Collaborative Innovation Center of Advanced Microstructure, Nanjing University, Nanjing 210093 (China)

    2016-06-13

    Recently, the extremely large magnetoresistance (MR) observed in transition metal tellurides such as WTe{sub 2} attracted much attention because of potential applications in magnetic sensors. Here, we report the observation of an extremely large magnetoresistance of 3.0 × 10{sup 4}% measured at 2 K in a 9 T magnetic field aligned along [001]-ZrSiS. A significant magnetoresistance change (∼1.4 × 10{sup 4}%) is obtained when the magnetic field is tilted from [001] to [011]-ZrSiS. These anomalous magnetoresistance behaviors in ZrSiS can be understood in terms of electron-hole compensation and open orbits on the Fermi surface. Because of these superior MR properties, ZrSiS may be used in magnetic sensors.
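    The percentages quoted in this record follow the standard magnetoresistance ratio MR = [R(B) − R(0)]/R(0) × 100%. A minimal sketch of that arithmetic, with purely illustrative resistance values (not taken from the paper):

```python
def mr_percent(r_field, r_zero):
    """Magnetoresistance ratio in percent: [R(B) - R(0)] / R(0) * 100."""
    return (r_field - r_zero) / r_zero * 100.0

# Illustrative: a resistance growing from 1 mOhm in zero field to
# 0.301 Ohm in field corresponds to an MR of 3.0 x 10^4 %.
mr = mr_percent(0.301, 0.001)
```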

  5. Drastic Pressure Effect on the Extremely Large Magnetoresistance in WTe2: Quantum Oscillation Study.

    Science.gov (United States)

    Cai, P L; Hu, J; He, L P; Pan, J; Hong, X C; Zhang, Z; Zhang, J; Wei, J; Mao, Z Q; Li, S Y

    2015-07-31

    The quantum oscillations of the magnetoresistance under ambient and high pressure have been studied for WTe2 single crystals, in which extremely large magnetoresistance was discovered recently. By analyzing the Shubnikov-de Haas oscillations, four Fermi surfaces are identified, and two of them are found to persist to high pressure. The sizes of these two pockets are comparable, but show increasing difference with pressure. At 0.3 K and in 14.5 T, the magnetoresistance decreases drastically from 1.25×10^5% under ambient pressure to 7.47×10^3% under 23.6 kbar, which is likely caused by the relative change of Fermi surfaces. These results support the scenario that the perfect balance between the electron and hole populations is the origin of the extremely large magnetoresistance in WTe2.
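    The Shubnikov-de Haas analysis mentioned above identifies Fermi pockets from resistance oscillations that are periodic in 1/B; each oscillation frequency F maps to an extremal Fermi-surface cross-section A through the Onsager relation F = (ħ/2πe)A. A sketch of the frequency-extraction step on synthetic data (the 90 T frequency is an illustrative assumption, not a WTe2 value):

```python
import numpy as np

# synthetic SdH signal: an oscillation periodic in 1/B over a 2-14.5 T sweep
inv_b = np.linspace(1 / 14.5, 1 / 2.0, 4096)   # uniform grid in 1/B (1/T)
f_true = 90.0                                   # oscillation frequency (T), illustrative
signal = np.cos(2 * np.pi * f_true * inv_b)

# FFT against 1/B; the spectral peak position gives the SdH frequency
spec = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(inv_b.size, d=inv_b[1] - inv_b[0])
f_est = freqs[spec.argmax()]

# Onsager relation: extremal cross-sectional area of the pocket (m^-2)
hbar, e = 1.054571817e-34, 1.602176634e-19
area = 2 * np.pi * e * f_est / hbar
```

The frequency resolution is set by the span of the 1/B window, which is why wide field sweeps are needed to separate pockets of comparable size.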

  6. Changes in daily climate extremes in China and their connection to the large scale atmospheric circulation during 1961-2003

    Energy Technology Data Exchange (ETDEWEB)

    You, Qinglong [Institute of Tibetan Plateau Research, Chinese Academy of Sciences (CAS), Laboratory of Tibetan Environment Changes and Land Surface Processes, Beijing (China); Friedrich-Schiller University Jena, Department of Geoinformatics, Jena (Germany); Graduate University of Chinese Academy of Sciences, Beijing (China); Kang, Shichang [Institute of Tibetan Plateau Research, Chinese Academy of Sciences (CAS), Laboratory of Tibetan Environment Changes and Land Surface Processes, Beijing (China); State Key Laboratory of Cryospheric Science, Chinese Academy of Sciences, Lanzhou (China); Aguilar, Enric [Universitat Rovirai Virgili de Tarragona, Climate Change Research Group, Geography Unit, Tarragona (Spain); Pepin, Nick [University of Portsmouth, Department of Geography, Portsmouth (United Kingdom); Fluegel, Wolfgang-Albert [Friedrich-Schiller University Jena, Department of Geoinformatics, Jena (Germany); Yan, Yuping [National Climate Center, Beijing (China); Xu, Yanwei; Huang, Jie [Institute of Tibetan Plateau Research, Chinese Academy of Sciences (CAS), Laboratory of Tibetan Environment Changes and Land Surface Processes, Beijing (China); Graduate University of Chinese Academy of Sciences, Beijing (China); Zhang, Yongjun [Institute of Tibetan Plateau Research, Chinese Academy of Sciences (CAS), Laboratory of Tibetan Environment Changes and Land Surface Processes, Beijing (China)

    2011-06-15

    Based on daily maximum and minimum surface air temperature and precipitation records at 303 meteorological stations in China, the spatial and temporal distributions of indices of climate extremes are analyzed during 1961-2003. Twelve indices of extreme temperature and six of extreme precipitation are studied. Temperature extremes have high correlations with the annual mean temperature, which shows a significant warming of 0.27 °C/decade, indicating that changes in temperature extremes reflect the consistent warming. Stations in northeastern, northern, and northwestern China have larger trend magnitudes, in accordance with the more rapid mean warming in these regions. Countrywide, the mean trends for cold days and cold nights have decreased by -0.47 and -2.06 days/decade respectively, and warm days and warm nights have increased by 0.62 and 1.75 days/decade, respectively. Over the same period, the number of frost days shows a statistically significant decreasing trend of -3.37 days/decade. The length of the growing season and the number of summer days exhibit significant increasing trends at rates of 3.04 and 1.18 days/decade, respectively. The diurnal temperature range has decreased by -0.18 °C/decade. Both the annual extreme lowest and highest temperatures exhibit significant warming trends, the former warming faster than the latter. For precipitation indices, regional annual total precipitation shows an increasing trend and most other precipitation indices are strongly correlated with annual total precipitation. Average wet-day precipitation, maximum 1-day and 5-day precipitation, and heavy precipitation days show increasing trends, but only the last is statistically significant. A decreasing trend is found for consecutive dry days. For all precipitation indices, stations in the Yangtze River basin, southeastern and northwestern China have the largest positive trend magnitudes, while stations in the Yellow River basin and in northern China have the largest

  7. The numbers game in wildlife conservation: changeability and framing of large mammal numbers in Zimbabwe

    NARCIS (Netherlands)

    Gandiwa, E.

    2013-01-01

    Wildlife conservation in terrestrial ecosystems requires an understanding of processes influencing population sizes. Top-down and bottom-up processes are important in large herbivore population dynamics, with strength of these processes varying spatially and temporally. However, up until

  8. How do the multiple large-scale climate oscillations trigger extreme precipitation?

    Science.gov (United States)

    Shi, Pengfei; Yang, Tao; Xu, Chong-Yu; Yong, Bin; Shao, Quanxi; Li, Zhenya; Wang, Xiaoyan; Zhou, Xudong; Li, Shu

    2017-10-01

    Identifying the links between variations in large-scale climate patterns and precipitation is of tremendous assistance in characterizing surplus or deficit of precipitation, which is especially important for evaluation of local water resources and ecosystems in semi-humid and semi-arid regions. Restricted by currently limited knowledge of the underlying mechanisms, statistical correlation methods are often used rather than physically based models to characterize these connections. Nevertheless, available correlation methods are generally unable to reveal the interactions among a wide range of climate oscillations and their associated effects on precipitation, especially on extreme precipitation. In this work, a probabilistic analysis approach based on a state-of-the-art copula-based joint probability distribution is developed to characterize the aggregated behavior of large-scale climate patterns and their connections to precipitation. This method is employed to identify the complex connections between climate patterns (the Atlantic Multidecadal Oscillation (AMO), El Niño-Southern Oscillation (ENSO), and Pacific Decadal Oscillation (PDO)) and seasonal precipitation over a typical semi-humid and semi-arid region, the Haihe River Basin in China. Results show that the interactions among multiple climate oscillations are non-uniform in most seasons and phases. Certain joint extreme phases can significantly trigger extreme precipitation (flood and drought) owing to the amplification effect among climate oscillations.
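    The amplification effect described above, where jointly extreme oscillation phases are far more likely than independence would suggest, can be illustrated with a small Monte Carlo copula sketch. Everything below is an assumption for illustration (a bivariate Gaussian copula with a chosen correlation, rank-based uniform marginals); the paper's actual copula family and fitted parameters are not given in this record:

```python
import numpy as np

def joint_exceedance(u, v, rho, n=200_000, seed=0):
    """Monte Carlo estimate of P(U > u, V > v) under a bivariate
    Gaussian copula with correlation rho (uniform marginals via ranks)."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
    r1 = (np.argsort(np.argsort(z1)) + 0.5) / n  # empirical uniform marginals
    r2 = (np.argsort(np.argsort(z2)) + 0.5) / n
    return np.mean((r1 > u) & (r2 > v))

# Two positively coupled oscillation indices are jointly extreme far more
# often than two independent ones:
p_dep = joint_exceedance(0.9, 0.9, rho=0.8)
p_ind = joint_exceedance(0.9, 0.9, rho=0.0)  # ~ (1-0.9)*(1-0.9) = 0.01
```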

  9. Wind and wave extremes over the world oceans from very large ensembles

    Science.gov (United States)

    Breivik, Øyvind; Aarnes, Ole Johan; Abdalla, Saleh; Bidlot, Jean-Raymond; Janssen, Peter A. E. M.

    2014-07-01

    Global return values of marine wind speed and significant wave height are estimated from very large aggregates of archived ensemble forecasts at +240 h lead time. Long lead time ensures that the forecasts represent independent draws from the model climate. Compared with ERA-Interim, a reanalysis, the ensemble yields higher return estimates for both wind speed and significant wave height. Confidence intervals are much tighter due to the large size of the data set. The period (9 years) is short enough to be considered stationary even with climate change. Furthermore, the ensemble is large enough for nonparametric 100-year return estimates to be made from order statistics. These direct return estimates compare well with extreme value estimates outside areas with tropical cyclones. Like any method employing modeled fields, it is sensitive to tail biases in the numerical model, but we find that the biases are moderate outside areas with tropical cyclones.
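    The nonparametric order-statistics estimate mentioned above is direct: with an aggregate equivalent to many independent years of data, the m-year return level is simply the value exceeded on average once per m years in the sorted sample. A minimal sketch on synthetic Gumbel-distributed annual maxima, standing in for the real ensemble fields:

```python
import numpy as np

def empirical_return_value(samples, years_of_data, return_period):
    """Nonparametric return level: the value exceeded on average once per
    `return_period` years in `years_of_data` equivalent years of samples."""
    k = int(round(years_of_data / return_period))  # expected exceedance count
    if k < 1:
        raise ValueError("record too short for this return period")
    return np.sort(samples)[-k]

rng = np.random.default_rng(1)
# illustrative: 900 equivalent years of annual-maximum wind speed (m/s)
ann_max = 20.0 + 5.0 * rng.gumbel(size=900)
rv100 = empirical_return_value(ann_max, years_of_data=900, return_period=100)
```

With 900 equivalent years, the 100-year level is the 9th-largest sample, so no parametric tail fit is needed; the price is that the estimate inherits any tail bias of the underlying model, as the abstract notes.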

  10. Coupled large-eddy simulation and morphodynamics of a large-scale river under extreme flood conditions

    Science.gov (United States)

    Khosronejad, Ali; Sotiropoulos, Fotis; Stony Brook University Team

    2016-11-01

    We present coupled flow and morphodynamic simulations of extreme flooding in a 3 km long and 300 m wide reach of the Mississippi River in Minnesota, which includes three islands and hydraulic structures. We employ the large-eddy simulation (LES) and bed-morphodynamic modules of the VFS-Geophysics model to investigate the flow and bed evolution of the river during a 500 year flood. The coupling of the two modules is carried out via a fluid-structure interaction approach, using a nested domain to enhance the resolution of bridge scour predictions. The geometrical data of the river, islands and structures are obtained from LiDAR, sub-aqueous sonar and in-situ surveying to construct a digital map of the river bathymetry. Our simulation results for the bed evolution of the river reveal complex sediment dynamics near the hydraulic structures. The numerically captured scour depth near some of the structures reaches a maximum of about 10 m. The data-driven simulation strategy we present in this work exemplifies a practical simulation-based-engineering approach to investigate the resilience of infrastructure to extreme flood events in intricate field-scale riverine systems. This work was funded by a Grant from Minnesota Dept. of Transportation.

  11. The Accident. Parenting Styles: Avoiding the Extremes. Student Guide--Footsteps. Report Number 13.

    Science.gov (United States)

    Barry, Sharon; And Others

    This student guide for a program on styles in parenting discusses how attitudes toward childrearing have changed over the past 50 years, how children are affected by some extreme approaches to childrearing, and how a parenting style that is neither overpermissive nor overprotective is most likely to enhance children's growth. Designed around a…

  12. Number of Black Children in Extreme Poverty Hits Record High. Analysis Background.

    Science.gov (United States)

    Children's Defense Fund, Washington, DC.

    To examine the experiences of black children and poverty, researchers conducted a computer analysis of data from the U.S. Census Bureau's Current Population Survey, the source of official government poverty statistics. The data are through 2001. Results indicated that nearly 1 million black children were living in extreme poverty, with after-tax…

  13. An extremely bright gamma-ray pulsar in the Large Magellanic Cloud.

    Science.gov (United States)

    2015-11-13

    Pulsars are rapidly spinning, highly magnetized neutron stars, created in the gravitational collapse of massive stars. We report the detection of pulsed giga-electron volt gamma rays from the young pulsar PSR J0540-6919 in the Large Magellanic Cloud, a satellite galaxy of the Milky Way. This is the first gamma-ray pulsar detected in another galaxy. It has the most luminous pulsed gamma-ray emission yet observed, exceeding the Crab pulsar's by a factor of 20. PSR J0540-6919 presents an extreme test case for understanding the structure and evolution of neutron star magnetospheres. Copyright © 2015, American Association for the Advancement of Science.

  14. Electric-field-induced extremely large change in resistance in graphene ferromagnets

    Science.gov (United States)

    Song, Yu

    2018-01-01

    A colossal magnetoresistance (∼100×10^3%) and an extremely large magnetoresistance (∼1×10^6%) have previously been demonstrated in manganite perovskites and Dirac materials, respectively. However, the requirement of an extremely strong magnetic field (and an extremely low temperature) makes them impractical for realistic devices. In this work, we propose a device that can generate even larger changes in resistance in zero magnetic field and at a high temperature. The device is composed of graphene under two strips of yttrium iron garnet (YIG), where two gate voltages are applied to cancel the heavy charge doping in the YIG-induced half-metallic ferromagnets. By calculations using the Landauer-Büttiker formalism, we demonstrate that, when a proper gate voltage is applied on the free ferromagnet, changes in resistance up to 305×10^6% (16×10^3%) can be achieved at liquid helium (nitrogen) temperature and in zero magnetic field. We attribute this remarkable effect to a gate-induced full polarization reversal in the free ferromagnet, which results in a metal-state to insulator-state transition in the device. We also find that the proposed effect can be realized in devices using other magnetic insulators, such as EuO and EuS. Our work should be helpful for developing a realistic switching device that is energy saving and CMOS-technology compatible.
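    The metal-to-insulator switching described above can be caricatured with the textbook two-current (spin-channel) model. All numbers below are illustrative assumptions, not results from the paper, and the crude incoherent series-transmission product ignores the interference effects that the paper's full Landauer-Büttiker calculation captures:

```python
def resistance(t_up, t_down):
    """Two-current model: spin-up and spin-down channels conduct in
    parallel; each channel's conductance is proportional to its transmission."""
    return 1.0 / (t_up + t_down)

def series_t(t1, t2):
    """Crude incoherent series transmission through two magnetic regions."""
    return t1 * t2

# A half-metallic ferromagnet transmits essentially one spin species only
# (illustrative transmissions):
t_open, t_closed = 0.9, 1e-6

# Parallel alignment: the majority channel stays open end to end.
r_parallel = resistance(series_t(t_open, t_open), series_t(t_closed, t_closed))
# Antiparallel (after the gate flips one magnet): each channel is blocked once.
r_antiparallel = resistance(series_t(t_open, t_closed), series_t(t_closed, t_open))

change_percent = (r_antiparallel - r_parallel) / r_parallel * 100
```

Even this toy model yields resistance changes of many orders of magnitude when both channels are blocked, which is the qualitative mechanism behind the gate-induced polarization reversal.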

  15. Contribution of large-scale midlatitude disturbances to hourly precipitation extremes in the United States

    Science.gov (United States)

    Barbero, Renaud; Abatzoglou, John T.; Fowler, Hayley J.

    2018-02-01

    Midlatitude synoptic weather regimes account for a substantial portion of annual precipitation accumulation as well as multi-day precipitation extremes across parts of the United States (US). However, little attention has been devoted to understanding how synoptic-scale patterns contribute to hourly precipitation extremes. A majority of 1-h annual maximum precipitation (AMP) events across the western US were found to be linked to two coherent midlatitude synoptic patterns: disturbances propagating along the jet stream, and cutoff upper-level lows. The influence of these two patterns on 1-h AMP varies geographically. Over 95% of 1-h AMP events along the western coastal US were coincident with progressive midlatitude waves embedded within the jet stream, while over 30% of 1-h AMP events across the interior western US were coincident with cutoff lows. Between 30% and 60% of 1-h AMP events were coincident with the jet stream across the Ohio River Valley and southeastern US, whereas a majority of 1-h AMP events over the rest of the central and eastern US were not found to be associated with either midlatitude synoptic feature. Composite analyses for 1-h AMP days coincident with cutoff lows and the jet stream show that anomalous moisture flux and upper-level dynamics are responsible for initiating instability and setting up an environment conducive to 1-h AMP events. While hourly precipitation extremes are generally thought to be purely convective in nature, this study shows that large-scale dynamics and baroclinic disturbances may also contribute to precipitation extremes on sub-daily timescales.

  16. Development of an Evaluation Methodology for Loss of Large Area induced from extreme events

    International Nuclear Information System (INIS)

    Kim, Sok Chul; Park, Jong Seuk; Kim, Byung Soon; Jang, Dong Ju; Lee, Seung Woo

    2015-01-01

    USNRC announced several regulatory requirements and guidance documents regarding the event of loss of large area, including 10CFR 50.54(hh), Regulatory Guide 1.214, and SRP 19.4. In Korea, loss of large area has been taken into account only to a limited extent, and on a voluntary basis, for newly constructed NPPs. In general, it is hardly possible to find available information on the methodology and key assumptions for the assessment of LOLA, owing to the 'need-to-know' based approach. An urgent need exists for each country to develop its own regulatory requirements, guidance, and evaluation methodology, with consideration of its own geographical and nuclear safety and security environments. Currently, Korea Hydro and Nuclear Power Company (KHNP) has developed an Extended Damage Mitigation Guideline (EDMG) for APR1400 under contract with a foreign consulting company. The submittal guidance NEI 06-12, related to B.5.b Phases 2 and 3, focused on unit-wise mitigation strategies instead of site-level mitigation or response strategies. The Phase 1 mitigating strategy and guideline for LOLA (Loss of Large Area) place emphasis on site-level arrangements, including a cooperative network of outside organizations and an agile command and control system. Korea Institute of Nuclear Safety has carried out a pilot in-house research project since 2014 to develop the methodology and guideline for evaluation of LOLA. This paper introduces a summary of the major results and outcomes of the aforementioned research project. After the Fukushima Dai-ichi accident, awareness has grown of the need to counter loss-of-large-area events induced by extreme man-made hazards or extreme beyond-design-basis external events. An urgent need exists to develop regulatory guidance for coping with this undesirable situation, which has been outside the scope of the existing nuclear safety regulatory framework owing to the expected rarity of such occurrences

  17. Contribution of large-scale circulation anomalies to changes in extreme precipitation frequency in the United States

    Science.gov (United States)

    Lejiang Yu; Shiyuan Zhong; Lisi Pei; Xindi (Randy) Bian; Warren E. Heilman

    2016-01-01

    The mean global climate has warmed as a result of the increasing emission of greenhouse gases induced by human activities. This warming is considered the main reason for the increasing number of extreme precipitation events in the US. While much attention has been given to extreme precipitation events occurring over several days, which are usually responsible for...

  18. Transumbilical single-site laparoscopy takes the advantage of ultraminilaparotomy in managing an extremely large ovarian cyst

    Directory of Open Access Journals (Sweden)

    Hsuan Su

    2012-11-01

    Conclusion: This application not only provides both advantages of ultraminilaparotomy and laparoscopy but it also overcomes the limitations of both approaches. Therefore, it is the surgical approach of choice for a patient bearing an extremely large ovarian cystic tumor.

  19. Neutron Star Astronomy in the era of the European Extremely Large Telescope

    International Nuclear Information System (INIS)

    Mignani, Roberto P.

    2011-01-01

    About 25 isolated neutron stars (INSs) are now detected in the optical domain, mainly thanks to the HST and to VLT-class telescopes. The European Extremely Large Telescope (E-ELT) will yield ∼100 new identifications, many of them from the follow-up of SKA, IXO, and Fermi observations. Moreover, the E-ELT will make it possible to carry out, on a much larger sample, INS observations which still challenge VLT-class telescopes, enabling studies of the structure and composition of the NS interior and of its atmosphere and magnetosphere, as well as searches for debris discs. In this contribution, I outline future perspectives for NS optical astronomy with the E-ELT.

  20. Extreme Temperature Regimes during the Cool Season and their Associated Large-Scale Circulations

    Science.gov (United States)

    Xie, Z.

    2015-12-01

    In the cool season (November-March), extreme temperature events (ETEs) regularly hit the continental United States (US) and have significant societal impacts. According to the anomalous amplitude of the surface air temperature (SAT), there are two typical types of ETEs, namely cold waves (CWs) and warm waves (WWs). This study used cluster analysis to categorize the CWs and WWs into four distinct regimes each and investigated their associated large-scale circulations on the intra-seasonal time scale. Most of the CW regimes have a large areal impact over the continental US, but the distribution of cold SAT anomalies varies markedly among the four regimes. At sea level, the four CW regimes are characterized by anomalous high pressure over North America (near and to the west of the cold anomaly) with differing extension and orientation. As a result, anomalous northerlies along the east flank of the anomalous high convey cold air into the continental US. In the middle troposphere, the leading two regimes feature a large-scale, zonally elongated circulation anomaly pattern, while the other two exhibit a synoptic wavetrain pattern with meridionally elongated features. The WW regimes show some symmetry and anti-symmetry with respect to the CW regimes: they are characterized by anomalous low pressure and southerly winds over North America. The first and fourth WW regimes are affected by remote forcing emanating from the North Pacific, while the others appear to be mainly locally forced.

  1. Influence of a Large Pillar on the Optimum Roadway Position in an Extremely Close Coal Seam

    Directory of Open Access Journals (Sweden)

    Li Yang

    2016-01-01

    Full Text Available Based on mining practice in an extremely close coal seam, a theoretical analysis was conducted of the vertical stress distribution of the floor strata under a large coal pillar, and the vertical stress distribution of the No. 5 coal seam was revealed. To obtain the optimum position of the roadway that bears the supporting pressure of a large coal pillar, numerical modeling was applied to analyze the relations among the stress distribution of the roadway surrounding rock, the plastic zone distribution of the surrounding rock, the surrounding rock deformation, and the roadway layout position. The theoretical calculation results for the stress value, the stress variation rate, and the influencing range of the stress influencing angle showed that the reasonable malposition of the No. 5 coal seam roadway was an inner malposition of 4 m. Mining practice showed that the layout of the No. 25301 panel belt roadway at an inner malposition of 4 m was reasonable: the roadway support performance was favourable, no notable deformation occurred, and ground pressure effects were not obvious. This study provides a reference for roadway layouts under similar conditions.

  2. Trends in the number of extreme hot SST days along the Canary Upwelling System due to the influence of upwelling

    Directory of Open Access Journals (Sweden)

    Xurxo Costoya

    2014-07-01

    Full Text Available Trends in the number of extreme hot days (days with SST anomalies higher than the 95th percentile) were analyzed along the Canary Upwelling Ecosystem (CUE) over the period 1982-2012 by means of SST data retrieved from NOAA OI 1/4 Degree. The analysis focuses on the Atlantic Iberian sector and the Moroccan sub-region, where upwelling is seasonal (spring and summer) and permanent, respectively. Trends were analyzed both near the coast and in the adjacent ocean, where the increase in the number of extreme hot days is higher. Changes are clear at the annual scale, with an increment of 9.8±0.3 (9.7±0.1) days dec-1 near the coast and 11.6±0.2 (13.5±0.1) days dec-1 in the ocean in the Atlantic Iberian sector (Moroccan sub-region). The differences between near-shore and ocean trends are especially patent for the months under intense upwelling conditions. During the upwelling season, the highest differences in the excess of extreme hot days between coastal and ocean locations (Δn, # days dec-1) occur in those regions where the coastal upwelling increase is high. Indeed, Δn and upwelling trends have been shown to be significantly correlated in both areas: R=0.88 (p<0.01) in the Atlantic Iberian sector and R=0.67 (p<0.01) in the Moroccan sub-region.
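    The index used in this record, the count of days above the 95th percentile, and its decadal trend can be sketched as follows. The synthetic warming series below is an illustrative stand-in for the actual NOAA OI SST data:

```python
import numpy as np

def hot_days_trend(sst, years):
    """Per-year count of days above the record-wide 95th percentile,
    returned as a linear trend in days per decade."""
    threshold = np.percentile(sst, 95)
    yrs = np.unique(years)
    counts = np.array([(sst[years == y] > threshold).sum() for y in yrs])
    return np.polyfit(yrs, counts, 1)[0] * 10.0  # slope in days/decade

rng = np.random.default_rng(0)
years = np.repeat(np.arange(1982, 2013), 365)  # daily record, 1982-2012
# illustrative SST: a weak warming trend plus day-to-day noise (deg C)
sst = 18.0 + 0.02 * (years - 1982) + rng.standard_normal(years.size)
trend = hot_days_trend(sst, years)  # positive under a warming trend
```

Because the threshold is fixed over the whole record, any background warming converts directly into more exceedance days in the later years, which is what the positive days-per-decade trends in the abstract measure.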

  3. Prototype of a laser guide star wavefront sensor for the Extremely Large Telescope

    Science.gov (United States)

    Patti, M.; Lombini, M.; Schreiber, L.; Bregoli, G.; Arcidiacono, C.; Cosentino, G.; Diolaiti, E.; Foppiani, I.

    2018-06-01

    The new class of large telescopes, such as the future Extremely Large Telescope (ELT), is designed to work with a laser guide star (LGS) tuned to a resonance of atmospheric sodium atoms. This wavefront sensing technique presents complex issues when applied to such large telescopes, mainly linked to the finite distance of the LGS, the launching angle, tip-tilt indetermination and focus anisoplanatism. The implementation of a laboratory prototype for the LGS wavefront sensor (WFS) at the beginning of the phase study of MAORY (Multi-conjugate Adaptive Optics Relay) for ELT first light has been indispensable in investigating specific mitigation strategies for the LGS WFS issues. This paper presents the test results of the LGS WFS prototype under different working conditions. The accuracy with which the LGS images are generated on the Shack-Hartmann WFS has been cross-checked with the MAORY simulation code. The experiments show the effect of noise on centroiding precision, the impact of LGS image truncation on wavefront sensing accuracy, as well as the temporal evolution of the sodium density profile and LGS image under-sampling.
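    The centroiding step whose noise sensitivity the prototype tests can be sketched with a plain centre-of-gravity estimator on a synthetic Shack-Hartmann spot (the spot position, width, and subaperture size below are illustrative):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centre of gravity of a spot image, in pixels."""
    img = np.asarray(img, dtype=float)
    y, x = np.indices(img.shape)          # row (y) and column (x) indices
    total = img.sum()
    return (x * img).sum() / total, (y * img).sum() / total

# Gaussian spot centred at (x, y) = (12.3, 9.7) on a 24x24 subaperture
yy, xx = np.indices((24, 24))
spot = np.exp(-((xx - 12.3) ** 2 + (yy - 9.7) ** 2) / (2 * 2.0 ** 2))
cx, cy = centroid(spot)
```

On a noiseless, untruncated spot the estimator recovers the sub-pixel position almost exactly; photon noise and image truncation bias it, which is what the prototype measurements quantify.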

  4. Extremely large magnetoresistance in few-layer graphene/boron-nitride heterostructures.

    Science.gov (United States)

    Gopinadhan, Kalon; Shin, Young Jun; Jalil, Rashid; Venkatesan, Thirumalai; Geim, Andre K; Castro Neto, Antonio H; Yang, Hyunsoo

    2015-09-21

    Understanding magnetoresistance, the change in electrical resistance under an external magnetic field, at the atomic level is of great interest both fundamentally and technologically. Graphene and other two-dimensional layered materials provide an unprecedented opportunity to explore magnetoresistance at its nascent stage of structural formation. Here we report an extremely large local magnetoresistance of ∼2,000% at 400 K and a non-local magnetoresistance of >90,000% in an applied magnetic field of 9 T at 300 K in few-layer graphene/boron-nitride heterostructures. The local magnetoresistance is understood to arise from large differential transport parameters, such as the carrier mobility, across various layers of few-layer graphene upon a normal magnetic field, whereas the non-local magnetoresistance is due to the magnetic field induced Ettingshausen-Nernst effect. Non-local magnetoresistance suggests the possibility of a graphene-based gate tunable thermal switch. In addition, our results demonstrate that graphene heterostructures may be promising for magnetic field sensing applications.

  5. Intermittent dynamics of nonlinear resistive tearing modes at extremely high magnetic Reynolds number

    International Nuclear Information System (INIS)

    Miyoshi, Takahiro; Becchaku, Masahiro; Kusano, Kanya

    2008-01-01

    Nonlinear dynamics of the resistive tearing instability in high magnetic Reynolds number (R_m) plasmas is studied by newly developing an accurate and robust resistive magnetohydrodynamic (MHD) scheme. The results show that reconnection processes strongly depend on R_m. In particular, in a high-R_m case, small-scale plasmoids induced by a secondary instability are intermittently generated and ejected, accompanied by fast shocks. Owing to these intermittent processes, the reconnection rate increases intermittently at a later nonlinear stage. (author)

  6. A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.

    Science.gov (United States)

    Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua

    2016-05-01

    Big dimensional data is a growing trend that is emerging in many real world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that increasing dimensionality impedes the performance of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high-volume data. The resulting algorithm is labeled Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM for short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess FSVD-H-ELM against other state-of-the-art algorithms. The results demonstrate the superior generalization performance and efficiency of FSVD-H-ELM.
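The divide-and-conquer idea behind FSVD-H-ELM, deriving SVD hidden nodes from random subsets of the data rather than from the full matrix, can be sketched in NumPy. This is a rough illustration, not the authors' implementation: the subset averaging, the sign alignment, and the ridge solver for the output weights are assumptions made here.

```python
import numpy as np

def svd_hidden_nodes(X, n_hidden, n_subsets=4, subset_size=None, rng=None):
    """Hidden-layer weights from SVDs of random data subsets (sketch)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    subset_size = subset_size or max(n // n_subsets, n_hidden)
    W = np.zeros((d, n_hidden))
    for _ in range(n_subsets):
        idx = rng.choice(n, size=min(subset_size, n), replace=False)
        # Top right singular vectors capture the subset's dominant directions.
        _, _, Vt = np.linalg.svd(X[idx], full_matrices=False)
        V = Vt[:n_hidden].T
        # Fix the SVD sign ambiguity so vectors from different subsets
        # do not cancel when averaged (an implementation choice made here).
        V = V * np.sign(V[np.abs(V).argmax(axis=0), np.arange(n_hidden)])
        W += V
    return W / n_subsets

def elm_train(X, y, W, reg=1e-3):
    H = np.tanh(X @ W)  # hidden-layer activations
    # Ridge-regularized least squares for the output weights (classical ELM).
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ y)

def elm_predict(X, W, beta):
    return np.tanh(X @ W) @ beta
```

Averaging over a handful of subsets keeps each SVD cheap while the aggregated directions approximate those of the full dataset.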

  7. The structure and large-scale organization of extreme cold waves over the conterminous United States

    Science.gov (United States)

    Xie, Zuowei; Black, Robert X.; Deng, Yi

    2017-12-01

    Extreme cold waves (ECWs) occurring over the conterminous United States (US) are studied through a systematic identification and documentation of their local synoptic structures, associated large-scale meteorological patterns (LMPs), and forcing mechanisms external to the US. Focusing on the boreal cool season (November-March) for 1950‒2005, a hierarchical cluster analysis identifies three ECW patterns, respectively characterized by cold surface air temperature anomalies over the upper midwest (UM), northwestern (NW), and southeastern (SE) US. Locally, ECWs are synoptically organized by anomalous high pressure and northerly flow. At larger scales, the UM LMP features a zonal dipole in the mid-tropospheric height field over North America, while the NW and SE LMPs each include a zonal wave train extending from the North Pacific across North America into the North Atlantic. The Community Climate System Model version 4 (CCSM4) in general simulates the three ECW patterns quite well and successfully reproduces the observed enhancements in the frequency of their associated LMPs. La Niña and the cool phase of the Pacific Decadal Oscillation (PDO) favor the occurrence of NW ECWs, while the warm PDO phase, low Arctic sea ice extent and high Eurasian snow cover extent (SCE) are associated with elevated SE-ECW frequency. Additionally, high Eurasian SCE is linked to increases in the occurrence likelihood of UM ECWs.

  8. Irradiated large segment allografts in limb saving surgery for extremity tumor - Philippine experience

    International Nuclear Information System (INIS)

    Wang, E.H.M.; Agcaoili, N.; Turqueza, M.S.

    1999-01-01

    Limb saving surgery has only recently become an option in the Philippines. This has resulted from a better comprehension of oncologic principles and from the refinement of bone-reconstruction procedures. Foremost among the latter is the use of large segment bone allografts. Large-segment allografts (LSA) are available from the Tissue and Bone Bank of the University of the Philippines (UP). After harvest, these bones are processed at the Bank, radiation-sterilized at the Philippine Nuclear Research Institute, and then stored in a -80 degree C deep freezer. We present our 4-year experience (Jan 93 - Dec 96) with LSA for limb saving surgery in musculoskeletal tumors. All patients included had: (1) malignant or aggressive extremity tumors; (2) surgery performed by the UP Musculoskeletal Tumor Unit (UP-MUST Unit); (3) reconstructions utilizing irradiated large-segment allografts from the UP Tissue and Bone Bank; and (4) follow-up of at least one year or until death. Tumors included osteosarcoma (6), giant cell tumors (11), and metastatic lesions (3). Ages ranged from 16 to 64 years; there were 13 males and 7 females. Bones involved were the femur (12), tibia (5), and humerus (3). Average defect length was 15 cm, and the surgeries performed were intercalary replacement (5), resection arthrodesis (11), hemicondylar allograft (3), and allograft-prosthesis composite (1). Follow-up ranged from 17 to 60 months or until death. Fifteen (15) patients were alive with no evidence of disease (NED), 3 were dead (2 of disease, 1 of other causes), and 2 were alive with evidence of disease (AWED). Functional evaluation using the criteria of the International Society of Limb Salvage (ISOLS) was performed on 18 patients and averaged 27.5 out of 30 points (92%) for 15 patients, many of whom have returned to their previous work and recreation. The 3 failures were due to infections in 2 cases (both of whom opted for amputations but have not been fit with prostheses) and a fracture (secondary to a fall) in one case.

  9. Large-Scale Atmospheric Circulation Patterns Associated with Temperature Extremes as a Basis for Model Evaluation: Methodological Overview and Results

    Science.gov (United States)

    Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.

    2015-12-01

    Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. As a result of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by whether and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations and using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid scale processes (such as those related to topography) are important. 
Where model skill in reproducing these patterns is high, it can be inferred that extremes are

  10. Radioimmunoassay of h-TSH - methodological suggestions for dealing with medium to large numbers of samples

    International Nuclear Information System (INIS)

    Mahlstedt, J.

    1977-01-01

    The article deals with practical aspects of establishing a TSH-RIA for patients, with particular regard to predetermined quality criteria. Methodological suggestions are made for medium to large numbers of samples, with the aim of reducing monotonous, precision-critical working steps by means of simple aids. The required quality criteria are well met, while the test procedure is well adapted to the rhythm of work and may be carried out without loss of precision even with large numbers of samples. (orig.)

  11. Similarities between 2D and 3D convection for large Prandtl number

    Indian Academy of Sciences (India)

    2016-06-18

    RBC), we perform a comparative study of the spectra and fluxes of energy and entropy, and the scaling of large-scale quantities for large and infinite Prandtl numbers in two (2D) and three (3D) dimensions. We observe close ...

  12. Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States

    Science.gov (United States)

    Ivan, Ion; Ciurea, Cristian; Pavel, Sorin

    2010-01-01

    The collaborative system with finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)

  13. Evidence for Knowledge of the Syntax of Large Numbers in Preschoolers

    Science.gov (United States)

    Barrouillet, Pierre; Thevenot, Catherine; Fayol, Michel

    2010-01-01

    The aim of this study was to provide evidence for knowledge of the syntax governing the verbal form of large numbers in preschoolers long before they are able to count up to these numbers. We reasoned that if such knowledge exists, it should facilitate the maintenance in short-term memory of lists of lexical primitives that constitute a number…

  14. A highly efficient multi-core algorithm for clustering extremely large datasets

    Directory of Open Access Journals (Sweden)

    Kraus Johann M

    2010-04-01

    Background: In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies, and this demand is likely to increase further. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches to parallelizing algorithms rely largely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute tasks among the different cores of one computer. Results: We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy, compared with single-core implementations and a recently published network-based parallelization. Conclusions: Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on a laboratory computer.
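The shared-memory parallelization described above (the original is a Java implementation built on transactional-memory design principles) can be approximated in spirit with a threaded assignment step. The chunking scheme and thread pool below are illustrative choices, not the paper's algorithm.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def _assign_chunk(chunk, centroids):
    # Nearest-centroid labels for one chunk of rows.
    d2 = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def kmeans_shared_memory(X, k, n_iter=20, n_workers=4, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    chunks = np.array_split(X, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(n_iter):
            # Parallel assignment step: each worker labels one chunk
            # (NumPy releases the GIL inside the heavy array arithmetic).
            labels = np.concatenate(list(
                pool.map(lambda c: _assign_chunk(c, centroids), chunks)))
            # Sequential update step: recompute each centroid.
            for j in range(k):
                members = X[labels == j]
                if len(members):
                    centroids[j] = members.mean(axis=0)
    return centroids, labels
```

The assignment step dominates the cost and is embarrassingly parallel over rows, which is why it is the natural target for the multi-core split.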

  15. Numerical and analytical approaches to an advection-diffusion problem at small Reynolds number and large Péclet number

    Science.gov (United States)

    Fuller, Nathaniel J.; Licata, Nicholas A.

    2018-05-01

    Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.

  16. Large Scale Influences on Summertime Extreme Precipitation in the Northeastern United States

    Science.gov (United States)

    Collow, Allison B. Marquardt; Bosilovich, Michael G.; Koster, Randal Dean

    2016-01-01

    Observations indicate that over the last few decades there has been a statistically significant increase in precipitation in the northeastern United States and that this can be attributed to an increase in precipitation associated with extreme precipitation events. Here a state-of-the-art atmospheric reanalysis is used to examine such events in detail. Daily extreme precipitation events defined at the 75th and 95th percentile from gridded gauge observations are identified for a selected region within the Northeast. Atmospheric variables from the Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2), are then composited during these events to illustrate the time evolution of associated synoptic structures, with a focus on vertically integrated water vapor fluxes, sea level pressure, and 500-hectopascal heights. Anomalies of these fields move into the region from the northwest, with stronger anomalies present in the 95th percentile case. Although previous studies show tropical cyclones are responsible for the most intense extreme precipitation events, only 10 percent of the events in this study are caused by tropical cyclones. On the other hand, extreme events resulting from cutoff low pressure systems have increased. The time period of the study was divided in half to determine how the mean composite has changed over time. An arc of lower sea level pressure along the East Coast and a change in the vertical profile of equivalent potential temperature suggest a possible increase in the frequency or intensity of synoptic-scale baroclinic disturbances.

  17. Secret Sharing Schemes with a large number of players from Toric Varieties

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    A general theory for constructing linear secret sharing schemes over a finite field $\\Fq$ from toric varieties is introduced. The number of players can be as large as $(q-1)^r-1$ for $r\\geq 1$. We present general methods for obtaining the reconstruction and privacy thresholds, as well as conditions for multiplication on the associated secret sharing schemes. In particular, we apply the method to certain toric surfaces. The main results are ideal linear secret sharing schemes where the number of players can be as large as $(q-1)^2-1$. We determine bounds for the reconstruction and privacy thresholds...
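The toric-variety construction itself does not fit in a short snippet, but the reconstruction and privacy thresholds that characterize such linear schemes can be made concrete with the classical Shamir scheme over a prime field — a different, much simpler linear secret sharing scheme, shown purely for illustration (the prime and parameters below are arbitrary choices).

```python
import random

P = 7919  # an arbitrary prime modulus; players are nonzero field elements

def share(secret, n_players, t, rng=None):
    """Split a secret into n shares; any t of them reconstruct it."""
    rng = rng or random.Random(0)
    # secret = f(0) for a random polynomial f of degree t-1 over F_P.
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n_players + 1)]

def reconstruct(shares):
    """Lagrange interpolation of f at 0 over F_P."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Here the reconstruction threshold is t and any t-1 shares reveal nothing; the toric schemes of the paper generalize this linear-algebraic structure to far larger player sets.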

  18. Finite-time singularities and flow regularization in a hydromagnetic shell model at extreme magnetic Prandtl numbers

    International Nuclear Information System (INIS)

    Nigro, G; Carbone, V

    2015-01-01

    Conventional surveys on the existence of singularities in fluid systems for vanishing dissipation have hitherto tried to infer some insight by searching for spatial features developing in asymptotic regimes. This approach has not yet produced a conclusive answer. One of the difficulties preventing a definitive answer is the limitation of direct numerical simulations, which do not yet have high enough resolution to properly describe spatial fine structures in asymptotic regimes. In this paper, instead of searching for spatial details, we suggest seeking a principle able to discriminate between singular and non-singular behavior among the integral and purely dynamical properties of a fluid system. We investigate the singularities developed by a hydromagnetic shell model during the magnetohydrodynamic turbulent cascade. Our results show that when the viscosity is equal to the magnetic diffusivity (unit magnetic Prandtl number), singularities appear in a finite time. A complex behavior is observed at extreme magnetic Prandtl numbers. In particular, the singularities persist in the limit of vanishing viscosity, while a complete regularization is observed in the limit of vanishing diffusivity. This dynamics is related to differences between the magnetic and the kinetic energy cascades towards small scales. Finally, a comparison between the three-dimensional and the two-dimensional cases leads to the conjecture that the existence of singularities may be related to the conservation of different ideal invariants. (paper)

  19. Obstructions to the realization of distance graphs with large chromatic numbers on spheres of small radii

    Energy Technology Data Exchange (ETDEWEB)

    Kupavskii, A B; Raigorodskii, A M [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)

    2013-10-31

    We investigate in detail some properties of distance graphs constructed on the integer lattice. Such graphs find wide applications in problems of combinatorial geometry, in particular, such graphs were employed to answer Borsuk's question in the negative and to obtain exponential estimates for the chromatic number of the space. This work is devoted to the study of the number of cliques and the chromatic number of such graphs under certain conditions. Constructions of sequences of distance graphs are given, in which the graphs have unit length edges and contain a large number of triangles that lie on a sphere of radius 1/√3 (which is the minimum possible). At the same time, the chromatic numbers of the graphs depend exponentially on their dimension. The results of this work strengthen and generalize some of the results obtained in a series of papers devoted to related issues. Bibliography: 29 titles.

  20. Combinations of large-scale circulation anomalies conducive to precipitation extremes in the Czech Republic

    Czech Academy of Sciences Publication Activity Database

    Kašpar, Marek; Müller, Miloslav

    2014-01-01

    Roč. 138, March 2014 (2014), s. 205-212 ISSN 0169-8095 R&D Projects: GA ČR(CZ) GAP209/11/1990 Institutional support: RVO:68378289 Keywords : precipitation extreme * synoptic-scale cause * re-analysis * circulation anomaly Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 2.844, year: 2014 http://www.sciencedirect.com/science/article/pii/S0169809513003372

  1. ON AN EXPONENTIAL INEQUALITY AND A STRONG LAW OF LARGE NUMBERS FOR MONOTONE MEASURES

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2014-01-01

    Roč. 50, č. 5 (2014), s. 804-813 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Choquet expectation * a strong law of large numbers * exponential inequality * monotone probability Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0438052.pdf

  2. Strong Laws of Large Numbers for Arrays of Rowwise NA and LNQD Random Variables

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2011-01-01

    Some strong laws of large numbers and strong convergence properties for arrays of rowwise negatively associated (NA) and linearly negative quadrant dependent (LNQD) random variables are obtained. The results not only generalize the result of Hu and Taylor to NA and LNQD random variables, but also improve it.

  3. The lore of large numbers: some historical background to the anthropic principle

    International Nuclear Information System (INIS)

    Barrow, J.D.

    1981-01-01

    A description is given of how the study of numerological coincidences in physics and cosmology led first to the Large Numbers Hypothesis of Dirac and then to the suggestion of the Anthropic Principle in a variety of forms. The early history of 'coincidences' is discussed together with the work of Weyl, Eddington and Dirac. (author)
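One of the coincidences that prompted Dirac's Large Numbers Hypothesis — the ratio of electric to gravitational attraction between a proton and an electron, a dimensionless number of order 10^39 — is easy to verify directly (constants are rounded CODATA values):

```python
# Dirac's large-number coincidence: the ratio of electric to gravitational
# attraction between a proton and an electron (rounded CODATA values).
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
G    = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
m_e  = 9.1093837015e-31   # electron mass, kg
m_p  = 1.67262192369e-27  # proton mass, kg

pi = 3.141592653589793
force_ratio = e**2 / (4 * pi * eps0 * G * m_e * m_p)
print(f"{force_ratio:.3e}")  # ~2.269e+39, one of Dirac's ~10^39 numbers
```

Dirac's observation was that this ratio is comparable to the age of the universe expressed in atomic time units, which led him to conjecture a deep connection between micro- and cosmological physics.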

  4. The three-large-primes variant of the number field sieve

    NARCIS (Netherlands)

    S.H. Cavallar

    2002-01-01

    The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard in 1988. Since then several variants have been implemented with the objective of improving the siever, which is the most time-consuming part of

  5. SECRET SHARING SCHEMES WITH STRONG MULTIPLICATION AND A LARGE NUMBER OF PLAYERS FROM TORIC VARIETIES

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2017-01-01

    This article considers Massey's construction for building linear secret sharing schemes from toric varieties over a finite field $\\Fq$ with $q$ elements. The number of players can be as large as $(q-1)^r-1$ for $r\\geq 1$. The schemes have strong multiplication; such schemes can be utilized in ...

  6. Optimal number of coarse-grained sites in different components of large biomolecular complexes.

    Science.gov (United States)

    Sinitskiy, Anton V; Saunders, Marissa G; Voth, Gregory A

    2012-07-26

    The computational study of large biomolecular complexes (molecular machines, cytoskeletal filaments, etc.) is a formidable challenge facing computational biophysics and biology. To achieve biologically relevant length and time scales, coarse-grained (CG) models of such complexes usually must be built and employed. One of the important early stages in this approach is to determine an optimal number of CG sites in different constituents of a complex. This work presents a systematic approach to this problem. First, a universal scaling law is derived and numerically corroborated for the intensity of the intrasite (intradomain) thermal fluctuations as a function of the number of CG sites. Second, this result is used for derivation of the criterion for the optimal number of CG sites in different parts of a large multibiomolecule complex. In the zeroth-order approximation, this approach validates the empirical rule of taking one CG site per fixed number of atoms or residues in each biomolecule, previously widely used for smaller systems (e.g., individual biomolecules). The first-order corrections to this rule are derived and numerically checked by the case studies of the Escherichia coli ribosome and Arp2/3 actin filament junction. In different ribosomal proteins, the optimal number of amino acids per CG site is shown to differ by a factor of 3.5, and an even wider spread may exist in other large biomolecular complexes. Therefore, the method proposed in this paper is valuable for the optimal construction of CG models of such complexes.

  7. Calculation of large Reynolds number two-dimensional flow using discrete vortices with random walk

    International Nuclear Information System (INIS)

    Milinazzo, F.; Saffman, P.G.

    1977-01-01

    The numerical calculation of two-dimensional rotational flow at large Reynolds number is considered. The method of replacing a continuous distribution of vorticity by a finite number, N, of discrete vortices is examined, where the vortices move under their mutually induced velocities plus a random component to simulate the effects of viscosity. The accuracy of the method is studied by comparison with the exact solution for the decay of a circular vortex. It is found, and analytical arguments are produced in support, that the quantitative error is significant unless N is large compared with a characteristic Reynolds number. The mutually induced velocities are calculated both by direct summation and by the "cloud in cell" technique. The latter method is found to produce comparable error and to be much faster.
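The scheme described — vortices advected by their mutually induced velocities plus a Gaussian random walk whose per-step variance 2νΔt mimics viscous diffusion — can be sketched as follows. This is a minimal illustration with an assumed core regularization, not the paper's code; for a circular vortex started as a point, the mean square radius should grow like 4νt.

```python
import numpy as np

def step_vortices(z, gamma, nu, dt, rng, delta=1e-3):
    """One step of the random-walk vortex method (minimal sketch).

    z: complex vortex positions; gamma: circulations; nu: viscosity.
    """
    dz = z[:, None] - z[None, :]
    r2 = np.abs(dz) ** 2 + delta ** 2      # small core radius regularizes
    np.fill_diagonal(r2, np.inf)           # no self-induction
    # 2D point-vortex law: v_i = (i/2pi) sum_j gamma_j (z_i-z_j)/|z_i-z_j|^2
    vel = 1j / (2 * np.pi) * (gamma[None, :] * dz / r2).sum(axis=1)
    # Random walk with variance 2*nu*dt per coordinate mimics viscosity.
    noise = rng.normal(scale=np.sqrt(2 * nu * dt), size=(2, len(z)))
    return z + vel * dt + noise[0] + 1j * noise[1]

# Decay of a circular vortex: all circulation initially at the origin.
rng = np.random.default_rng(0)
N, nu, dt = 200, 0.01, 0.01
z = np.zeros(N, dtype=complex)
gamma = np.full(N, 0.1 / N)                # total circulation 0.1
for _ in range(100):
    z = step_vortices(z, gamma, nu, dt, rng)
# Mean square radius should be close to 4*nu*t = 0.04 at t = 1,
# up to sampling noise.
print(np.mean(np.abs(z) ** 2))
```

The induced velocities are purely tangential, so the radial spread is driven by the random walk alone, which is exactly how the method emulates viscous decay.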

  8. Forcings and feedbacks on convection in the 2010 Pakistan flood: Modeling extreme precipitation with interactive large-scale ascent

    Science.gov (United States)

    Nie, Ji; Shaevitz, Daniel A.; Sobel, Adam H.

    2016-09-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing an extreme precipitation event can be difficult to disentangle. Here we examine, within the column quasi-geostrophic framework, the large-scale forcings and convective heating feedback in the precipitation events that caused the 2010 Pakistan flood. A cloud-resolving model (CRM) is forced with large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation using input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating on the large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic lifting is the most important dynamic forcing in both events, while differential potential vorticity advection also contributes to the triggering of the first event. Horizontal moisture advection modulates the extreme events mainly by setting the environmental humidity, which controls the amplitude of the convection's response to the dynamic forcings. When the CRM is replaced by either a single-column model (SCM) with parameterized convection or a dry model with a reduced effective static stability, the model results show substantial discrepancies compared with the reanalysis data. The reasons for these discrepancies are examined, and the implications for global models and theoretical models are discussed.

  9. Genomic Selection Using Extreme Phenotypes and Pre-Selection of SNPs in Large Yellow Croaker (Larimichthys crocea).

    Science.gov (United States)

    Dong, Linsong; Xiao, Shijun; Chen, Junwei; Wan, Liang; Wang, Zhiyong

    2016-10-01

    Genomic selection (GS) is an effective method to improve the predictive accuracy of genetic values. However, the high cost of genotyping limits the application of this technology in some species. It is therefore necessary to find ways to reduce genotyping costs in genomic selection. Large yellow croaker is one of the most commercially important marine fish species in southeast China and Eastern Asia. In this study, genotyping-by-sequencing was used to construct the libraries for NGS sequencing, yielding 29,748 SNPs in the genome. Two traits were chosen for study: eviscerated weight (EW) and the ratio between eviscerated weight and whole body weight (REW). Two strategies to reduce costs were proposed: selecting extreme phenotypes (EP) for genotyping in the reference population, or pre-selecting SNPs to construct low-density marker panels for the candidates. Three methods of pre-selecting SNPs were studied: by absolute effects (SE), by single marker analysis (SMA), and by fixed intervals of sequence number (EL). The results showed that using EP is a feasible way to save genotyping costs in the reference population; heritability did not appear to have an obvious influence on the predictive abilities estimated by EP. Using SMA was the most feasible method to save genotyping costs for the candidates. In addition, the combination of EP and SMA in genomic selection also showed good results, especially for the REW trait. We also describe how to apply the new methods in genomic selection and compare the genotyping costs before and after using them. Our study may offer a reference not only for aquatic genomic breeding but also for genomic prediction in other species, including livestock and plants.

  10. Extreme weather events in southern Germany - Climatological risk and development of a large-scale identification procedure

    Science.gov (United States)

    Matthies, A.; Leckebusch, G. C.; Rohlfing, G.; Ulbrich, U.

    2009-04-01

    Extreme weather events such as thunderstorms, hail, and heavy rain or snowfall can pose a threat to human life and to considerable tangible assets. Yet there is a lack of knowledge about the present-day climatological risk, its economic effects, and its changes due to rising greenhouse gas concentrations. Therefore, parts of the economy that are particularly sensitive to extreme weather events, such as insurance companies and airports, require regional risk analyses, early warning, and prediction systems to cope with such events. Such an attempt is made here for southern Germany, in close cooperation with stakeholders. Comparing ERA40 and station data with impact records of Munich Re and Munich Airport, the 90th percentile was found to be a suitable threshold for extreme, impact-relevant precipitation events. Different methods for classifying the causative synoptic situations have been tested on ERA40 reanalyses. An objective scheme for the classification of Lamb's circulation weather types (CWTs) has proved most suitable for correct classification of the large-scale flow conditions. Certain CWTs have turned out to be prone to heavy precipitation, while others carry a very low risk of such events. Other large-scale parameters are being tested in combination with CWTs to find the combination with the highest skill in identifying extreme precipitation events in climate model data (ECHAM5 and CLM). For example, vorticity advection at 700 hPa shows good results but presumes knowledge of regional orographic particularities. Therefore, ongoing work focuses on additional testing of parameters that indicate deviations from a basic state of the atmosphere, such as the Eady growth rate or the newly developed Dynamic State Index. Evaluation results will be used to estimate the skill of the regional climate model CLM in simulating the frequency and intensity of extreme weather events. 
Data from the A1B scenario (2000-2050) will be examined for a possible climate change signal.
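
    The percentile-based event definition used above is easy to illustrate. The sketch below uses synthetic data (wet-day rainfall assumed gamma-distributed, a common rough model) and flags days exceeding the 90th percentile of wet days:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily precipitation series (mm); a gamma distribution is a
# common rough model for wet-day rainfall amounts.
precip = rng.gamma(shape=0.8, scale=6.0, size=10_000)

# Threshold: 90th percentile of wet days (> 1 mm), mirroring the
# impact-relevant event definition in the study.
wet_days = precip[precip > 1.0]
threshold = np.percentile(wet_days, 90)

extreme_days = precip > threshold
print(f"90th-percentile threshold: {threshold:.1f} mm")
print(f"fraction of all days flagged extreme: {extreme_days.mean():.3f}")
```

    By construction roughly 10% of wet days, and a smaller fraction of all days, exceed the threshold.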

  11. Complex active regions as the main source of extreme and large solar proton events

    Science.gov (United States)

    Ishkov, V. N.

    2013-12-01

    A study of solar proton sources indicated that solar flare events responsible for ≥2000 pfu proton fluxes mostly occur in complex active regions (CARs), i.e., in transition structures between active regions and activity complexes. Different classes of similar structures and their relation to solar proton events (SPEs) and evolution, depending on the origination conditions, are considered. Arguments in favor of the fact that sunspot groups with extreme dimensions are CARs are presented. An analysis of the flare activity in a CAR resulted in the detection of "physical" boundaries, which separate magnetic structures of the same polarity and are responsible for the independent development of each structure.

  12. Extremely Large Magnetoresistance at Low Magnetic Field by Coupling the Nonlinear Transport Effect and the Anomalous Hall Effect.

    Science.gov (United States)

    Luo, Zhaochu; Xiong, Chengyue; Zhang, Xu; Guo, Zhen-Gang; Cai, Jianwang; Zhang, Xiaozhong

    2016-04-13

    The anomalous Hall effect of a magnetic material is coupled to the nonlinear transport effect of a semiconductor material in a simple structure to achieve a large geometric magnetoresistance (MR) based on a diode-assisted mechanism. An extremely large MR (>10^4 %) at low magnetic fields (1 mT) is observed at room temperature. This MR device shows potential for use as a logic gate for the four basic Boolean logic operations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Break down of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1995-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a breakdown of the law of large numbers as seen in globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the true Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs
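
    The globally coupled logistic maps referred to above can be simulated directly. The minimal sketch below (Kaneko-style mean-field coupling; the parameter values a = 1.99, ε = 0.1 are illustrative, not taken from this record) estimates the variance of the mean field for increasing system sizes. If the law of large numbers held, this variance would fall off roughly as 1/N; in the turbulent phase it saturates instead:

```python
import numpy as np

def mean_field_variance(N, a=1.99, eps=0.1, steps=2000, burn=500, seed=1):
    """Variance of the mean field h(t) for N globally coupled logistic maps.

    Kaneko-style update: x_i(t+1) = (1-eps)*f(x_i(t)) + eps*h(t),
    where f(x) = 1 - a*x**2 and h(t) is the average of f over all elements.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=N)
    hs = []
    for t in range(steps):
        fx = 1.0 - a * x * x
        h = fx.mean()
        x = (1.0 - eps) * fx + eps * h
        if t >= burn:
            hs.append(h)
    return np.var(hs)

# Under the law of large numbers Var[h] would shrink roughly as 1/N;
# in the turbulent phase the mean-field fluctuations saturate instead.
for N in (100, 1000, 10000):
    print(N, mean_field_variance(N))
```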

  14. Breakdown of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1994-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a breakdown of the law of large numbers as seen in the globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs

  15. The holographic dual of a Riemann problem in a large number of dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, Christopher P.; Spillane, Michael [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy,Stony Brook University, Stony Brook, NY 11794 (United States); Yarom, Amos [Department of Physics, Technion,Haifa 32000 (Israel)

    2016-08-22

    We study properties of a non-equilibrium steady state generated when two heat baths are initially in contact with one another. The dynamics of the system we study are governed by holographic duality in a large number of dimensions. We discuss the “phase diagram” associated with the steady state, the dual, dynamical, black hole description of this problem, and its relation to the fluid/gravity correspondence.

  16. Phases of a stack of membranes in a large number of dimensions of configuration space

    Science.gov (United States)

    Borelli, M. E.; Kleinert, H.

    2001-05-01

    The phase diagram of a stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is calculated exactly in a large number of dimensions of configuration space. At low temperatures, the system forms a lamellar phase with spontaneously broken translational symmetry in the vertical direction. At a critical temperature, the stack disorders vertically in a meltinglike transition. The critical temperature is determined as a function of the interlayer separation l.

  17. Early stage animal hoarders: are these owners of large numbers of adequately cared for cats?

    OpenAIRE

    Ramos, D.; da Cruz, N. O.; Ellis, Sarah; Hernandez, J. A. E.; Reche-Junior, A.

    2013-01-01

    Animal hoarding is a spectrum-based condition in which hoarders are often reported to have had normal and appropriate pet-keeping habits in childhood and early adulthood. Historically, research has focused largely on well established clinical animal hoarders with little work targeted towards the onset and development of animal hoarding. This study investigated whether a Brazilian population of owners of what might typically be considered an excessive number (20 or more) of cats were more like...

  18. Loss of locality in gravitational correlators with a large number of insertions

    Science.gov (United States)

    Ghosh, Sudip; Raju, Suvrat

    2017-09-01

    We review lessons from the AdS/CFT correspondence that indicate that the emergence of locality in quantum gravity is contingent upon considering observables with a small number of insertions. Correlation functions, where the number of insertions scales with a power of the central charge of the CFT, are sensitive to nonlocal effects in the bulk theory, which arise from a combination of the effects of the bulk Gauss law and a breakdown of perturbation theory. To examine whether a similar effect occurs in flat space, we consider the scattering of massless particles in the bosonic string and the superstring in the limit where the number of external particles, n, becomes very large. We use estimates of the volume of the Weil-Petersson moduli space of punctured Riemann surfaces to argue that string amplitudes grow factorially in this limit. We verify this factorial behavior through an extensive numerical analysis of string amplitudes at large n. Our numerical calculations rely on the observation that, in the large n limit, the string scattering amplitude localizes on the Gross-Mende saddle points, even though individual particle energies are small. This factorial growth implies the breakdown of string perturbation theory for n ∼ (M_pl/E)^(d-2) in d dimensions, where E is the typical individual particle energy. We explore the implications of this breakdown for the black hole information paradox. We show that the loss of locality suggested by this breakdown is precisely sufficient to resolve the cloning and strong subadditivity paradoxes.

  19. A methodology for the synthesis of heat exchanger networks having large numbers of uncertain parameters

    International Nuclear Information System (INIS)

    Novak Pintarič, Zorka; Kravanja, Zdravko

    2015-01-01

    This paper presents a robust computational methodology for the synthesis and design of flexible HENs (Heat Exchanger Networks) having large numbers of uncertain parameters. The methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specific level of confidence. During the first step, a HEN topology is generated under nominal conditions, followed by determination of the points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is then formulated at the nominal point with flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization, and the flexibility index is evaluated by solving one-scenario problems within a loop. The methodology is novel in the enormous reduction it achieves in the number of scenarios, and hence in the computational effort, of HEN design problems. Despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • Methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • Drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of HEN is guaranteed at a specific level of confidence.
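
    The Monte Carlo flexibility test described above can be sketched generically: sample the uncertain parameters and check what fraction of scenarios a given design can handle. Everything below is a toy stand-in, not the paper's HEN model; the single constraint g(d, θ) ≤ 0 plays the role of the network feasibility conditions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative feasibility constraint: design d is feasible for a scenario
# when g(d, theta) <= 0. In a real HEN this would be the heat-balance and
# approach-temperature constraints of the network.
def g(d, theta):
    return theta[:, 0] + 0.5 * theta[:, 1] - d

d_nominal = 2.0    # design sized at the nominal point only
d_flexible = 3.5   # design sized with the critical points included

# Sampled uncertain parameters (two here; the case study has 42).
theta = rng.normal(loc=1.0, scale=0.5, size=(50_000, 2))

for name, d in (("nominal", d_nominal), ("flexible", d_flexible)):
    feasible = np.mean(g(d, theta) <= 0)
    print(f"{name} design: feasible for {feasible:.1%} of scenarios")
```

    The flexible design's feasible fraction is the confidence level at which its flexibility is guaranteed.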

  20. Large Area Diamond Tribological Surfaces with Negligible Wear in Extreme Environments, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In Phase I we propose to demonstrate the processing of very large area diamond sliding bearings and tribological surfaces. The bearings and surfaces will experience...

  1. A full picture of large lepton number asymmetries of the Universe

    Energy Technology Data Exchange (ETDEWEB)

    Barenboim, Gabriela [Departament de Física Teòrica and IFIC, Universitat de València-CSIC, C/ Dr. Moliner, 50, Burjassot, E-46100 Spain (Spain); Park, Wan-Il, E-mail: Gabriela.Barenboim@uv.es, E-mail: wipark@jbnu.ac.kr [Department of Science Education (Physics), Chonbuk National University, 567 Baekje-daero, Jeonju, 561-756 (Korea, Republic of)

    2017-04-01

    A large lepton number asymmetry of O(0.1−1) in the present Universe might not only be allowed but also necessary for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high-scale lepton number asymmetry to be larger than about 0.3. Therefore a mild entropy release causing O(10-100) suppression of pre-existing particle density should take place when the background temperature of the Universe is around T = O(10^−2-10^2) GeV for a large but experimentally consistent asymmetry to be present today. We also show that such a mild entropy production can be obtained by the late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector, such as the mass and the vacuum expectation value of the saxion field, to be m_φ ≳ O(10) TeV and φ_0 ≳ O(10^14) GeV, respectively.

  2. Monitoring a large number of pesticides and transformation products in water samples from Spain and Italy.

    Science.gov (United States)

    Rousis, Nikolaos I; Bade, Richard; Bijlsma, Lubertus; Zuccato, Ettore; Sancho, Juan V; Hernandez, Felix; Castiglioni, Sara

    2017-07-01

    Assessing the presence of pesticides in environmental waters is particularly challenging because of the huge number of substances used which may end up in the environment. Furthermore, the occurrence of pesticide transformation products (TPs) and/or metabolites makes this task even harder. Most studies dealing with the determination of pesticides in water include only a small number of analytes and in many cases no TPs. The present study applied a screening method for the determination of a large number of pesticides and TPs in wastewater (WW) and surface water (SW) from Spain and Italy. Liquid chromatography coupled to high-resolution mass spectrometry (HRMS) was used to screen a database of 450 pesticides and TPs. Detection and identification were based on specific criteria, i.e. mass accuracy, fragmentation, and comparison of retention times when reference standards were available, or a retention time prediction model when standards were not available. Seventeen pesticides and TPs from different classes (fungicides, herbicides and insecticides) were found in WW in Italy and Spain, and twelve in SW. Generally, in both countries more compounds were detected in effluent WW than in influent WW, and in SW than WW. This might be due to the analytical sensitivity in the different matrices, but also to the presence of multiple sources of pollution. HRMS proved a good screening tool to determine a large number of substances in water and identify some priority compounds for further quantitative analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
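
    The suspect-screening step described above boils down to matching measured accurate masses against a compound database within a tight mass-accuracy window. A minimal sketch (the database entries and tolerance below are illustrative; a real workflow would also use fragmentation and retention-time criteria, as the study does):

```python
# Illustrative [M+H]+ m/z values for a few pesticides (hypothetical database
# excerpt; a real suspect list would hold hundreds of compounds and TPs).
database = {
    "atrazine": 216.1010,
    "terbuthylazine": 230.1167,
    "imidacloprid": 256.0596,
}

def screen(measured_mz, tol_ppm=5.0):
    """Return database compounds whose m/z matches within tol_ppm."""
    hits = []
    for name, mz in database.items():
        if abs(measured_mz - mz) / mz * 1e6 <= tol_ppm:
            hits.append(name)
    return hits

print(screen(216.1011))  # a measurement 0.5 ppm away from atrazine
```

    Candidates passing the mass filter would then be confirmed against fragment ions and predicted or measured retention times.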

  3. Combining large model ensembles with extreme value statistics to improve attribution statements of rare events

    Directory of Open Access Journals (Sweden)

    Sebastian Sippel

    2015-09-01

    In conclusion, our study shows that EVT and empirical estimates based on numerical simulations can indeed be used to productively inform each other, for instance to derive appropriate EVT parameters for short observational time series. Further, the combination of ensemble simulations with EVT allows us to significantly reduce the number of simulations needed for statements about the tails.
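
    A common way to combine large ensembles with extreme value theory, as discussed above, is to fit a generalized extreme value (GEV) distribution to block maxima pooled from the ensemble. A minimal sketch on synthetic data (daily values assumed Gumbel-distributed; all parameter values are illustrative):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)

# Hypothetical "large ensemble": 1000 simulated years of daily values;
# the annual maxima are the block maxima used for the GEV fit.
daily = rng.gumbel(loc=20.0, scale=5.0, size=(1000, 365))
annual_max = daily.max(axis=1)

# Fit a GEV distribution to the block maxima (shape c = 0 is the Gumbel case).
c, loc, scale = genextreme.fit(annual_max)

# 100-year return level: the value exceeded with probability 1/100 per year.
rl_100 = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
print(f"GEV shape={c:.3f}, loc={loc:.2f}, scale={scale:.2f}")
print(f"estimated 100-year return level: {rl_100:.1f}")
```

    With many ensemble members the pooled sample of block maxima is large enough to constrain the GEV parameters far better than a short observational series alone.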

  4. DISCOVERY OF MASSIVE, MOSTLY STAR FORMATION QUENCHED GALAXIES WITH EXTREMELY LARGE Lyα EQUIVALENT WIDTHS AT z ∼ 3

    Energy Technology Data Exchange (ETDEWEB)

    Taniguchi, Yoshiaki; Kajisawa, Masaru; Kobayashi, Masakazu A. R.; Nagao, Tohru; Shioya, Yasuhiro [Research Center for Space and Cosmic Evolution, Ehime University, Bunkyo-cho, Matsuyama 790-8577 (Japan); Scoville, Nick Z.; Capak, Peter L. [Department of Astronomy, California Institute of Technology, MS 105-24, Pasadena, CA 91125 (United States); Sanders, David B. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Koekemoer, Anton M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Toft, Sune [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Mariesvej 30, DK-2100 Copenhagen (Denmark); McCracken, Henry J. [Institut d’Astrophysique de Paris, UMR7095 CNRS, Université Pierre et Marie Curie, 98 bis Boulevard Arago, F-75014 Paris (France); Le Fèvre, Olivier; Tasca, Lidia; Ilbert, Olivier [Aix Marseille Université, CNRS, LAM (Laboratoire d’Astrophysique de Marseille), UMR 7326, F-13388 Marseille (France); Sheth, Kartik [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States); Renzini, Alvio [Dipartimento di Astronomia, Universita di Padova, vicolo dell’Osservatorio 2, I-35122 Padua (Italy); Lilly, Simon; Carollo, Marcella; Kovač, Katarina [Department of Physics, ETH Zurich, 8093 Zurich (Switzerland); Schinnerer, Eva, E-mail: tani@cosmos.phys.sci.ehime-u.ac.jp [MPI for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); and others

    2015-08-10

    We report a discovery of six massive galaxies with both extremely large Lyα equivalent widths (EWs) and evolved stellar populations at z ∼ 3. These MAssive Extremely STrong Lyα emitting Objects (MAESTLOs) have been discovered in our large-volume systematic survey for strong Lyα emitters (LAEs) with 12 optical intermediate-band data taken with Subaru/Suprime-Cam in the COSMOS field. Based on the spectral energy distribution fitting analysis for these LAEs, it is found that these MAESTLOs have (1) large rest-frame EWs of EW_0(Lyα) ∼ 100–300 Å, (2) M_⋆ ∼ 10^10.5–10^11.1 M_⊙, and (3) relatively low specific star formation rates of SFR/M_⋆ ∼ 0.03–1 Gyr^−1. Three of the six MAESTLOs have extended Lyα emission with a radius of several kiloparsecs, although they show very compact morphology in the HST/ACS images, which correspond to the rest-frame UV continuum. Since the MAESTLOs do not show any evidence for active galactic nuclei, the observed extended Lyα emission is likely to be caused by a star formation process including the superwind activity. We suggest that this new class of LAEs, MAESTLOs, provides a missing link from star-forming to passively evolving galaxies at the peak era of the cosmic star formation history.

  5. Impact factors for Reggeon-gluon transition in N=4 SYM with large number of colours

    Energy Technology Data Exchange (ETDEWEB)

    Fadin, V.S., E-mail: fadin@inp.nsk.su [Budker Institute of Nuclear Physics of SD RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation); Fiore, R., E-mail: roberto.fiore@cs.infn.it [Dipartimento di Fisica, Università della Calabria, and Istituto Nazionale di Fisica Nucleare, Gruppo collegato di Cosenza, Arcavacata di Rende, I-87036 Cosenza (Italy)

    2014-06-27

    We calculate impact factors for Reggeon-gluon transition in supersymmetric Yang–Mills theory with four supercharges at large number of colours N_c. In the next-to-leading order impact factors are not uniquely defined and must accord with BFKL kernels and energy scales. We obtain the impact factor corresponding to the kernel and the energy evolution parameter, which is invariant under Möbius transformation in momentum space, and show that it is also Möbius invariant up to terms taken into account in the BDS ansatz.

  6. Do neutron stars disprove multiplicative creation in Dirac's large number hypothesis

    International Nuclear Information System (INIS)

    Qadir, A.; Mufti, A.A.

    1980-07-01

    Dirac's cosmology, based on his large number hypothesis, took the gravitational coupling to be decreasing with time and matter to be created as the square of time. Since the effects predicted by Dirac's theory are very small, it is difficult to find a "clean" test for it. Here we show that the observed radiation from pulsars is inconsistent with Dirac's multiplicative creation model, in which the matter created is proportional to the density of matter already present. Of course, this discussion makes no comment on the "additive creation" model, or on the revised version of Dirac's theory. (author)
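
    The "large numbers" behind the hypothesis are easy to reproduce: the ratio of the electric to the gravitational force between a proton and an electron is of order 10^39, comparable to the age of the Universe expressed in atomic time units, which is the coincidence Dirac set out to explain. A quick check with standard (rounded) constant values:

```python
import math

# Standard physical constants (SI, rounded).
e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
m_p = 1.67262192e-27      # proton mass, kg
m_e = 9.1093837e-31       # electron mass, kg

# Coulomb attraction over Newtonian attraction for a proton-electron pair;
# the separation cancels, leaving a pure dimensionless number.
force_ratio = e**2 / (4 * math.pi * eps0) / (G * m_p * m_e)
print(f"electric/gravitational force ratio: {force_ratio:.2e}")
```

    The result is about 2.3 × 10^39, one of the dimensionless "large numbers" of order 10^39-10^40 on which the hypothesis rests.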

  7. Law of large numbers and central limit theorem for randomly forced PDE's

    CERN Document Server

    Shirikyan, A

    2004-01-01

    We consider a class of dissipative PDE's perturbed by an external random force. Under the condition that the distribution of perturbation is sufficiently non-degenerate, a strong law of large numbers (SLLN) and a central limit theorem (CLT) for solutions are established and the corresponding rates of convergence are estimated. It is also shown that the estimates obtained are close to being optimal. The proofs are based on the property of exponential mixing for the problem in question and some abstract SLLN and CLT for mixing-type Markov processes.

  8. On the Convergence and Law of Large Numbers for the Non-Euclidean Lp -Means

    Directory of Open Access Journals (Sweden)

    George Livadiotis

    2017-05-01

    This paper describes and proves two important theorems that compose the Law of Large Numbers for the non-Euclidean L_p-means, known to be true for the Euclidean L_2-means: Let the L_p-mean estimator be the specific functional that estimates the L_p-mean of N independent and identically distributed random variables; then, (i) the expectation value of the L_p-mean estimator equals the mean of the distributions of the random variables; and (ii) the limit N → ∞ of the L_p-mean estimator also equals the mean of the distributions.
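
    The L_p-mean can be computed numerically as the minimizer μ of Σ_i |x_i − μ|^p (p = 2 gives the arithmetic mean, p = 1 the median). A minimal sketch on synthetic normal data, illustrating the convergence claimed by the theorems:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lp_mean(x, p):
    """L_p-mean estimator: the mu minimizing sum_i |x_i - mu|**p."""
    res = minimize_scalar(lambda mu: np.sum(np.abs(x - mu) ** p),
                          bounds=(x.min(), x.max()), method="bounded")
    return res.x

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=1.0, size=100_000)

# For a symmetric distribution, every L_p-mean should approach the
# distribution mean (here 3.0) as N grows, as the theorems assert.
for p in (1.0, 1.5, 2.0, 3.0):
    print(p, lp_mean(x, p))
```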

  9. Extreme hydrometeorological events in the Peruvian Central Andes during austral summer and their relationship with the large-scale circulation

    Science.gov (United States)

    Sulca, Juan C.

    In this Master's dissertation, atmospheric circulation patterns associated with extreme hydrometeorological events in the Mantaro Basin, Peruvian Central Andes, and their teleconnections during the austral summer (December-January-February-March) are addressed. Extreme rainfall events in the Mantaro basin are related to variations of the large-scale circulation as indicated by the changing strength of the Bolivian High-Nordeste Low (BH-NL) system. Dry (wet) spells are associated with a weakening (strengthening) of the BH-NL system and reduced (enhanced) influx of moist air from the lowlands to the east due to strengthened westerly (easterly) wind anomalies at mid- and upper-tropospheric levels. At the same time extreme rainfall events of the opposite sign occur over northeastern Brazil (NEB) due to enhanced (inhibited) convective activity in conjunction with a strengthened (weakened) Nordeste Low. Cold episodes in the Mantaro Basin are grouped into three types: weak, strong and extraordinary cold episodes. Weak and strong cold episodes in the MB are mainly associated with a weakening of the BH-NL system due to tropical-extratropical interactions. Both types of cold episodes are associated with westerly wind anomalies at mid- and upper-tropospheric levels above the Peruvian Central Andes, which inhibit the influx of humid air masses from the lowlands to the east and hence limit the potential for development of convective cloud cover. The resulting clear sky conditions cause nighttime temperatures to drop, leading to cold extremes below the 10th percentile. Extraordinary cold episodes in the MB are associated with cold and dry polar air advection at all tropospheric levels toward the central Peruvian Andes. Therefore, weak and strong cold episodes in the MB appear to be caused by radiative cooling associated with reduced cloudiness, rather than cold air advection, while the latter plays an important role for extraordinary cold episodes only.

  10. Rain Characteristics and Large-Scale Environments of Precipitation Objects with Extreme Rain Volumes from TRMM Observations

    Science.gov (United States)

    Zhou, Yaping; Lau, William K M.; Liu, Chuntao

    2013-01-01

    This study adopts a "precipitation object" approach by using 14 years of Tropical Rainfall Measuring Mission (TRMM) Precipitation Feature (PF) and National Centers for Environmental Prediction (NCEP) reanalysis data to study rainfall structure and environmental factors associated with extreme heavy rain events. Characteristics of instantaneous extreme volumetric PFs are examined and compared to those of intermediate and small systems. It is found that instantaneous PFs exhibit a much wider scale range compared to the daily gridded precipitation accumulation range. The top 1% of the rainiest PFs contribute over 55% of total rainfall and have rain volumes 2 orders of magnitude greater than those of the median PFs. We find a threshold near the top 10% beyond which the PFs grow exponentially into larger, deeper, and colder rain systems. NCEP reanalyses show that midlevel relative humidity and total precipitable water increase steadily with increasingly larger PFs, along with a rapid increase of 500 hPa upward vertical velocity beyond the top 10%. This provides the necessary moisture convergence to amplify and sustain the extreme events. The rapid increase in vertical motion is associated with the release of convective available potential energy (CAPE) in mature systems, as is evident in the increase in CAPE of PFs up to 10% and the subsequent dropoff. The study illustrates distinct stages in the development of an extreme rainfall event including: (1) a systematic buildup in large-scale temperature and moisture, (2) a rapid change in rain structure, (3) explosive growth of the PF size, and (4) a release of CAPE before the demise of the event.
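
    The headline statistic above, a top percentile of objects dominating the total, is a generic property of heavily skewed size distributions. A minimal sketch with synthetic data (rain volumes assumed lognormal; the sigma value is illustrative, chosen only to produce a comparably heavy tail):

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical rain volumes for precipitation objects; a lognormal with a
# heavy tail mimics the skewed size distribution of the PF population.
volumes = rng.lognormal(mean=0.0, sigma=2.5, size=100_000)

cutoff = np.percentile(volumes, 99)  # boundary of the top 1% of objects
share = volumes[volumes >= cutoff].sum() / volumes.sum()
print(f"top 1% of objects contribute {share:.0%} of total rain volume")
```

    With this tail heaviness the top 1% carries more than half of the total volume, of the same order as the >55% contribution reported for the rainiest PFs.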

  11. Variability of rRNA Operon Copy Number and Growth Rate Dynamics of Bacillus Isolated from an Extremely Oligotrophic Aquatic Ecosystem

    Science.gov (United States)

    Valdivia-Anistro, Jorge A.; Eguiarte-Fruns, Luis E.; Delgado-Sapién, Gabriela; Márquez-Zacarías, Pedro; Gasca-Pineda, Jaime; Learned, Jennifer; Elser, James J.; Olmedo-Alvarez, Gabriela; Souza, Valeria

    2016-01-01

    The ribosomal RNA (rrn) operon is a key suite of genes related to the production of protein synthesis machinery and thus to bacterial growth physiology. Experimental evidence has suggested an intrinsic relationship between the number of copies of this operon and environmental resource availability, especially the availability of phosphorus (P), because bacteria that live in oligotrophic ecosystems usually have few rrn operons and a slow growth rate. The Cuatro Ciénegas Basin (CCB) is a complex aquatic ecosystem that contains an unusually high microbial diversity that is able to persist under highly oligotrophic conditions. These environmental conditions impose a variety of strong selective pressures that shape the genome dynamics of their inhabitants. The genus Bacillus is one of the most abundant cultivable bacterial groups in the CCB and usually possesses a relatively large number of rrn operon copies (6–15 copies). The main goal of this study was to analyze the variation in the number of rrn operon copies of Bacillus in the CCB and to assess their growth-related properties as well as their stoichiometric balance (N and P content). We defined 18 phylogenetic groups within the Bacilli clade and documented a range of six to 14 copies of the rrn operon. The growth dynamic of these Bacilli was heterogeneous and did not show a direct relation to the number of operon copies. Physiologically, our results were not consistent with the Growth Rate Hypothesis, since the copies of the rrn operon were decoupled from growth rate. However, we speculate that the diversity of the growth properties of these Bacilli as well as the low P content of their cells in an ample range of rrn copy number is an adaptive response to oligotrophy of the CCB and could represent an ecological mechanism that allows these taxa to coexist. These findings increase the knowledge of the variability in the number of copies of the rrn operon in the genus Bacillus and give insights about the

  12. Superposition of elliptic functions as solutions for a large number of nonlinear equations

    International Nuclear Information System (INIS)

    Khare, Avinash; Saxena, Avadh

    2014-01-01

    For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λϕ^4, the discrete MKdV as well as for several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn^2(x, m), it also admits solutions in terms of dn^2(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m)dn(x, m) is not a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations.

  13. Law of Large Numbers: the Theory, Applications and Technology-based Education.

    Science.gov (United States)

    Dinov, Ivo D; Christou, Nicolas; Gould, Robert

    2009-03-01

    Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information retention. In this paper, we describe one such innovative effort of using technological tools to expose students in probability and statistics courses to the theory, practice and usability of the Law of Large Numbers (LLN). We base our approach on integrating pedagogical instruments with the computational libraries developed by the Statistics Online Computational Resource (www.SOCR.ucla.edu). To achieve this merger we designed a new interactive Java applet and a corresponding demonstration activity that illustrate the concept and the applications of the LLN. The LLN applet and activity have common goals - to provide graphical representation of the LLN principle, build lasting student intuition and present the common misconceptions about the law of large numbers. Both the SOCR LLN applet and activity are freely available online to the community to test, validate and extend (Applet: http://socr.ucla.edu/htmls/exp/Coin_Toss_LLN_Experiment.html, and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_LLN).
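
    The coin-toss demonstration behind the SOCR applet and activity is straightforward to reproduce. A minimal sketch of the LLN at work (synthetic fair-coin data): the running proportion of heads settles toward the true probability 0.5 as the number of tosses grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# One long sequence of fair-coin tosses (1 = heads, 0 = tails) and the
# running proportion of heads after each toss.
tosses = rng.integers(0, 2, size=100_000)
running_prop = np.cumsum(tosses) / np.arange(1, tosses.size + 1)

for n in (10, 100, 10_000, 100_000):
    print(n, running_prop[n - 1])
```

    Early proportions fluctuate widely, a common source of the misconceptions (e.g., the gambler's fallacy) that the activity is designed to address, while later ones hug 0.5.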

  14. Wall modeled large eddy simulations of complex high Reynolds number flows with synthetic inlet turbulence

    International Nuclear Information System (INIS)

    Patil, Sunil; Tafti, Danesh

    2012-01-01

    Highlights: ► Large eddy simulation. ► Wall layer modeling. ► Synthetic inlet turbulence. ► Swirl flows. - Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near wall region being modeled with a zonal two layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized co-ordinate system is presented and applied in the near wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two layer formulations present in the literature and is most suitable for complex geometries involving body fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, is investigated in turbulent channel flow, flow over a backward facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.

  15. MRI induced second-degree burn in a patient with extremely large uterine leiomyomas: A case report

    International Nuclear Information System (INIS)

    Lee, Chul Min; Kang, Bo Kyeong; Song, Soon Young; Koh, Byung Hee; Choi, Joong Sub; Lee, Won Moo

    2015-01-01

    Burns and thermal injuries related to magnetic resonance imaging (MRI) are rare. Previous literature indicates that medical devices with cables, cosmetics and tattoos are known risk factors for burns and thermal injuries. However, there is no report of MRI-related burns in Korea. Herein, we report a case of a deep second-degree burn after MRI in a 38-year-old female patient with multiple uterine leiomyomas, including some that were large and degenerated. On retrospective analysis, the suspected cause of injury was the anterior abdominal wall, protruding because of the large uterine leiomyomas, coming into direct contact with the body coil during MRI. Awareness of MRI-related thermal injury, together with extreme care during MRI, is therefore necessary to prevent this hazard.

  16. MRI induced second-degree burn in a patient with extremely large uterine leiomyomas: A case report

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chul Min; Kang, Bo Kyeong; Song, Soon Young; Koh, Byung Hee; Choi, Joong Sub; Lee, Won Moo [Hanyang University Medical Center, Hanyang University College of Medicine, Seoul (Korea, Republic of)

    2015-12-15

    Burns and thermal injuries related to magnetic resonance imaging (MRI) are rare. Previous literature indicates that medical devices with cables, cosmetics, and tattoos are known risk factors for burns and thermal injuries. However, there has been no report of an MRI-related burn in Korea. Herein, we report a case of a deep second-degree burn after MRI in a 38-year-old female patient with multiple uterine leiomyomas, including some that were large and degenerated. Retrospective analysis suggested that the cause of injury was the anterior abdominal wall, protruded by the large uterine leiomyomas, coming into direct contact with the body coil during MRI. Awareness of MRI-related thermal injury, together with extreme care during MRI, is therefore necessary to prevent this hazard.

  17. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    Science.gov (United States)

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms, which are only applicable to isotropic networks, and therefore adapts well to complex deployment environments. The algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, a model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its own location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm adapts to different topological environments with low computational cost. Furthermore, high accuracy is achieved by this method without setting complex parameters.
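
The modeling stage pairs hop-count vectors with known physical distances and fits a regularized extreme learning machine: a single hidden layer with random, fixed weights, and output weights obtained by ridge-regularized least squares. A minimal NumPy sketch of that idea follows; the anchor count, synthetic network data and parameters are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def train_relm(X, T, n_hidden=50, C=10.0, seed=0):
    """Regularized ELM: random fixed hidden layer + ridge-regression output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random biases (never trained)
    H = np.tanh(X @ W + b)                           # hidden-layer outputs
    # Ridge-regularized least squares: beta = (H'H + I/C)^(-1) H'T
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def predict_relm(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy data standing in for the data-acquisition stage: normalized hop-count
# vectors to four hypothetical anchor nodes, with synthetic physical distances.
rng = np.random.default_rng(1)
hops = rng.integers(1, 20, size=(300, 4)) / 20.0          # normalized hop counts
dists = hops * 200.0 + rng.normal(0.0, 2.0, hops.shape)   # synthetic distances, m
model = train_relm(hops, dists)
pred = predict_relm(model, hops)
rmse = float(np.sqrt(np.mean((pred - dists) ** 2)))
```

In the location estimation stage each node would apply `predict_relm` to its own hop-count vector locally, which is what makes the estimation distributed.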

  18. Transmembrane molecular transport during versus after extremely large, nanosecond electric pulses.

    Science.gov (United States)

    Smith, Kyle C; Weaver, James C

    2011-08-19

    Recently there has been intense and growing interest in the non-thermal biological effects of nanosecond electric pulses, particularly apoptosis induction. These effects have been hypothesized to result from the widespread creation of small, lipidic pores in the plasma and organelle membranes of cells (supra-electroporation) and, more specifically, ionic and molecular transport through these pores. Here we show that transport occurs overwhelmingly after pulsing. First, we show that the electrical drift distance for typical charged solutes during nanosecond pulses (up to 100 ns), even those with very large magnitudes (up to 10 MV/m), ranges from only a fraction of the membrane thickness (5 nm) to several membrane thicknesses. This is much smaller than the diameter of a typical cell (∼16 μm), which implies that molecular drift transport during nanosecond pulses is necessarily minimal. This implication is not dependent on assumptions about pore density or the molecular flux through pores. Second, we show that molecular transport resulting from post-pulse diffusion through minimum-size pores is orders of magnitude larger than electrical drift-driven transport during nanosecond pulses. While field-assisted charge entry and the magnitude of flux favor transport during nanosecond pulses, these effects are too small to overcome the orders of magnitude more time available for post-pulse transport. Therefore, the basic conclusion that essentially all transmembrane molecular transport occurs post-pulse holds across the plausible range of relevant parameters. Our analysis shows that a primary direct consequence of nanosecond electric pulses is the creation (or maintenance) of large populations of small pores in cell membranes that govern post-pulse transmembrane transport of small ions and molecules. Copyright © 2011 Elsevier Inc. All rights reserved.
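
The drift-distance argument can be checked with back-of-envelope arithmetic, x = μEt, using the pulse parameters quoted above; the two mobility values below are representative assumptions for a small ion and a larger charged solute, not figures from the paper:

```python
# Order-of-magnitude check of electrophoretic drift during a nanosecond pulse.
E = 10e6          # field strength, V/m (10 MV/m)
t = 100e-9        # pulse duration, s (100 ns)
MEMBRANE = 5e-9   # membrane thickness, m
CELL = 16e-6      # typical cell diameter, m

def drift(mu):
    """Electrical drift distance (m) for mobility mu (m^2 V^-1 s^-1)."""
    return mu * E * t

for name, mu in [("small ion", 5e-8), ("large solute", 1e-9)]:
    x = drift(mu)
    print(f"{name}: {x * 1e9:5.1f} nm = {x / MEMBRANE:5.1f} membrane thicknesses"
          f" = {x / CELL:.1e} cell diameters")
```

Even for the fast small ion, the drift distance stays thousands of times smaller than the cell diameter, consistent with the conclusion that transport during the pulse is minimal.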

  19. Extreme precipitation variability, forage quality and large herbivore diet selection in arid environments

    Science.gov (United States)

    Cain, James W.; Gedir, Jay V.; Marshal, Jason P.; Krausman, Paul R.; Allen, Jamison D.; Duff, Glenn C.; Jansen, Brian; Morgart, John R.

    2017-01-01

    Nutritional ecology forms the interface between environmental variability and large herbivore behaviour, life history characteristics, and population dynamics. Forage conditions in arid and semi-arid regions are driven by unpredictable spatial and temporal patterns in rainfall. Diet selection by herbivores should be directed towards overcoming the most pressing nutritional limitation (i.e. energy, protein [nitrogen, N], moisture) within the constraints imposed by temporal and spatial variability in forage conditions. We investigated the influence of precipitation-induced shifts in forage nutritional quality and subsequent large herbivore responses across widely varying precipitation conditions in an arid environment. Specifically, we assessed seasonal changes in diet breadth and forage selection of adult female desert bighorn sheep Ovis canadensis mexicana in relation to potential nutritional limitations in forage N, moisture and energy content (as proxied by dry matter digestibility, DMD). Succulents were consistently high in moisture but low in N and grasses were low in N and moisture until the wet period. Nitrogen and moisture content of shrubs and forbs varied among seasons and climatic periods, whereas trees had consistently high N and moderate moisture levels. Shrubs, trees and succulents composed most of the seasonal sheep diets but had little variation in DMD. Across all seasons during drought and during summer with average precipitation, forages selected by sheep were higher in N and moisture than that of available forage. Differences in DMD between sheep diets and available forage were minor. Diet breadth was lowest during drought and increased with precipitation, reflecting a reliance on few key forage species during drought. Overall, forage selection was more strongly associated with N and moisture content than energy content. Our study demonstrates that unlike north-temperate ungulates, which are generally reported to be energy-limited, N and moisture

  20. Extremely large nonsaturating magnetoresistance and ultrahigh mobility due to topological surface states in the metallic Bi2Te3 topological insulator

    Science.gov (United States)

    Shrestha, K.; Chou, M.; Graf, D.; Yang, H. D.; Lorenz, B.; Chu, C. W.

    2017-05-01

    Weak antilocalization (WAL) effects in Bi2Te3 single crystals have been investigated at high and low bulk charge-carrier concentrations. At low charge-carrier density the WAL curves scale with the normal component of the magnetic field, demonstrating the dominance of topological surface states in magnetoconductivity. At high charge-carrier density the WAL curves scale with neither the applied field nor its normal component, implying a mixture of bulk and surface conduction. WAL due to topological surface states shows no dependence on the nature (electrons or holes) of the bulk charge carriers. The observations of an extremely large nonsaturating magnetoresistance and ultrahigh mobility in the samples with lower carrier density further support the presence of surface states. The physical parameters characterizing the WAL effects are calculated using the Hikami-Larkin-Nagaoka formula. At high charge-carrier concentrations, there is a greater number of conduction channels and a decrease in the phase coherence length compared to low charge-carrier concentrations. The extremely large magnetoresistance and high mobility of topological insulators have great technological value and can be exploited in magnetoelectric sensors and memory devices.

  1. Conformal window in QCD for large numbers of colors and flavors

    International Nuclear Information System (INIS)

    Zhitnitsky, Ariel R.

    2014-01-01

    We conjecture that the phase transitions in QCD at a large number of colors N ≫ 1 are triggered by a drastic change in the instanton density. As a result, all physical observables also experience a sharp modification in their θ behavior. This conjecture is motivated by the holographic model of QCD, where the confinement-deconfinement phase transition indeed happens precisely at the temperature T = T_c at which the θ-dependence of the vacuum energy experiences a sudden change in behavior: from N^2 cos(θ/N) at T < T_c to cos(θ) exp(−N) at T > T_c. The conjecture is also supported by recent lattice studies. We employ it to study a possible phase transition, as a function of κ ≡ N_f/N, from the confinement to the conformal phase in the Veneziano limit N_f ∼ N, in which the numbers of flavors and colors are large but the ratio κ is finite. Technically, we consider an operator which gets its expectation value solely from non-perturbative instanton effects. When κ exceeds some critical value, κ > κ_c, the integral over the instanton size is dominated by small-size instantons, making the instanton computations reliable, with the expected exp(−N) behavior. However, when κ < κ_c, the integral over the instanton size is dominated by large-size instantons, and the instanton expansion breaks down. This regime, κ < κ_c, corresponds to the confinement phase. We also compute the variation of the critical κ_c(T, μ) when the temperature and chemical potential T, μ ≪ Λ_QCD slightly vary. We also discuss the scaling (x_i − x_j)^(−γ_det) in the conformal phase

  2. Vicious random walkers in the limit of a large number of walkers

    International Nuclear Information System (INIS)

    Forrester, P.J.

    1989-01-01

    The vicious random walker problem on a line is studied in the limit of a large number of walkers. The multidimensional integral representing the probability that the p walkers will survive a time t (denoted P_t(p)) is shown to be analogous to the partition function of a particular one-component Coulomb gas. By assuming the existence of the thermodynamic limit for the Coulomb gas, one can deduce asymptotic formulas for P_t(p) in the large-p, large-t limit. A straightforward analysis gives rigorous asymptotic formulas for the probability that after a time t the walkers are in their initial configuration (this event is termed a reunion). Consequently, asymptotic formulas for the conditional probability of a reunion, given that all walkers survive, are derived. Also, an asymptotic formula for the conditional probability density that any walker will arrive at a particular point at time t, given that all p walkers survive, is calculated in the limit t ≫ p

  3. Automated flow cytometric analysis across large numbers of samples and cell types.

    Science.gov (United States)

    Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno

    2015-04-01

    Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies. Copyright © 2015. Published by Elsevier Inc.
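
The clustering core described above (fit Gaussian mixtures with increasing numbers of components and keep the one minimizing the Bayesian Information Criterion) can be sketched with a minimal NumPy EM implementation. Spherical covariances are used for brevity; FlowGM's actual model, panels and meta-clustering step are not reproduced:

```python
import numpy as np

def fit_gmm(X, k, n_iter=100, seed=0):
    """EM for a spherical-covariance Gaussian mixture; returns (log-likelihood, params)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]   # initialize means at random data points
    var = np.full(k, X.var())                 # one spherical variance per component
    pi = np.full(k, 1.0 / k)                  # mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities, computed via the log-sum-exp trick.
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)                 # (n, k)
        logp = np.log(pi) - 0.5 * d * np.log(2 * np.pi * var) - d2 / (2 * var)
        m = logp.max(axis=1, keepdims=True)
        r = np.exp(logp - m)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances.
        nk = r.sum(axis=0) + 1e-12
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * d2).sum(axis=0) / (d * nk) + 1e-9
    ll = float((m + np.log(np.exp(logp - m).sum(axis=1, keepdims=True))).sum())
    return ll, (pi, mu, var)

def bic(ll, k, n, d):
    n_params = (k - 1) + k * d + k            # weights + means + spherical variances
    return n_params * np.log(n) - 2.0 * ll

# Choose the number of clusters by minimum BIC on two well-separated blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(6.0, 1.0, (200, 2))])
scores = {k: bic(fit_gmm(X, k)[0], k, len(X), X.shape[1]) for k in range(1, 5)}
best_k = min(scores, key=scores.get)
```

BIC trades goodness of fit against model size, so extra components that merely split a true cluster are penalized, which is why it is a natural criterion for automated, operator-free clustering.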

  4. Generation and analysis of expressed sequence tags in the extreme large genomes Lilium and Tulipa

    Directory of Open Access Journals (Sweden)

    Shahin Arwa

    2012-11-01

    Full Text Available Abstract Background Bulbous flowers such as lily and tulip (Liliaceae family) are monocot perennial herbs that are economically very important ornamental plants worldwide. However, hardly any genetic studies have been performed and genomic resources are lacking. To build genomic resources and develop tools to speed up breeding in both crops, next generation sequencing was implemented. We sequenced and assembled transcriptomes of four lily and five tulip genotypes using 454 pyro-sequencing technology. Results We successfully developed the first set of 81,791 contigs with an average length of 514 bp for tulip, and enriched the very limited number of 3,329 available ESTs (Expressed Sequence Tags) for lily with 52,172 contigs with an average length of 555 bp. The contigs together with singletons covered on average 37% of the estimated lily transcriptome and 39% of that of tulip. Mining lily and tulip sequence data for SSRs (Simple Sequence Repeats) showed that di-nucleotide repeats were twice as abundant in UTRs (UnTranslated Regions) as in coding regions, while tri-nucleotide repeats were equally spread over coding and UTR regions. Two sets of single nucleotide polymorphism (SNP) markers suitable for high throughput genotyping were developed. In the first set, no SNPs flanking the target SNP (50 bp on either side) were allowed. In the second set, one SNP in the flanking regions was allowed, which resulted in a 2 to 3 fold increase in SNP marker numbers compared with the first set. Orthologous groups between the two flower bulbs, lily and tulip (12,017 groups), and among the three monocot species lily, tulip, and rice (6,900 groups) were determined using OrthoMCL. Orthologous groups were screened for common SNP markers and EST-SSRs to study synteny between lily and tulip, which resulted in 113 common SNP markers and 292 common EST-SSRs. Lily and tulip contigs generated were annotated and described according to Gene Ontology terminology. Conclusions

  5. Generation and analysis of expressed sequence tags in the extreme large genomes Lilium and Tulipa.

    Science.gov (United States)

    Shahin, Arwa; van Kaauwen, Martijn; Esselink, Danny; Bargsten, Joachim W; van Tuyl, Jaap M; Visser, Richard G F; Arens, Paul

    2012-11-20

    Bulbous flowers such as lily and tulip (Liliaceae family) are monocot perennial herbs that are economically very important ornamental plants worldwide. However, hardly any genetic studies have been performed and genomic resources are lacking. To build genomic resources and develop tools to speed up breeding in both crops, next generation sequencing was implemented. We sequenced and assembled transcriptomes of four lily and five tulip genotypes using 454 pyro-sequencing technology. We successfully developed the first set of 81,791 contigs with an average length of 514 bp for tulip, and enriched the very limited number of 3,329 available ESTs (Expressed Sequence Tags) for lily with 52,172 contigs with an average length of 555 bp. The contigs together with singletons covered on average 37% of the estimated lily transcriptome and 39% of that of tulip. Mining lily and tulip sequence data for SSRs (Simple Sequence Repeats) showed that di-nucleotide repeats were twice as abundant in UTRs (UnTranslated Regions) as in coding regions, while tri-nucleotide repeats were equally spread over coding and UTR regions. Two sets of single nucleotide polymorphism (SNP) markers suitable for high throughput genotyping were developed. In the first set, no SNPs flanking the target SNP (50 bp on either side) were allowed. In the second set, one SNP in the flanking regions was allowed, which resulted in a 2 to 3 fold increase in SNP marker numbers compared with the first set. Orthologous groups between the two flower bulbs, lily and tulip (12,017 groups), and among the three monocot species lily, tulip, and rice (6,900 groups) were determined using OrthoMCL. Orthologous groups were screened for common SNP markers and EST-SSRs to study synteny between lily and tulip, which resulted in 113 common SNP markers and 292 common EST-SSRs. Lily and tulip contigs generated were annotated and described according to Gene Ontology terminology. Two transcriptome sets were built that are valuable

  6. A Patient with Advanced Gastric Cancer Presenting with Extremely Large Uterine Fibroid Tumor

    Directory of Open Access Journals (Sweden)

    Kwang-Kuk Park

    2014-01-01

    Full Text Available Introduction. Uterine fibroid tumors (uterine leiomyomas) are the most common benign uterine tumors. The incidence of uterine fibroid tumors increases in older women and may exceed 30% in women aged 40 to 60. Many uterine fibroid tumors are asymptomatic and are diagnosed incidentally. Case Presentation. A 44-year-old woman was admitted to our hospital with general weakness, dyspepsia, abdominal distension, and a palpable abdominal mass. An abdominal computed tomography scan showed a huge tumor mass in the abdomen which was compressing the intestine and urinary bladder. Gastroduodenal endoscopic and biopsy results showed a Borrmann type IV gastric adenocarcinoma. The patient was diagnosed with gastric cancer with disseminated peritoneal carcinomatosis. She underwent a hysterectomy with bilateral salpingo-oophorectomy and bypass gastrojejunostomy. A uterine fibroid tumor coexisting with another malignancy is generally observed without resection, but in this case surgical resection was required to resolve an intestinal obstruction and to exclude the possibility of a metastatic tumor. Conclusion. When a large pelvic or ovarian mass is detected in a patient with a gastrointestinal malignancy, physicians should try to exclude the presence of a Krukenberg tumor. If the tumor causes symptoms, surgical resection is recommended both to resolve them and to exclude a metastatic tumor.

  7. Design of focused and restrained subsets from extremely large virtual libraries.

    Science.gov (United States)

    Jamois, Eric A; Lin, Chien T; Waldman, Marvin

    2003-11-01

    With the current and ever-growing offering of reagents, along with the vast palette of organic reactions, the virtual libraries accessible to combinatorial chemists can reach sizes of billions of compounds or more. Extracting practical-size subsets for experimentation has remained an essential step in the design of combinatorial libraries. A typical approach to computational library design involves enumeration of structures and properties for the entire virtual library, which may be impractical for such large libraries. This study describes a new approach, termed on-the-fly optimization (OTFO), in which descriptors are computed as needed within the subset optimization cycle, without intermediate enumeration of structures. Results reported herein highlight the advantages of coupling an ultra-fast descriptor calculation engine to subset optimization capabilities. We also show that enumeration of properties for the entire virtual library may be not only impractical but also wasteful. Successful design of focused and restrained subsets can be achieved while sampling only a small fraction of the virtual library. We also investigate the stability of the method and compare results obtained from simulated annealing (SA) and genetic algorithms (GA).
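
The on-the-fly idea (computing descriptors lazily inside the optimization loop instead of enumerating the whole library) can be illustrated with a simulated-annealing subset search. The descriptor function, scoring criterion and all parameters below are hypothetical stand-ins, not the paper's engine:

```python
import math
import random
from functools import lru_cache

# Hypothetical stand-in for an ultra-fast descriptor engine: a product's
# property is computed only when the optimizer first asks for it, then cached,
# so the full library is never enumerated.
@lru_cache(maxsize=None)
def descriptor(idx):
    r = random.Random(idx)              # deterministic per-product stub value
    return r.uniform(100.0, 600.0)      # e.g. a molecular-weight-like property

def score(subset, target=350.0):
    """Restrained design: reward subsets whose mean property is near a target."""
    vals = [descriptor(i) for i in subset]
    return -abs(sum(vals) / len(vals) - target)

def anneal_subset(library_size, subset_size, steps=2000, t0=50.0, seed=0):
    """Simulated-annealing subset selection over an un-enumerated library."""
    rng = random.Random(seed)
    current = rng.sample(range(library_size), subset_size)
    cur_s = score(tuple(current))
    best, best_s = list(current), cur_s
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9   # linear cooling schedule
        cand = list(current)
        new = rng.randrange(library_size)
        while new in cand:                        # keep the subset duplicate-free
            new = rng.randrange(library_size)
        cand[rng.randrange(subset_size)] = new    # swap one member
        s = score(tuple(cand))
        # Metropolis acceptance: always take improvements, sometimes accept worse moves.
        if s > cur_s or rng.random() < math.exp((s - cur_s) / temp):
            current, cur_s = cand, s
            if s > best_s:
                best, best_s = list(cand), s
    return best, best_s

subset, subset_score = anneal_subset(100_000, 24)
```

Only the products the optimizer actually visits ever have their descriptor computed, which is the economy the abstract argues for.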

  8. Ensemble seasonal forecast of extreme water inflow into a large reservoir

    Directory of Open Access Journals (Sweden)

    A. N. Gelfan

    2015-06-01

    Full Text Available An approach to the seasonal ensemble forecasting of unregulated water inflow into a large reservoir was developed. The approach is founded on the physically based, semi-distributed hydrological model ECOMAG, driven by Monte Carlo-generated ensembles of weather scenarios for a specified forecast lead time (3 months ahead in this study). A case study was carried out for the Cheboksary reservoir (catchment area 374,000 km2), located on the middle Volga River. Initial watershed conditions on the forecast date (1 March for the spring freshet and 1 June for the summer low-water period) were simulated by the hydrological model forced by daily meteorological observations for several months prior to the forecast date. A spatially distributed stochastic weather generator was used to produce time series of daily weather scenarios for the forecast lead time. The ensemble of daily water inflow into the reservoir was obtained by driving the ECOMAG model with the generated weather time series. The proposed ensemble forecast technique was verified on the basis of hindcast simulations for 29 spring and summer seasons from 1982 (the year the reservoir was filled to capacity) to 2010. Verification criteria were used to evaluate the ability of the proposed technique to forecast freshet and low-water events of pre-assigned severity categories.
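
The forecast chain (stochastic weather scenarios driving a hydrological model from the same initial state to produce an inflow ensemble) can be caricatured in a few lines. The weather statistics and the linear-reservoir runoff model below are toy assumptions, not ECOMAG or its calibrated generator:

```python
import random

def weather_scenario(rng, days=92):
    """One synthetic daily-precipitation scenario for the lead time (mm/day).
    The generator statistics here are arbitrary placeholders."""
    return [max(0.0, rng.gauss(2.0, 4.0)) for _ in range(days)]

def runoff_model(precip, storage=100.0, k=0.05):
    """Trivial linear-reservoir stand-in for the hydrological model; the same
    initial storage (watershed state on the forecast date) is used for every
    ensemble member."""
    total = 0.0
    for p in precip:
        storage += p          # precipitation fills the store
        q = k * storage       # linear-reservoir release
        storage -= q
        total += q
    return total

rng = random.Random(0)
ensemble = sorted(runoff_model(weather_scenario(rng)) for _ in range(500))
median = ensemble[len(ensemble) // 2]
# Forecast probability of a pre-assigned "high inflow" severity category:
p_high = sum(q > 1.2 * median for q in ensemble) / len(ensemble)
```

The sorted ensemble directly yields quantiles and exceedance probabilities for whatever severity categories the forecaster pre-assigns.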

  9. Volume changes of extremely large and giant intracranial aneurysms after treatment with flow diverter stents

    Energy Technology Data Exchange (ETDEWEB)

    Carneiro, Angelo; Byrne, James V. [John Radcliffe Hospital, Oxford Neurovascular and Neuroradiology Research Unit, Nuffield Department of Surgical Sciences, Oxford (United Kingdom); Rane, Neil; Kueker, Wilhelm; Cellerini, Martino; Corkill, Rufus [John Radcliffe Hospital, Department of Neuroradiology, Oxford (United Kingdom)

    2014-01-15

    This study assessed volume changes of unruptured large and giant aneurysms (greatest diameter >20 mm) after treatment with flow diverter (FD) stents, through a clinical audit of the cases treated in a single institution over a 5-year period. Demographic and clinical data were retrospectively collected from the hospital records. Aneurysm volumes were measured by manual outlining on sequential slices of computerised tomography (CT) or magnetic resonance (MR) angiography data. The audit included eight patients (seven females) with eight aneurysms. Four aneurysms involved the cavernous segment of the internal carotid artery (ICA), three the supraclinoid ICA and one the basilar artery. Seven patients presented with signs and symptoms of mass effect and one with seizures. All but one aneurysm were treated with a single FD stent; six aneurysms were also coiled (either before or simultaneously with FD placement). Minimum follow-up time was 6 months (mean 20 months). At follow-up, three aneurysms had decreased in size, three were unchanged and two had increased. Both aneurysms that increased in size showed persistent endosaccular flow at follow-up MR; in one case, failure was attributed to suboptimal position of the stent, and in the other to persistence of a side branch originating from the aneurysm (similar to the endoleak phenomenon of aortic aneurysms). At follow-up, five aneurysms were completely occluded; none of these increased in volume. Complete occlusion of an aneurysm leads, in most cases, to its shrinkage. In cases of late aneurysm growth or regrowth, consideration should be given to possible endoleak as the cause. (orig.)

  10. Highly efficient periodically poled KTP-isomorphs with large apertures and extreme domain aspect-ratios

    Science.gov (United States)

    Canalias, Carlota; Zukauskas, Andrius; Tjörnhamman, Staffan; Viotti, Anne-Lise; Pasiskevicius, Valdas; Laurell, Fredrik

    2018-02-01

    Since the early 1990s, a substantial effort has been devoted to the development of quasi-phase-matched (QPM) nonlinear devices, not only in ferroelectric oxides like LiNbO3, LiTaO3 and KTiOPO4 (KTP), but also in semiconductors such as GaAs and GaP. The technology to implement QPM structures in ferroelectric oxides has by now matured enough to satisfy the most basic frequency-conversion schemes without substantial modification of the poling procedures. Here, we present a qualitative leap in periodic poling techniques that allows us to demonstrate devices and frequency-conversion schemes that were deemed unfeasible just a few years ago. Thanks to our short-pulse poling and coercive-field engineering techniques, we are able to demonstrate large-aperture (5 mm) periodically poled Rb-doped KTP devices with a highly uniform conversion efficiency over the whole aperture. These devices allow parametric conversion with energies larger than 60 mJ. Moreover, by employing our coercive-field engineering technique we fabricate highly efficient sub-µm periodically poled devices, with periodicities as short as 500 nm, uniform over 1 mm-thick crystals, which allow us to realize mirrorless optical parametric oscillators with counter-propagating signal and idler waves. These novel devices present unique spectral and tuning properties, superior to those of conventional OPOs. Furthermore, our techniques are compatible with KTA, a KTP isomorph with extended transparency in the mid-IR range. We demonstrate that our highly efficient PPKTA is superior both for mid-IR and for green-light generation, as a result of improved transmission properties in the visible range. Our KTP-isomorph poling techniques leading to highly efficient QPM devices will be presented, and their optical performance and attractive damage thresholds will be discussed.

  11. The Number Density Evolution of Extreme Emission Line Galaxies in 3D-HST: Results from a Novel Automated Line Search Technique for Slitless Spectroscopy

    Science.gov (United States)

    Maseda, Michael V.; van der Wel, Arjen; Rix, Hans-Walter; Momcheva, Ivelina; Brammer, Gabriel B.; Franx, Marijn; Lundgren, Britt F.; Skelton, Rosalind E.; Whitaker, Katherine E.

    2018-02-01

    The multiplexing capability of slitless spectroscopy is a powerful asset in creating large spectroscopic data sets, but issues such as spectral confusion make the interpretation of the data challenging. Here we present a new method to search for emission lines in the slitless spectroscopic data from the 3D-HST survey utilizing the Wide-Field Camera 3 on board the Hubble Space Telescope. Using a novel statistical technique, we can detect compact (extended) emission lines at 90% completeness down to fluxes of 1.5 (3.0) × 10^-17 erg s^-1 cm^-2, close to the noise level of the grism exposures, for objects detected in the deep ancillary photometric data. Unlike previous methods, the Bayesian nature allows for probabilistic line identifications, namely redshift estimates, based on secondary emission line detections and/or photometric redshift priors. As a first application, we measure the comoving number density of Extreme Emission Line Galaxies (rest-frame [O III] λ5007 equivalent widths in excess of 500 Å). We find that these galaxies are nearly 10× more common above z ∼ 1.5 than at z ≲ 0.5. With upcoming large grism surveys such as Euclid and WFIRST, as well as grisms featured prominently on the NIRISS and NIRCam instruments on the James Webb Space Telescope, methods like the one presented here will be crucial for constructing emission line redshift catalogs in an automated and well-understood manner. This work is based on observations taken by the 3D-HST Treasury Program and the CANDELS Multi-Cycle Treasury Program with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.

  12. Radiative and Dynamical Feedbacks Limit the Climate Response to Extremely Large Volcanic Eruptions

    Science.gov (United States)

    Wade, D. C.; Vidal, C. M.; Keeble, J. M.; Griffiths, P. T.; Archibald, A. T.

    2017-12-01

    Explosive volcanic eruptions are a major cause of chemical and climatic perturbations to the atmosphere, injecting chemically and radiatively active species such as sulfur dioxide (SO2) into the stratosphere. The rate-determining step for sulfate aerosol production is SO2 + OH + M → HSO3 + M. This means that chemical feedbacks on the hydroxyl radical, OH, can modulate the production rate of sulfate aerosol and hence the climate effects of large volcanic eruptions. Radiative feedbacks due to aerosols, ozone and sulfur dioxide, and the subsequent dynamical changes, also affect the evolution of the aerosol cloud. Here we assess the role of radiative and chemical feedbacks on sulfate aerosol production using UM-UKCA, a chemistry-climate model coupled to GLOMAP, a prognostic modal aerosol model. A 200 Tg (10x Pinatubo) emission scenario is investigated. Accounting for radiative feedbacks, the SO2 lifetime is 55 days, compared to 26 days in the baseline 20 Tg (1x Pinatubo) simulation. By contrast, if all radiative feedbacks are neglected the lifetime is 73 days. Including radiative feedbacks reduces the SO2 lifetime: heating of the lower stratosphere by aerosol increases upwelling and increases transport of water vapour across the tropopause, increasing OH concentrations. The maximum effective radius of the aerosol particles increases from 1.09 µm to 1.34 µm as the aerosol is produced more quickly. Larger and fewer aerosol particles are produced, which are less effective at scattering shortwave radiation and will sediment from the stratosphere more quickly. As a result, the climate cooling by the eruption will be weaker when these radiative feedbacks are accounted for. We illustrate the consequences of these effects for the 1257 Samalas eruption, the largest common era volcanic eruption, using UM-UKCA in a coupled atmosphere-ocean configuration. As a potentially halogen-rich eruption, we investigate the differing ozone response to halogen-rich and halogen

  13. Properties of sound attenuation around a two-dimensional underwater vehicle with a large cavitation number

    International Nuclear Information System (INIS)

    Ye Peng-Cheng; Pan Guang

    2015-01-01

    Due to the high speed of underwater vehicles, cavitation is inevitably generated, and a sound signal is attenuated when it traverses the cavity region around the vehicle. Linear wave propagation is studied to obtain the influence of the bubbly liquid on acoustic wave propagation in the cavity region. The sound attenuation coefficient and the sound speed formula of the bubbly liquid are presented. Based on the sound attenuation coefficients at various vapor volume fractions, the attenuation of sound intensity is calculated under large cavitation number conditions. The result shows that the sound intensity attenuation is fairly small under certain conditions; consequently, the intensity attenuation can be neglected in engineering. (paper)

  14. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    Science.gov (United States)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate array (FPGA) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGA presents both new constraints and new opportunities for the implementations of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application based tests, this study evaluates all of the four RNGs used in previous FPGA based MC studies and newly proposed FPGA implementations for two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations: a parallel version of additive lagged Fibonacci generator (Parallel ALFG) is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
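
An additive lagged Fibonacci generator of the kind evaluated here follows the recurrence x[n] = (x[n−s] + x[n−r]) mod 2^m. Below is a software sketch using the common Mitchell-Moore lags (s, r) = (24, 55); the paper's FPGA implementation and its specific parallelization scheme are not reproduced:

```python
class ALFG:
    """Additive lagged Fibonacci generator: x[n] = (x[n-s] + x[n-r]) mod 2^m.
    With lags (24, 55) and at least one odd seed entry, the period is
    (2^55 - 1) * 2^(m-1) for m-bit words. On an FPGA the lag table maps
    naturally onto block RAM and the adder produces one output per clock."""

    def __init__(self, seed=1, s=24, r=55, m=32):
        self.s, self.r, self.mask = s, r, (1 << m) - 1
        # Fill the r-entry lag table with a 64-bit LCG; force one odd entry
        # so the generator reaches its full period.
        state, self.buf = seed, []
        for _ in range(r):
            state = (state * 6364136223846793005 + 1442695040888963407) % (1 << 64)
            self.buf.append((state >> 32) & self.mask)
        self.buf[0] |= 1
        self.i = 0  # index of the oldest entry, x[n-r]

    def next(self):
        i, r, s = self.i, self.r, self.s
        v = (self.buf[i] + self.buf[(i + r - s) % r]) & self.mask  # x[n-r] + x[n-s]
        self.buf[i] = v            # overwrite the oldest entry with x[n]
        self.i = (i + 1) % r
        return v
```

Every output satisfies the recurrence out[n] = (out[n−24] + out[n−55]) mod 2^32 for n ≥ 55, which makes the implementation easy to verify. A parallel version, as on an FPGA, would instantiate several such generators with independently seeded lag tables.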

  15. Large-Eddy Simulation of a High Reynolds Number Flow Around a Cylinder Including Aeroacoustic Predictions

    Science.gov (United States)

    Spyropoulos, Evangelos T.; Holmes, Bayard S.

    1997-01-01

    The dynamic subgrid-scale model is employed in large-eddy simulations of flow over a cylinder at a Reynolds number, based on the diameter of the cylinder, of 90,000. The Centric SPECTRUM(trademark) finite element solver is used for the analysis. The far-field sound pressure is calculated from Lighthill-Curle's equation using the computed fluctuating pressure at the surface of the cylinder. The sound pressure level at a location 35 diameters away from the cylinder and at an angle of 90 deg with respect to the wake's downstream axis was found to have a peak value of approximately 110 dB. Slightly smaller peak values were predicted at the 60 deg and 120 deg locations. A grid refinement study suggests that the dynamic model demands mesh refinement beyond that used here.

  16. System for high-voltage control of detectors with a large number of photomultipliers

    International Nuclear Information System (INIS)

    Donskov, S.V.; Kachanov, V.A.; Mikhajlov, Yu.V.

    1985-01-01

    A simple and inexpensive on-line system for high-voltage control, designed for detectors with a large number of photomultipliers, has been developed and manufactured. It was developed for the GAMC-type hodoscopic electromagnetic calorimeters, which comprise up to 4 thousand photomultipliers. High-voltage variation is performed by a high-speed potentiometer rotated by a microengine. Block diagrams of the computer-control electronics are presented. The high-voltage control system has been used for five years in IHEP and CERN accelerator experiments. Operating experience has shown that it is quite simple and convenient in operation. With about 6 thousand controlled channels across the two experiments, no potentiometer or microengine failures were observed.

  17. Development of a Large-Format Science-Grade CMOS Active Pixel Sensor, for Extreme Ultra Violet Spectroscopy and Imaging in Space Science

    National Research Council Canada - National Science Library

    Waltham, N. R; Prydderch, M; Mapson-Menard, H; Morrissey, Q; Turchetta, R; Pool, P; Harris, A

    2005-01-01

    We describe our programme to develop a large-format science-grade CMOS active pixel sensor for future space science missions, and in particular an extreme ultra-violet spectrograph for solar physics...

  18. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    International Nuclear Information System (INIS)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel; Cuevas, Sergio; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octupoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye-visualization experiments of the scalar fields. A theoretical model based on log-normal probability density functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows prediction of the PDFs of scalar concentration, in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar are asymptotically close to log-normal at late stages, except for the large concentration levels which correspond to low stretching factors

  19. Decision process in MCDM with large number of criteria and heterogeneous risk preferences

    Directory of Open Access Journals (Sweden)

    Jian Liu

    A new decision process is proposed to address the twin challenges of a large number of criteria in the multi-criteria decision making (MCDM) problem and decision makers with heterogeneous risk preferences. First, from the perspective of objective data, the effective criteria are extracted based on the similarity relations between criterion values, and the criteria are weighted accordingly. Second, the corresponding theoretical models of risk-preference expectations are built, based on the possibility and similarity between criterion values, to resolve the problem of different interval numbers sharing the same expectation. The risk preferences (risk-seeking, risk-neutral, and risk-averse) are then embedded in the decision process, and the optimal decision object is selected according to the risk preferences of decision makers based on the corresponding theoretical model. Finally, a new information-aggregation algorithm is proposed based on fairness maximization of decision results for group decisions, considering the coexistence of decision makers with heterogeneous risk preferences. The scientific rationality of this new method is verified through the analysis of a real case. Keywords: Heterogeneous, Risk preferences, Fairness, Decision process, Group decision

  20. New approaches to phylogenetic tree search and their application to large numbers of protein alignments.

    Science.gov (United States)

    Whelan, Simon

    2007-10-01

    Phylogenetic tree estimation plays a critical role in a wide variety of molecular studies, including molecular systematics, phylogenetics, and comparative genomics. Finding the optimal tree relating a set of sequences using score-based (optimality criterion) methods, such as maximum likelihood and maximum parsimony, may require all possible trees to be considered, which is not feasible even for modest numbers of sequences. In practice, trees are estimated using heuristics that represent a trade-off between topological accuracy and speed. I present a series of novel algorithms suitable for score-based phylogenetic tree reconstruction that demonstrably improve the accuracy of tree estimates while maintaining high computational speeds. The heuristics function by allowing the efficient exploration of large numbers of trees through novel hill-climbing and resampling strategies. These heuristics, and other computational approximations, are implemented for maximum likelihood estimation of trees in the program Leaphy, and its performance is compared to other popular phylogenetic programs. Trees are estimated from 4059 different protein alignments using a selection of phylogenetic programs and the likelihoods of the tree estimates are compared. Trees estimated using Leaphy are found to have equal to or better likelihoods than trees estimated using other phylogenetic programs in 4004 (98.6%) families and provide a unique best tree that no other program found in 1102 (27.1%) families. The improvement is particularly marked for larger families (80 to 100 sequences), where Leaphy finds a unique best tree in 81.7% of families.
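
    The combinatorial explosion that motivates such heuristics is easy to quantify: the number of distinct unrooted binary trees on n labelled sequences is the double factorial (2n-5)!!. A small illustration (not code from Leaphy):

```python
def num_unrooted_trees(n):
    """Number of distinct unrooted binary trees on n labelled leaves:
    (2n-5)!! = 1 * 3 * 5 * ... * (2n-5), valid for n >= 3."""
    count = 1
    for i in range(3, 2 * n - 4, 2):  # multiply the odd numbers up to 2n-5
        count *= i
    return count
```

    For n = 10 sequences there are already 2,027,025 candidate topologies, and for n = 55 the count is roughly 10^84, more than the number of atoms in the observable universe; this is why exhaustive score-based search is infeasible even for modest numbers of sequences.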

  1. Early studies reported extreme findings with large variability: a meta-epidemiologic study in the field of endocrinology.

    Science.gov (United States)

    Wang, Zhen; Alahdab, Fares; Almasri, Jehad; Haydour, Qusay; Mohammed, Khaled; Abu Dabrh, Abd Moain; Prokop, Larry J; Alfarkh, Wedad; Lakis, Sumaya; Montori, Victor M; Murad, Mohammad Hassan

    2016-04-01

    To evaluate the presence of extreme findings and fluctuation in effect size in endocrinology. We systematically identified all meta-analyses published in 2014 in the field of endocrinology. Within each meta-analysis, the effect size of the primary binary outcome was compared across studies according to their order of publication. We pooled studies using the DerSimonian and Laird random-effects method. Heterogeneity was evaluated using I² and τ². Twelve percent of the included 100 meta-analyses reported the largest effect size in the very first published study. The largest effect size occurred in the first 2 earliest studies in 31% of meta-analyses. When the effect size was the largest in the first published study, it was three times larger than the final pooled effect (ratio of rates, 3.26; 95% confidence interval: 1.80, 5.90). The largest heterogeneity measured by I² was observed in 18% of the included meta-analyses when combining the first 2 studies, or 17% when combining the first 3 studies. In endocrinology, early studies reported extreme findings with large variability. This behavior of the evidence needs to be taken into account when used to formulate clinical policies. Copyright © 2016 Elsevier Inc. All rights reserved.
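
    The DerSimonian and Laird pooling used here can be sketched compactly. This is a generic implementation of the published estimator, not the authors' code; effect sizes are assumed to be on an additive scale (e.g. log rate ratios) with known within-study variances.

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooling: returns (pooled effect, tau^2, I^2 in percent)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2, i2
```

    Adding τ² to every study's variance flattens the weights, which is why a single extreme early study can dominate a small meta-analysis but is down-weighted as more studies accumulate.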

  2. A decade of weather extremes

    NARCIS (Netherlands)

    Coumou, Dim; Rahmstorf, Stefan

    The ostensibly large number of recent extreme weather events has triggered intensive discussions, both in- and outside the scientific community, on whether they are related to global warming. Here, we review the evidence and argue that for some types of extreme - notably heatwaves, but also

  3. CRISPR transcript processing: a mechanism for generating a large number of small interfering RNAs

    Directory of Open Access Journals (Sweden)

    Djordjevic Marko

    2012-07-01

    Background: CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated sequences) is a recently discovered prokaryotic defense system against foreign DNA, including viruses and plasmids. The CRISPR cassette is transcribed as a continuous transcript (pre-crRNA), which is processed by Cas proteins into small RNA molecules (crRNAs) that are responsible for defense against invading viruses. Experiments in E. coli report that overexpression of cas genes generates a large number of crRNAs from only a few pre-crRNAs. Results: We here develop a minimal model of CRISPR processing, which we parameterize based on available experimental data. From the model, we show that the system can generate a large amount of crRNA from only a small decrease in the amount of pre-crRNA. The relationship between the decrease of pre-crRNAs and the increase of crRNAs corresponds to strong linear amplification. Interestingly, this strong amplification crucially depends on fast non-specific degradation of pre-crRNA by an unidentified nuclease. We show that overexpression of cas genes above a certain level does not result in further increase of crRNA, but that this saturation can be relieved if the rate of CRISPR transcription is increased. We furthermore show that a small increase of the CRISPR transcription rate can substantially decrease the extent of cas gene activation necessary to achieve a desired amount of crRNA. Conclusions: The simple mathematical model developed here is able to explain existing experimental observations on CRISPR transcript processing in Escherichia coli. The model shows that a competition between specific pre-crRNA processing and non-specific degradation determines the steady-state levels of crRNA and is responsible for strong linear amplification of crRNAs when cas genes are overexpressed.
    The model further shows how disappearance of only a few pre-crRNA molecules normally present in the cell can lead to a large (two
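
    The competition described in the conclusions can be captured in a few lines. The following is a minimal steady-state sketch in the spirit of the abstract's model, with illustrative (not fitted) rate constants: phi is the pre-crRNA transcription rate, lam the fast non-specific degradation rate, k the Cas-mediated processing rate (scaling with cas expression), and lam_c the slow degradation rate of mature crRNA.

```python
def steady_state(phi, lam, k, lam_c):
    """Steady states of d[pre]/dt = phi - (lam + k)*pre
    and d[crRNA]/dt = k*pre - lam_c*crRNA."""
    pre = phi / (lam + k)
    crrna = k * pre / lam_c
    return pre, crrna

# Raising cas expression (k) tenfold barely lowers pre-crRNA when lam >> k,
# but strongly amplifies crRNA because mature crRNA degrades slowly.
low = steady_state(phi=10.0, lam=1.0, k=0.01, lam_c=0.01)
high = steady_state(phi=10.0, lam=1.0, k=0.1, lam_c=0.01)
```

    With these numbers the pre-crRNA level drops by under 10% while crRNA rises roughly ninefold. Since crRNA* = k·phi/((lam + k)·lam_c), the output saturates at phi/lam_c once k ≫ lam, which mirrors the saturation, and its relief by higher transcription, that the abstract describes.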

  4. Space Situational Awareness of Large Numbers of Payloads From a Single Deployment

    Science.gov (United States)

    Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.

    2014-09-01

    The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large scale deployments of small spacecraft

  5. Droplet Breakup in Asymmetric T-Junctions at Intermediate to Large Capillary Numbers

    Science.gov (United States)

    Sadr, Reza; Cheng, Way Lee

    2017-11-01

    Splitting of a parent droplet into multiple daughter droplets of desired sizes is often required to enhance production and investigational efficiency in microfluidic devices. This can be done in an active or a passive mode, depending on whether an external power source is used. In this study, three-dimensional simulations were performed using the volume-of-fluid (VOF) method to analyze droplet splitting in asymmetric T-junctions with different outlet lengths. The parent droplet is divided into two uneven portions; the volumetric ratio of the daughter droplets, in theory, depends on the length ratio of the outlet branches. The study identified various breakup modes, such as primary, transition, bubble, and non-breakup, under various flow conditions and T-junction configurations. In addition, an analysis of the primary breakup regimes was conducted to study the breakup mechanisms. The results show that the way a droplet splits in an asymmetric T-junction differs from the process in a symmetric T-junction. A model for the asymmetric breakup criteria at intermediate to large capillary number is presented. The proposed model is an expanded version of a theoretically derived model for symmetric droplet breakup under similar flow conditions.
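
    The idealized dependence of the split on branch length can be sketched from Poiseuille resistance: for outlet branches identical except in length, hydraulic resistance scales with length, so the flow (and hence the daughter-droplet volume) divides inversely with branch length. This is the textbook relation the "in theory" remark alludes to, not the paper's breakup model; `daughter_volume_fraction` is an illustrative helper.

```python
def daughter_volume_fraction(l_short, l_long):
    """Fraction of the parent droplet entering the shorter branch, assuming
    branches identical except in length, so resistance R ~ L and Q ~ 1/L."""
    q_short = 1.0 / l_short  # relative flow rate into the short branch
    q_long = 1.0 / l_long    # relative flow rate into the long branch
    return q_short / (q_short + q_long)
```

    Equal branch lengths give a 50/50 split, and a 1:3 length ratio sends three quarters of the volume down the shorter branch; the paper's point is that real breakup deviates from this ideal, motivating the expanded model.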

  6. Growth of equilibrium structures built from a large number of distinct component types.

    Science.gov (United States)

    Hedges, Lester O; Mannige, Ranjan V; Whitelam, Stephen

    2014-09-14

    We use simple analytic arguments and lattice-based computer simulations to study the growth of structures made from a large number of distinct component types. Components possess 'designed' interactions, chosen to stabilize an equilibrium target structure in which each component type has a defined spatial position, as well as 'undesigned' interactions that allow components to bind in a compositionally-disordered way. We find that high-fidelity growth of the equilibrium target structure can happen in the presence of substantial attractive undesigned interactions, as long as the energy scale of the set of designed interactions is chosen appropriately. This observation may help explain why equilibrium DNA 'brick' structures self-assemble even if undesigned interactions are not suppressed [Ke et al. Science, 338, 1177, (2012)]. We also find that high-fidelity growth of the target structure is most probable when designed interactions are drawn from a distribution that is as narrow as possible. We use this result to suggest how to choose complementary DNA sequences in order to maximize the fidelity of multicomponent self-assembly mediated by DNA. We also comment on the prospect of growing macroscopic structures in this manner.

  7. Source of vacuum electromagnetic zero-point energy and Dirac's large numbers hypothesis

    International Nuclear Information System (INIS)

    Simaciu, I.; Dumitrescu, G.

    1993-01-01

    Stochastic electrodynamics states that the zero-point fluctuation of the vacuum (ZPF) is an electromagnetic zero-point radiation with spectral density ρ(ω) = ħω³/2π²c³. Protons, free electrons and atoms are sources for this radiation: each of them absorbs and emits energy by interacting with the ZPF. At equilibrium the ZPF radiation is scattered by dipoles, with scattered spectral density ρ(ω,r) = ρ(ω)σ(ω)/4πr². The spectral density of dipole radiation from the Universe is then ρ' = ∫₀ᴿ n ρ(ω,r) 4πr² dr. If σ_atom ≈ σ_e ≈ σ_T (the Thomson cross-section), then ρ' ≈ ρ(ω)σ_T R n. Moreover, if ρ' = ρ(ω), then σ_T R n = 1. With R = GM/c² and σ_T ≅ (e²/m_e c²)² ∝ r_e², the condition σ_T R n ≈ 1 is equivalent to R/r_e = e²/G m_p m_e, i.e. the cosmological coincidence discussed in the context of Dirac's large-numbers hypothesis. (Author)

  8. Science case and requirements for the MOSAIC concept for a multi-object spectrograph for the European extremely large telescope

    International Nuclear Information System (INIS)

    Evans, C.J.; Puech, M.; Bonifacio, P.; Hammer, F.; Jagourel, P.; Caffau, E.; Disseau, K.; Flores, H.; Huertas-Company, M.; Mei, S.; Aussel, H.

    2014-01-01

    Over the past 18 months we have revisited the science requirements for a multi-object spectrograph (MOS) for the European Extremely Large Telescope (E-ELT). These efforts span the full range of E-ELT science and include input from a broad cross-section of astronomers across the ESO partner countries. In this contribution we summarise the key cases relating to studies of high-redshift galaxies, galaxy evolution, and stellar populations, with a more expansive presentation of a new case relating to detection of exoplanets in stellar clusters. A general requirement is the need for two observational modes to best exploit the large (≥40 arcmin²) patrol field of the E-ELT. The first mode ('high multiplex') requires integrated-light (or coarsely resolved) optical/near-IR spectroscopy of ≥100 objects simultaneously. The second ('high definition'), enabled by wide-field adaptive optics, requires spatially-resolved near-IR spectroscopy of ≥10 objects/sub-fields. Within the context of the conceptual study for an ELT-MOS called MOSAIC, we summarise the top-level requirements from each case and introduce the next steps in the design process. (authors)

  9. On the chromatic number of triangle-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2002-01-01

    We prove that, for each fixed real number c > 1/3, the triangle-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973, who pointed out that there is no such result for c < 1/3.

  10. The Application Law of Large Numbers That Predicts The Amount of Actual Loss in Insurance of Life

    Science.gov (United States)

    Tinungki, Georgina Maria

    2018-03-01

    The law of large numbers is a statistical concept that uses the average number of events or risks in a sample or population to predict something; the larger the population considered, the more accurate the prediction. In the field of insurance, the law of large numbers is used to predict the risk of loss or claims among participants so that premiums can be calculated appropriately. For example, if on average one of every 100 insurance participants files an accident claim, then the premiums of 100 participants should be able to fund the sum assured for at least one accident claim. The more insurance participants are included in the calculation, the more precise the prediction of claims and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear the same risk in large numbers; here the law of large numbers applies. The law of large numbers states that as the amount of exposure to losses increases, the predicted loss tends closer to the actual loss. Its use therefore allows losses to be predicted better.
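
    The convergence the abstract describes is easy to demonstrate with a small simulation. This is a sketch under assumed numbers (a 1% claim probability and a fixed seed for reproducibility); `observed_claim_rate` is an illustrative helper, not an actuarial formula.

```python
import random

def observed_claim_rate(n_participants, p_claim=0.01, seed=42):
    """Simulate n_participants independent insureds, each filing a claim
    with probability p_claim, and return the observed claim frequency."""
    rng = random.Random(seed)
    claims = sum(1 for _ in range(n_participants) if rng.random() < p_claim)
    return claims / n_participants

small = observed_claim_rate(100)        # small pool: frequency fluctuates widely
large = observed_claim_rate(1_000_000)  # large pool: frequency hugs the true 1%
```

    With a million participants the observed frequency lands within a few hundredths of a percentage point of the true 1% risk, so a premium priced on it closely matches actual losses; with only 100 participants the same calculation can easily be off by a factor of two or more in either direction.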

  11. On the chromatic number of pentagon-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2007-01-01

    We prove that, for each fixed real number c > 0, the pentagon-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973. A similar result holds for any other fixed odd cycle, except the triangle.

  12. Space Situational Awareness of Large Numbers of Payloads from a Single Deployment

    Science.gov (United States)

    2014-09-01


  13. Large-strain time-temperature equivalence in high density polyethylene for prediction of extreme deformation and damage

    Directory of Open Access Journals (Sweden)

    Gray G.T.

    2012-08-01

    Time-temperature equivalence is a widely recognized property of many time-dependent material systems, where there is a clear predictive link relating the deformation response at a nominal temperature and a high strain rate to an equivalent response at a depressed temperature and nominal strain rate. It has been found that high-density polyethylene (HDPE) obeys a linear empirical formulation relating test temperature and strain rate. This observation was extended to continuous stress-strain curves, such that material response measured in a load frame at large strains and low strain rates (at depressed temperatures) could be translated into a temperature-dependent response at high strain rates and validated against Taylor impact results. Time-temperature equivalence was used in conjunction with jump-rate compression tests to investigate isothermal response at high strain rate while excluding adiabatic heating. The validated constitutive response was then applied to the analysis of dynamic tensile extrusion (Dyn-Ten-Ext) of HDPE, a tensile analog to Taylor impact developed at LANL. The Dyn-Ten-Ext test results and FEA found that HDPE deformed smoothly after exiting the die and, after substantial drawing, appeared to undergo a pressure-dependent shear damage mechanism at intermediate velocities, while it fragmented at high velocities. Dynamic tensile extrusion, properly coupled with a validated constitutive model, can successfully probe extreme tensile deformation and damage of polymers.

  14. Dominant Large-Scale Atmospheric Circulation Systems for the Extreme Precipitation over the Western Sichuan Basin in Summer 2013

    Directory of Open Access Journals (Sweden)

    Yamin Hu

    2015-01-01

    The western Sichuan Basin (WSB) is a rainstorm center influenced by complicated factors such as topography and circulation. Based on a multivariable empirical orthogonal function analysis of the extreme precipitation processes (EPPs) in the WSB in 2013, this study reveals the dominant circulation patterns. Results indicate that the leading modes are characterized by "Saddle" and "Sandwich" structures, respectively. In one mode, a tropical cyclone (TC) from the South China Sea (SCS) converts into an inverted trough and steers warm, moist airflow northward into the WSB, while the western Pacific subtropical high (WPSH) extends westward over the Yangtze River and conveys a southeasterly warm humid flow. In the other, the WPSH is pushed westward by a TC in the western Pacific and then merges with an anomalous anticyclone over the SCS; the anomalous anticyclone and the WPSH form a conjunction belt and convey warm, moist southwesterly airflow to meet the cold flow over the WSB. The configurations of the WPSH and TCs in the tropics, and of the blocking and trough at mid-high latitudes, play important roles during the EPPs over the WSB. The persistence of an EPP depends on the large-scale circulation configuration remaining steady over suitable positions.

  15. Large reptiles and cold temperatures: Do extreme cold spells set distributional limits for tropical reptiles in Florida?

    Science.gov (United States)

    Mazzotti, Frank J.; Cherkiss, Michael S.; Parry, Mark; Beauchamp, Jeff; Rochford, Mike; Smith, Brian J.; Hart, Kristen M.; Brandt, Laura A.

    2016-01-01

    Distributional limits of many tropical species in Florida are ultimately determined by tolerance to low temperature. An unprecedented cold spell during 2–11 January 2010, in South Florida provided an opportunity to compare the responses of tropical American crocodiles with warm-temperate American alligators and to compare the responses of nonnative Burmese pythons with native warm-temperate snakes exposed to prolonged cold temperatures. After the January 2010 cold spell, a record number of American crocodiles (n = 151) and Burmese pythons (n = 36) were found dead. In contrast, no American alligators and no native snakes were found dead. American alligators and American crocodiles behaved differently during the cold spell. American alligators stopped basking and retreated to warmer water. American crocodiles apparently continued to bask during extreme cold temperatures resulting in lethal body temperatures. The mortality of Burmese pythons compared to the absence of mortality for native snakes suggests that the current population of Burmese pythons in the Everglades is less tolerant of cold temperatures than native snakes. Burmese pythons introduced from other parts of their native range may be more tolerant of cold temperatures. We documented the direct effects of cold temperatures on crocodiles and pythons; however, evidence of long-term effects of cold temperature on their populations within their established ranges remains lacking. Mortality of crocodiles and pythons outside of their current established range may be more important in setting distributional limits.

  16. On the Behavior of ECN/RED Gateways Under a Large Number of TCP Flows: Limit Theorems

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; Makowski, Armand M

    2005-01-01

    .... As the number of competing flows becomes large, the asymptotic queue behavior at the gateway can be described by a simple recursion and the throughput behavior of individual TCP flows becomes asymptotically independent...

  17. Design of a prototype position actuator for the primary mirror segments of the European Extremely Large Telescope

    Science.gov (United States)

    Jiménez, A.; Morante, E.; Viera, T.; Núñez, M.; Reyes, M.

    2010-07-01

    The European Extremely Large Telescope (E-ELT) is based on a primary mirror of 984 segments; to achieve the required optical performance, each segment must be positioned relative to its neighbours with nanometre accuracy. CESA designed the M1 position actuators (PACT) to comply with the demanding performance requirements of the E-ELT. Three PACT units are located under each segment, controlling the three out-of-plane degrees of freedom (tip, tilt, piston). To achieve high linear accuracy over long operational displacements, PACT uses two stages in series: the first stage, based on a voice-coil actuator (VCA), provides high accuracy over very short travel ranges, while the second stage, based on a brushless DC (BLDC) motor, provides a large stroke and positions the first stage close to the demanded position. The BLDC motor achieves smooth, continuous movement compared with the discrete jumps of a stepper. A gearbox attached to the motor greatly reduces power consumption but poses a significant sizing challenge. The PACT space envelope was reduced by means of two flat springs fixed to the VCA, whose main characteristic is a low linear axial stiffness. To achieve the best performance, sensors are included in both stages: a rotary encoder in the BLDC stage closes the position/velocity control loop, and an incremental optical encoder measures the PACT travel range with relative nanometre accuracy and closes the position loop over the whole actuator movement. For this purpose, four optical sensors with different gratings will be evaluated. The control strategy comprises several internal closed loops that work together to achieve the required performance.
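
    The coarse/fine division of labour between the BLDC and VCA stages can be illustrated schematically. This is a toy sketch of the two-stage idea only, not CESA's controller; the coarse step size and the fine-stage travel range are invented numbers.

```python
def two_stage_move(target_nm, coarse_step_nm=1000, fine_range_nm=2000):
    """Coarse stage (BLDC + gearbox) is quantized to whole steps; the fine
    stage (voice coil) continuously removes the residual, which must fit
    within its short travel range."""
    coarse_nm = round(target_nm / coarse_step_nm) * coarse_step_nm
    residual = target_nm - coarse_nm
    assert abs(residual) <= fine_range_nm, "residual exceeds fine-stage travel"
    fine_nm = residual  # fine stage commanded to the exact remaining error
    return coarse_nm + fine_nm
```

    The design point is that the fine stage's range only needs to cover the coarse stage's quantization (plus disturbances), so a short-stroke, high-resolution actuator and a long-stroke, coarse one together reach any position in the full stroke with the fine stage's resolution.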

  18. Numerical analysis of jet impingement heat transfer at high jet Reynolds number and large temperature difference

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2013-01-01

    was investigated at a jet Reynolds number of 1.66 × 105 and a temperature difference between jet inlet and wall of 1600 K. The focus was on the convective heat transfer contribution as thermal radiation was not included in the investigation. A considerable influence of the turbulence intensity at the jet inlet...... to about 100% were observed. Furthermore, the variation in stagnation point heat transfer was examined for jet Reynolds numbers in the range from 1.10 × 105 to 6.64 × 105. Based on the investigations, a correlation is suggested between the stagnation point Nusselt number, the jet Reynolds number......, and the turbulence intensity at the jet inlet for impinging jet flows at high jet Reynolds numbers. Copyright © 2013 Taylor and Francis Group, LLC....

  19. ENTROPY PRODUCTION IN COLLISIONLESS SYSTEMS. I. LARGE PHASE-SPACE OCCUPATION NUMBERS

    International Nuclear Information System (INIS)

    Barnes, Eric I.; Williams, Liliya L. R.

    2011-01-01

    Certain thermal non-equilibrium situations, outside of the astrophysical realm, suggest that entropy production extrema, instead of entropy extrema, are related to stationary states. In an effort to better understand the evolution of collisionless self-gravitating systems, we investigate the role of entropy production and develop expressions for the entropy production rate in two particular statistical families that describe self-gravitating systems. From these entropy production descriptions, we derive the requirements for extremizing the entropy production rate in terms of specific forms for the relaxation function in the Boltzmann equation. We discuss some implications of these relaxation functions and point to future work that will further test this novel thermodynamic viewpoint of collisionless relaxation.

  20. Arbitrarily large numbers of kink internal modes in inhomogeneous sine-Gordon equations

    Energy Technology Data Exchange (ETDEWEB)

    González, J.A., E-mail: jalbertgonz@yahoo.es [Department of Physics, Florida International University, Miami, FL 33199 (United States); Department of Natural Sciences, Miami Dade College, 627 SW 27th Ave., Miami, FL 33135 (United States); Bellorín, A., E-mail: alberto.bellorin@ucv.ve [Escuela de Física, Facultad de Ciencias, Universidad Central de Venezuela, Apartado Postal 47586, Caracas 1041-A (Venezuela, Bolivarian Republic of); García-Ñustes, M.A., E-mail: monica.garcia@pucv.cl [Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059 (Chile); Guerrero, L.E., E-mail: lguerre@usb.ve [Departamento de Física, Universidad Simón Bolívar, Apartado Postal 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Jiménez, S., E-mail: s.jimenez@upm.es [Departamento de Matemática Aplicada a las TT.II., E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040-Madrid (Spain); Vázquez, L., E-mail: lvazquez@fdi.ucm.es [Departamento de Matemática Aplicada, Facultad de Informática, Universidad Complutense de Madrid, 28040-Madrid (Spain)

    2017-06-28

    We prove analytically the existence of an infinite number of internal (shape) modes of sine-Gordon solitons in the presence of some inhomogeneous long-range forces, provided some conditions are satisfied. - Highlights: • We have found exact kink solutions to the perturbed sine-Gordon equation. • We have been able to study analytically the kink stability problem. • A kink equilibrated by an exponentially-localized perturbation has a finite number of oscillation modes. • A sufficiently broad equilibrating perturbation supports an infinite number of soliton internal modes.
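The unperturbed starting point for such stability analyses is the classic static sine-Gordon kink, phi(x) = 4*arctan(exp(x)), which satisfies phi_xx = sin(phi). A minimal numerical check of that identity (this is only the textbook kink; the paper's inhomogeneous, long-range perturbations are not modeled here):

```python
import numpy as np

# The classic static kink of the unperturbed sine-Gordon equation
# phi_xx = sin(phi) is phi(x) = 4*arctan(exp(x)). We verify the ODE
# residual on a grid using central finite differences.
x = np.linspace(-5.0, 5.0, 2001)
h = x[1] - x[0]
phi = 4.0 * np.arctan(np.exp(x))

# Second derivative at interior points.
phi_xx = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h**2
residual = np.max(np.abs(phi_xx - np.sin(phi[1:-1])))
print(residual)  # small: finite-difference error only
```

The internal-mode analysis in the paper then linearizes around kinks of the perturbed equation rather than this homogeneous solution.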

  1. Large-eddy simulation of flow over a grooved cylinder up to transcritical Reynolds numbers

    KAUST Repository

    Cheng, W.

    2017-11-27

    We report wall-resolved large-eddy simulation (LES) of flow over a grooved cylinder up to the transcritical regime. The stretched-vortex subgrid-scale model is embedded in a general fourth-order finite-difference discretization on a curvilinear mesh. In the present study grooves are equally distributed around the circumference of the cylinder, each of sinusoidal shape, invariant in the spanwise direction. Based on two parameters, the groove height and the Reynolds number Re = U∞D/ν, where U∞ is the free-stream velocity, D the diameter of the cylinder and ν the kinematic viscosity, two main sets of simulations are described. The first set varies the groove height while fixing the Reynolds number. We study the deviation of the flow from the smooth-cylinder case, with emphasis on several important statistics such as the length of the mean-flow recirculation bubble, the pressure coefficient, the skin-friction coefficient and the non-dimensional pressure gradient parameter. It is found that, with increasing groove height at fixed Reynolds number, some properties of the mean flow behave somewhat similarly to changes in the smooth-cylinder flow when the Reynolds number is increased, including a shrinking recirculation bubble and a nearly constant minimum pressure coefficient. In contrast, while the non-dimensional pressure gradient parameter remains nearly constant over the front part of the smooth cylinder, it shows an oscillatory variation for the grooved-cylinder case. The second main set of LES varies the Reynolds number at fixed groove height. It is found that this range spans the subcritical and supercritical regimes and reaches the beginning of the transcritical flow regime. Mean-flow properties are diagnosed and compared with available experimental data, including the pressure coefficient and the drag coefficient. The timewise variations of the lift and drag coefficients are also studied to elucidate the transition among the three regimes. Instantaneous images of the surface skin-friction vector field and of the three-dimensional Q-criterion field are utilized to further understand the dynamics of the near-surface flow.

  2. Large wood recruitment processes and transported volumes in Swiss mountain streams during the extreme flood of August 2005

    Science.gov (United States)

    Steeb, Nicolas; Rickenmann, Dieter; Badoux, Alexandre; Rickli, Christian; Waldner, Peter

    2017-02-01

    The extreme flood event that occurred in August 2005 was the most costly (documented) natural hazard event in the history of Switzerland. The flood was accompanied by the mobilization of > 69,000 m3 of large wood (LW) throughout the affected area. As recognized afterward, wood played an important role in exacerbating the damages, mainly because of log jams at bridges and weirs. The present study aimed at assessing the risk posed by wood in various catchments by investigating the amount and spatial variability of recruited and transported LW. Data regarding LW quantities were obtained by field surveys, remote sensing techniques (LiDAR), and GIS analysis, and were subsequently translated into a conceptual model of the wood transport mass balance. Detailed wood budgets and transport diagrams were established for four study catchments of Swiss mountain streams, showing the spatial variability of LW recruitment and deposition. Despite some uncertainties with regard to parameter assumptions, the sum of reconstructed wood input and the observed deposition volumes agree reasonably well. Mass wasting processes such as landslides and debris flows were the dominant recruitment processes in headwater streams. In contrast, LW recruitment from lateral bank erosion became significant in the lower part of mountain streams, where the catchment reached a size of about 100 km2. According to our analysis, 88% of the reconstructed total wood input was fresh, i.e., coming from living trees recruited from adjacent areas during the event. This implies an average deadwood contribution of 12%, most of which was estimated to have been in-channel deadwood entrained during the flood event.

  3. Development of an Evaluation Methodology for Loss of Large Area Induced from Extreme Events with Malicious Origin

    International Nuclear Information System (INIS)

    Kim, S.C.; Park, J.S.; Chang, D.J.; Kim, D.H.; Lee, S.W.; Lee, Y.J.; Kim, H.W.

    2016-01-01

    The event of loss of large area (LOLA) induced by an extreme external event at a multi-unit nuclear installation has emerged as a new challenge in the realm of nuclear safety and regulation after the Fukushima Dai-ichi accident. The relevant information and experience on evaluation methodology and regulatory requirements are rarely available and difficult to share due to their security sensitivity. Most countries have prepared their own regulatory requirements and methodologies to evaluate the impact of LOLA at nuclear power plants. In Korea, the newly amended Nuclear Safety Act requires the assessment of LOLA in terms of an EDMG (Extensive Damage Mitigation Guideline). The Korea Institute of Nuclear Safety (KINS) has performed a pilot research project to develop the methodology and regulatory review guidance on LOLA at multi-unit nuclear power plants since 2014. Through this research, we proposed a methodology to identify strategies for the prevention and mitigation of the consequences of LOLA utilizing PSA techniques or their results. The proposed methodology comprises 8 steps, including policy consideration, threat evaluation, identification of damage path sets, SSC capacity evaluation, and identification of mitigation measures and strategies. The consequences of LOLA due to a malevolent aircraft crash may be significantly sensitive to analysis assumptions, including the type of aircraft, the amount of residual fuel, and the possible impact angle, which cannot be shared overtly. This paper introduces an evaluation methodology for LOLA using PSA techniques and their results. We also provide a case study to evaluate possible impact angles using a flight simulator for two types of aircraft and to identify potential path sets leading to core damage through affected SSCs within the damaged area. (author)

  4. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    DEFF Research Database (Denmark)

    Sarlak Chivaee, Hamid

    2017-01-01

    This paper presents results of a series of numerical simulations performed in order to study the aerodynamic characteristics of the low-Reynolds-number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60,000...... the Reynolds number, and the effect is visible even at a relatively low chord-based Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky model gives the poorest predictions of the flow, with an overprediction of lift and a larger separation on the airfoil's suction side. Among the various models, the implicit...

  5. From Drought to Flood: Biological Responses of Large River Salmonids and Emergent Management Challenges Under California's Extreme Hydroclimatic Variability

    Science.gov (United States)

    Anderson, C.

    2017-12-01

    California's hydroclimatic regime is characterized by extreme interannual variability, including periodic multi-year droughts and winter flooding sequences. Statewide, water years 2012-2016 were characterized by extreme drought, followed by what was likely one of the wettest years on record in water year 2017. Similar drought-flood patterns have occurred multiple times in both the contemporary empirical record and reconstructed climate records. Both the extreme magnitude and the rapid succession of these hydroclimatic periods pose difficult challenges for water managers and regulatory agencies responsible for providing instream flows to protect and recover threatened and endangered fish species. Principal among these riverine fish species are the federally listed winter-run and spring-run Chinook salmon (Oncorhynchus tshawytscha), Central Valley steelhead (Oncorhynchus mykiss), and the pelagic species Delta smelt (Hypomesus transpacificus). Poor instream conditions from 2012-2016 resulted in extremely low abundance estimates and poor overall fish health, and while fish monitoring results from water year 2017 are too preliminary to draw substantive conclusions, early indicators show continued downward population trends despite the historically wet conditions. This poster evaluates California's hydroclimatic conditions over the past decade and quantifies the resultant impacts of the 2012-2016 drought and the extremely wet 2017 water year on both adult escapement and juvenile production estimates in California's major inland salmon rivers over that same time span. We also examine local, state, and federal regulatory actions taken both in response to the extreme hydroclimatic variability and in preparation for future drought-flood sequences.

  6. A large-scale survey of genetic copy number variations among Han Chinese residing in Taiwan

    Directory of Open Access Journals (Sweden)

    Wu Jer-Yuarn

    2008-12-01

    Full Text Available Abstract Background Copy number variations (CNVs) have recently been recognized as important structural variations in the human genome. CNVs can affect gene expression and thus may contribute to phenotypic differences. The copy number inferring tool (CNIT) is an effective hidden Markov model-based algorithm for estimating allele-specific copy number and predicting chromosomal alterations from single nucleotide polymorphism microarrays. The CNIT algorithm, which was constructed using data from 270 HapMap multi-ethnic individuals, was applied to identify CNVs from 300 unrelated Han Chinese individuals in Taiwan. Results Using stringent selection criteria, 230 regions with variable copy numbers were identified in the Han Chinese population; 133 (57.83%) had been reported previously, and 64 displayed a CNV allele frequency greater than 1%. The average size of the CNV regions was 322 kb (ranging from 1.48 kb to 5.68 Mb), and they covered a total of 2.47% of the human genome. A total of 196 of the CNV regions were simple deletions and 27 were simple amplifications. There were 449 genes and 5 microRNAs within these CNV regions; some of these genes are known to be associated with diseases. Conclusion The identified CNVs are characteristic of the Han Chinese population and should be considered when genetic studies are conducted. The CNV distribution in the human genome is still poorly characterized, and there is much diversity among different ethnic populations.
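The reported coverage figure can be sanity-checked from the abstract's own numbers; a quick sketch assuming a nominal human genome length of 3.0 Gb (the study's exact reference length is not stated here):

```python
# Quick consistency check of the reported genome coverage, using the
# abstract's own numbers and an assumed nominal genome length of 3.0 Gb.
n_regions = 230
mean_size_bp = 322_000     # average CNV region size: 322 kb
genome_bp = 3.0e9          # assumption: nominal human genome length

coverage_pct = 100.0 * n_regions * mean_size_bp / genome_bp
print(round(coverage_pct, 2))   # ~2.47, matching the reported figure
```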

  7. Q-factorial Gorenstein toric Fano varieties with large Picard number

    DEFF Research Database (Denmark)

    Nill, Benjamin; Øbro, Mikkel

    2010-01-01

    In dimension $d$, ${\\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $\\rho_X$ correspond to simplicial reflexive polytopes with $\\rho_X + d$ vertices. Casagrande showed that any $d$-dimensional simplicial reflexive polytope has at most $3d$ and $3d-1$ vertices if $d$ is even and odd, respectively. Moreover, for $d$ even there is up to unimodular equivalence only one such polytope with $3d$ vertices, corresponding to the product of $d/2$ copies of a del Pezzo surface of degree six. In this paper we completely classify all $d$-dimensional simplicial reflexive polytopes having $3d-1$ vertices, corresponding to $d$-dimensional ${\\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $2d-1$. For $d$ even, there exist three such varieties, with two being singular, while for $d > 1$ odd there exist precisely two, both being nonsingular toric fiber...

  8. A comment on "bats killed in large numbers at United States wind energy facilities"

    Science.gov (United States)

    Huso, Manuela M.P.; Dalthorp, Dan

    2014-01-01

    Widespread reports of bat fatalities caused by wind turbines have raised concerns about the impacts of wind power development. Reliable estimates of the total number killed and the potential effects on populations are needed, but it is crucial that they be based on sound data. In a recent BioScience article, Hayes (2013) estimated that over 600,000 bats were killed at wind turbines in the United States in 2012. The scientific errors in the analysis are numerous, with the two most serious being that the included sites constituted a convenience sample, not a representative sample, and that the individual site estimates are derived from such different methodologies that they are inherently not comparable. This estimate is almost certainly inaccurate, but whether the actual number is much smaller, much larger, or about the same is uncertain. An accurate estimate of total bat fatality is not currently possible, given the shortcomings of the available data.
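The convenience-sample objection above can be illustrated with a toy simulation (not the authors' analysis; all numbers are made up): when sites with higher fatality rates are more likely to be searched and reported, the sampled mean rate overstates the population mean.

```python
import numpy as np

# Toy illustration of convenience-sampling bias: inclusion probability
# correlated with the per-site fatality rate inflates the estimate.
rng = np.random.default_rng(42)
rates = rng.lognormal(mean=1.0, sigma=1.0, size=10_000)  # per-site rates

# Convenience sample: inclusion probability proportional to the rate.
included = rng.random(rates.size) < rates / rates.max()
convenience_mean = rates[included].mean()

# Representative sample: simple random sample of the same size.
srs = rng.choice(rates, size=int(included.sum()), replace=False)
print(rates.mean(), convenience_mean, srs.mean())
```

With this seed the convenience-sample mean is several times the population mean, while the simple random sample tracks it closely; scaling either estimate up to a national total propagates the same bias.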

  9. Large scale Direct Numerical Simulation of premixed turbulent jet flames at high Reynolds number

    Science.gov (United States)

    Attili, Antonio; Luca, Stefano; Lo Schiavo, Ermanno; Bisetti, Fabrizio; Creta, Francesco

    2016-11-01

    A set of direct numerical simulations of turbulent premixed jet flames at different Reynolds and Karlovitz numbers is presented. The simulations feature finite-rate chemistry with 16 species and 73 reactions and up to 22 billion grid points. The jet consists of a methane/air mixture with equivalence ratio ϕ = 0.7 and temperature varying between 500 and 800 K. The temperature and species concentrations in the coflow correspond to the equilibrium state of the burnt mixture. All the simulations are performed at 4 atm. The flame length, normalized by the jet width, decreases significantly as the Reynolds number increases. This is consistent with an increase of the turbulent flame speed due to the increased integral scale of turbulence. This behavior is typical of flames in the thin-reaction-zone regime, which are affected by turbulent transport in the preheat layer. The fractal dimension and topology of the flame surface, statistics of temperature gradients, and the flame structure are investigated, and the dependence of these quantities on the Reynolds number is assessed.

  10. Efficient high speed communications over electrical powerlines for a large number of users

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.; Tripathi, K.; Latchman, H.A. [Florida Univ., Gainesville, FL (United States). Dept. of Electrical and Computer Engineering

    2007-07-01

    Affordable broadband Internet communication is currently available for residential use via cable modem and other forms of digital subscriber lines (DSL). Powerline communication (PLC) systems were never considered seriously for communications due to their low speed and high development cost. However, due to technological advances, PLC is now spreading to local area networks and broadband-over-power-line systems. This paper presented a newly proposed modification to the standard HomePlug 1.0 MAC protocol to make it a constant contention window-based scheme. HomePlug 1.0 was developed based on orthogonal frequency division multiplexing (OFDM) and carrier sense multiple access with collision avoidance (CSMA/CA). It is currently the most commonly used power line communication technology, supporting a transmission rate of up to 14 Mbps on the power line. However, the throughput performance of the original scheme degrades critically as the number of users increases. For that reason, a constant contention window-based medium access control protocol for HomePlug 1.0 was proposed under the assumption that the number of active stations is known. An analytical framework based on Markov chains was developed in order to model this modified protocol under saturation conditions. Modeling results accurately matched the actual performance of the system. This paper revealed that performance can be improved significantly if the variables are parameterized in terms of the number of active stations. 15 refs., 1 tab., 6 figs.
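Why a constant contention window must be sized to the number of active stations can be seen in a much-simplified slotted CSMA/CA model (a rough Bianchi-style sketch under saturation, not the actual HomePlug 1.0 MAC): with window W, each station transmits in a slot with probability roughly 2/(W+1), and a slot carries a frame only when exactly one station transmits.

```python
# Simplified slotted CSMA/CA success model (assumption: constant window
# W, saturated stations, per-slot transmit probability tau ~ 2/(W+1)).
def success_probability(n: int, W: int) -> float:
    tau = 2.0 / (W + 1)
    # A slot is useful only if exactly one of the n stations transmits.
    return n * tau * (1.0 - tau) ** (n - 1)

# A small fixed window collapses as n grows; sizing W with the known
# number of active stations restores slot efficiency, which is the idea
# behind the proposed constant-contention-window modification.
print(success_probability(50, 8))       # collision-dominated, near zero
print(success_probability(50, 2 * 50))  # ~0.37 with W matched to n
```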

  11. Detailed Measurements of Rayleigh-Taylor Mixing at Large and Small Atwood Numbers

    International Nuclear Information System (INIS)

    Andrews, Malcolm J.

    2004-01-01

    This project has two major tasks: Task 1. The construction of a new air/helium facility to collect detailed measurements of Rayleigh-Taylor (RT) mixing at high Atwood number, and the distribution of these data to LLNL, LANL, and Alliance members for code validation and design purposes. Task 2. The collection of initial condition data from the new air/helium facility, for use with validation of RT simulation codes at LLNL and LANL. Also, studies of multi-layer mixing with the existing water channel facility. Over the last twelve (12) months there has been excellent progress, detailed in this report, with both tasks. As of December 10, 2004, the air/helium facility is complete and extensive testing and validation of diagnostics has been performed. Currently experiments with air/helium up to Atwood numbers of 0.25 (the maximum is 0.75, but the highest Reynolds numbers are at 0.25) are being performed. The progress matches the project plan, as does the budget, and we expect this to continue for 2005. With interest expressed from LLNL we have continued with initial condition studies using the water channel. This work has also progressed well, with one of the graduate Research Assistants (Mr. Nick Mueschke) visiting LLNL the past two summers to work with Dr. O. Schilling. Several journal papers that describe the work are in preparation. Two M.Sc. degrees have been completed (Mr. Nick Mueschke and Mr. Wayne Kraft, 12/1/03). Nick and Wayne are both pursuing Ph.D.s funded by this DOE Alliances project. Presently three (3) Ph.D. graduate Research Assistants and two (2) undergraduate Research Assistants are supported on the project. During the year two (2) journal papers and two (2) conference papers have been published, ten (10) presentations made at conferences, and three (3) invited presentations given.
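The Atwood numbers quoted above follow directly from the gas densities; a small sketch using the standard definition A = (rho1 - rho2)/(rho1 + rho2), with nominal room-temperature, sea-level density values (an assumption for illustration; the facility's exact gas conditions may differ):

```python
# Atwood number A = (rho1 - rho2) / (rho1 + rho2), the density contrast
# driving Rayleigh-Taylor mixing. Densities are assumed nominal values.
def atwood(rho1: float, rho2: float) -> float:
    return (rho1 - rho2) / (rho1 + rho2)

rho_air, rho_helium = 1.204, 0.166    # kg/m^3 near 20 C (assumption)
print(round(atwood(rho_air, rho_helium), 2))  # ~0.76, near the 0.75 maximum quoted
```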

  12. Mapping Ad Hoc Communications Network of a Large Number Fixed-Wing UAV Swarm

    Science.gov (United States)

    2017-03-01

    ... shows like "Agents of S.H.I.E.L.D.". Inspiration can come from the imaginative minds of people or from the world around us. Swarms have demonstrated a ... high degree of success. Bees, ants, termites, and naked mole rats maintain large groups that distribute tasks among individuals in order to achieve ... the application layer and not the transport layer. Real-world vehicle-to-vehicle packet delivery rates for the 50-UAV swarm event were described in ...

  13. Analyzing the Large Number of Variables in Biomedical and Satellite Imagery

    CERN Document Server

    Good, Phillip I

    2011-01-01

    This book grew out of an online interactive course offered through statcourse.com, and it soon became apparent to the author that the course was too limited in terms of time and length in light of the broad backgrounds of the enrolled students. The statisticians who took the course needed to be brought up to speed both on the biological context and on the specialized statistical methods needed to handle large arrays. Biologists and physicians, even though fully knowledgeable concerning the procedures used to generate microarrays, EEGs, or MRIs, needed a full introduction to the resampling met...

  14. Linear optics and projective measurements alone suffice to create large-photon-number path entanglement

    International Nuclear Information System (INIS)

    Lee, Hwang; Kok, Pieter; Dowling, Jonathan P.; Cerf, Nicolas J.

    2002-01-01

    We propose a method for preparing maximal path entanglement with a definite photon number N, larger than two, using projective measurements. In contrast with the previously known schemes, our method uses only linear optics. Specifically, we exhibit a way of generating four-photon, path-entangled states of the form |4,0> + |0,4>, using only four beam splitters and two detectors. These states are of major interest as a resource for quantum interferometric sensors as well as for optical quantum lithography and quantum holography.

  15. Laboratory Study of Magnetorotational Instability and Hydrodynamic Stability at Large Reynolds Numbers

    Science.gov (United States)

    Ji, H.; Burin, M.; Schartman, E.; Goodman, J.; Liu, W.

    2006-01-01

    Two plausible mechanisms have been proposed to explain rapid angular momentum transport during accretion processes in astrophysical disks: nonlinear hydrodynamic instabilities and magnetorotational instability (MRI). A laboratory experiment in a short Taylor-Couette flow geometry has been constructed in Princeton to study both mechanisms, with novel features for better controls of the boundary-driven secondary flows (Ekman circulation). Initial results on hydrodynamic stability have shown negligible angular momentum transport in Keplerian-like flows with Reynolds numbers approaching one million, casting strong doubt on the viability of nonlinear hydrodynamic instability as a source for accretion disk turbulence.

  16. Dam risk reduction study for a number of large tailings dams in Ontario

    Energy Technology Data Exchange (ETDEWEB)

    Verma, N. [AMEC Earth and Environmental Ltd., Mississauga, ON (Canada); Small, A. [AMEC Earth and Environmental Ltd., Fredericton, NB (Canada); Martin, T. [AMEC Earth and Environmental, Burnaby, BC (Canada); Cacciotti, D. [AMEC Earth and Environmental Ltd., Sudbury, ON (Canada); Ross, T. [Vale Inco Ltd., Sudbury, ON (Canada)

    2009-07-01

    This paper discussed a risk reduction study conducted for 10 large tailings dams located at a central tailings facility in Ontario. Located near large industrial and urban developments, the tailings dams were built using an upstream method of construction that did not involve beach compaction or the provision of under-drainage. The study provided a historical background for the dam and presented results from investigations and instrumentation data. The methods used to develop the dam configurations were discussed, and remedial measures and risk assessment measures used on the dams were reviewed. The aim of the study was to address key sources of risk, which include the presence of high pore pressures and hydraulic gradients; the potential for liquefaction; slope instability; and the potential for overtopping. A borehole investigation was conducted and piezocone probes were used to obtain continuous data and determine soil and groundwater conditions. The study identified that the lower portion of the dam slopes were of concern. Erosion gullies could lead to larger scale failures, and elevated pore pressures could lead to the risk of seepage breakouts. It was concluded that remedial measures are now being conducted to ensure slope stability. 6 refs., 1 tab., 6 figs.

  17. EUPAN enables pan-genome studies of a large number of eukaryotic genomes.

    Science.gov (United States)

    Hu, Zhiqiang; Sun, Chen; Lu, Kuang-Chen; Chu, Xixia; Zhao, Yue; Lu, Jinyuan; Shi, Jianxin; Wei, Chaochun

    2017-08-01

    Pan-genome analyses are routinely carried out for bacteria to interpret the within-species gene presence/absence variations (PAVs). However, pan-genome analyses are rare for eukaryotes due to the large sizes and higher complexities of their genomes. Here we propose EUPAN, a eukaryotic pan-genome analysis toolkit enabling automatic large-scale eukaryotic pan-genome analyses and detection of gene PAVs at a relatively low sequencing depth. In previous studies, we demonstrated the effectiveness and high accuracy of EUPAN in the pan-genome analysis of 453 rice genomes, in which we also revealed widespread gene PAVs among individual rice genomes. Moreover, EUPAN can be directly applied to current re-sequencing projects primarily focusing on single nucleotide polymorphisms. EUPAN is implemented in Perl, R and C++. It is supported under Linux and preferred for a computer cluster with the LSF or SLURM job scheduling system. EUPAN, together with its standard operating procedure (SOP), is freely available for non-commercial use (CC BY-NC 4.0) at http://cgm.sjtu.edu.cn/eupan/index.html. Contact: ccwei@sjtu.edu.cn or jianxin.shi@sjtu.edu.cn. Supplementary data are available at Bioinformatics online.

  18. Number of deaths due to lung diseases: How large is the problem?

    International Nuclear Information System (INIS)

    Wagener, D.K.

    1990-01-01

    The importance of lung disease as an indicator of environmentally induced adverse health effects has been recognized by its inclusion among the Health Objectives for the Nation. The 1990 Health Objectives for the Nation (US Department of Health and Human Services, 1986) include an objective that there should be virtually no new cases among newly exposed workers for four preventable occupational lung diseases: asbestosis, byssinosis, silicosis, and coal workers' pneumoconiosis. This brief communication describes two types of cause-of-death statistics, underlying cause and multiple cause, and demonstrates the differences between the two statistics using lung disease deaths among adult men. The choice of statistic has a large impact on estimated lung disease mortality rates. The choice of statistic may also have a large effect on the estimated mortality rates of other chronic diseases thought to be environmentally mediated. Issues of comorbidity and the way causes of death are reported become important in the interpretation of these statistics. The choice of which statistic to use when comparing data from a study population with national statistics may greatly affect the interpretation of the study findings.

  19. Formation of free round jets with long laminar regions at large Reynolds numbers

    Science.gov (United States)

    Zayko, Julia; Teplovodskii, Sergey; Chicherina, Anastasia; Vedeneev, Vasily; Reshmin, Alexander

    2018-04-01

    The paper describes a new, simple method for the formation of free round jets with long laminar regions by a jet-forming device of ~1.5 jet diameters in size. Submerged jets of 0.12 m diameter at Reynolds numbers of 2000-12,560 are experimentally studied. It is shown that for the optimal regime the laminar region length reaches 5.5 diameters at a Reynolds number of ~10,000, which is not achievable with other methods of laminar jet formation. To explain the existence of the optimal regime, a steady flow calculation in the forming unit and a stability analysis of the outcoming jet velocity profiles are conducted. The shortening of the laminar regions, compared with the optimal regime, is explained by the higher incoming turbulence level at lower velocities and by the increase of perturbation growth rates at larger velocities. The initial laminar regions of free jets can be used to organise air curtains for the protection of objects in medicine and technology by creating an air field with desired properties that is not mixed with ambient air. Free jets with long laminar regions can also be used for detailed studies of perturbation growth and transition to turbulence in round jets.
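The quoted Reynolds numbers imply modest exit velocities for a 0.12 m jet; a quick sketch using Re = U*D/nu, where the kinematic viscosity of air is an assumed nominal room-temperature value (the experiment's exact conditions may differ):

```python
# Back out the jet exit velocity implied by the quoted Reynolds numbers,
# using Re = U * D / nu.
D = 0.12        # jet diameter in m (from the abstract)
nu = 1.5e-5     # kinematic viscosity of air, m^2/s (assumed nominal value)

velocities = {Re: Re * nu / D for Re in (2000, 10_000, 12_560)}
for Re, U in velocities.items():
    print(Re, round(U, 3))   # e.g. Re = 10,000 gives U ~ 1.25 m/s
```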

  20. Extreme value analysis of air pollution data and their comparison between two large urban regions of South America

    Directory of Open Access Journals (Sweden)

    Leila Droprinchinski Martins

    2017-12-01

    Full Text Available Sixteen years of hourly atmospheric pollutant data (1996–2011) from the Metropolitan Area of São Paulo (MASP) and seven years (2005–2011) of data measured in the Metropolitan Area of Rio de Janeiro (MARJ) were analyzed in order to study extreme pollution events and their return periods. In addition, the objective was to compare the air quality between the two largest Brazilian urban areas and provide information for decision makers, government agencies and civil society. The Generalized Extreme Value (GEV) and Generalized Pareto Distribution (GPD) approaches were applied to investigate the behavior of pollutants in these two regions. Although GEV and GPD are different approaches, they presented similar results. The probability of higher concentrations of CO, NO, NO2, PM10 and PM2.5 was greater during the winter, while O3 episodes occur most frequently during summer in the MASP. On the other hand, there is no seasonally defined behavior of the pollutants in the MARJ, with O3 presenting the shortest return period for high concentrations. In general, the Ibirapuera and Campos Elísios stations present the highest probabilities of extreme events with high concentrations in the MASP and MARJ, respectively. When the regions are compared, the MASP presents higher probabilities of extreme events for all analyzed pollutants except NO, while O3 and PM2.5 are the pollutants most likely to present extreme episodes in comparison with the other pollutants. Keywords: Air pollutants, Extreme events, Megacities, Ozone, Particulate matter
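The return periods central to such GPD analyses follow from a closed-form return-level formula; a hand-coded sketch with purely illustrative parameter values (not fitted to the MASP/MARJ data):

```python
# Return-level sketch for a Generalized Pareto Distribution (GPD) fit to
# threshold exceedances. With shape xi != 0, scale sigma, threshold u and
# lam exceedances per year on average, the T-year return level is
#   x_T = u + (sigma / xi) * ((lam * T)**xi - 1).
def gpd_return_level(T: float, u: float, sigma: float,
                     xi: float, lam: float) -> float:
    return u + (sigma / xi) * ((lam * T) ** xi - 1.0)

# Illustrative parameters only (e.g. an O3-like threshold in ug/m^3).
u, sigma, xi, lam = 120.0, 25.0, 0.1, 5.0
levels = [gpd_return_level(T, u, sigma, xi, lam) for T in (1, 10, 50)]
print([round(v, 1) for v in levels])   # return level grows with period T
```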

  1. Application of Evolution Strategies to the Design of Tracking Filters with a Large Number of Specifications

    Directory of Open Access Journals (Sweden)

    Jesús García Herrero

    2003-07-01

    Full Text Available This paper describes the application of evolution strategies to the design of interacting multiple model (IMM) tracking filters in order to fulfill a large table of performance specifications. These specifications define the desired filter performance in a thorough set of selected test scenarios, for different figures of merit and input conditions, imposing hundreds of performance goals. The design problem is stated as a numeric search in the filter parameter space to attain all specifications or, at least, to minimize in a compromise the excess over some specifications as much as possible, applying global optimization techniques from the field of evolutionary computation. In addition, a new methodology is proposed to integrate the specifications into a fitness function able to effectively guide the search to suitable solutions. The method has been applied to the design of an IMM tracker for a real-world civil air traffic control application: the accomplishment of the specifications defined for the future European ARTAS system.
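The core search idea can be sketched with a minimal (1+1) evolution strategy on a toy objective: drive a parameter vector until it meets a set of performance specifications, minimizing the total squared excess over the specs. This is only the search skeleton; the paper's IMM filter design, fitness integration and specification table are far richer.

```python
import random

# Minimal (1+1) evolution strategy on a toy specification-excess
# objective (all specs and parameters here are made up for the sketch).
random.seed(0)

specs = [1.0, 2.0, 0.5, 3.0]        # toy performance goals (upper bounds)

def fitness(params):
    # Total squared excess over the specifications; 0 means all met.
    return sum(max(0.0, p - s) ** 2 for p, s in zip(params, specs))

params = [5.0, 5.0, 5.0, 5.0]       # deliberately poor initial design
best = fitness(params)
for _ in range(2000):
    child = [p + random.gauss(0.0, 0.5) for p in params]
    f = fitness(child)
    if f <= best:                   # (1+1) selection: keep non-worse child
        params, best = child, f
print(best)   # driven to (or very near) zero: all specifications met
```

Real evolution strategies add populations, recombination and step-size adaptation, but the accept-if-not-worse loop above is the common core.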

  2. Jet Impingement Heat Transfer at High Reynolds Numbers and Large Density Variations

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2010-01-01

    Jet impingement heat transfer from a round gas jet to a flat wall has been investigated numerically in a configuration with H/D = 2, where H is the distance from the jet inlet to the wall and D is the jet diameter. The jet Reynolds number was 361,000 and the density ratio across the wall boundary...... layer was 3.3 due to a substantial temperature difference of 1600 K between the jet and the wall. Results are presented which indicate very high heat flux levels, and it is demonstrated that the jet inlet turbulence intensity significantly influences the heat transfer results, especially in the stagnation region....... The results also show a noticeable difference in the heat transfer predictions when applying different turbulence models. Furthermore, calculations were performed to study the effect of applying temperature-dependent thermophysical properties versus constant properties and the effect of calculating the gas...

  3. On the strong law of large numbers for $\\varphi$-subgaussian random variables

    OpenAIRE

    Zajkowski, Krzysztof

    2016-01-01

    For $p\\ge 1$ let $\\varphi_p(x)=x^2/2$ if $|x|\\le 1$ and $\\varphi_p(x)=1/p|x|^p-1/p+1/2$ if $|x|>1$. For a random variable $\\xi$ let $\\tau_{\\varphi_p}(\\xi)$ denote $\\inf\\{a\\ge 0:\\;\\forall_{\\lambda\\in\\mathbb{R}}\\; \\ln\\mathbb{E}\\exp(\\lambda\\xi)\\le\\varphi_p(a\\lambda)\\}$; $\\tau_{\\varphi_p}$ is a norm in a space $Sub_{\\varphi_p}=\\{\\xi:\\;\\tau_{\\varphi_p}(\\xi)1$) there exist positive constants $c$ and $\\alpha$ such that for every natural number $n$ the following inequality $\\tau_{\\varphi_p}(\\sum_{i=1...

  4. Large boson number IBM calculations and their relationship to the Bohr model

    International Nuclear Information System (INIS)

    Thiamova, G.; Rowe, D.J.

    2009-01-01

    Recently, the SO(5) Clebsch-Gordan (CG) coefficients up to the seniority v max =40 were computed in floating point arithmetic (T.A. Welsh, unpublished (2008)) and, in exact arithmetic, as square roots of rational numbers (M.A. Caprio et al., to be published in Comput. Phys. Commun.). It is shown in this paper that extending the QQQ model calculations set up in the work by D.J. Rowe and G. Thiamova (Nucl. Phys. A 760, 59 (2005)) to N=v max =40 is sufficient to obtain IBM results converged to the Bohr contraction limit. This is done by comparing some important matrix elements in both models, by looking at the seniority decomposition of low-lying states, and by examining the behavior of the energy and B(E2) transition strength ratios with increasing seniority. (orig.)

  5. A comparison of three approaches to compute the effective Reynolds number of the implicit large-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ye [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thornber, Ben [The Univ. of Sydney, Sydney, NSW (Australia)

    2016-04-12

    Here, the implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer–Meshkov instability (RMI) induced flow can be viewed as a homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or on the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.
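
    One common way to back out an effective viscosity from resolved fields, assuming the standard homogeneous-turbulence relation ε = ν⟨ω²⟩ rather than the article's exact definitions, can be sketched as follows; the numerical values are dimensionless toy inputs, not ILES data.

```python
def effective_viscosity(dissipation, enstrophy):
    # In homogeneous turbulence eps = nu * <omega^2>, so an effective
    # viscosity can be inferred as nu_eff = eps / <omega^2>.
    return dissipation / enstrophy

def effective_reynolds(u_rms, length, dissipation, enstrophy):
    # Outer-scale Reynolds number built from the inferred viscosity.
    return u_rms * length / effective_viscosity(dissipation, enstrophy)

# Toy numbers (invented for illustration):
nu_eff = effective_viscosity(dissipation=0.1, enstrophy=1000.0)
re_eff = effective_reynolds(u_rms=1.0, length=1.0,
                            dissipation=0.1, enstrophy=1000.0)
```
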

  6. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elasti...

  7. Introduction to the spectral distribution method. Application example to the subspaces with a large number of quasi particles

    International Nuclear Information System (INIS)

    Arvieu, R.

    The assumptions and principles of the spectral distribution method are reviewed. The object of the method is to deduce information on the nuclear spectra by constructing a frequency function which has the same first few moments as the exact frequency function, these moments being exactly calculated. The method is applied to subspaces containing a large number of quasi particles [fr

  8. Strong Law of Large Numbers for Hidden Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degrees

    Directory of Open Access Journals (Sweden)

    Huilin Huang

    2014-01-01

    Full Text Available We study strong limit theorems for hidden Markov chains indexed by an infinite tree with uniformly bounded degrees. We mainly establish the strong law of large numbers for such hidden Markov chains and give the strong limit law of the conditional sample entropy rate.

  9. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases

    NARCIS (Netherlands)

    Heidema, A.G.; Boer, J.M.A.; Nagelkerke, N.; Mariman, E.C.M.; A, van der D.L.; Feskens, E.J.M.

    2006-01-01

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods

  10. Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects

    International Nuclear Information System (INIS)

    Ramirez-Munoz, J.; Salinas-Rodriguez, E.; Soria, A.; Gama-Goicochea, A.

    2011-01-01

    Highlights: → The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. → The leading bubble wake decreases the drag on the trailing bubble. → A new semi-analytical model for the trailing bubble's drag is presented. → The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimate the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively, up to small distances between bubbles, with reported data for 50 ≤ Re ≤ 200. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., of the same order as the analytical predictions in the literature.

  11. Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Munoz, J., E-mail: jrm@correo.azc.uam.mx [Departamento de Energia, Universidad Autonoma Metropolitana-Azcapotzalco, Av. San Pablo 180, Col. Reynosa Tamaulipas, 02200 Mexico D.F. (Mexico); Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico); Salinas-Rodriguez, E.; Soria, A. [Departamento de IPH, Universidad Autonoma Metropolitana-Iztapalapa, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, 09340 Mexico D.F. (Mexico); Gama-Goicochea, A. [Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico)

    2011-07-15

    Highlights: → The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. → The leading bubble wake decreases the drag on the trailing bubble. → A new semi-analytical model for the trailing bubble's drag is presented. → The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimate the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively, up to small distances between bubbles, with reported data for 50 ≤ Re ≤ 200. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., of the same order as the analytical predictions in the literature.

  12. KITSCH AND SUSTAINABLE DEVELOPMENT OF THE REGIONS THAT HAVE A LARGE NUMBER OF RELIGIOUS SETTLEMENTS

    Directory of Open Access Journals (Sweden)

    ENEA CONSTANTA

    2016-06-01

    Full Text Available We live in a world of contemporary kitsch, a world that merges the authentic and the false, where good taste often meets bad taste. The phenomenon is found everywhere: in art, in cheap literature, in media productions, in shows, in street dialogue, in homes, in politics; in other words, in everyday life. Kitsch has also entered tourism directly, being identifiable in all forms of tourism worldwide, but especially in religious tourism and pilgrimage, which have met with unexpected success in recent years. This paper analyzes the progressive evolution of religious tourist traffic and the ability of religious tourism destinations to remain competitive despite all these problems: to attract visitors and earn their loyalty, to remain culturally unique, and to stay in permanent balance with an environment in which the religious phenomenon has been invaded by kitsch, which mixes dangerously and disgracefully with authentic spirituality. How trade, and more precisely the commercial component of kitsch, affects this environment, as reflected in the religious tourism offer, is highlighted on the basis of a survey of the major monastic ensembles in northern Oltenia. The research objectives pursued in the paper concern, on the one hand, the contributions and effects of the high number of visitors on the regions that hold religious sites and, on the other hand, the weight and effects of the commercial activity, whether authentic or kitsch, carried out in or near the monastic establishments of those regions. The study focused on the northern region of Oltenia, where tourism demand is predominantly oriented toward religious tourism.

  13. Secondary organic aerosol formation from a large number of reactive man-made organic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Derwent, Richard G., E-mail: r.derwent@btopenworld.com [rdscientific, Newbury, Berkshire (United Kingdom); Jenkin, Michael E. [Atmospheric Chemistry Services, Okehampton, Devon (United Kingdom); Utembe, Steven R.; Shallcross, Dudley E. [School of Chemistry, University of Bristol, Bristol (United Kingdom); Murrells, Tim P.; Passant, Neil R. [AEA Environment and Energy, Harwell International Business Centre, Oxon (United Kingdom)

    2010-07-15

    A photochemical trajectory model has been used to examine the relative propensities of a wide variety of volatile organic compounds (VOCs) emitted by human activities to form secondary organic aerosol (SOA) under one set of highly idealised conditions representing northwest Europe. This study applied a detailed speciated VOC emission inventory and the Master Chemical Mechanism version 3.1 (MCM v3.1) gas phase chemistry, coupled with an optimised representation of gas-aerosol absorptive partitioning of 365 oxygenated chemical reaction product species. In all, SOA formation was estimated from the atmospheric oxidation of 113 emitted VOCs. A number of aromatic compounds, together with some alkanes and terpenes, showed significant propensities to form SOA. When these propensities were folded into a detailed speciated emission inventory, 15 organic compounds together accounted for 97% of the SOA formation potential of UK man made VOC emissions and 30 emission source categories accounted for 87% of this potential. After road transport and the chemical industry, SOA formation was dominated by the solvents sector which accounted for 28% of the SOA formation potential.
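
    The folding of per-compound SOA propensities into a speciated emission inventory can be sketched as a weighted sum followed by a ranking of each compound's share of the total formation potential. The compound names and numbers below are invented for illustration, not taken from the UK inventory.

```python
def soa_potential(emissions, propensity):
    """Total SOA formation potential and per-compound shares.

    emissions: dict compound -> emitted mass;
    propensity: dict compound -> SOA formed per unit emitted mass.
    Returns (total, shares) with shares sorted largest-first.
    """
    contrib = {c: emissions[c] * propensity.get(c, 0.0) for c in emissions}
    total = sum(contrib.values())
    shares = {c: v / total for c, v in contrib.items()}
    return total, dict(sorted(shares.items(), key=lambda kv: -kv[1]))

# Toy inventory (invented values):
emissions = {"toluene": 100.0, "xylene": 60.0, "ethanol": 300.0}
propensity = {"toluene": 0.08, "xylene": 0.05, "ethanol": 0.0001}
total, shares = soa_potential(emissions, propensity)
```

    Ranking the shares this way is what lets a handful of compounds (15 in the study) account for nearly all of the total potential.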

  14. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent
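
    The bridge-plus-residual idea can be sketched in a simplified linear setting: subtract the predicted inductive voltages from the measured coil voltages, and any residual above a noise threshold marks a normal zone. The inductance matrix, currents, and threshold below are invented for illustration.

```python
def bridge_residuals(coil_voltages, inductance, di_dt):
    """Subtract the predicted inductive voltages M @ di/dt from measurements."""
    n = len(coil_voltages)
    predicted = [sum(inductance[i][j] * di_dt[j] for j in range(n))
                 for i in range(n)]
    return [v - p for v, p in zip(coil_voltages, predicted)]

def locate_normal_zones(residuals, threshold=0.05):
    """Return (coil_index, estimated_zone_voltage) for residuals above threshold."""
    return [(i, r) for i, r in enumerate(residuals) if abs(r) > threshold]

# Two coupled coils; coil 1 carries a 0.2 V normal zone on top of the
# inductive voltages (all values invented).
M = [[1.0, 0.3], [0.3, 0.8]]
di_dt = [2.0, -1.0]
true_inductive = [1.0 * 2.0 + 0.3 * (-1.0), 0.3 * 2.0 + 0.8 * (-1.0)]
measured = [true_inductive[0], true_inductive[1] + 0.2]
zones = locate_normal_zones(bridge_residuals(measured, M, di_dt))
```
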

  15. The Love of Large Numbers: A Popularity Bias in Consumer Choice.

    Science.gov (United States)

    Powell, Derek; Yu, Jingqi; DeWolf, Melissa; Holyoak, Keith J

    2017-10-01

    Social learning-the ability to learn from observing the decisions of other people and the outcomes of those decisions-is fundamental to human evolutionary and cultural success. The Internet now provides social evidence on an unprecedented scale. However, properly utilizing this evidence requires a capacity for statistical inference. We examined how people's interpretation of online review scores is influenced by the numbers of reviews-a potential indicator both of an item's popularity and of the precision of the average review score. Our task was designed to pit statistical information against social information. We modeled the behavior of an "intuitive statistician" using empirical prior information from millions of reviews posted on Amazon.com and then compared the model's predictions with the behavior of experimental participants. Under certain conditions, people preferred a product with more reviews to one with fewer reviews even though the statistical model indicated that the latter was likely to be of higher quality than the former. Overall, participants' judgments suggested that they failed to make meaningful statistical inferences.
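
    The "intuitive statistician" model can be caricatured as posterior-mean-style shrinkage: with few reviews the observed average is pulled strongly toward a prior mean, so an item with a lower raw average but many reviews can be the better statistical bet. The prior mean and strength below are invented, not the empirical Amazon prior used in the paper.

```python
def shrunk_score(avg, n_reviews, prior_mean=3.9, prior_strength=10.0):
    """Posterior-mean-style estimate: weighted blend of prior and data."""
    return (n_reviews * avg + prior_strength * prior_mean) / (n_reviews + prior_strength)

few = shrunk_score(avg=5.0, n_reviews=3)      # pulled strongly toward the prior
many = shrunk_score(avg=4.4, n_reviews=500)   # stays close to the raw average
```

    Under this toy prior the 4.4-star item with 500 reviews outranks the 5.0-star item with 3 reviews, which is exactly the inference participants often failed to make.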

  16. Normal zone detectors for a large number of inductively coupled coils. Revision 1

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. The effect on accuracy of changes in the system parameters is discussed

  17. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this report uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication by a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. An example of the detector design is given for four coils with realistic parameters. The effect on accuracy of changes in the system parameters is discussed

  18. Beating the numbers through strategic intervention materials (SIMs): Innovative science teaching for large classes

    Science.gov (United States)

    Alboruto, Venus M.

    2017-05-01

    The study aimed to determine the effectiveness of using Strategic Intervention Materials (SIMs) as an innovative teaching practice for managing large Grade Eight Science classes, in order to raise students' performance in terms of science process skills development and mastery of science concepts. Utilizing an experimental research design with two purposefully chosen groups of participants, a significant difference was found between the performance of the experimental and control groups based on actual class observation and written tests on science process skills, with a p-value of 0.0360 in favor of the experimental class. Further, results of the written pre-test and post-test on science concepts showed that the experimental group, with a mean of 24.325 (SD = 3.82), performed better than the control group, with a mean of 20.58 (SD = 4.94), with a registered p-value of 0.00039. Therefore, the use of SIMs significantly contributed to the mastery of science concepts and the development of science process skills. Based on the findings, the following recommendations are offered: 1. grade eight science teachers should use or adopt the SIMs used in this study to improve their students' performance; 2. training-workshops on developing SIMs must be conducted to help teachers develop SIMs for use in their classes; 3. school administrators must allocate funds for the development and reproduction of SIMs to be used by the students in their schools; and 4. every division should have a repository of SIMs for easy access by teachers in the entire division.
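
    As a rough check on the reported post-test comparison, Welch's t statistic can be computed from the quoted means and standard deviations. The abstract does not report group sizes, so n = 40 per group is an assumption made purely for illustration; the statistic, not the study's actual p-value, is what is sketched here.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic for two independent samples with unequal variances."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

# Reported post-test means/SDs; n = 40 per group is a hypothetical choice.
t = welch_t(24.325, 3.82, 40, 20.58, 4.94, 40)
```
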

  19. Detection of large numbers of novel sequences in the metatranscriptomes of complex marine microbial communities.

    Science.gov (United States)

    Gilbert, Jack A; Field, Dawn; Huang, Ying; Edwards, Rob; Li, Weizhong; Gilna, Paul; Joint, Ian

    2008-08-22

    Sequencing the expressed genetic information of an ecosystem (metatranscriptome) can provide information about the response of organisms to varying environmental conditions. Until recently, metatranscriptomics has been limited to microarray technology and random cloning methodologies. The application of high-throughput sequencing technology is now enabling access to both known and previously unknown transcripts in natural communities. We present a study of a complex marine metatranscriptome obtained from random whole-community mRNA using the GS-FLX Pyrosequencing technology. Eight samples, four DNA and four mRNA, were processed from two time points in a controlled coastal ocean mesocosm study (Bergen, Norway) involving an induced phytoplankton bloom producing a total of 323,161,989 base pairs. Our study confirms the finding of the first published metatranscriptomic studies of marine and soil environments that metatranscriptomics targets highly expressed sequences which are frequently novel. Our alternative methodology increases the range of experimental options available for conducting such studies and is characterized by an exceptional enrichment of mRNA (99.92%) versus ribosomal RNA. Analysis of corresponding metagenomes confirms much higher levels of assembly in the metatranscriptomic samples and a far higher yield of large gene families with >100 members, approximately 91% of which were novel. This study provides further evidence that metatranscriptomic studies of natural microbial communities are not only feasible, but when paired with metagenomic data sets, offer an unprecedented opportunity to explore both structure and function of microbial communities--if we can overcome the challenges of elucidating the functions of so many never-seen-before gene families.

  20. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    Science.gov (United States)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide; size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flows was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) that are within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and can be observed also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations of standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large-scales. These large thermal structures represent some kind of an echo of the large scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and

  1. What caused a large number of fatalities in the Tohoku earthquake?

    Science.gov (United States)

    Ando, M.; Ishida, M.; Nishikawa, Y.; Mizuki, C.; Hayashi, Y.

    2012-04-01

    The Mw 9.0 earthquake caused 20,000 deaths and missing persons in northeastern Japan. In the 115 years prior to this event, three historical tsunamis struck the region, one of which, a "tsunami earthquake", resulted in a death toll of 22,000. Since then, numerous breakwaters have been constructed along the entire northeastern coast, tsunami evacuation drills have been carried out, and hazard maps have been distributed to local residents in numerous communities. However, despite these constructions and preparedness efforts, the March 11 Tohoku earthquake caused numerous fatalities. The strong shaking lasted three minutes or longer, so all residents recognized it as the strongest and longest earthquake they had ever experienced. The tsunami inundated an enormous area of about 560 km2 across 35 cities along the coast of northeast Japan. To find out the reasons behind the high number of fatalities caused by the March 11 tsunami, we interviewed 150 tsunami survivors at public evacuation shelters in 7 cities, mainly in Iwate prefecture, in mid-April and early June 2011. Interviews lasted about 30 minutes or longer and focused on the survivors' evacuation behaviors and those they had observed. On the basis of the interviews, we found that residents' decisions not to evacuate immediately were partly due to, or influenced by, earthquake science results. Below are some of the factors that affected residents' decisions. 1. Earthquake hazard assessments turned out to be incorrect: expected earthquake magnitudes and resultant hazards in northeastern Japan, as assessed and publicized by the government, were significantly smaller than the actual Tohoku earthquake. 2. Many residents did not receive accurate tsunami warnings: the first tsunami warnings were too small compared with the actual tsunami heights. 3. Previous frequent warnings with overestimated tsunami heights influenced the behavior of the residents. 4. Many local residents above 55 years old experienced

  2. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases.

    Science.gov (United States)

    Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M

    2006-04-21

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN) and several non-parametric methods, which include the set association approach, combinatorial partitioning method (CPM), restricted partitioning method (RPM), multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods to approach association studies with large numbers of predictor variables. GPNN on the other hand may be a useful approach to select and model important predictors, but its performance to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset of predictors with an important contribution to disease. The combinatorial methods give more insight in combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association
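
    The baseline method the commentary discusses, logistic regression, can be sketched on a toy SNP data set (genotypes coded 0/1/2 copies of a risk allele; the data and effect below are invented). The sketch also illustrates the stated limitation: with p predictors estimated per observation, the fit degrades as p approaches the number of subjects.

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Plain gradient ascent on the log-likelihood, model with intercept."""
    w = [0.0] * (len(xs[0]) + 1)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))
            err = y - p
            w[0] += lr * err
            for j, xj in enumerate(x):
                w[j + 1] += lr * err * xj
    return w

# Toy cohort: carriers of SNP1 (column 0) are cases; SNP2 is noise.
xs = [[0, 1], [1, 0], [2, 1], [2, 0], [0, 0], [1, 1]]
ys = [0, 1, 1, 1, 0, 1]
w = fit_logistic(xs, ys)  # w = [intercept, effect_SNP1, effect_SNP2]
```
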

  3. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence.

    Science.gov (United States)

    Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram

    2017-03-13

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales.This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  4. Recreating Raven's: software for systematically generating large numbers of Raven-like matrix problems with normed properties.

    Science.gov (United States)

    Matzen, Laura E; Benz, Zachary O; Dixon, Kevin R; Posey, Jamie; Kroger, James K; Speed, Ann E

    2010-05-01

    Raven's Progressive Matrices is a widely used test for assessing intelligence and reasoning ability (Raven, Court, & Raven, 1998). Since the test is nonverbal, it can be applied to many different populations and has been used all over the world (Court & Raven, 1995). However, relatively few matrices are in the sets developed by Raven, which limits their use in experiments requiring large numbers of stimuli. For the present study, we analyzed the types of relations that appear in Raven's original Standard Progressive Matrices (SPMs) and created a software tool that can combine the same types of relations according to parameters chosen by the experimenter, to produce very large numbers of matrix problems with specific properties. We then conducted a norming study in which the matrices we generated were compared with the actual SPMs. This study showed that the generated matrices both covered and expanded on the range of problem difficulties provided by the SPMs.
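
    A generator of this kind can be sketched by applying one relation per attribute independently across a 3×3 grid, then withholding the bottom-right cell as the answer. The relation names and attributes below are invented for the sketch, not those of the authors' software.

```python
def apply_relation(relation, row, col, values):
    """Compute one attribute's value for a given cell from its relation type."""
    if relation == "constant_in_row":
        return values[row % len(values)]      # same value across each row
    if relation == "increment_across_row":
        return values[(row + col) % len(values)]
    if relation == "distribute_three":
        return values[(col + row) % 3]        # each value appears once per row
    raise ValueError(relation)

def generate_matrix(spec, attribute_values):
    """spec: dict attribute -> relation; returns a 3x3 grid of cell dicts."""
    return [[{attr: apply_relation(rel, r, c, attribute_values[attr])
              for attr, rel in spec.items()}
             for c in range(3)] for r in range(3)]

spec = {"shape": "distribute_three", "count": "increment_across_row"}
values = {"shape": ["circle", "square", "triangle"], "count": [1, 2, 3]}
grid = generate_matrix(spec, values)
answer = grid[2][2]   # the cell a test-taker would have to infer
```

    Varying which relations are combined, and how many attributes they apply to, is what lets such a tool produce large numbers of problems of controlled difficulty.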

  5. Extremely large magnetoresistance and Kohler's rule in PdSn4: A complete study of thermodynamic, transport, and band-structure properties

    Science.gov (United States)

    Jo, Na Hyun; Wu, Yun; Wang, Lin-Lin; Orth, Peter P.; Downing, Savannah S.; Manni, Soham; Mou, Dixiang; Johnson, Duane D.; Kaminski, Adam; Bud'ko, Sergey L.; Canfield, Paul C.

    2017-10-01

    The recently discovered material PtSn4 is known to exhibit extremely large magnetoresistance (XMR) that also manifests Dirac arc nodes on the surface. PdSn4 is isostructural to PtSn4 with the same electron count. We report on the physical properties of high-quality single crystals of PdSn4, including specific heat, temperature- and magnetic-field-dependent resistivity and magnetization, and electronic band-structure properties obtained from angle-resolved photoemission spectroscopy (ARPES). We observe that PdSn4 has physical properties that are qualitatively similar to those of PtSn4, but find also pronounced differences. Importantly, the Dirac arc node surface state of PtSn4 is gapped out for PdSn4. By comparing these similar compounds, we address the origin of the extremely large magnetoresistance in PdSn4 and PtSn4; based on detailed analysis of the magnetoresistivity ρ(H, T), we conclude that neither the carrier compensation nor the Dirac arc node surface state is the primary reason for the extremely large magnetoresistance. On the other hand, we find that, surprisingly, Kohler's rule scaling of the magnetoresistance, which describes a self-similarity of the field-induced orbital electronic motion across different length scales and is derived for a simple electronic response of metals to an applied magnetic field, is obeyed over the full range of temperatures and field strengths that we explore.
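
    Kohler's rule states that MR(H, T) = [ρ(H, T) − ρ(0, T)]/ρ(0, T) collapses onto a single curve when plotted against H/ρ(0, T). The check below is a sketch using a synthetic resistivity constructed to obey the rule exactly (with F(x) = a·x², a invented), not the article's measured data or analysis.

```python
def mr(rho_h, rho_0):
    """Magnetoresistance relative to the zero-field resistivity."""
    return (rho_h - rho_0) / rho_0

def kohler_variable(field, rho_0):
    """Kohler scaling variable H / rho(0, T)."""
    return field / rho_0

# Synthetic metal obeying Kohler's rule with F(x) = a * x^2:
a = 2.0e-3
def rho(field, rho_0):
    return rho_0 * (1.0 + a * (field / rho_0) ** 2)

# Two "temperatures" (different zero-field resistivities) land on one curve:
points = []
for rho_0 in (1.0, 5.0):
    for H in (1.0, 2.0, 4.0):
        points.append((kohler_variable(H, rho_0), mr(rho(H, rho_0), rho_0)))
```
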

  6. Extremely large magnetoresistance and Kohler's rule in PdSn4: A complete study of thermodynamic, transport, and band-structure properties

    International Nuclear Information System (INIS)

    Jo, Na Hyun; Wu, Yun; Wang, Lin-Lin; Orth, Peter P.; Downing, Savannah S.

    2017-01-01

    The recently discovered material PtSn4 is known to exhibit extremely large magnetoresistance (XMR) that also manifests Dirac arc nodes on the surface. PdSn4 is isostructural to PtSn4 with the same electron count. Here, we report on the physical properties of high-quality single crystals of PdSn4 including specific heat, temperature- and magnetic-field-dependent resistivity and magnetization, and electronic band-structure properties obtained from angle-resolved photoemission spectroscopy (ARPES). We observe that PdSn4 has physical properties that are qualitatively similar to those of PtSn4, but find also pronounced differences. Importantly, the Dirac arc node surface state of PtSn4 is gapped out for PdSn4. By comparing these similar compounds, we address the origin of the extremely large magnetoresistance in PdSn4 and PtSn4; based on detailed analysis of the magnetoresistivity ρ(H,T), we conclude that neither the carrier compensation nor the Dirac arc node surface state is the primary reason for the extremely large magnetoresistance. On the other hand, we also find that, surprisingly, Kohler's rule scaling of the magnetoresistance, which describes a self-similarity of the field-induced orbital electronic motion across different length scales and is derived for a simple electronic response of metals to an applied magnetic field, is obeyed over the full range of temperatures and field strengths that we explore.

  7. Evaluating the ClimEx Single Model Large Ensemble in Comparison with EURO-CORDEX Results of Seasonal Means and Extreme Precipitation Indicators

    Science.gov (United States)

    von Trentini, F.; Schmid, F. J.; Braun, M.; Brisette, F.; Frigon, A.; Leduc, M.; Martel, J. L.; Willkofer, F.; Wood, R. R.; Ludwig, R.

    2017-12-01

    Meteorological extreme events seem to be becoming more frequent, and the separation of natural climate variability from a clear climate change effect on these extreme events is gaining more and more interest. Since there is only one realisation of historical events, natural variability in terms of very long time series for a robust statistical analysis cannot be assessed from observation data alone. A new single model large ensemble (SMLE), developed for the ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec), is designed to overcome this lack of data by downscaling 50 members of the CanESM2 (RCP 8.5) with the Canadian CRCM5 regional model (using the EURO-CORDEX grid specifications) for the period 1950-2099 each, resulting in 7500 years of simulated climate. This allows for a better probabilistic analysis of rare and extreme events than any preceding dataset. Besides seasonal sums, several extreme indicators like R95pTOT, RX5day and others are calculated for the ClimEx ensemble and several EURO-CORDEX runs. This enables us to investigate the interaction between natural variability (as it appears in the CanESM2-CRCM5 members) and the climate change signal of those members for past, present and future conditions. Adding the EURO-CORDEX results to this, we can also assess the role of internal model variability (or natural variability) in climate change simulations. A first comparison shows similar magnitudes of variability of climate change signals between the ClimEx large ensemble and the CORDEX runs for some indicators, while for most indicators the spread of the SMLE is smaller than the spread of the different CORDEX models.

  8. Strong Law of Large Numbers for Countable Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degree

    Directory of Open Access Journals (Sweden)

    Bao Wang

    2014-01-01

    We study the strong law of large numbers for the frequencies of occurrence of states and ordered couples of states for countable Markov chains indexed by an infinite tree with uniformly bounded degree, which extends the corresponding results for countable Markov chains indexed by a Cayley tree and generalizes the related results for finite Markov chains indexed by a uniformly bounded tree.
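The flavour of such a strong law of large numbers can be illustrated on an ordinary (chain-indexed) two-state Markov chain, where the empirical frequency of each state converges to its stationary probability; the transition matrix below is an arbitrary example, not taken from the paper:

```python
import random

random.seed(0)
# Two-state Markov chain with transition probabilities P[i][j]:
P = [[0.9, 0.1],
     [0.2, 0.8]]
# The stationary distribution solves pi P = pi; here pi = (2/3, 1/3).

state, visits = 0, [0, 0]
n = 200_000
for _ in range(n):
    visits[state] += 1
    state = 0 if random.random() < P[state][0] else 1

freq0 = visits[0] / n
print(round(freq0, 3))  # close to 2/3, as the strong law of large numbers predicts
```

The tree-indexed case in the paper replaces the single path of visits with all vertices of a uniformly bounded-degree tree, but the convergence statement has the same shape.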

  9. Emergency repair of upper extremity large soft tissue and vascular injuries with flow-through anterolateral thigh free flaps.

    Science.gov (United States)

    Zhan, Yi; Fu, Guo; Zhou, Xiang; He, Bo; Yan, Li-Wei; Zhu, Qing-Tang; Gu, Li-Qiang; Liu, Xiao-Lin; Qi, Jian

    2017-12-01

    Complex extremity trauma commonly involves both soft tissue and vascular injuries. Traditional two-stage surgical repair may delay rehabilitation and functional recovery, as well as increase the risk of infection. We report a single-stage reconstructive surgical method that repairs soft tissue defects and vascular injuries with flow-through free flaps to improve functional outcomes. Between March 2010 and December 2016 in our hospital, 5 patients with severe upper extremity trauma received single-stage reconstructive surgery, in which a flow-through anterolateral thigh free flap was applied to repair soft tissue defects and vascular injuries simultaneously. Injured arteries were reconstructed with the distal trunk of the descending branch of the lateral circumflex femoral artery. A segment of adjacent vein was used if there was a second arterial injury. Patients were followed to evaluate their functional recovery, and received computed tomography angiography examinations to assess peripheral circulation. Two patients had post-operative thumb necrosis; one required amputation, and the other healed after debridement and abdominal pedicle flap repair. The other 3 patients had no major complications (infection, necrosis) of the recipient or donor sites after surgery. All the patients had achieved satisfactory functional recovery by the end of the follow-up period. Computed tomography angiography showed adequate circulation in the peripheral vessels. The success of these cases shows that one-step reconstructive surgery with flow-through anterolateral thigh free flaps can be a safe and effective treatment option for patients with complex upper extremity trauma with soft tissue defects and vascular injuries.

  10. Aerodynamic Effects of High Turbulence Intensity on a Variable-Speed Power-Turbine Blade With Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie B.; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of high inlet turbulence intensity on the aerodynamic performance of a variable speed power turbine blade are examined over large incidence and Reynolds number ranges. These results are compared to previous measurements made in a low turbulence environment. Both high and low turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The current study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Assessing the effects of turbulence at these large incidence and Reynolds number variations complements the existing database. Downstream total pressure and exit angle data were acquired for 10 incidence angles ranging from +15.8° to −51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial chord upstream of the blade row. The inlet turbulence levels ranged from 8 to 15 percent for the current study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated.

  11. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. We study the effect of spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  12. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz

    2015-11-12

    In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. We study the effect of spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.
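The kind of question asked here can be mimicked in a toy Monte Carlo: for a transmit-diversity (MISO) link over Rayleigh fading, estimate the outage probability as the antenna count grows and pick the smallest count meeting a target. The SNR, rate target, and trial count below are illustrative assumptions, not the paper's system model:

```python
import math
import random

random.seed(1)

def outage_prob(n_tx, snr=1.0, rate=0.5, trials=20_000):
    """Monte Carlo outage probability of an n_tx x 1 MISO link in Rayleigh fading."""
    outages = 0
    for _ in range(trials):
        # Sum of n_tx unit-mean exponential channel power gains |h_i|^2:
        gain = sum(random.gauss(0, 0.5 ** 0.5) ** 2 + random.gauss(0, 0.5 ** 0.5) ** 2
                   for _ in range(n_tx))
        capacity = math.log2(1.0 + snr * gain / n_tx)   # equal power split across antennas
        if capacity < rate:
            outages += 1
    return outages / trials

p1, p4 = outage_prob(1), outage_prob(4)
# Spatial diversity reduces the outage probability markedly:
assert p4 < p1
```

Sweeping `n_tx` upward until `outage_prob(n_tx)` drops below a target is the brute-force analogue of the minimum-antenna-count question the paper answers analytically.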

  13. Direct and large eddy simulation of turbulent heat transfer at very low Prandtl number: Application to lead–bismuth flows

    International Nuclear Information System (INIS)

    Bricteux, L.; Duponcheel, M.; Winckelmans, G.; Tiselj, I.; Bartosiewicz, Y.

    2012-01-01

    Highlights: ► We perform direct and hybrid large eddy simulations of high-Reynolds, low-Prandtl turbulent wall-bounded flows with heat transfer. ► We use state-of-the-art numerical methods with low energy dissipation and low dispersion. ► We use recent multiscale subgrid-scale models. ► Important results concerning the establishment of a near-wall modelling strategy in RANS are provided. ► The turbulent Prandtl number predicted by our simulations differs from that proposed by some correlations in the literature. - Abstract: This paper deals with the issue of modelling convective turbulent heat transfer of a liquid metal with a Prandtl number down to 0.01, which is the order of magnitude of lead–bismuth eutectic in a liquid metal reactor. This work presents a DNS (direct numerical simulation) and an LES (large eddy simulation) of a channel flow at two different Reynolds numbers, and the results are analysed in the frame of best-practice guidelines for RANS (Reynolds-averaged Navier–Stokes) computations used in industrial applications. They primarily show that the turbulent Prandtl number concept should be used with care and that even recently proposed correlations may not be sufficient.
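As a point of reference for why such correlations matter at very low Pr: the turbulent Prandtl number Pr_t = ν_t/α_t is often correlated with the turbulent Péclet number Pe_t = Pr·(ν_t/ν); one widely quoted fit is the Kays-type form Pr_t ≈ 0.85 + 0.7/Pe_t (constants vary between sources, so treat the numbers as indicative only, not as the paper's result):

```python
def prandtl_turbulent(pr, nut_over_nu):
    """Kays-type correlation Pr_t = 0.85 + 0.7 / Pe_t, with Pe_t = Pr * (nu_t / nu)."""
    pe_t = pr * nut_over_nu
    return 0.85 + 0.7 / pe_t

# Air-like fluid: Pr = 0.7, strong turbulence -> Pr_t near the classical ~0.85-0.9:
print(prandtl_turbulent(0.7, 100.0))    # 0.86
# Liquid metal: Pr = 0.01 -> Pr_t well above 1, so constant-Pr_t assumptions fail:
print(prandtl_turbulent(0.01, 100.0))   # 1.55
```

The jump in Pr_t for the liquid-metal case illustrates the abstract's warning: RANS practice that hardwires Pr_t ≈ 0.9 can badly misestimate heat transfer in lead–bismuth flows.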

  14. Statistical Model of Extreme Shear

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, Kurt Schaldemose

    2004-01-01

    In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge of wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of high-sampled full-scale time series measurements...... are consistent, given the inevitable uncertainties associated with the model as well as with the extreme value data analysis. Keywords: Statistical model, extreme wind conditions, statistical analysis, turbulence, wind loading, wind shear, wind turbines.

  15. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
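The three image-quality metrics used in the comparison are linked by the standard decomposition MSE = bias² + variance, evaluated per voxel over repeated reconstructions. A small self-contained illustration (the voxel values are arbitrary, not simulation output):

```python
# Repeated reconstructed values of one voxel whose true activity is 10.0:
estimates = [9.0, 10.5, 11.0, 9.5, 10.0]
truth = 10.0

n = len(estimates)
mean = sum(estimates) / n
bias = mean - truth
variance = sum((e - mean) ** 2 for e in estimates) / n        # population variance
mse = sum((e - truth) ** 2 for e in estimates) / n

# MSE decomposes exactly into squared bias plus variance:
assert abs(mse - (bias ** 2 + variance)) < 1e-12
```

Reporting bias and variance separately, as the paper does, therefore shows whether an algorithm's error comes from systematic offset or from statistical noise.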

  16. A Multilayer Secure Biomedical Data Management System for Remotely Managing a Very Large Number of Diverse Personal Healthcare Devices

    Directory of Open Access Journals (Sweden)

    KeeHyun Park

    2015-01-01

    In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal health devices (PHDs) is proposed. The system has the following characteristics: it supports international standard communication protocols to achieve interoperability; it is integrated in the sense that both a PHD communication system and a remote PHD management system work together as a single system; and, finally, it provides user/message authentication processes to securely transmit biomedical data measured by PHDs, based on the concept of a biomedical signature. Some experiments, including a stress test, have been conducted to show that the system constructed in this study performs very well even when a very large number of PHDs are used. For the stress test, up to 1,200 threads are made to represent the same number of PHD agents. The loss ratio of the ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. On the other hand, no message loss occurs in the multilayered system proposed in this study, which demonstrates the superiority of the multilayered system to the normal system with regard to heavy traffic.

  17. Statistical Model of Extreme Shear

    DEFF Research Database (Denmark)

    Hansen, Kurt Schaldemose; Larsen, Gunner Chr.

    2005-01-01

    In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge of wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of full-scale measurements recorded with a high sampling rate...

  18. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastic...... within the yield limits and ideally plastic outside these without accumulating eigenstresses. Within the elastic domain the frame is modeled as a linearly damped oscillator. The white noise excitation acts on the mass of the first floor making the movement of the elastic bottom floors simulate a ground...

  19. Atomic Number Dependence of Hadron Production at Large Transverse Momentum in 300 GeV Proton--Nucleus Collisions

    Science.gov (United States)

    Cronin, J. W.; Frisch, H. J.; Shochet, M. J.; Boymond, J. P.; Mermod, R.; Piroue, P. A.; Sumner, R. L.

    1974-07-15

    In an experiment at the Fermi National Accelerator Laboratory we have compared the production of large transverse momentum hadrons from targets of W, Ti, and Be bombarded by 300 GeV protons. The hadron yields were measured at 90 degrees in the proton-nucleon c.m. system with a magnetic spectrometer equipped with 2 Cerenkov counters and a hadron calorimeter. The production cross-sections have a dependence on the atomic number A that grows with p_T, eventually leveling off proportional to A^1.1.
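The quoted A-dependence is a power law, σ(A) = σ0·A^α, so α is simply the slope of a straight-line fit in log-log coordinates. A sketch with synthetic yields (not the experiment's data) for Be, Ti, and W targets:

```python
import math

A = [9, 48, 184]                       # mass numbers of Be, Ti, W targets
sigma = [3.0 * a ** 1.1 for a in A]    # synthetic yields following A^1.1 exactly

# Least-squares slope of log(sigma) vs log(A) recovers the exponent alpha:
x = [math.log(a) for a in A]
y = [math.log(s) for s in sigma]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
alpha = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
print(round(alpha, 3))  # 1.1
```

With real yields the fitted α would carry uncertainty and, as the abstract notes, would itself grow with p_T before saturating near 1.1.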

  20. Summary of experience from a large number of construction inspections; Wind power plant projects; Erfarenhetsaaterfoering fraan entreprenadbesiktningar

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Bertil; Holmberg, Rikard

    2010-08-15

    This report presents a summary of experience from a large number of construction inspections of wind power projects. The working method is based on the collection of construction experience in the form of questionnaires. The questionnaires were supplemented by a number of in-depth interviews to understand in more detail what is perceived to be a problem and whether there were suggestions for improvements. The results in this report are based on inspection protocols from 174 wind turbines, which corresponds to about one-third of the power plants built in the time period. In total the questionnaires included 4683 inspection remarks as well as about one hundred free-text comments. 52 of the 174 inspected power stations were rejected, corresponding to 30%. It has not been possible to identify any overrepresented type of remark as a main cause of rejection; rejection is usually based on a total number of remarks that is too large. The average number of remarks for a power plant is 27. Most power stations have between 20 and 35 remarks. The most common remarks concern shortcomings in marking and documentation. These are easily adjusted, and may be regarded as less serious. There are, however, a number of remarks which are recurrent and quite serious, mainly regarding gearboxes, education and lightning protection. Usually these are also easily adjusted, but the consequences if not corrected can be very large: either shortened life of expensive components, e.g. oil problems in gearboxes, or increased probability of serious accidents, e.g. maladjusted lightning protection. In the report, comparisons between power stations of various construction periods, sizes, suppliers, geography and topography are also presented. The general conclusion is that the differences are small. The results of the evaluation of questionnaires correspond well with the results of the in-depth interviews with clients.
The problem that clients agreed upon as the greatest is the lack

  1. Extended data analysis strategies for high resolution imaging MS : new methods to deal with extremely large image hyperspectral datasets

    NARCIS (Netherlands)

    Klerk, L.A.; Broersen, A.; Fletcher, I.W.; Liere, van R.; Heeren, R.M.A.

    2007-01-01

    The large size of the hyperspectral datasets that are produced with modern mass spectrometric imaging techniques makes it difficult to analyze the results. Unsupervised statistical techniques are needed to extract relevant information from these datasets and reduce the data into a surveyable

  2. Pinpointing Needles in Giant Haystacks: Use of Text Mining to Reduce Impractical Screening Workload in Extremely Large Scoping Reviews

    Science.gov (United States)

    Shemilt, Ian; Simon, Antonia; Hollands, Gareth J.; Marteau, Theresa M.; Ogilvie, David; O'Mara-Eves, Alison; Kelly, Michael P.; Thomas, James

    2014-01-01

    In scoping reviews, boundaries of relevant evidence may be initially fuzzy, with refined conceptual understanding of interventions and their proposed mechanisms of action an intended output of the scoping process rather than its starting point. Electronic searches are therefore sensitive, often retrieving very large record sets that are…

  3. On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Daniel Pettersson

    2016-01-01

    later the growing importance of transnational agencies and international, regional and national assessments. How to reference this article: Pettersson, D., Popkewitz, T. S., & Lindblad, S. (2016). On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments. Espacio, Tiempo y Educación, 3(1), 177-202. doi: http://dx.doi.org/10.14516/ete.2016.003.001.10

  4. A Genome-Wide Association Study in Large White and Landrace Pig Populations for Number Piglets Born Alive

    Science.gov (United States)

    Bergfelder-Drüing, Sarah; Grosse-Brinkhaus, Christine; Lind, Bianca; Erbe, Malena; Schellander, Karl; Simianer, Henner; Tholen, Ernst

    2015-01-01

    The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between Large White and Landrace breed were detected. PMID:25781935

  5. A genome-wide association study in large white and landrace pig populations for number piglets born alive.

    Directory of Open Access Journals (Sweden)

    Sarah Bergfelder-Drüing

    The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between the Large White and Landrace breeds were detected.

  6. Aerodynamic Effects of Turbulence Intensity on a Variable-Speed Power-Turbine Blade with Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie Brynn; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of inlet turbulence intensity on the aerodynamic performance of a variable speed power turbine blade are examined over large incidence and Reynolds number ranges. Both high and low turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The high turbulence study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Downstream total pressure and exit angle data were acquired for ten incidence angles ranging from +15.8° to −51.0°. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial chord upstream of the blade row. The inlet turbulence levels ranged from 0.25 to 0.4 percent for the low Tu tests and 8 to 15 percent for the high Tu study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated. At the extreme positive and negative incidence angles, the data show substantial differences in the exit flow field. 
These differences are attributable to both the higher inlet Tu directly and to the thinner inlet endwall

  7. The long string at the stretched horizon and the entropy of large non-extremal black holes

    International Nuclear Information System (INIS)

    Mertens, Thomas G.; Verschelde, Henri; Zakharov, Valentin I.

    2016-01-01

    We discuss how long strings can arise at the stretched horizon and how they can account for the Bekenstein-Hawking entropy. We use the thermal scalar field theory to derive the asymptotic density of states and corresponding stress tensor of a microcanonical long string gas in Rindler space. We show that the equality of the Hagedorn and Hawking temperatures gives rise to the tree-level entropy of large black holes in accordance with the Bekenstein-Hawking-Wald formula.

  8. The long string at the stretched horizon and the entropy of large non-extremal black holes

    Energy Technology Data Exchange (ETDEWEB)

    Mertens, Thomas G. [Joseph Henry Laboratories, Princeton University,Washington Road, Princeton, NJ 08544 (United States); Ghent University, Department of Physics and Astronomy,Krijgslaan, 281-S9, 9000 Gent (Belgium); Verschelde, Henri [Ghent University, Department of Physics and Astronomy,Krijgslaan, 281-S9, 9000 Gent (Belgium); Zakharov, Valentin I. [ITEP,B. Cheremushkinskaya 25, Moscow 117218 (Russian Federation); Moscow Institute Phys. & Technol.,Dolgoprudny, Moscow Region 141700 (Russian Federation); School of Biomedicine, Far Eastern Federal University,Sukhanova str 8, Vladivostok 690950 (Russian Federation)

    2016-02-04

    We discuss how long strings can arise at the stretched horizon and how they can account for the Bekenstein-Hawking entropy. We use the thermal scalar field theory to derive the asymptotic density of states and corresponding stress tensor of a microcanonical long string gas in Rindler space. We show that the equality of the Hagedorn and Hawking temperatures gives rise to the tree-level entropy of large black holes in accordance with the Bekenstein-Hawking-Wald formula.

  9. Factors associated with self-reported number of teeth in a large national cohort of Thai adults

    Directory of Open Access Journals (Sweden)

    Yiengprugsawan Vasoontara

    2011-11-01

    Background: Oral health in later life results from an individual's lifelong accumulation of experiences at the personal, community and societal levels. There is little information relating oral health outcomes to risk factors in Asian middle-income settings such as Thailand today. Methods: Data derived from a cohort of 87,134 adults enrolled in Sukhothai Thammathirat Open University who completed self-administered questionnaires in 2005. Cohort members are aged between 15 and 87 years and resided throughout Thailand. This is a large study of self-reported number of teeth among Thai adults. Bivariate and multivariate logistic regressions were used to analyse factors associated with self-reported number of teeth. Results: After adjusting for covariates, being female (OR = 1.28), older age (OR = 10.6), having low income (OR = 1.45), having lower education (OR = 1.33), and being a lifetime urban resident (OR = 1.37) were statistically associated with the number of self-reported teeth. Conclusions: This study addresses the gap in knowledge on factors associated with self-reported number of teeth. The promotion of healthy childhoods and adult lifestyles is an important public health intervention to increase tooth retention in middle and older age.

  10. SWAP OBSERVATIONS OF THE LONG-TERM, LARGE-SCALE EVOLUTION OF THE EXTREME-ULTRAVIOLET SOLAR CORONA

    Energy Technology Data Exchange (ETDEWEB)

    Seaton, Daniel B.; De Groof, Anik; Berghmans, David; Nicula, Bogdan [Royal Observatory of Belgium-SIDC, Avenue Circulaire 3, B-1180 Brussels (Belgium); Shearer, Paul [Department of Mathematics, 2074 East Hall, University of Michigan, 530 Church Street, Ann Arbor, MI 48109-1043 (United States)

    2013-11-01

    The Sun Watcher with Active Pixels and Image Processing (SWAP) EUV solar telescope on board the Project for On-Board Autonomy 2 spacecraft has been regularly observing the solar corona in a bandpass near 17.4 nm since 2010 February. With a field of view of 54 × 54 arcmin, SWAP provides the widest-field images of the EUV corona available from the perspective of the Earth. By carefully processing and combining multiple SWAP images, it is possible to produce low-noise composites that reveal the structure of the EUV corona to relatively large heights. A particularly important step in this processing was to remove instrumental stray light from the images by determining and deconvolving SWAP's point-spread function from the observations. In this paper, we use the resulting images to conduct the first-ever study of the evolution of the large-scale structure of the corona observed in the EUV over a three year period that includes the complete rise phase of solar cycle 24. Of particular note is the persistence over many solar rotations of bright, diffuse features composed of open magnetic fields that overlie polar crown filaments and extend to large heights above the solar surface. These features appear to be related to coronal fans, which have previously been observed in white-light coronagraph images and, at low heights, in the EUV. We also discuss the evolution of the corona at different heights above the solar surface and the evolution of the corona over the course of the solar cycle by hemisphere.

  11. Analysis of a large number of clinical studies for breast cancer radiotherapy: estimation of radiobiological parameters for treatment planning

    International Nuclear Information System (INIS)

    Guerrero, M; Li, X Allen

    2003-01-01

    Numerous studies of early-stage breast cancer treated with breast-conserving surgery (BCS) and radiotherapy (RT) have been published in recent years. Both external beam radiotherapy (EBRT) and/or brachytherapy (BT) with different fractionation schemes are currently used. Present RT practice is largely based on empirical experience and lacks a reliable modelling tool to compare different RT modalities or to design new treatment strategies. The purpose of this work is to derive a plausible set of radiobiological parameters that can be used for RT treatment planning. The derivation is based on existing clinical data and is consistent with the analysis of a large number of published clinical studies on early-stage breast cancer. A large number of published clinical studies on the treatment of early breast cancer with BCS plus RT (including whole-breast EBRT with or without a boost to the tumour bed, whole-breast EBRT alone, and brachytherapy alone) and RT alone are compiled and analysed. The linear-quadratic (LQ) model is used in the analysis. Three of these clinical studies are selected to derive a plausible set of LQ parameters. The potential doubling time Tpot is set a priori in the derivation according to in vitro measurements from the literature; the impact of assuming a lower or higher Tpot is investigated. The effects of inhomogeneous dose distributions are considered using clinically representative dose-volume histograms. The derived LQ parameters are used to compare a large number of clinical studies using different regimes (e.g., RT modality and/or different fractionation schemes with different prescribed doses) in order to validate their applicability. The values of the equivalent uniform dose (EUD) and biologically effective dose (BED) are used as a common metric to compare the biological effectiveness of each treatment regime. We have obtained a plausible set of radiobiological parameters for breast cancer.
This set of parameters is consistent with in vitro
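The BED comparison described in the abstract above can be sketched with the standard linear-quadratic expression BED = n·d·(1 + d/(α/β)). This is a minimal illustration of the metric, not the study's fitted parameters; the α/β value and the regimens below are illustrative assumptions.

```python
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically effective dose (Gy) under the linear-quadratic model,
    ignoring repopulation: BED = n * d * (1 + d / (alpha/beta))."""
    d = dose_per_fraction
    return n_fractions * d * (1.0 + d / alpha_beta)

def eqd2(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Equivalent total dose delivered in 2 Gy fractions with the same BED."""
    return bed(n_fractions, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

# Compare a conventional and a hypofractionated whole-breast regimen,
# assuming an illustrative alpha/beta of 4 Gy (an assumption for this
# sketch, not a value derived in the study above).
conventional = bed(25, 2.0, 4.0)   # 25 fractions of 2 Gy
hypo = bed(16, 2.66, 4.0)          # 16 fractions of 2.66 Gy
```

With these assumed inputs the two regimens yield similar but not identical BED values, which is exactly the kind of comparison the derived LQ parameters enable across fractionation schemes.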

  12. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae).

    Science.gov (United States)

    Krahulcová, Anna; Trávnícek, Pavel; Krahulec, František; Rejmánek, Marcel

    2017-04-01

    Aesculus L. (horse chestnut, buckeye) is a genus of 12-19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and analyse the relationship between genome size and seed mass in this genus. Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to just one reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 0.955 pg 2C⁻¹ in A. parviflora to 1.275 pg 2C⁻¹ in A. glabra var. glabra. The chromosome number of 2n = 40 seems conclusively to be the universal number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also within woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between the 2C values and seed masses in examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is a consistent positive association between larger genome size and larger seed mass within individual lineages. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved.

  13. Investigating the Randomness of Numbers

    Science.gov (United States)

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  14. Evaluation of two sweeping methods for estimating the number of immature Aedes aegypti (Diptera: Culicidae) in large containers

    Directory of Open Access Journals (Sweden)

    Margareth Regina Dibo

    2013-07-01

    Full Text Available Introduction Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods III/IV instars and pupae at a 9:1 ratio were placed in three types of containers, each with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results The five-sweep netting technique was more suitable for drums and water-tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.

  15. Production of large number of water-cooled excitation coils with improved techniques for multipole magnets of INDUS -2

    International Nuclear Information System (INIS)

    Karmarkar, M.G.; Sreeramulu, K.; Kulshreshta, P.K.

    2003-01-01

    Accelerator multipole magnets are characterized by high field gradients and are powered with relatively high-current excitation coils. Due to space limitations in the magnet core/poles, a compact coil geometry is also necessary. The coils are made of several insulated turns of hollow copper conductor. The high current densities require cooling with low-conductivity water. Additionally, during operation the coils are subjected to thermal fatigue stresses. A large number of coils (650 in total) having different geometries were required for all multipole magnets such as quadrupoles (QP) and sextupoles (SP). Improved techniques for winding, insulation and epoxy consolidation were developed in-house at M D Lab, and all coils have been successfully made. The improved technology, the production techniques adopted for the magnet coils and their inspection are briefly discussed in this paper. (author)

  16. Product-selective blot: a technique for measuring enzyme activities in large numbers of samples and in native electrophoresis gels

    International Nuclear Information System (INIS)

    Thompson, G.A.; Davies, H.M.; McDonald, N.

    1985-01-01

    A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application

  17. Large numbers of cold positronium atoms created in laser-selected Rydberg states using resonant charge exchange

    CERN Document Server

    McConnell, R; Kolthammer, WS; Richerme, P; Müllers, A; Walz, J; Grzonka, D; Zielinski, M; Fitzakerley, D; George, MC; Hessels, EA; Storry, CH; Weel, M

    2016-01-01

    Lasers are used to control the production of highly excited positronium atoms (Ps*). The laser light excites Cs atoms to Rydberg states that have a large cross section for resonant charge-exchange collisions with cold trapped positrons. For each trial with 30 million trapped positrons, more than 700 000 of the created Ps* have trajectories near the axis of the apparatus, and are detected using Stark ionization. This number of Ps* is 500 times higher than realized in an earlier proof-of-principle demonstration (2004 Phys. Lett. B 597 257). A second charge exchange of these near-axis Ps* with trapped antiprotons could be used to produce cold antihydrogen, and this antihydrogen production is expected to be increased by a similar factor.

  18. A Theory of Evolving Natural Constants Based on the Unification of General Theory of Relativity and Dirac's Large Number Hypothesis

    International Nuclear Information System (INIS)

    Peng Huanwu

    2005-01-01

    Taking Dirac's large number hypothesis as true, we have shown [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703] the inconsistency of applying Einstein's theory of general relativity with fixed gravitation constant G to cosmology, and a modified theory for varying G is found, which reduces to Einstein's theory outside the gravitating body for phenomena of short duration in small distances, thereby agreeing with all the crucial tests formerly supporting Einstein's theory. The modified theory, when applied to the usual homogeneous cosmological model, gives rise to a variable cosmological tensor term determined by the derivatives of G, in place of the cosmological constant term usually introduced ad hoc. Without any free parameter, the theoretical Hubble relation obtained from the modified theory appears consistent with observations, as Dr. Wang's preliminary analysis of the recent data indicates [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703]. As a complement to that work, we study in this paper the modification of electromagnetism due to Dirac's large number hypothesis in more detail, to show that the approximation of geometric optics still leads to null geodesics for the path of light and that the general relation between the luminosity distance and the proper geometric distance remains valid in our theory as in Einstein's theory, and we give the equations for a homogeneous cosmological model involving matter plus electromagnetic radiation. Finally, we consider the impact of the modification on quantum mechanics and statistical mechanics, and arrive at a systematic theory of evolving natural constants, including Planck's h-bar as well as Boltzmann's kB, by finding out their cosmologically combined counterparts with factors of appropriate powers of G that may remain truly constant over cosmologically long times.

  19. A very large number of GABAergic neurons are activated in the tuberal hypothalamus during paradoxical (REM) sleep hypersomnia.

    Directory of Open Access Journals (Sweden)

    Emilie Sapin

    Full Text Available We recently discovered, using Fos immunostaining, that the tuberal and mammillary hypothalamus contain a massive population of neurons specifically activated during paradoxical sleep (PS) hypersomnia. We further showed that some of the activated neurons of the tuberal hypothalamus express the melanin-concentrating hormone (MCH) neuropeptide and that icv injection of MCH induces a strong increase in PS quantity. However, the chemical nature of the majority of the neurons activated during PS had not been characterized. To determine whether these neurons are GABAergic, we combined in situ hybridization of GAD(67) mRNA with immunohistochemical detection of Fos in control, PS-deprived and PS-hypersomniac rats. We found that 74% of the very large population of Fos-labeled neurons located in the tuberal hypothalamus after PS hypersomnia were GAD-positive. We further demonstrated, combining MCH immunohistochemistry and GAD(67) in situ hybridization, that 85% of the MCH neurons were also GAD-positive. Finally, based on the number of Fos-ir/GAD(+), Fos-ir/MCH(+), and GAD(+)/MCH(+) double-labeled neurons counted from three sets of double-staining, we uncovered that around 80% of the large number of Fos-ir/GAD(+) neurons located in the tuberal hypothalamus after PS hypersomnia do not contain MCH. Based on these and previous results, we propose that the non-MCH Fos/GABAergic neuronal population could be involved in PS induction and maintenance, while the Fos/MCH/GABAergic neurons could be involved in the homeostatic regulation of PS. Further investigations will be needed to corroborate this original hypothesis.

  20. Eosinophils may play regionally disparate roles in influencing IgA(+) plasma cell numbers during large and small intestinal inflammation.

    Science.gov (United States)

    Forman, Ruth; Bramhall, Michael; Logunova, Larisa; Svensson-Frej, Marcus; Cruickshank, Sheena M; Else, Kathryn J

    2016-05-31

    Eosinophils are innate immune cells present in the intestine during steady-state conditions. An intestinal eosinophilia is a hallmark of many infections, and an accumulation of eosinophils is also observed in the intestine during inflammatory disorders. Classically, the function of eosinophils has been associated with tissue destruction, due to the release of cytotoxic granule contents. However, recent evidence has demonstrated that the eosinophil plays a more diverse role in the immune system than previously acknowledged, including shaping adaptive immune responses and providing plasma cell survival factors during the steady state. Importantly, it is known that there are regional differences in the underlying immunology of the small and large intestine, but whether there are differences in the context of the intestinal eosinophil in the steady state or during inflammation is not known. Our data demonstrate that there are fewer IgA(+) plasma cells in the small intestine of eosinophil-deficient ΔdblGATA-1 mice compared to eosinophil-sufficient wild-type mice, with the difference becoming significant post-infection with Toxoplasma gondii. Remarkably, and in complete contrast, the absence of eosinophils in the inflamed large intestine does not impact IgA(+) cell numbers during steady state, and is associated with a significant increase in IgA(+) cells post-infection with Trichuris muris compared to wild-type mice. Thus, the intestinal eosinophil appears to be less important in sustaining the IgA(+) cell pool in the large intestine than in the small intestine, and in fact our data suggest that eosinophils play an inhibitory role. The dichotomy in the influence of the eosinophil over small and large intestinal IgA(+) cells did not depend on differences in plasma cell growth factors, recruitment potential or proliferation within the different regions of the gastrointestinal tract (GIT).
We demonstrate for the first time that there are regional differences in the requirement of

  1. Investigating the Variability in Cumulus Cloud Number as a Function of Subdomain Size and Organization using large-domain LES

    Science.gov (United States)

    Neggers, R.

    2017-12-01

    Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to deal, both conceptually and practically, with this situation. A potential way forward is offered by schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include i) the introduction of scale-awareness at the foundation of the scheme, and ii) the possibility of size-filtering parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape and its range, but also variability in cloud number that can appear due to i) subsampling effects and ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a certain size is found to decrease with the subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes.
This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, favoring or inhibiting their formation (as illustrated by the attached figure of cloud mask

  2. The necessity of and policy suggestions for implementing a limited number of large scale, fully integrated CCS demonstrations in China

    International Nuclear Information System (INIS)

    Li Zheng; Zhang Dongjie; Ma Linwei; West, Logan; Ni Weidou

    2011-01-01

    CCS is seen as an important and strategic technology option for China to reduce its CO2 emissions, and it has received tremendous attention both around the world and in China. Scholars are divided on the role CCS should play, making the future of CCS in China highly uncertain. This paper presents the overall circumstances for CCS development in China, including the threats and opportunities for large-scale deployment of CCS, the initial barriers and advantages that China currently possesses, and the current progress of CCS demonstration in China. The paper proposes the implementation of a limited number of large-scale, fully integrated CCS demonstration projects and explains the potential benefits that could be garnered. The problems with China's current CCS demonstration work are analyzed, and some targeted policies are proposed based on those observations. These policy suggestions can effectively solve these problems, help China realize the benefits of CCS demonstration sooner, and make a great contribution to China's major CO2 reduction mission. - Highlights: → We analyze the overall circumstances for CCS development in China in detail. → China can garner multiple benefits by conducting several large, integrated CCS demos. → We present the current progress in CCS demonstration in China in detail. → Some problems exist with China's current CCS demonstration work. → Some focused policies are suggested to improve CCS demonstration in China.

  3. Fluctuations of nuclear cross sections in the region of strong overlapping resonances and at large number of open channels

    International Nuclear Information System (INIS)

    Kun, S.Yu.

    1985-01-01

    On the basis of the symmetrized Simonius representation of the S matrix, the statistical properties of its fluctuating component in the presence of direct reactions are investigated. The case is considered where the resonance levels are strongly overlapping and there are many open channels, assuming that the compound-nucleus cross sections which couple different channels are equal. It is shown that, using the averaged unitarity condition on the real energy axis, one can eliminate both resonance-resonance and channel-channel correlations from the partial transition amplitudes. As a result, we derive the basic points of the Ericson fluctuation theory of nuclear cross sections, independently of the relation between the resonance overlapping and the number of open channels, and the validity of the Hauser-Feshbach model is established. If the number of open channels is large, the time of uniform population of compound-nucleus configurations, for an open excited nuclear system, is much smaller than the Poincaré time. The lifetime of the compound nucleus is discussed.

  4. Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph

    Science.gov (United States)

    Xue, Xiaofeng

    2017-11-01

    In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph Cn with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. For the SIR model, each vertex is in one of three states: 'susceptible', 'infective' and 'removed'. An infective vertex infects a given susceptible neighbor at a rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there is no removed vertex and the number of infective vertices follows a binomial distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions HS(ψt), HV(ψt) for t ≥ 0 and show that for any t ≥ 0, HS(ψt) is the limit proportion of susceptible vertices and HV(ψt) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t as n grows to infinity.
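The model described in this abstract is straightforward to simulate. The following sketch uses a simple discrete-time approximation with small step dt; the rate normalization β/n and the weight distribution are illustrative assumptions, not choices made in the paper.

```python
import random

def simulate_sir(n=200, p=0.05, theta=0.1, mu=1.0, beta=0.5,
                 dt=0.01, t_max=5.0, seed=1):
    """Discrete-time approximation of the SIR model sketched above:
    an Erdos-Renyi graph G(n, p), i.i.d. positive vertex weights rho,
    infection across an edge at rate proportional to the product of the
    endpoint weights (scaled by beta/n, an assumption for this sketch),
    and removal of infectives at constant rate mu."""
    rng = random.Random(seed)
    rho = [rng.uniform(0.5, 1.5) for _ in range(n)]  # vertex weights (assumed law)
    # Build G(n, p): keep each edge of the complete graph with probability p.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    # States: 0 susceptible, 1 infective, 2 removed.
    # Initially each vertex is infective with probability theta,
    # so the infective count is B(n, theta), as in the abstract.
    state = [1 if rng.random() < theta else 0 for _ in range(n)]
    t = 0.0
    while t < t_max:
        infections, recoveries = [], []
        for v in range(n):
            if state[v] != 1:
                continue
            if rng.random() < mu * dt:          # constant removal rate
                recoveries.append(v)
            for u in adj[v]:                    # weight-proportional infection
                if state[u] == 0 and rng.random() < beta / n * rho[v] * rho[u] * dt:
                    infections.append(u)
        for u in infections:
            state[u] = 1
        for v in recoveries:
            state[v] = 2
        t += dt
    return state.count(0), state.count(1), state.count(2)

s, i, r = simulate_sir()
```

Averaging the susceptible fraction over many runs and increasing n is one way to observe empirically the law-of-large-numbers behavior that the paper proves.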

   5. Explaining the large numbers by a hierarchy of "universes": a unified theory of strong and gravitational interactions

    International Nuclear Information System (INIS)

    Caldirola, P.; Recami, E.

    1978-01-01

    By assuming covariance of physical laws under (discrete) dilatations, strong and gravitational interactions have been described in a unified way. In terms of the (additional, discrete) "dilatational" degree of freedom, our cosmos as well as hadrons can be considered as different states of the same system, or rather as similar systems. Moreover, a discrete hierarchy can be defined of "universes" which are governed by force fields with strengths inversely proportional to the "universe" radii. Inside each "universe" an equivalence principle holds, so that its characteristic field can be geometrized there. It is thus easy to derive a whole "numerology", i.e. relations among numbers analogous to the so-called Weyl-Eddington-Dirac "large numbers". For instance, the "Planck mass" happens to be nothing but the (average) magnitude of the strong charge of the hadron quarks. However, our "numerology" connects the (gravitational) macrocosmos with the (strong) microcosmos, rather than with the electromagnetic ones (as, e.g., in Dirac's version). Einstein-type scaled equations (with "cosmological" term) are suggested for the hadron interior, which - incidentally - yield a (classical) quark confinement in a very natural way and are compatible with "asymptotic freedom". At last, within a "bi-scale" theory, further equations are proposed that provide a priori a classical field theory of strong interactions (between different hadrons). The relevant sections are 5.2, 7 and 8. (author)

  6. Modification of the large-scale features of high Reynolds number wall turbulence by passive surface obtrusions

    Energy Technology Data Exchange (ETDEWEB)

    Monty, J.P.; Lien, K.; Chong, M.S. [University of Melbourne, Department of Mechanical Engineering, Parkville, VIC (Australia); Allen, J.J. [New Mexico State University, Department of Mechanical Engineering, Las Cruces, NM (United States)

    2011-12-15

    A high Reynolds number boundary-layer wind-tunnel facility at New Mexico State University was fitted with a regularly distributed braille surface. The surface was such that braille dots were closely packed in the streamwise direction and sparsely spaced in the spanwise direction. This novel surface had an unexpected influence on the flow: the energy of the very large-scale features of wall turbulence (approximately six times the boundary-layer thickness in length) became significantly attenuated, even into the logarithmic region. To the authors' knowledge, this is the first experimental study to report a modification of 'superstructures' in a rough-wall turbulent boundary layer. The result gives rise to the possibility that flow control through very small, passive surface roughness may be possible at high Reynolds numbers, without the prohibitive drag penalty anticipated heretofore. Evidence was also found for the uninhibited existence of the near-wall cycle, well known to smooth-wall-turbulence researchers, in the spanwise space between roughness elements. (orig.)

  7. Development and application of an optogenetic platform for controlling and imaging a large number of individual neurons

    Science.gov (United States)

    Mohammed, Ali Ibrahim Ali

    The understanding and treatment of brain disorders, as well as the development of intelligent machines, is hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have given neuroscientists unprecedented precision to excite, inhibit and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators now makes it possible to analyze a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge optogenetic molecular sensors, which are ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and a spatial resolution approaching the Abbe diffraction limit of the fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while a mouse performed a memory task, to investigate how those individual neurons related to behavior. In addition, we tested our optical platform by investigating transient neural-network changes upon mechanical perturbation related to blast injuries. In this experiment, all blast-exposed mice showed a consistent change in the neural network: a small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity.
Finally, using optogenetic silencer to control selective motor cortex neurons, we examined their contributions to the network pathology of basal ganglia related to

  8. Extremely large anthropogenic-aerosol contribution to total aerosol load over the Bay of Bengal during winter season

    Directory of Open Access Journals (Sweden)

    D. G. Kaskaoutis

    2011-07-01

    Full Text Available Ship-borne observations of spectral aerosol optical depth (AOD) have been carried out over the entire Bay of Bengal (BoB) as part of the W-ICARB cruise campaign during the period 27 December 2008–30 January 2009. The results reveal a pronounced temporal and spatial variability in the optical characteristics of aerosols, mainly due to anthropogenic emissions and their dispersion controlled by local meteorology. The highest aerosol amount, with mean AOD500 > 0.4, and even above 1.0 on specific days, is found close to the coastal regions in the western and northern parts of BoB. In these regions the Ångström exponent is also found to be high (~1.2–1.25), indicating transport of strong anthropogenic emissions from continental regions, while very high AOD500 (0.39±0.07) and α380–870 values (1.27±0.09) are found over the eastern BoB. Apart from the large α380–870 values, an indication of strong fine-mode dominance is also observed from the AOD curvature, which is negative in the vast majority of the cases, suggesting dominance of an anthropogenic-pollution aerosol type. On the other hand, clean maritime conditions are rather rare over the region, while the aerosol types are further examined through a classification scheme based on the relationship between α and dα. It was found that even for the same α values the fine-mode dominance is larger for higher AODs, showing the strong continental influence over the marine environment of BoB. Furthermore, there is also evidence of aerosol-size growth under more turbid conditions, indicative of coagulation and/or humidification over specific BoB regions. The results obtained using the OPAC model show a significant fraction of soot aerosols (~6%–8%) over the eastern and northwestern BoB, while coarse-mode sea-salt particles are found to dominate in the southern parts of BoB.
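The α380–870 values quoted in this abstract are Ångström exponents derived from AOD measured at two wavelengths. A minimal sketch of the standard two-wavelength formula (an illustration of the quantity, not the campaign's actual retrieval pipeline):

```python
import math

def angstrom_exponent(aod1: float, lam1: float, aod2: float, lam2: float) -> float:
    """Two-wavelength Angstrom exponent:
    alpha = -ln(AOD1 / AOD2) / ln(lam1 / lam2).
    Larger alpha indicates smaller (fine-mode) particles."""
    return -math.log(aod1 / aod2) / math.log(lam1 / lam2)

# Synthetic check: for a pure power-law spectrum AOD(lam) ~ lam**(-1.27),
# the two-wavelength formula recovers the exponent exactly.
# (The 0.39 amplitude and 500 nm reference are illustrative assumptions.)
aod380 = 0.39 * (380.0 / 500.0) ** -1.27
aod870 = 0.39 * (870.0 / 500.0) ** -1.27
alpha = angstrom_exponent(aod380, 380.0, aod870, 870.0)  # ~1.27
```

The negative AOD curvature mentioned in the abstract is a second-order refinement of this same spectral fit, obtained by fitting ln(AOD) against ln(λ) with a quadratic rather than a straight line.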

  9. A short-term extremely low frequency electromagnetic field exposure increases circulating leukocyte numbers and affects HPA-axis signaling in mice

    NARCIS (Netherlands)

    Kleijn, de Stan; Ferwerda, Gerben; Wiese, Michelle; Trentelman, Jos; Cuppen, Jan; Kozicz, Tamas; Jager, de Linda; Hermans, Peter W.M.; Kemenade, van Lidy

    2016-01-01

    There is still uncertainty whether extremely low frequency electromagnetic fields (ELF-EMF) can induce health effects like immunomodulation. Despite evidence obtained in vitro, an unambiguous association has not yet been established in vivo. Here, mice were exposed to ELF-EMF for 1, 4, and 24

  10. Large-Scale Skin Resurfacing of the Upper Extremity in Pediatric Patients Using a Pre-Expanded Intercostal Artery Perforator Flap.

    Science.gov (United States)

    Wei, Jiao; Herrler, Tanja; Gu, Bin; Yang, Mei; Li, Qingfeng; Dai, Chuanchang; Xie, Feng

    2018-05-01

    The repair of extensive upper limb skin lesions in pediatric patients is extremely challenging due to substantial limitations of flap size and donor-site morbidity. We aimed to create an oversize pre-expanded flap based on intercostal artery perforators for large-scale resurfacing of the upper extremity in children. Between March 2013 and August 2016, 11 patients underwent reconstructive treatment for extensive skin lesions in the upper extremity using a pre-expanded intercostal artery perforator flap. Preoperatively, 2 to 4 candidate perforators were selected as potential pedicle vessels based on duplex ultrasound examination. After tissue expander implantation in the thoracodorsal area, regular saline injections were performed until the expanded flap was sufficient in size. Then, a pedicled flap was formed to resurface the skin lesion of the upper limb. The pedicles were transected 3 weeks after flap transfer. Flap survival, complications, and long-term outcome were evaluated. The average time of tissue expansion was 133 days, with a mean final volume of 1713 mL. The thoracoabdominal flaps were based on 2 to 6 pedicles and used to resurface a mean skin defect area of 238 cm², ranging from 180 to 357 cm². In all cases, primary donor-site closure was achieved. Marginal necrosis was seen in 5 cases. The reconstructed limbs showed satisfactory outcomes in both aesthetic and functional aspects. The pre-expanded intercostal artery perforator flap enables one-block repair of extensive upper limb skin lesions. Due to limited donor-site morbidity and a pedicled technique, this resurfacing approach represents a useful tool, especially in pediatric patients.

  11. Spitzer SAGE-Spec: Near infrared spectroscopy, dust shells, and cool envelopes in extreme Large Magellanic Cloud asymptotic giant branch stars

    Energy Technology Data Exchange (ETDEWEB)

    Blum, R. D. [NOAO, 950 North Cherry Avenue, Tucson, AZ 85719 (United States); Srinivasan, S.; Kemper, F.; Ling, B. [Academia Sinica, Institute of Astronomy and Astrophysics, 11F of Astronomy-Mathematics Building, NTU/AS, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan (China); Volk, K. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)

    2014-11-01

    K-band spectra are presented for a sample of 39 Spitzer Infrared Spectrograph (IRS) SAGE-Spec sources in the Large Magellanic Cloud. The spectra exhibit characteristics in very good agreement with their positions in the near-infrared—Spitzer color-magnitude diagrams and their properties as deduced from the Spitzer IRS spectra. Specifically, the near-infrared spectra show strong atomic and molecular features representative of oxygen-rich and carbon-rich asymptotic giant branch stars, respectively. A small subset of stars was chosen from the luminous and red extreme "tip" of the color-magnitude diagram. These objects have properties consistent with dusty envelopes but also cool, carbon-rich "stellar" cores. Modest amounts of dust mass loss combine with the stellar spectral energy distribution to make these objects appear extreme in their near-infrared and mid-infrared colors. One object in our sample, HV 915, a known post-asymptotic giant branch star of the RV Tau type, exhibits CO 2.3 μm band head emission consistent with previous work that demonstrates that the object has a circumstellar disk.

  12. Spitzer SAGE-Spec: Near infrared spectroscopy, dust shells, and cool envelopes in extreme Large Magellanic Cloud asymptotic giant branch stars

    International Nuclear Information System (INIS)

    Blum, R. D.; Srinivasan, S.; Kemper, F.; Ling, B.; Volk, K.

    2014-01-01

    K-band spectra are presented for a sample of 39 Spitzer Infrared Spectrograph (IRS) SAGE-Spec sources in the Large Magellanic Cloud. The spectra exhibit characteristics in very good agreement with their positions in the near-infrared—Spitzer color-magnitude diagrams and their properties as deduced from the Spitzer IRS spectra. Specifically, the near-infrared spectra show strong atomic and molecular features representative of oxygen-rich and carbon-rich asymptotic giant branch stars, respectively. A small subset of stars was chosen from the luminous and red extreme "tip" of the color-magnitude diagram. These objects have properties consistent with dusty envelopes but also cool, carbon-rich "stellar" cores. Modest amounts of dust mass loss combine with the stellar spectral energy distribution to make these objects appear extreme in their near-infrared and mid-infrared colors. One object in our sample, HV 915, a known post-asymptotic giant branch star of the RV Tau type, exhibits CO 2.3 μm band head emission consistent with previous work that demonstrates that the object has a circumstellar disk.

  13. Individual differences influence two-digit number processing, but not their analog magnitude processing: a large-scale online study.

    Science.gov (United States)

    Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba

    2017-12-23

    Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study in 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.

  14. Computational domain length and Reynolds number effects on large-scale coherent motions in turbulent pipe flow

    Science.gov (United States)

    Feldmann, Daniel; Bauer, Christian; Wagner, Claus

    2018-03-01

    We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures as a function of Reτ and to assess a minimum ? required for relevant turbulent scales to be captured and a minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ from the near-wall region to the outer layer, where VLSM are believed to live. The root mean square velocity profiles exhibit domain length dependencies for pipes shorter than 14R and 7R depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ⪆1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectra do not yet indicate sufficient scale separation between the most energetic and the very long motions.
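
    For orientation, the friction (shear) Reynolds number and the spectral decomposition underlying the premultiplied spectra discussed above take their standard textbook forms (generic notation, not reproduced from the paper):

```latex
Re_\tau = \frac{u_\tau R}{\nu},
\qquad
\overline{u'^2}
  = \int_0^\infty E_{uu}(k_x)\,\mathrm{d}k_x
  = \int_0^\infty k_x E_{uu}(k_x)\,\mathrm{d}(\ln k_x),
```

    so plotting the premultiplied spectrum k_x E_uu(k_x) against the logarithm of the streamwise wavelength λ_x = 2π/k_x preserves area under the curve; VLSM then appear as an outer-layer peak at long λ_x.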

  15. Catering for large numbers of tourists: the McDonaldization of casual dining in Kruger National Park

    Directory of Open Access Journals (Sweden)

    Ferreira Sanette L.A.

    2016-09-01

    Since 2002, Kruger National Park (KNP) has been subject to a commercialisation strategy. Regarding income generation, SANParks (1) sees KNP as the goose that lays the golden eggs. As part of SANParks’ commercialisation strategy and in response to providing services that are efficient, predictable and calculable for a large number of tourists, SANParks has allowed well-known branded restaurants to be established in certain rest camps in KNP. This innovation has raised a range of different concerns and opinions among the public. This paper investigates the what and the where of casual dining experiences in KNP; describes how the catering services have evolved over the last 70 years; and evaluates current visitor perceptions of the introduction of franchised restaurants in the park. The main research instrument was a questionnaire survey. Survey findings confirmed that restaurant managers, park managers and visitors recognise franchised restaurants as positive contributors to the unique KNP experience. Park managers appraised the franchised restaurants as mechanisms for funding conservation.

  16. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies

    Science.gov (United States)

    2014-01-01

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients’ experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team’s reflexive statements to illustrate the development of our methods. PMID:24951054

  17. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies.

    Science.gov (United States)

    Toye, Francine; Seers, Kate; Allcock, Nick; Briggs, Michelle; Carr, Eloise; Barker, Karen

    2014-06-21

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients' experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team's reflexive statements to illustrate the development of our methods.

  18. Attenuation of contaminant plumes in homogeneous aquifers: Sensitivity to source function at moderate to large peclet numbers

    International Nuclear Information System (INIS)

    Selander, W.N.; Lane, F.E.; Rowat, J.H.

    1995-05-01

    A groundwater mass transfer calculation is an essential part of the performance assessment for radioactive waste disposal facilities. AECL's IRUS (Intrusion Resistant Underground Structure) facility, which is designed for the near-surface disposal of low-level radioactive waste (LLRW), is to be situated in the sandy overburden at AECL's Chalk River Laboratories. Flow in the sandy aquifers at the proposed IRUS site is relatively homogeneous and advection-dominated (large Peclet numbers). Mass transfer along the mean direction of flow from the IRUS site may be described using the one-dimensional advection-dispersion equation, for which a Green's function representation of downstream radionuclide flux is convenient. This report shows that in advection-dominated aquifers, dispersive attenuation of initial contaminant releases depends principally on two time scales: the source duration and the pulse breakthrough time. Numerical investigation shows further that the maximum downstream flux or concentration depends on these time scales in a simple characteristic way that is minimally sensitive to the shape of the initial source pulse. (author). 11 refs., 2 tabs., 3 figs
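
    The mass-transfer model referred to here is the classical one-dimensional advection-dispersion equation; its instantaneous point-source Green's function and the Peclet number take the standard forms below (generic textbook notation, not the report's):

```latex
\frac{\partial C}{\partial t} + v\,\frac{\partial C}{\partial x}
  = D\,\frac{\partial^{2} C}{\partial x^{2}},
\qquad
G(x,t) = \frac{1}{\sqrt{4\pi D t}}
         \exp\!\left[-\frac{(x - v t)^{2}}{4 D t}\right],
\qquad
\mathrm{Pe} = \frac{v L}{D},
```

    so the downstream response to an arbitrary source history s(t) is the convolution of s with G, and advection-dominated flow corresponds to Pe ≫ 1.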

  19. Large Eddy Simulation study of the development of finite-channel lock-release currents at high Grashof numbers

    Science.gov (United States)

    Ooi, Seng-Keat

    2005-11-01

    Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved 3-D large eddy simulations at Grashof numbers up to 8*10^9. It is found that the 3-D simulations correctly predict a constant front velocity over the initial slumping phase and a front speed decrease proportional to t^(-1/3) (the time t is measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is found to be similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2-D simulations are discussed, in particular their failure to correctly predict the spatio-temporal distribution of the bed shear stresses, which is important in determining the amount of sediment the gravity current can entrain when it advances over a loose bed.
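
    The t^(-1/3) front-speed scaling in the inviscid phase follows from a simple box-model argument (a standard derivation, not taken from this paper): with a Froude-number condition at the front and a conserved buoyant volume per unit width V = h x_f,

```latex
u_f = \frac{\mathrm{d}x_f}{\mathrm{d}t}
    = Fr\,\sqrt{g' h}
    = Fr\,\sqrt{\frac{g' V}{x_f}}
\;\Longrightarrow\;
x_f^{1/2}\,\mathrm{d}x_f \propto \mathrm{d}t
\;\Longrightarrow\;
x_f \propto t^{2/3}, \qquad u_f \propto t^{-1/3}.
```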

  20. Exploration of large, rare copy number variants associated with psychiatric and neurodevelopmental disorders in individuals with anorexia nervosa.

    Science.gov (United States)

    Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M

    2017-08-01

    Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1983 female AN cases included in the Genetic Consortium for Anorexia Nervosa. Following stringent quality control procedures, we investigated whether pathogenic CNVs in regions previously implicated in psychiatric and neurodevelopmental disorders were present in AN cases. We observed two instances of the well-established pathogenic CNVs in AN cases. In addition, one case had a deletion in the 13q12 region, overlapping with a deletion reported previously in two AN cases. As a secondary aim, we also examined our sample for CNVs over 1 Mbp in size. Out of the 40 instances of such large CNVs that were not implicated previously for AN or neuropsychiatric phenotypes, two of them contained genes with previous neuropsychiatric associations, and only five of them had no associated reports in public CNV databases. Although ours is the largest study of its kind in AN, larger datasets are needed to comprehensively assess the role of CNVs in the etiology of AN.

  1. Neoadjuvant Interdigitated Chemoradiotherapy Using Mesna, Doxorubicin, and Ifosfamide for Large, High-grade, Soft Tissue Sarcomas of the Extremity: Improved Efficacy and Reduced Toxicity.

    Science.gov (United States)

    Chowdhary, Mudit; Sen, Neilayan; Jeans, Elizabeth B; Miller, Luke; Batus, Marta; Gitelis, Steven; Wang, Dian; Abrams, Ross A

    2018-05-18

    Patients with large, high-grade extremity soft tissue sarcoma (STS) are at high risk for both local and distant recurrence. RTOG 95-14, using a regimen of neoadjuvant interdigitated chemoradiotherapy with mesna, doxorubicin, ifosfamide, and dacarbazine followed by surgery and 3 cycles of adjuvant mesna, doxorubicin, ifosfamide, and dacarbazine, demonstrated high rates of disease control at the cost of significant toxicity (83% grade 4, 5% grade 5). As such, this regimen has not been widely adopted. Herein, we report our institutional outcomes utilizing a modified interdigitated chemoradiotherapy regimen, without dacarbazine, and current radiotherapy planning and delivery techniques for high-risk STS. Adults with large (≥5 cm; median, 12.9 cm), grade 3 extremity STS who were prospectively treated as part of our institutional standard of care from 2008 to 2016 are included. Neoadjuvant chemoradiotherapy consisted of 3 cycles of mesna, doxorubicin, and ifosfamide (MAI) and 44 Gy (22 Gy in 11 fractions between cycles of MAI) after which patients underwent surgical resection and received 3 additional cycles of MAI. Twenty-six patients received the MAI treatment protocol. At a median follow-up of 47.3 months, 23 (88.5%) patients are still alive. Three-year locoregional recurrence-free survival, disease-free survival, and overall survival are 95.0%, 64.0%, and 95.0%, respectively. There have been no therapy-related deaths or secondary malignancies. The nonhematologic grade 4 toxicity rate was 7.7%. Neoadjuvant interdigitated MAI radiotherapy followed by resection and 3 cycles of adjuvant MAI has resulted in acceptable and manageable toxicity and highly favorable survival in patients at greatest risk for treatment failure.

  2. Large Extremity Peripheral Nerve Repair

    Science.gov (United States)

    2015-10-01

    ICAM-1-coupled signaling pathways in astrocytes converge to cyclic AMP response element-binding protein phosphorylation and TNF-alpha secretion. J...D, Apap-Bologna A, Kemp G. A dye-photosensitized reaction that generates stable protein-protein crosslinks. Analytical Biochemistry. 1989 May 15;179(1

  3. Large Extremity Peripheral Nerve Repair

    Science.gov (United States)

    2016-12-01

    this (Figure 14). Task 2g. Decision on wrap/fixation method for Avance™ nerve graft studies in rodent model. (Month 16, All PIs) This decision...completed 3g. Preparation of manuscript based on Task 3 studies and evaluation for recommendation for human studies. This final task will be...significantly reduced mean hospital stay, dressings changes, mean time to epithelialisation, reduced pain, increased mobility. Patient and surgeon

  4. Email-Based Informed Consent: Innovative Method for Reaching Large Numbers of Subjects for Data Mining Research

    Science.gov (United States)

    Lee, Lesley R.; Mason, Sara S.; Babiak-Vazquez, Adriana; Ray, Stacie L.; Van Baalen, Mary

    2015-01-01

    Since the 2010 NASA authorization to make the Life Sciences Data Archive (LSDA) and Lifetime Surveillance of Astronaut Health (LSAH) data archives more accessible by the research and operational communities, demand for data has greatly increased. Correspondingly, both the number and scope of requests have increased, from 142 requests fulfilled in 2011 to 224 in 2014, and with some datasets comprising up to 1 million data points. To meet the demand, the LSAH and LSDA Repositories project was launched, which allows active and retired astronauts to authorize full, partial, or no access to their data for research without individual, study-specific informed consent. A one-on-one personal informed consent briefing is required to fully communicate the implications of the several tiers of consent. Due to the need for personal contact to conduct Repositories consent meetings, the rate of consenting has not kept up with demand for individualized, possibly attributable data. As a result, other methods had to be implemented to allow the release of large datasets, such as release of only de-identified data. However the compilation of large, de-identified data sets places a significant resource burden on LSAH and LSDA and may result in diminished scientific usefulness of the dataset. As a result, LSAH and LSDA worked with the JSC Institutional Review Board Chair, Astronaut Office physicians, and NASA Office of General Counsel personnel to develop a "Remote Consenting" process for retrospective data mining studies. This is particularly useful since the majority of the astronaut cohort is retired from the agency and living outside the Houston area. Originally planned as a method to send informed consent briefing slides and consent forms only by mail, Remote Consenting has evolved into a means to accept crewmember decisions on individual studies via their method of choice: email or paper copy by mail. To date, 100 emails have been sent to request participation in eight HRP

  5. Small on the Left, Large on the Right: Numbers Orient Visual Attention onto Space in Preverbal Infants

    Science.gov (United States)

    Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola

    2016-01-01

    Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical…

  6. Are rheumatoid arthritis patients discernible from other early arthritis patients using 1.5T extremity magnetic resonance imaging? a large cross-sectional study.

    Science.gov (United States)

    Stomp, Wouter; Krabben, Annemarie; van der Heijde, Désirée; Huizinga, Tom W J; Bloem, Johan L; van der Helm-van Mil, Annette H M; Reijnierse, Monique

    2014-08-01

    Magnetic resonance imaging (MRI) is increasingly used in rheumatoid arthritis (RA) research. A European League Against Rheumatism (EULAR) task force recently suggested that MRI can improve the certainty of RA diagnosis. Because this recommendation may reflect a tendency to use MRI in daily practice, thorough studies on the value of MRI are required. Thus far no large studies have evaluated the accuracy of MRI to differentiate early RA from other patients with early arthritis. We performed a large cross-sectional study to determine whether patients who are clinically classified with RA differ in MRI features compared to patients with other diagnoses. In our study, 179 patients presenting with early arthritis (median symptom duration 15.4 weeks) underwent 1.5T extremity MRI of unilateral wrist, metacarpophalangeal, and metatarsophalangeal joints according to our arthritis protocol, the foot without contrast. Images were scored according to OMERACT Rheumatoid Arthritis Magnetic Resonance Imaging Scoring (RAMRIS) by 2 independent readers. Tenosynovitis was also assessed. The main outcome was fulfilling the 1987 American College of Rheumatology (ACR) criteria for RA. Test characteristics and areas under the receiver-operator-characteristic curves (AUC) were evaluated. In subanalyses, the 2010 ACR/EULAR criteria were used as outcome, and analyses were stratified for anticitrullinated protein antibodies (ACPA). The ACR 1987 criteria were fulfilled in 43 patients (24.0%). Patients with RA had higher scores for synovitis, tenosynovitis, and bone marrow edema (BME) than patients without RA (p arthritis patients.

  7. Retrieval of very large numbers of items in the Web of Science: an exercise to develop accurate search strategies

    NARCIS (Netherlands)

    Arencibia-Jorge, R.; Leydesdorff, L.; Chinchilla-Rodríguez, Z.; Rousseau, R.; Paris, S.W.

    2009-01-01

    The Web of Science interface counts at most 100,000 retrieved items from a single query. If the query results in a dataset containing more than 100,000 items the number of retrieved items is indicated as >100,000. The problem studied here is how to find the exact number of items in a query that
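
    One practical workaround for a capped result counter (a sketch of the general idea, not the authors' exact procedure) is to partition the query by publication year until every slice falls under the 100,000-item limit, then sum the slice counts. Here `fake_count` is a hypothetical stand-in for a real Web of Science query; the per-year counts are invented for illustration.

```python
# Sketch: recover an exact result count from an interface that reports
# at most `cap` items per query, by recursively splitting the queried
# publication-year range and summing the sub-range counts.

def exact_count(count_items, y0, y1, cap=100_000):
    n = count_items(y0, y1)
    if n < cap or y0 == y1:
        # Below the cap the reported count is exact; a single capped
        # year would need a finer facet (e.g. document type) to split.
        return n
    mid = (y0 + y1) // 2
    return (exact_count(count_items, y0, mid, cap)
            + exact_count(count_items, mid + 1, y1, cap))

# Hypothetical per-year counts standing in for real query results.
per_year = {y: 30_000 + 1_000 * (y - 2000) for y in range(2000, 2010)}

def fake_count(y0, y1):
    true_total = sum(per_year[y] for y in range(y0, y1 + 1))
    return min(true_total, 100_000)  # the interface caps what it reports

print(exact_count(fake_count, 2000, 2009))  # → 345000 despite capped queries
```

    The recursion only issues extra queries for ranges that actually hit the cap, so the query overhead stays logarithmic in the number of splits needed.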

  8. Application of the extreme value theory to beam loss estimates in the SPIRAL2 linac based on large scale Monte Carlo computations

    Directory of Open Access Journals (Sweden)

    R. Duperrier

    2006-04-01

    The influence of random perturbations of high-intensity accelerator elements on beam losses is considered. This paper presents the error sensitivity study performed for the SPIRAL2 linac in order to define the tolerances for its construction. The proposed driver aims to accelerate a 5 mA deuteron beam up to 20 A MeV and a 1 mA ion beam with q/A = 1/3 up to 14.5 A MeV. It is a continuous-wave linac, designed for maximum efficiency in the transmission of intense beams and a tunable energy. It consists of an injector (two ECR sources + LEBTs, with the possibility to inject from several sources, + radio-frequency quadrupole) followed by a superconducting section based on an array of independently phased cavities, where the transverse focalization is performed with warm quadrupoles. The correction scheme and the expected losses are described. Extreme value theory is used to estimate the expected beam losses. The described method couples large-scale computations to obtain probability distribution functions. The bootstrap technique is used to provide confidence intervals associated with the beam loss predictions. With such a method, it is possible to measure the risk of losing a few watts in this high-power linac (up to 200 kW).
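
    The bootstrap step can be sketched with the standard library alone. This is a minimal illustration of a percentile-bootstrap confidence interval, assuming per-run maximum losses are already available from the Monte Carlo stage; the lognormal loss values and sample size here are invented, not taken from the SPIRAL2 study.

```python
import random
import statistics

random.seed(1)

# Hypothetical Monte Carlo output: the maximum beam loss (in watts)
# observed in each of 200 randomly perturbed linac configurations.
max_losses = [random.lognormvariate(0.0, 0.8) for _ in range(200)]

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap confidence interval for statistic `stat`."""
    reps = sorted(stat([random.choice(data) for _ in data])
                  for _ in range(n_boot))
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2)) - 1]

# Point estimate and 95% CI for the 95th-percentile loss.
q95 = lambda xs: statistics.quantiles(xs, n=20)[-1]
lo, hi = bootstrap_ci(max_losses, q95)
print(f"95th-percentile loss {q95(max_losses):.2f} W, 95% CI ({lo:.2f}, {hi:.2f}) W")
```

    Resampling the whole dataset with replacement, rather than assuming a distributional form, is what makes the interval robust for extreme quantiles where the tail shape is uncertain.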

  9. Extreme winter warming events more negatively impact small rather than large soil fauna: shift in community composition explained by traits not taxa.

    NARCIS (Netherlands)

    Bokhorst, S.F.; Phoenix, G.K.; Bjerke, J.W.; Callaghan, T.V.; Huyer-Brugman, F.A.; Berg, M.P.

    2012-01-01

    Extreme weather events can have negative impacts on species survival and community structure when surpassing lethal thresholds. Extreme winter warming events in the Arctic rapidly melt snow and expose ecosystems to unseasonably warm air (2-10 °C for 2-14 days), but returning to cold winter climate

  10. Experimental observation of pulsating instability under acoustic field in downward-propagating flames at large Lewis number

    KAUST Repository

    Yoon, Sung Hwan

    2017-10-12

    According to previous theory, pulsating propagation in a premixed flame only appears when the reduced Lewis number, β(Le-1), is larger than a critical value (Sivashinsky criterion: 4(1 + √3) ≈ 11), where β represents the Zel'dovich number (for general premixed flames, β ≈ 10), which requires a Lewis number Le > 2.1. However, few experimental observations have been reported because the critical reduced Lewis number for the onset of pulsating instability is beyond what can be reached in experiments. Furthermore, the coupling with the unavoidable hydrodynamic instability limits the observation of pure pulsating instabilities in flames. Here, we describe a novel method to observe the pulsating instability. We utilize a thermoacoustic field caused by interaction between heat release and acoustic pressure fluctuations of the downward-propagating premixed flames in a tube to enhance conductive heat loss at the tube wall and radiative heat loss at the open end of the tube due to extended flame residence time by diminished flame surface area, i.e., a flat flame. The thermoacoustic field allowed pure observation of the pulsating motion since the primary acoustic force suppressed the intrinsic hydrodynamic instability resulting from thermal expansion. By employing this method, we have provided new experimental observations of the pulsating instability for premixed flames. The Lewis number (i.e., Le ≈ 1.86) was less than the critical value suggested previously.
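
    Written out, the threshold Lewis number quoted in the abstract follows directly from the criterion (a back-of-envelope check, using the abstract's β ≈ 10):

```latex
\beta\,(Le - 1) > 4\left(1 + \sqrt{3}\right) \approx 10.93
\quad\Longrightarrow\quad
Le > 1 + \frac{10.93}{\beta} \approx 1 + \frac{10.93}{10} \approx 2.1 .
```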

  11. Mandelbrot's Extremism

    NARCIS (Netherlands)

    Beirlant, J.; Schoutens, W.; Segers, J.J.J.

    2004-01-01

    In the sixties Mandelbrot already showed that extreme price swings are more likely than some of us think or incorporate in our models. A modern toolbox for analyzing such rare events can be found in the field of extreme value theory. At the core of extreme value theory lies the modelling of maxima

  12. Does Decision Quality (Always) Increase with the Size of Information Samples? Some Vicissitudes in Applying the Law of Large Numbers

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov

    2006-01-01

    Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…
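
    The seeming paradox can be made concrete with a toy simulation (my own illustration, not the authors' algorithmic account): given a true option-outcome contingency of 0.2, small samples report a "strong" contingency far more often than large ones, simply because their sampling error is larger.

```python
import random

random.seed(7)

# True contingency: option A succeeds with p = 0.6, option B with p = 0.4.
def sample_diff(n, p_a=0.6, p_b=0.4):
    """Observed difference in success proportions after n draws per option."""
    a = sum(random.random() < p_a for _ in range(n)) / n
    b = sum(random.random() < p_b for _ in range(n)) / n
    return a - b

def prob_strong_contingency(n, threshold=0.5, trials=20_000):
    """How often a sample of size n overstates the true 0.2 difference
    as a 'strong' contingency of at least `threshold`."""
    return sum(sample_diff(n) >= threshold for _ in range(trials)) / trials

small, large = prob_strong_contingency(5), prob_strong_contingency(50)
print(small, large)  # small samples 'see' a strong contingency far more often
```

    The flip side, of course, is that the small sample is also far more likely to show the contingency reversed, which is where the law of large numbers reasserts itself.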

  13. The Limits and Possibilities of International Large-Scale Assessments. Education Policy Brief. Volume 9, Number 2, Spring 2011

    Science.gov (United States)

    Rutkowski, David J.; Prusinski, Ellen L.

    2011-01-01

    The staff of the Center for Evaluation & Education Policy (CEEP) at Indiana University is often asked about how international large-scale assessments influence U.S. educational policy. This policy brief is designed to provide answers to some of the most frequently asked questions encountered by CEEP researchers concerning the three most popular…

  14. Exploration of large, rare copy number variants associated with psychiatric and neurodevelopmental disorders in individuals with anorexia nervosa

    NARCIS (Netherlands)

    Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M; Genetic Consortium for Anorexia Nervosa, Wellcome Trust Case Control Consortium 3

    Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1983 female AN cases included in the Genetic Consortium for

  15. Investigation into impacts of large numbers of visitors on the collection environment at Our Lord in the Attic

    NARCIS (Netherlands)

    Maekawa, S.; Ankersmit, Bart; Neuhaus, E.; Schellen, H.L.; Beltran, V.; Boersma, F.; Padfield, T.; Borchersen, K.

    2007-01-01

    Our Lord in the Attic is a historic house museum located in the historic center of Amsterdam, The Netherlands. It is a typical 17th century Dutch canal house, with a hidden Church in the attic. The Church was used regularly until 1887 when the house became a museum. The annual total number of

  16. A Few Large Roads or Many Small Ones? How to Accommodate Growth in Vehicle Numbers to Minimise Impacts on Wildlife

    Science.gov (United States)

    Rhodes, Jonathan R.; Lunney, Daniel; Callaghan, John; McAlpine, Clive A.

    2014-01-01

    Roads and vehicular traffic are among the most pervasive of threats to biodiversity because they fragment habitat, increase mortality and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity. PMID:24646891

  17. A few large roads or many small ones? How to accommodate growth in vehicle numbers to minimise impacts on wildlife.

    Directory of Open Access Journals (Sweden)

    Jonathan R Rhodes

    Full Text Available Roads and vehicular traffic are among the most pervasive of threats to biodiversity because they fragment habitat, increase mortality and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity.

  18. A few large roads or many small ones? How to accommodate growth in vehicle numbers to minimise impacts on wildlife.

    Science.gov (United States)

    Rhodes, Jonathan R; Lunney, Daniel; Callaghan, John; McAlpine, Clive A

    2014-01-01

    Roads and vehicular traffic are among the most pervasive of threats to biodiversity because they fragment habitat, increase mortality and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity.

  19. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae)

    Czech Academy of Sciences Publication Activity Database

    Krahulcová, Anna; Trávníček, Pavel; Krahulec, František; Rejmánek, M.

    2017-01-01

    Vol. 119, No. 6 (2017), pp. 957-964, ISSN 0305-7364. Institutional support: RVO:67985939. Keywords: Aesculus * chromosome number * genome size * phylogeny * seed mass. Subject RIV: EF - Botanics. OBOR OECD: Plant sciences, botany. Impact factor: 4.041, year: 2016

  20. Optimization with Extremal Dynamics

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Percus, Allon G.

    2001-01-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard discrete optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. Extremal optimization successively updates extremely undesirable variables of a single suboptimal solution, assigning them new, random values. Large fluctuations ensue, efficiently exploring many local optima. We use extremal optimization to elucidate the phase transition in the 3-coloring problem, and we provide independent confirmation of previously reported extrapolations for the ground-state energy of ±J spin glasses in d=3 and 4.
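The update rule described above (repeatedly replace the most undesirable variable of a single working solution with a new random value) can be sketched for a toy graph 3-coloring instance. The cost function, node-level "undesirability" measure, and problem instance below are illustrative stand-ins, not the paper's τ-EO implementation.

```python
import random

def extremal_optimization(adj, n_colors, steps, seed=0):
    """Minimal extremal-optimization loop for graph coloring: at each step,
    pick the most conflicted node (the 'extremely undesirable' variable)
    and assign it a new random color, tracking the best solution seen."""
    rng = random.Random(seed)
    colors = [rng.randrange(n_colors) for _ in adj]

    def conflicts(i):
        # Number of neighbors sharing node i's color.
        return sum(colors[j] == colors[i] for j in adj[i])

    def total_cost():
        # Each conflicting edge is counted from both endpoints.
        return sum(conflicts(i) for i in range(len(adj))) // 2

    best, best_cost = colors[:], total_cost()
    for _ in range(steps):
        worst = max(range(len(adj)), key=conflicts)  # most undesirable variable
        colors[worst] = rng.randrange(n_colors)      # new random value
        cost = total_cost()
        if cost < best_cost:
            best, best_cost = colors[:], cost
    return best, best_cost

# Illustrative instance: 3-coloring a 6-cycle (conflicts can reach zero).
cycle6 = [[5, 1], [0, 2], [1, 3], [2, 4], [3, 5], [4, 0]]
coloring, cost = extremal_optimization(cycle6, 3, steps=500)
```

Unlike simulated annealing, no temperature schedule is needed: the large fluctuations come from always randomizing the worst variable.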

  1. Method Extreme Learning Machine for Forecasting Number of Patients’ Visits in Dental Poli (A Case Study: Community Health Centers Kamal Madura Indonesia)

    Science.gov (United States)

    Sari Rochman, E. M.; Rachmad, A.; Syakur, M. A.; Suzanti, I. O.

    2018-01-01

    Community Health Centers (Puskesmas) are health service institutions that provide individual outpatient, inpatient and emergency care services. The outpatient service comprises several polyclinics, including Ear, Nose, and Throat (ENT), Eyes, Dentistry, Children, and internal disease. The dental poli provides dental and oral health services directed to the community. At present, the management team of the dental poli often has difficulty preparing and planning to serve the expected number of patients, because it lacks workers with the right qualifications. The purpose of this study is to build a forecasting system that predicts how many patients will visit, so that the resources provided match the needs of the Puskesmas. In the ELM method, input weights and biases are initially set randomly, and the final output weights are obtained with the generalized inverse of the hidden-layer output matrix, which maps each input to the hidden layer; ELM therefore trains quickly. In our experiments, the ELM method generated predictions of the number of patient visits with an RMSE of 0.0426.
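A minimal sketch of the training procedure the abstract describes: random input weights and biases, then output weights solved in closed form via the generalized (Moore-Penrose) inverse of the hidden-layer output matrix. The toy data, layer size, and weight ranges are illustrative assumptions, not the study's patient-visit series.

```python
import numpy as np

def elm_train(X, y, n_hidden, rng):
    """Single-hidden-layer ELM: random input weights/biases, output
    weights from the Moore-Penrose pseudoinverse (no iterative training)."""
    W = rng.uniform(-8.0, 8.0, size=(X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-8.0, 8.0, size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)          # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y    # generalized-inverse solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Illustrative smooth toy series standing in for a visit-count signal.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y = np.sin(2.0 * np.pi * X).ravel()
W, b, beta = elm_train(X, y, n_hidden=40, rng=rng)
rmse = float(np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```

The single pseudoinverse solve is why ELM "has a fast learning speed": there is no backpropagation loop, only one least-squares problem.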

  2. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    Full Text Available Abstract In this paper, we consider a size-dependent renewal risk model with stopping time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method substantially relies on a martingale for the structure of our models.

  3. Hungarian Marfan family with large FBN1 deletion calls attention to copy number variation detection in the current NGS era

    Science.gov (United States)

    Ágg, Bence; Meienberg, Janine; Kopps, Anna M.; Fattorini, Nathalie; Stengl, Roland; Daradics, Noémi; Pólos, Miklós; Bors, András; Radovits, Tamás; Merkely, Béla; De Backer, Julie; Szabolcs, Zoltán; Mátyás, Gábor

    2018-01-01

    Copy number variations (CNVs) comprise about 10% of reported disease-causing mutations in Mendelian disorders. Nevertheless, pathogenic CNVs may have been under-detected due to the lack or insufficient use of appropriate detection methods. In this report, on the example of the diagnostic odyssey of a patient with Marfan syndrome (MFS) harboring a hitherto unreported 32-kb FBN1 deletion, we highlight the need for and the feasibility of testing for CNVs (>1 kb) in Mendelian disorders in the current next-generation sequencing (NGS) era. PMID:29850152

  4. Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection at large Rayleigh numbers

    Science.gov (United States)

    Kozitskiy, Sergey

    2018-05-01

    Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection has been performed by using the previously derived system of complex Ginzburg-Landau type amplitude equations, valid in a neighborhood of Hopf bifurcation points. Simulation has shown that a state of spatiotemporal chaos develops in the system. It takes the form of nonstationary structures that depend on the parameters of the system. The shape of the structures does not depend on the initial conditions, and a limited number of spectral components participate in their formation.

  5. Simulation of droplet impact onto a deep pool for large Froude numbers in different open-source codes

    Science.gov (United States)

    Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.

    2017-11-01

    A droplet impact on a deep pool can induce macro-scale or micro-scale effects like a crown splash, a high-speed jet, formation of secondary droplets or thin liquid films, etc. The outcome depends on the diameter and velocity of the droplet, the liquid properties, external forces and other factors that a set of dimensionless criteria can account for. In the present research, we considered a droplet and a pool consisting of the same viscous incompressible liquid. We took surface tension into account but neglected gravity. We used two open-source codes (OpenFOAM and Gerris) for our computations. We review the possibility of using these codes for simulation of processes in free-surface flows that may take place after a droplet impact on the pool. Both codes simulated several modes of droplet impact. We estimated the effect of liquid properties with respect to the Reynolds number and Weber number. Numerical simulation enabled us to find boundaries between different modes of droplet impact on a deep pool and to plot corresponding mode maps. The ratio of liquid density to that of the surrounding gas induces several changes in mode maps. Increasing this density ratio suppresses the crown splash.
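The dimensionless criteria the abstract relies on can be computed directly from the impact parameters. A small sketch, with illustrative property values for a water droplet rather than the paper's parameter ranges:

```python
import math

def reynolds(rho, v, d, mu):
    """Re = rho*v*d/mu: inertial vs. viscous forces."""
    return rho * v * d / mu

def weber(rho, v, d, sigma):
    """We = rho*v**2*d/sigma: inertial vs. surface-tension forces."""
    return rho * v ** 2 * d / sigma

def froude(v, d, g=9.81):
    """Fr = v/sqrt(g*d): inertial vs. gravitational forces
    (large Fr justifies neglecting gravity, as in the study)."""
    return v / math.sqrt(g * d)

# Illustrative case: a 2 mm water droplet hitting the pool at 3 m/s.
rho, mu, sigma = 998.0, 1.0e-3, 0.0728  # water at ~20 °C, SI units
v, d = 3.0, 2.0e-3
Re = reynolds(rho, v, d, mu)
We = weber(rho, v, d, sigma)
Fr = froude(v, d)
```

Mode maps of the kind described above are typically drawn in the (Re, We) plane for fixed large Fr.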

  6. Historical and projected trends in temperature and precipitation extremes in Australia in observations and CMIP5

    OpenAIRE

    Alexander, Lisa V.; Arblaster, Julie M.

    2017-01-01

    This study expands previous work on climate extremes in Australia by investigating the simulation of a large number of extremes indices in the CMIP5 multi-model dataset and comparing them to multiple observational datasets over a century of observed data using consistent methods. We calculate 24 indices representing extremes of temperature and precipitation from 1911 to 2010 over Australia and show that there have been significant observed trends in temperature extremes associated with warmin...

  7. TO BE OR NOT TO BE: AN INFORMATIVE NON-SYMBOLIC NUMERICAL MAGNITUDE PROCESSING STUDY ABOUT SMALL VERSUS LARGE NUMBERS IN INFANTS

    Directory of Open Access Journals (Sweden)

    Annelies CEULEMANS

    2014-03-01

    Full Text Available Many studies tested the association between numerical magnitude processing and mathematical achievement with conflicting findings reported for individuals with mathematical learning disorders. Some of the inconsistencies might be explained by the number of non-symbolic stimuli or dot collections used in studies. It has been hypothesized that there is an object-file system for ‘small’ and an analogue magnitude system for ‘large’ numbers. This two-system account has been supported by the set size limit of the object-file system (three items. A boundary was defined, accordingly, categorizing numbers below four as ‘small’ and from four and above as ‘large’. However, data on ‘small’ number processing and on the ‘boundary’ between small and large numbers are missing. In this contribution we provide data from infants discriminating between the number sets 4 vs. 8 and 1 vs. 4, both containing the number four combined with a large and a small number respectively. Participants were 25 and 26 full term 9-month-olds for 4 vs. 8 and 1 vs. 4 respectively. The stimuli (dots were controlled for continuous variables. Eye-tracking was combined with the habituation paradigm. The results showed that the infants were successful in discriminating 1 from 4, but failed to discriminate 4 from 8 dots. This finding supports the assumption of the number four as a ‘small’ number and enlarges the object-file system’s limit. This study might help to explain inconsistencies between studies. Moreover, the information may be useful in answering parents’ questions about challenges that vulnerable children with number processing problems, such as children with mathematical learning disorders, might encounter. In addition, the study might give some information on the stimuli that can be used to effectively foster children’s magnitude processing skills.

  8. Method for the radioimmunoassay of large numbers of samples using quantitative autoradiography of multiple-well plates

    International Nuclear Information System (INIS)

    Luner, S.J.

    1978-01-01

    A double antibody assay for thyroxine using ¹²⁵I as the label was carried out on 10-μl samples in Microtiter V-plates. After an additional centrifugation to compact the precipitates, the plates were placed in contact with x-ray film overnight and the spots were scanned. In the 20 to 160 ng/ml range the average coefficient of variation for thyroxine concentration determined on the basis of film spot optical density was 11 percent, compared to 4.8 percent obtained using a standard gamma counter. Eliminating the need for each sample to spend on the order of 1 min in a crystal well detector makes the method convenient for large-scale applications involving more than 3000 samples per day.

  9. How to implement a quantum algorithm on a large number of qubits by controlling one central qubit

    Science.gov (United States)

    Zagoskin, Alexander; Ashhab, Sahel; Johansson, J. R.; Nori, Franco

    2010-03-01

    It is desirable to minimize the number of control parameters needed to perform a quantum algorithm. We show that, under certain conditions, an entire quantum algorithm can be efficiently implemented by controlling a single central qubit in a quantum computer. We also show that the different system parameters do not need to be designed accurately during fabrication. They can be determined through the response of the central qubit to external driving. Our proposal is well suited for hybrid architectures that combine microscopic and macroscopic qubits. More details can be found in: A.M. Zagoskin, S. Ashhab, J.R. Johansson, F. Nori, Quantum two-level systems in Josephson junctions as naturally formed qubits, Phys. Rev. Lett. 97, 077001 (2006); and S. Ashhab, J.R. Johansson, F. Nori, Rabi oscillations in a qubit coupled to a quantum two-level system, New J. Phys. 8, 103 (2006).

  10. DISCOVERY OF A LARGE NUMBER OF CANDIDATE PROTOCLUSTERS TRACED BY ∼15 Mpc-SCALE GALAXY OVERDENSITIES IN COSMOS

    International Nuclear Information System (INIS)

    Chiang, Yi-Kuan; Gebhardt, Karl; Overzier, Roderik

    2014-01-01

    To demonstrate the feasibility of studying the epoch of massive galaxy cluster formation in a more systematic manner using current and future galaxy surveys, we report the discovery of a large sample of protocluster candidates in the 1.62 deg² COSMOS/UltraVISTA field traced by optical/infrared selected galaxies using photometric redshifts. By comparing properly smoothed three-dimensional galaxy density maps of the observations and a set of matched simulations incorporating the dominant observational effects (galaxy selection and photometric redshift uncertainties), we first confirm that the observed ∼15 comoving Mpc-scale galaxy clustering is consistent with ΛCDM models. Using further the relation between high-z overdensity and the present day cluster mass calibrated in these matched simulations, we found 36 candidate structures at 1.6 < z < 3.1, showing overdensities consistent with the progenitors of M(z=0) ∼ 10¹⁵ M☉ clusters. Taking into account the significant upward scattering of lower mass structures, the probabilities for the candidates to have at least M(z=0) ∼ 10¹⁴ M☉ are ∼70%. For each structure, about 15%-40% of photometric galaxy candidates are expected to be true protocluster members that will merge into a cluster-scale halo by z = 0. With solely photometric redshifts, we successfully rediscover two spectroscopically confirmed structures in this field, suggesting that our algorithm is robust. This work generates a large sample of uniformly selected protocluster candidates, providing rich targets for spectroscopic follow-up and subsequent studies of cluster formation. Meanwhile, it demonstrates the potential for probing early cluster formation with upcoming redshift surveys such as the Hobby-Eberly Telescope Dark Energy Experiment and the Subaru Prime Focus Spectrograph survey.

  11. Instability and associated roll structure of Marangoni convection in high Prandtl number liquid bridge with large aspect ratio

    Science.gov (United States)

    Yano, T.; Nishino, K.; Kawamura, H.; Ueno, I.; Matsumoto, S.

    2015-02-01

    This paper reports the experimental results on the instability and associated roll structures (RSs) of Marangoni convection in liquid bridges formed under the microgravity environment on the International Space Station. The geometry of interest is high aspect ratio (AR = height/diameter ≥ 1.0) liquid bridges of high Prandtl number fluids (Pr = 67 and 207) suspended between coaxial disks heated differentially. The unsteady flow field and associated RSs were revealed with the three-dimensional particle tracking velocimetry. It is found that the flow field after the onset of instability exhibits oscillations with azimuthal mode number m = 1 and associated RSs traveling in the axial direction. The RSs travel in the same direction as the surface flow (co-flow direction) for 1.00 ≤ AR ≤ 1.25 while they travel in the opposite direction (counter-flow direction) for AR ≥ 1.50, thus showing the change of traveling directions with AR. This traveling direction for AR ≥ 1.50 is reversed to the co-flow direction when the temperature difference between the disks is increased to the condition far beyond the critical one. This change of traveling directions is accompanied by the increase of the oscillation frequency. The characteristics of the RSs for AR ≥ 1.50, such as the azimuthal mode of oscillation, the dimensionless oscillation frequency, and the traveling direction, are in reasonable agreement with those of the previous sounding rocket experiment for AR = 2.50 and those of the linear stability analysis of an infinite liquid bridge.

  12. A LARGE NUMBER OF z > 6 GALAXIES AROUND A QSO AT z = 6.43: EVIDENCE FOR A PROTOCLUSTER?

    International Nuclear Information System (INIS)

    Utsumi, Yousuke; Kashikawa, Nobunari; Miyazaki, Satoshi; Komiyama, Yutaka; Goto, Tomotsugu; Furusawa, Hisanori; Overzier, Roderik

    2010-01-01

    QSOs have been thought to be important for tracing highly biased regions in the early universe, from which the present-day massive galaxies and galaxy clusters formed. While overdensities of star-forming galaxies have been found around QSOs at 2 < z < 5, the case at z > 6 is less clear. Previous studies with the Hubble Space Telescope (HST) have reported the detection of small excesses of faint dropout galaxies in some QSO fields, but these surveys probed a relatively small region surrounding the QSOs. To overcome this problem, we have observed the most distant QSO at z = 6.4 using the large field of view of the Suprime-Cam (34' x 27'). Newly installed red-sensitive fully depleted CCDs allowed us to select Lyman break galaxies (LBGs) at z ∼ 6.4 more efficiently. We found seven LBGs in the QSO field, whereas only one exists in a comparison field. The significance of this apparent excess is difficult to quantify without spectroscopic confirmation and additional control fields. The Poisson probability to find seven objects when one expects four is ∼10%, while the probability to find seven objects in one field and only one in the other is less than 0.4%, suggesting that the QSO field is significantly overdense relative to the control field. These conclusions are supported by a comparison with a cosmological smoothed particle hydrodynamics simulation which includes the higher order clustering of galaxies. We find some evidence that the LBGs are distributed in a ring-like shape centered on the QSO with a radius of ∼3 Mpc. There are no candidate LBGs within 2 Mpc from the QSO, i.e., galaxies are clustered around the QSO but appear to avoid the very center. These results suggest that the QSO is embedded in an overdense region when defined on a sufficiently large scale (i.e., larger than an HST/ACS pointing). This suggests that the QSO was indeed born in a massive halo. The central deficit of galaxies may indicate that (1) the strong UV radiation from the QSO suppressed galaxy formation in
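The quoted Poisson probabilities can be checked in a few lines. A sketch assuming, as in the abstract, an expectation of four LBGs per field; the joint-probability statistic is our guess at the comparison being made, and the paper's exact statistic may differ slightly.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return exp(-lam) * lam ** k / factorial(k)

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(poisson_pmf(i, lam) for i in range(k))

lam = 4.0                      # expected LBG count per field
p_excess = poisson_sf(7, lam)  # chance of >= 7 objects when 4 are expected (~11%)
# Chance of exactly 7 in one field and exactly 1 in the other:
p_joint = poisson_pmf(7, lam) * poisson_pmf(1, lam)
```

`p_excess` reproduces the "∼10%" figure; `p_joint` lands near 0.4%, consistent with the order of magnitude quoted for the two-field comparison.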

  13. A dynamic response model for pressure sensors in continuum and high Knudsen number flows with large temperature gradients

    Science.gov (United States)

    Whitmore, Stephen A.; Petersen, Brian J.; Scott, David D.

    1996-01-01

    This paper develops a dynamic model for pressure sensors in continuum and rarefied flows with longitudinal temperature gradients. The model was developed from the unsteady Navier-Stokes momentum, energy, and continuity equations and was linearized using small perturbations. The energy equation was decoupled from momentum and continuity assuming a polytropic flow process. Rarefied flow conditions were accounted for using a slip flow boundary condition at the tubing wall. The equations were radially averaged and solved assuming gas properties remain constant along a small tubing element. This fundamental solution was used as a building block for arbitrary geometries where fluid properties may also vary longitudinally in the tube. The problem was solved recursively starting at the transducer and working upstream in the tube. Dynamic frequency response tests were performed for continuum flow conditions in the presence of temperature gradients. These tests validated the recursive formulation of the model. Model steady-state behavior was analyzed using the final value theorem. Tests were performed for rarefied flow conditions and compared to the model steady-state response to evaluate the regime of applicability. Model comparisons were excellent for Knudsen numbers up to 0.6. Beyond this point, molecular effects caused the model analyses to become inaccurate.
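A sketch of the regime bookkeeping behind the Knudsen-number bound reported above. The hard-sphere mean-free-path formula is standard kinetic theory, but the molecular diameter, tube size, and pressures used here are illustrative assumptions, not values from the paper.

```python
import math

def mean_free_path(T, p, d_mol=3.7e-10, k_B=1.380649e-23):
    """Hard-sphere mean free path: lambda = k_B*T / (sqrt(2)*pi*d_mol**2*p)."""
    return k_B * T / (math.sqrt(2) * math.pi * d_mol ** 2 * p)

def knudsen(lam, L):
    """Kn = lambda / L. Roughly: Kn << 0.01 continuum; larger Kn needs
    slip-flow or free-molecular treatment."""
    return lam / L

# Illustrative: air-like gas at 300 K in a 1 mm sensing tube.
lam_sea = mean_free_path(300.0, 101325.0)   # sea-level pressure, ~70 nm
kn_sea = knudsen(lam_sea, 1.0e-3)           # deep continuum regime
kn_high = knudsen(mean_free_path(300.0, 10.0), 1.0e-3)  # ~10 Pa, high altitude
```

At 10 Pa the illustrative tube is already near the Kn ≈ 0.6 limit where the abstract reports the slip-flow model breaking down.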

  14. Effect of the Hartmann number on phase separation controlled by magnetic field for binary mixture system with large component ratio

    Science.gov (United States)

    Heping, Wang; Xiaoguang, Li; Duyang, Zang; Rui, Hu; Xingguo, Geng

    2017-11-01

    This paper presents an exploration of phase separation in a magnetic field using a lattice Boltzmann method (LBM) coupled with magnetohydrodynamics (MHD). The left vertical wall was kept at a constant magnetic field. Simulations were conducted under a strong magnetic field to enhance phase separation and increase the size of the separated phases. The focus was on the effect of magnetic intensity, characterized by the Hartmann number (Ha), on the phase separation properties. The numerical investigation was carried out for different governing parameters, namely Ha and the component ratio of the mixed liquid. The morphological evolution of phase separation in different magnetic fields was demonstrated. The patterns showed that slanted elliptical phases were created by increasing Ha, due to the formation and growth of magnetic torque and force. The growth kinetics of magnetic phase separation were summarized in a plot of the spherically averaged structure factor and the ratio of the separated phases to the total system. The results indicate that increasing Ha increases the average size of the separated phases and accelerates the spinodal decomposition and domain growth stages. Especially for larger component ratios of the mixed phases, the degree of separation was also significantly improved by increasing the magnetic intensity. These numerical results provide guidance for setting the optimum conditions for phase separation induced by a magnetic field.
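The Hartmann number used as the governing parameter above has the standard definition Ha = B·L·√(σ/μ), the ratio of electromagnetic (Lorentz) to viscous forces. A minimal sketch with illustrative physical property values (a liquid-metal-like conductivity in a 1 cm cell), not the paper's lattice units:

```python
import math

def hartmann(B, L, sigma, mu):
    """Ha = B * L * sqrt(sigma / mu): electromagnetic vs. viscous forces.
    B: magnetic flux density (T), L: characteristic length (m),
    sigma: electrical conductivity (S/m), mu: dynamic viscosity (Pa*s)."""
    return B * L * math.sqrt(sigma / mu)

# Illustrative values for a conducting liquid in a 1 cm cell.
B, L = 1.0, 0.01
sigma, mu = 1.0e6, 1.0e-3
Ha = hartmann(B, L, sigma, mu)
```

In an LBM/MHD study, Ha is usually set directly as a dimensionless input and the lattice field strength is back-computed from it.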

  15. Technology interactions among low-carbon energy technologies: What can we learn from a large number of scenarios?

    International Nuclear Information System (INIS)

    McJeon, Haewon C.; Clarke, Leon; Kyle, Page; Wise, Marshall; Hackbarth, Andrew; Bryant, Benjamin P.; Lempert, Robert J.

    2011-01-01

    Advanced low-carbon energy technologies can substantially reduce the cost of stabilizing atmospheric carbon dioxide concentrations. Understanding the interactions between these technologies and their impact on the costs of stabilization can help inform energy policy decisions. Many previous studies have addressed this challenge by exploring a small number of representative scenarios that represent particular combinations of future technology developments. This paper uses a combinatorial approach in which scenarios are created for all combinations of the technology development assumptions that underlie a smaller, representative set of scenarios. We estimate stabilization costs for 768 runs of the Global Change Assessment Model (GCAM), based on 384 different combinations of assumptions about the future performance of technologies and two stabilization goals. Graphical depiction of the distribution of stabilization costs provides first-order insights about the full data set and individual technologies. We apply a formal scenario discovery method to obtain more nuanced insights about the combinations of technology assumptions most strongly associated with high-cost outcomes. Many of the fundamental insights from traditional representative scenario analysis still hold under this comprehensive combinatorial analysis. For example, the importance of carbon capture and storage (CCS) and the substitution effect among supply technologies are consistently demonstrated. The results also provide more clarity regarding insights not easily demonstrated through representative scenario analysis. For example, they show more clearly how certain supply technologies can provide a hedge against high stabilization costs, and that aggregate end-use efficiency improvements deliver relatively consistent stabilization cost reductions. 
Furthermore, the results indicate that a lack of CCS options combined with lower technological advances in the buildings sector or the transportation sector is
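The combinatorial scenario design described above (384 combinations of technology-development assumptions, each run under two stabilization goals, giving 768 GCAM runs) can be sketched with a Cartesian product. The dimension names and level counts below are hypothetical stand-ins chosen only to reproduce the run counts; the actual GCAM dimensions differ.

```python
from itertools import product

# Hypothetical technology dimensions and number of advancement levels each;
# seven two-level dimensions and one three-level dimension give 2**7 * 3 = 384.
dimensions = {
    "ccs": 2, "nuclear": 2, "wind_solar": 2, "bioenergy": 2,
    "buildings": 2, "transport": 2, "industry": 2,
    "fossil": 3,
}
goals = ["450ppm", "550ppm"]  # two stabilization goals

# Every combination of assumption levels, crossed with every goal.
combos = list(product(*(range(n) for n in dimensions.values())))
runs = [(goal, combo) for goal in goals for combo in combos]
```

This is the difference between representative-scenario analysis and the combinatorial approach: instead of hand-picking a few tuples from `combos`, every element is simulated and the cost distribution is examined afterwards.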

  16. Combined large field-of-view MRA and time-resolved MRA of the lower extremities: Impact of acquisition order on image quality

    International Nuclear Information System (INIS)

    Riffel, Philipp; Haneder, Stefan; Attenberger, Ulrike I.; Brade, Joachim; Schoenberg, Stefan O.; Michaely, Henrik J.

    2012-01-01

    Purpose: Different approaches exist for hybrid MRA of the calf station. So far, the order of the acquisition of the focused calf MRA and the large field-of-view MRA has not been scientifically evaluated. Therefore the aim of this study was to evaluate if the quality of the combined large field-of-view MRA (CTM MR angiography) and time-resolved MRA with stochastic interleaved trajectories (TWIST MRA) depends on the order of acquisition of the two contrast-enhanced studies. Methods: In this retrospective study, 40 consecutive patients (mean age 68.1 ± 8.7 years, 29 male/11 female) who had undergone an MR angiographic protocol that consisted of CTM-MRA (TR/TE, 2.4/1.0 ms; 21° flip angle; isotropic resolution 1.2 mm; gadolinium dose, 0.07 mmol/kg) and TWIST-MRA (TR/TE 2.8/1.1; 20° flip angle; isotropic resolution 1.1 mm; temporal resolution 5.5 s, gadolinium dose, 0.03 mmol/kg), were included. In the first group (group 1) TWIST-MRA of the calf station was performed 1–2 min after CTM-MRA. In the second group (group 2) CTM-MRA was performed 1–2 min after TWIST-MRA of the calf station. The image quality of CTM-MRA and TWIST-MRA was evaluated by two independent radiologists in consensus according to a 4-point Likert-like rating scale assessing overall image quality on a segmental basis. Venous overlay was assessed per examination. Results: In the CTM-MRA, 1360 segments were included in the assessment of image quality. CTM-MRA was diagnostic in 95% (1289/1360) of segments. There was a significant difference (p < 0.0001) between both groups with regard to the number of segments rated as excellent and moderate. The image quality was rated as excellent in group 1 in 80% (514/640 segments) and in group 2 in 67% (432/649), respectively (p < 0.0001). In contrast, the image quality was rated as moderate in the first group in 5% (33/640) and in the second group in 19% (121/649) respectively (p < 0.0001). The venous overlay was disturbing in 10% in group 1 and 20% in group

  17. SVA retrotransposon insertion-associated deletion represents a novel mutational mechanism underlying large genomic copy number changes with non-recurrent breakpoints

    Science.gov (United States)

    2014-01-01

    Background Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The mechanisms underlying these non-recurrent copy number changes have not yet been fully elucidated. Results We analyze large NF1 deletions with non-recurrent breakpoints as a model to investigate the full spectrum of causative mechanisms, and observe that they are mediated by various DNA double strand break repair mechanisms, as well as aberrant replication. Further, two of the 17 NF1 deletions with non-recurrent breakpoints, identified in unrelated patients, occur in association with the concomitant insertion of SINE/variable number of tandem repeats/Alu (SVA) retrotransposons at the deletion breakpoints. The respective breakpoints are refractory to analysis by standard breakpoint-spanning PCRs and are only identified by means of optimized PCR protocols designed to amplify across GC-rich sequences. The SVA elements are integrated within SUZ12P intron 8 in both patients, and were mediated by target-primed reverse transcription of SVA mRNA intermediates derived from retrotranspositionally active source elements. Both SVA insertions occurred during early postzygotic development and are uniquely associated with large deletions of 1 Mb and 867 kb, respectively, at the insertion sites. Conclusions Since active SVA elements are abundant in the human genome and the retrotranspositional activity of many SVA source elements is high, SVA insertion-associated large genomic deletions encompassing many hundreds of kilobases could constitute a novel and as yet under-appreciated mechanism underlying large-scale copy number changes in the human genome. PMID:24958239

  18. Research of large-amplitude waves evolution in the framework of shallow water equations and their implication for people's safety in extreme situations

    Science.gov (United States)

    Pelinovsky, Efim; Chaikovskaia, Natalya; Rodin, Artem

    2015-04-01

    The paper presents an analysis of the formation and evolution of a shock wave in shallow water, with no restriction on its amplitude, in the framework of the nonlinear shallow water equations. It is shown that for large-amplitude waves a new nonlinear effect appears: reflection of the incident wave from the shock front. These results are important for the assessment of coastal flooding by tsunami waves and storm surges. Very often the largest number of victims has been observed on coastlines where the wave arrived breaking. Many people, instead of running away, simply watched the movement of the "raging wall" and lost time. This fact highlights the importance of researching the problem of safety and optimal behavior of people in situations with increased risk. Usually there is uncertainty about the exact time when rogue waves will strike, which limits the ability of people to adjust their behavior psychologically to such stressful situations. This concerns specialists working in aviation and marine service, as well as adults, young people and children living in the coastal zone. Rogue wave research is very important, and it demands cooperation between different scientists - mathematicians and physicists, as well as sociologists and psychologists - because the final goal of all these efforts is to minimize the harm brought by rogue waves to humanity.
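Two quantities central to the shallow-water framework above follow directly from depth: the long-wave speed √(gh) and the bore (shock-front) speed from the Rankine-Hugoniot jump conditions. A minimal sketch with illustrative depths, not the paper's case studies:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def wave_speed(h):
    """Linear long-wave speed c = sqrt(g*h) for water of depth h (m)."""
    return math.sqrt(g * h)

def bore_speed(h0, h1):
    """Front speed of a bore of depth h1 advancing into still water of
    depth h0, from the shallow-water Rankine-Hugoniot conditions:
    U = sqrt(g * h1 * (h0 + h1) / (2 * h0))."""
    return math.sqrt(g * h1 * (h0 + h1) / (2.0 * h0))

# Illustrative: tsunami speed over 4 km ocean depth, and a 3 m bore
# running into 1 m of still coastal water.
c_deep = wave_speed(4000.0)
U = bore_speed(1.0, 3.0)
```

The bore always outruns the long-wave speed of the undisturbed water ahead of it, which is one reason a breaking "raging wall" leaves so little time to react.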

  19. Extreme cosmos

    CERN Document Server

    Gaensler, Bryan

    2011-01-01

    The universe is all about extremes. Space has a temperature 270°C below freezing. Stars die in catastrophic supernova explosions a billion times brighter than the Sun. A black hole can generate 10 million trillion volts of electricity. And hypergiants are stars 2 billion kilometres across, larger than the orbit of Jupiter. Extreme Cosmos provides a stunning new view of the way the Universe works, seen through the lens of extremes: the fastest, hottest, heaviest, brightest, oldest, densest and even the loudest. This is an astronomy book that not only offers amazing facts and figures but also re

  20. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    International Nuclear Information System (INIS)

    Hasegawa, K.; Lim, C.S.; Ogure, K.

    2003-01-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario

  1. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    Science.gov (United States)

    Hasegawa, K.; Lim, C. S.; Ogure, K.

    2003-09-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.

  2. Escape from washing out of baryon number in a two-zero-texture general Zee model compatible with the large mixing angle MSW solution

    OpenAIRE

    Hasegawa, K.; Lim, C. S.; Ogure, K.

    2003-01-01

    We propose a two-zero-texture general Zee model, compatible with the large mixing angle Mikheyev-Smirnov-Wolfenstein solution. The washing out of the baryon number does not occur in this model for an adequate parameter range. We check the consistency of a model with the constraints coming from flavor changing neutral current processes, the recent cosmic microwave background observation, and the Z-burst scenario.

  3. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory

    Science.gov (United States)

    Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.

    2017-09-01

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort relative to most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× fewer matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.
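    The common kernel that block eigensolvers like the one above rely on is blocked Rayleigh-Ritz extraction, where one blocked matrix product replaces many single matrix-vector multiplies. The sketch below is NOT the paper's block J-D algorithm (no correction equations or locking); it only illustrates that shared kernel on a toy diagonal matrix with a known spectrum.

```python
import numpy as np

# Minimal blocked Rayleigh-Ritz extraction: orthonormalize a block, form the
# small Rayleigh quotient with ONE blocked matvec, and solve the dense problem.
def block_ritz(A, V):
    Q, _ = np.linalg.qr(V)           # orthonormalize the block in one pass
    H = Q.T @ (A @ Q)                # one blocked matvec instead of k single ones
    theta, S = np.linalg.eigh(H)     # small k-by-k dense eigenproblem
    return theta, Q @ S              # Ritz values (ascending) and Ritz vectors

# toy usage: recover the 4 largest eigenpairs of a diagonal test matrix via
# crude block subspace iteration (a stand-in for the real correction steps)
n, k = 30, 4
A = np.diag(0.5 ** np.arange(n))     # known eigenvalues 1, 0.5, 0.25, ...
V = np.random.default_rng(0).standard_normal((n, k))
for _ in range(40):
    theta, V = block_ritz(A, V)
    V = A @ V                        # block power step toward the dominant subspace
theta, X = block_ritz(A, V)
```

    Because the fourth and fifth eigenvalues are separated by a factor of 2, forty power steps suffice for the Ritz values to match the top of the spectrum to high accuracy.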

  4. How much can the number of jabiru stork (Ciconiidae) nests vary due to change of flood extension in a large Neotropical floodplain?

    Directory of Open Access Journals (Sweden)

    Guilherme Mourão

    2010-10-01

    Full Text Available The jabiru stork, Jabiru mycteria (Lichtenstein, 1819), a large, long-legged wading bird occurring in lowland wetlands from southern Mexico to northern Argentina, is considered endangered in a large portion of its distribution range. We conducted aerial surveys to estimate the number of jabiru active nests in the Brazilian Pantanal (140,000 km²) in September of 1991-1993, 1998, 2000-2002, and 2004. Corrected densities of active nests were regressed against the annual hydrologic index (AHI), an index of flood extension in the Pantanal based on the water level of the Paraguay River. Annual nest density was a non-linear function of the AHI, modeled by the equation 6.5 × 10^-8 · AHI^1.99 (corrected r² = 0.72, n = 7). We applied this model to the AHI between 1900 and 2004. The results indicate that the number of jabiru nests may have varied from about 220 in 1971 to more than 23,000 in the nesting season of 1921, and the estimates for our study period (1991 to 2004) averaged about 12,400 nests. Our model indicates that the inter-annual variations in flooding extent can determine dramatic changes in the number of active jabiru nests. Since the jabiru stork responds negatively to drier conditions in the Pantanal, direct human-induced changes in the hydrological patterns, as well as the effects of global climate change, may strongly jeopardize the population in the region.
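    The fitted power law quoted in the abstract is simple enough to sketch directly. The AHI values below are made up for illustration; only the coefficients 6.5 × 10^-8 and 1.99 come from the abstract.

```python
# Published regression (as quoted in the abstract): nests = 6.5e-8 * AHI^1.99.
def jabiru_nests(ahi: float) -> float:
    """Predicted number of active jabiru nests for an annual hydrologic index."""
    return 6.5e-8 * ahi ** 1.99

# with an exponent of ~2, doubling the flood index nearly quadruples the
# predicted number of nests (the AHI values here are illustrative)
low, high = jabiru_nests(1.0e5), jabiru_nests(2.0e5)
```

    The near-quadratic exponent is what makes the model so sensitive to inter-annual flood variation, consistent with the 220-vs-23,000 range reported above.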

  5. The waviness of the extratropical jet and daily weather extremes

    Science.gov (United States)

    Röthlisberger, Matthias; Martius, Olivia; Pfahl, Stephan

    2016-04-01

    In recent years the Northern Hemisphere mid-latitudes have experienced a large number of weather extremes with substantial socio-economic impact, such as the European and Russian heat waves in 2003 and 2010, severe winter floods in the United Kingdom in 2013/2014 and devastating winter storms such as Lothar (1999) and Xynthia (2010) in Central Europe. These have triggered an engaged debate within the scientific community on the role of human-induced climate change in the occurrence of such extremes. A key element of this debate is the hypothesis that the waviness of the extratropical jet is linked to the occurrence of weather extremes, with a wavier jet stream favouring more extremes. Previous work on this topic is expanded in this study by analyzing the linkage between a regional measure of jet waviness and daily temperature, precipitation and wind gust extremes. We show that indeed such a linkage exists in many regions of the world; however, this waviness-extremes linkage varies spatially in strength and sign. Locally, it is strong only where the relevant weather systems, in which the extremes occur, are affected by the jet waviness. Its sign depends on how the frequency of occurrence of the relevant weather systems is correlated with the occurrence of high and low jet waviness. These results go beyond previous studies by noting that a decrease in waviness could also be associated with an enhanced number of some weather extremes, especially wind gust and precipitation extremes over western Europe.

  6. Multiple-relaxation-time lattice Boltzmann model for incompressible miscible flow with large viscosity ratio and high Péclet number

    Science.gov (United States)

    Meng, Xuhui; Guo, Zhaoli

    2015-10-01

    In this paper, a lattice Boltzmann model with a multiple-relaxation-time (MRT) collision operator is proposed for incompressible miscible flow with a large viscosity ratio as well as a high Péclet number. The equilibria in the present model are motivated by the lattice kinetic scheme previously developed by Inamuro et al. [Philos. Trans. R. Soc. London, Ser. A 360, 477 (2002), 10.1098/rsta.2001.0942]. The fluid viscosity and diffusion coefficient depend on both the corresponding relaxation times and additional adjustable parameters in this model. As a result, the corresponding relaxation times can be adjusted in proper ranges to enhance the performance of the model. Numerical validations of Poiseuille flow and a diffusion-reaction problem demonstrate that the proposed model has second-order accuracy in space. Thereafter, the model is used to simulate flow through a porous medium, and the results show that the proposed model has the advantage of yielding a viscosity-independent permeability, which makes it a robust method for simulating flow in porous media. Finally, a set of simulations is conducted on viscous miscible displacement between two parallel plates. The results reveal that the present model can be used to simulate, to a high level of accuracy, flows with large viscosity ratios and/or high Péclet numbers. Moreover, the present model is shown to provide superior stability in the limit of high kinematic viscosity. In summary, the numerical results indicate that the present lattice Boltzmann model is an ideal numerical tool for simulating flow with a large viscosity ratio and/or a high Péclet number.
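    The collide-and-stream structure underlying any lattice Boltzmann scheme can be shown in a few lines. The toy below is a single-relaxation-time (BGK) D1Q3 solver for pure diffusion on a periodic line, a deliberately minimal stand-in for the paper's MRT advection-diffusion model; the lattice size, relaxation time and initial pulse are all invented for illustration.

```python
import numpy as np

# Toy D1Q3 BGK lattice Boltzmann solver for pure diffusion (periodic domain).
# Diffusivity is cs^2*(tau - 1/2) with cs^2 = 1/3 on this lattice.
def lbm_diffusion(c0, tau=0.8, steps=100):
    w = np.array([4/6, 1/6, 1/6])            # weights: rest, +1, -1 directions
    f = w[:, None] * c0[None, :]             # initialize at equilibrium
    for _ in range(steps):
        c = f.sum(axis=0)                    # concentration = zeroth moment
        feq = w[:, None] * c[None, :]        # equilibrium distribution
        f += (feq - f) / tau                 # BGK collision (relaxation time tau)
        f[1] = np.roll(f[1], 1)              # stream right-movers
        f[2] = np.roll(f[2], -1)             # stream left-movers
    return f.sum(axis=0)

# a point pulse spreads out while total mass is conserved exactly
c0 = np.zeros(64)
c0[32] = 1.0
c = lbm_diffusion(c0)
```

    An MRT operator as in the paper replaces the single scalar 1/tau with a relaxation matrix acting in moment space, which is what decouples viscosity and diffusivity tuning; the streaming step is unchanged.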

  7. Influence of Extrinsic Information Scaling Coefficient on Double-Iterative Decoding Algorithm for Space-Time Turbo Codes with Large Number of Antennas

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2011-02-01

    Full Text Available This paper analyzes the extrinsic information scaling coefficient influence on double-iterative decoding algorithm for space-time turbo codes with large number of antennas. The max-log-APP algorithm is used, scaling both the extrinsic information in the turbo decoder and the one used at the input of the interference-canceling block. Scaling coefficients of 0.7 or 0.75 lead to a 0.5 dB coding gain compared to the no-scaling case, for one or more iterations to cancel the spatial interferences.
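    The operation the abstract studies is just a multiplicative damping of the extrinsic log-likelihood ratios before they are fed back. The sketch below uses the 0.75 coefficient quoted above; the sample LLR values are made up.

```python
# Max-log-APP decoding overestimates extrinsic reliability; damping the
# extrinsic LLRs by ~0.7-0.75 (coefficients from the abstract) recovers
# much of the loss versus exact log-APP decoding.
def scale_extrinsic(llr_ext, coeff=0.75):
    return [coeff * v for v in llr_ext]

# illustrative extrinsic LLRs from one decoder half-iteration
damped = scale_extrinsic([2.0, -1.2, 0.4])
```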

  8. Dogs Have the Most Neurons, Though Not the Largest Brain: Trade-Off between Body Mass and Number of Neurons in the Cerebral Cortex of Large Carnivoran Species

    Directory of Open Access Journals (Sweden)

    Débora Jardim-Messeder

    2017-12-01

    Full Text Available Carnivorans are a diverse group of mammals that includes carnivorous, omnivorous and herbivorous, domesticated and wild species, with a large range of brain sizes. Carnivory is one of several factors expected to be cognitively demanding for carnivorans due to a requirement to outsmart larger prey. On the other hand, large carnivoran species have high hunting costs and unreliable feeding patterns, which, given the high metabolic cost of brain neurons, might put them at risk of metabolic constraints regarding how many brain neurons they can afford, especially in the cerebral cortex. For a given cortical size, do carnivoran species have more cortical neurons than the herbivorous species they prey upon? We find they do not; carnivorans (cat, mongoose, dog, hyena, lion) share with non-primates, including artiodactyls (the typical prey of large carnivorans), roughly the same relationship between cortical mass and number of neurons, which suggests that carnivorans are subject to the same evolutionary scaling rules as other non-primate clades. However, there are a few important exceptions. Carnivorans stand out in that the usual relationship between larger body, larger cortical mass and larger number of cortical neurons only applies to small and medium-sized species, and not beyond dogs: we find that the golden retriever dog has more cortical neurons than the striped hyena, African lion and even brown bear, even though the latter species have up to three times larger cortices than dogs. Remarkably, the brown bear cerebral cortex, the largest examined, only has as many neurons as the ten times smaller cat cerebral cortex, although it does have the expected ten times as many non-neuronal cells in the cerebral cortex compared to the cat. We also find that raccoons have dog-like numbers of neurons in their cat-sized brain, which makes them comparable to primates in neuronal density. Comparison of domestic and wild species suggests that the neuronal

  9. Enhancement of phase space density by increasing trap anisotropy in a magneto-optical trap with a large number of atoms

    International Nuclear Information System (INIS)

    Vengalattore, M.; Conroy, R.S.; Prentiss, M.G.

    2004-01-01

    The phase space density of dense, cylindrical clouds of atoms in a 2D magneto-optic trap is investigated. For a large number of trapped atoms (>10^8), the density of a spherical cloud is limited by photon reabsorption. However, as the atom cloud is deformed to reduce the radial optical density, the temperature of the atoms decreases due to the suppression of multiple scattering, leading to an increase in the phase space density. A density of 2×10^-4 has been achieved in a magneto-optic trap containing 2×10^8 atoms

  10. Timoides agassizii Bigelow, 1904, little-known hydromedusa (Cnidaria), appears briefly in large numbers off Oman, March 2011, with additional notes about species of the genus Timoides.

    Science.gov (United States)

    Purushothaman, Jasmine; Kharusi, Lubna Al; Mills, Claudia E; Ghielani, Hamed; Marzouki, Mohammad Al

    2013-12-11

    A bloom of the hydromedusan jellyfish, Timoides agassizii, occurred in February 2011 off the coast of Sohar, Al Batinah, Sultanate of Oman, in the Gulf of Oman. This species was first observed in 1902 in great numbers off Haddummati Atoll in the Maldive Islands in the Indian Ocean and has rarely been seen since. The species appeared briefly in large numbers off Oman in 2011, and subsequent examination of our 2009 samples of zooplankton from Sohar revealed that it was also present in low numbers (two collected) in one sample in 2009; these are the first records in the Indian Ocean north of the Maldives. Medusae collected off Oman were almost identical to those recorded previously from the Maldive Islands, Papua New Guinea, the Marshall Islands, Guam, the South China Sea, and Okinawa. T. agassizii is a species that likely lives for several months. It was present in our plankton samples together with large numbers of the oceanic siphonophore Physalia physalis only during a single month's samples, suggesting that the temporary bloom off Oman was likely due to the arrival of mature, open ocean medusae into nearshore waters. We see no evidence that T. agassizii has established a new population along Oman, since if so, it would likely have been present in more than one sample period. We are unable to deduce further details of the life cycle of this species from blooms of many mature individuals nearshore, about a century apart. Examination of a single damaged T. agassizii medusa from Guam calls into question the existence of its congener, T. latistyla, known only from a single specimen.

  11. SU-E-T-230: Creating a Large Number of Focused Beams with Variable Patient Head Tilt to Improve Dose Fall-Off for Brain Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Chiu, J; Ma, L [Department of Radiation Oncology, University of California San Francisco School of Medicine, San Francisco, CA (United States)

    2015-06-15

    Purpose: To develop a treatment delivery and planning strategy by increasing the number of beams to minimize dose to brain tissue surrounding a target, while maximizing dose coverage to the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying tilt angles of the patient head, while maintaining original isocenter and the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90, and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles varying arbitrarily over the range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite increasing the number of beams by up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min.). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt shows improved dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to decrease target volume.

  12. SU-E-T-230: Creating a Large Number of Focused Beams with Variable Patient Head Tilt to Improve Dose Fall-Off for Brain Radiosurgery

    International Nuclear Information System (INIS)

    Chiu, J; Ma, L

    2015-01-01

    Purpose: To develop a treatment delivery and planning strategy by increasing the number of beams to minimize dose to brain tissue surrounding a target, while maximizing dose coverage to the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased in treatment plans by varying tilt angles of the patient head, while maintaining original isocenter and the beam positions in the x-, y- and z-axes, collimator size, and beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90, and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles varying arbitrarily over the range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite increasing the number of beams by up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min.). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in normal brain irradiated volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt shows improved dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to decrease target volume

  13. Global repeat discovery and estimation of genomic copy number in a large, complex genome using a high-throughput 454 sequence survey

    Directory of Open Access Journals (Sweden)

    Varala Kranthi

    2007-05-01

    Full Text Available Abstract Background Extensive computational and database tools are available to mine genomic and genetic databases for model organisms, but little genomic data is available for many species of ecological or agricultural significance, especially those with large genomes. Genome surveys using conventional sequencing techniques are powerful, particularly for detecting sequences present in many copies per genome. However these methods are time-consuming and have potential drawbacks. High throughput 454 sequencing provides an alternative method by which much information can be gained quickly and cheaply from high-coverage surveys of genomic DNA. Results We sequenced 78 million base-pairs of randomly sheared soybean DNA which passed our quality criteria. Computational analysis of the survey sequences provided global information on the abundant repetitive sequences in soybean. The sequence was used to determine the copy number across regions of large genomic clones or contigs and discover higher-order structures within satellite repeats. We have created an annotated, online database of sequences present in multiple copies in the soybean genome. The low bias of pyrosequencing against repeat sequences is demonstrated by the overall composition of the survey data, which matches well with past estimates of repetitive DNA content obtained by DNA re-association kinetics (Cot analysis). Conclusion This approach provides a potential aid to conventional or shotgun genome assembly, by allowing rapid assessment of copy number in any clone or clone-end sequence. In addition, we show that partial sequencing can provide access to partial protein-coding sequences.
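    The core of such a survey-based copy-number estimate is a depth ratio: reads per base-pair mapping to a clone versus the genome-wide average. The function and the sample figures below are illustrative, not taken from the paper's pipeline.

```python
# Depth-ratio estimate of genomic copy number from a random sequence survey.
# A sequence present in N copies should attract ~N times the average read depth.
def copy_number(clone_hits, clone_len_bp, total_reads, genome_size_bp):
    clone_depth = clone_hits / clone_len_bp        # reads per bp on the clone
    genome_depth = total_reads / genome_size_bp    # genome-wide reads per bp
    return clone_depth / genome_depth

# e.g. a 10 kb clone hit by 200 of 1M survey reads in a 1 Gb genome
cn = copy_number(200, 10_000, 1_000_000, 1_000_000_000)
```

    In practice the survey coverage must be high enough that sampling noise on `clone_hits` is small relative to the copy-number differences of interest.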

  14. Use of Two-Body Correlated Basis Functions with van der Waals Interaction to Study the Shape-Independent Approximation for a Large Number of Trapped Interacting Bosons

    Science.gov (United States)

    Lekala, M. L.; Chakrabarti, B.; Das, T. K.; Rampho, G. J.; Sofianos, S. A.; Adam, R. M.; Haldar, S. K.

    2017-05-01

    We study the ground-state and the low-lying excitations of a trapped Bose gas in an isotropic harmonic potential for very small (~3) to very large (~10^7) particle numbers. We use the two-body correlated basis functions and the shape-dependent van der Waals interaction in our many-body calculations. We present an exhaustive study of the effect of inter-atomic correlations and the accuracy of the mean-field equations considering a wide range of particle numbers. We calculate the ground-state energy and the one-body density for different values of the van der Waals parameter C6. We compare our results with those of the modified Gross-Pitaevskii results, the correlated Hartree hypernetted-chain equations (which also utilize the two-body correlated basis functions), as well as of the diffusion Monte Carlo for hard sphere interactions. We observe the effect of the attractive tail of the van der Waals potential in the calculations of the one-body density over the truly repulsive zero-range potential as used in the Gross-Pitaevskii equation and discuss the finite-size effects. We also present the low-lying collective excitations which are well described by a hydrodynamic model in the large particle limit.

  15. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For a Compton camera, especially one with a large number of readout channels, image reconstruction presents a major challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
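    The LM-OSEM update named above can be sketched compactly: each detected event contributes one row of the system matrix, and the image is updated multiplicatively subset by subset. The dense toy matrix, voxel counts and subset scaling below are illustrative stand-ins for the real (sparse, enormous) Compton-camera model, not the VIP implementation.

```python
import numpy as np

# Minimal list-mode OSEM sketch. A_events has one row per detected event;
# sens is the voxel sensitivity (expected detections per unit activity).
def lm_osem(A_events, sens, n_subsets=1, n_iter=4):
    n_events, n_vox = A_events.shape
    lam = np.ones(n_vox)                             # flat initial image
    subsets = np.array_split(np.arange(n_events), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            Ai = A_events[idx]
            proj = Ai @ lam                          # expected count per event
            back = Ai.T @ (1.0 / np.maximum(proj, 1e-12))
            # each subset holds ~1/n_subsets of the events, hence the scaling
            # (a common heuristic, not the only normalization choice)
            lam = lam * back * n_subsets / sens
    return lam

# toy problem: 5 events detected from voxel 0, 3 events from voxel 1
A = np.array([[1.0, 0.0]] * 5 + [[0.0, 1.0]] * 3)
lam = lm_osem(A, sens=np.ones(2))
```

    On this trivially separable system the update converges in a single pass to the event counts per voxel.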

  16. Comparative efficacy of tulathromycin versus a combination of florfenicol-oxytetracycline in the treatment of undifferentiated respiratory disease in large numbers of sheep

    Directory of Open Access Journals (Sweden)

    Mohsen Champour

    2015-09-01

    Full Text Available The objective of this study was to compare the efficacy of tulathromycin (TUL) with a combination of florfenicol (FFC) and long-acting oxytetracycline (LAOTC) in the treatment of naturally occurring undifferentiated respiratory diseases in large numbers of sheep. In this study, seven natural outbreaks of sheep pneumonia in Garmsar, Iran were considered. From these outbreaks, 400 sheep exhibiting the signs of respiratory diseases were selected, and the sheep were randomly divided into two equal groups. The first group was treated with a single injection of TUL (dosed at 2.5 mg/kg body weight), and the second group was treated with concurrent injections of FFC (dosed at 40 mg/kg bwt) and LAOTC (dosed at 20 mg/kg bwt). In the first group, 186 (93%) sheep were found to be cured 5 days after the injection, and 14 (7%) sheep needed further treatment, of which 6 (3%) were cured, and 8 (4%) died. In the second group, 172 (86%) sheep were cured after the injections, but 28 (14%) sheep needed further treatment, of which 10 (5%) were cured, and 18 (9%) died. This study revealed that TUL was more efficacious than the combined treatment using FFC and LAOTC. As the first report of its kind, this field trial describes the successful treatment of undifferentiated respiratory diseases in large numbers of sheep. Thus, TUL can be used for the treatment of undifferentiated respiratory diseases in sheep. [J Adv Vet Anim Res 2015; 2(3): 279-284]

  17. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System: Outage-Limited Scenario

    KAUST Repository

    Makki, Behrooz

    2016-03-22

    This paper investigates the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. Our results are obtained for different fading conditions, and the effect of the power amplifiers' efficiency and of the feedback error probability on the performance of the MIMO-HARQ systems is analyzed. Then, we use some recent results on the achievable rates of finite-blocklength codes to analyze the effect of the codeword length on system performance. Moreover, we derive closed-form expressions for the asymptotic performance of the MIMO-HARQ systems as the number of antennas increases. Our analytical and numerical results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 1972-2012 IEEE.
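    The central quantity above, the minimum antenna count meeting an outage constraint, can be estimated by Monte Carlo for the simplest open-loop Rayleigh-fading case without HARQ. This is only a sketch of that quantity, not the paper's closed-form analysis; the SNR, target rate and trial counts are made up.

```python
import numpy as np

# Outage probability of an n x n open-loop Rayleigh MIMO link:
# P[ log2 det(I + (SNR/n) H H^H) < rate ], estimated by Monte Carlo.
def outage_prob(n, snr, rate, trials=2000, seed=1):
    rng = np.random.default_rng(seed)
    fails = 0
    for _ in range(trials):
        H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        cap = np.log2(np.linalg.det(np.eye(n) + (snr / n) * H @ H.conj().T).real)
        fails += cap < rate
    return fails / trials

# smallest symmetric antenna count whose estimated outage meets the target
def min_antennas(target, snr, rate, max_n=6):
    for n in range(1, max_n + 1):
        if outage_prob(n, snr, rate) <= target:
            return n
    return None

p1 = outage_prob(1, 10.0, 3.0)   # SISO: roughly 50% outage at this rate
p4 = outage_prob(4, 10.0, 3.0)   # 4x4: spatial diversity makes outage rare
```

    The steep drop from `p1` to `p4` is the diversity effect that lets "relatively few" antennas satisfy stringent outage targets, as the abstract concludes.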

  18. Lower extremity injury in female basketball players is related to a large difference in peak eversion torque between barefoot and shod conditions

    Directory of Open Access Journals (Sweden)

    Jennifer M. Yentes

    2014-09-01

    Conclusion: It is possible that a large discrepancy between strength in barefoot and shod conditions can predispose an athlete to injury. Narrowing the difference in peak eversion torque between barefoot and shod could decrease propensity to injury. Future work should investigate the effect of restoration of muscular strength during barefoot and shod exercise on injury rates.

  19. Long-term changes in nutrients and mussel stocks are related to numbers of breeding eiders Somateria mollissima at a large Baltic colony.

    Directory of Open Access Journals (Sweden)

    Karsten Laursen

    Full Text Available BACKGROUND: The Baltic/Wadden Sea eider Somateria mollissima flyway population is decreasing, and this trend is also reflected in the large eider colony at Christiansø situated in the Baltic Sea. This colony showed a 15-fold increase from 1925 until the mid-1990s, followed by a rapid decline in recent years, although the causes of this trend remain unknown. Most birds from the colony winter in the Wadden Sea, from which environmental data and information on the size of the stock of their main diet, the mussel Mytilus edulis, exist. We hypothesised that changes in nutrients and water temperature in the Wadden Sea had an effect on the ecosystem affecting the size of mussel stocks, the principal food item for eiders, thereby influencing the number of breeding eiders in the Christiansø colony. METHODOLOGY/PRINCIPAL FINDING: A positive relationship between the amount of fertilizer used by farmers and the concentration of phosphorus in the Wadden Sea (with a time lag of one year) allowed analysis of the predictions concerning effects of nutrients for the period 1925-2010. There were (1) increasing amounts of fertilizer used in agriculture, which increased the amount of nutrients in the marine environment, thereby increasing the mussel stocks in the Wadden Sea. (2) The number of eiders at Christiansø increased when the amount of fertilizer increased. Finally, (3) the number of eiders in the colony at Christiansø increased with the amount of mussel stocks in the Wadden Sea. CONCLUSIONS/SIGNIFICANCE: The trend in the number of eiders at Christiansø is representative of the entire flyway population, and since nutrient reduction in the marine environment occurs in most parts of Northwest Europe, we hypothesize that this environmental candidate parameter is involved in the overall regulation of the Baltic/Wadden Sea eider population during recent decades.

  20. AN EXTREME ANALOGUE OF ϵ AURIGAE: AN M-GIANT ECLIPSED EVERY 69 YEARS BY A LARGE OPAQUE DISK SURROUNDING A SMALL HOT SOURCE

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, Joseph E.; Stassun, Keivan G.; Lund, Michael B.; Conroy, Kyle E. [Department of Physics and Astronomy, Vanderbilt University, 6301 Stevenson Center, Nashville, TN 37235 (United States); Siverd, Robert J. [Las Cumbres Observatory Global Telescope Network, 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Pepper, Joshua [Department of Physics, Lehigh University, 16 Memorial Drive East, Bethlehem, PA 18015 (United States); Tang, Sumin [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Kafka, Stella [American Association of Variable Star Observers, 49 Bay State Road, Cambridge, MA 02138 (United States); Gaudi, B. Scott; Stevens, Daniel J.; Kochanek, Christopher S. [Department of Astronomy, The Ohio State University, Columbus, OH 43210 (United States); Beatty, Thomas G. [Department of Astronomy and Astrophysics, The Pennsylvania State University, 525 Davey Lab, University Park, PA 16802 (United States); Shappee, Benjamin J. [Carnegie Observatories, 813 Santa Barbara Street, Pasadena, CA 91101 (United States)

    2016-05-01

    status. In any case, this system is poised to become an exemplar of a very rare class of systems, even more extreme in several respects than the well studied archetype ϵ Aurigae.

  1. A large scale survey reveals that chromosomal copy-number alterations significantly affect gene modules involved in cancer initiation and progression

    Directory of Open Access Journals (Sweden)

    Cigudosa Juan C

    2011-05-01

    Full Text Available Abstract Background Recent observations point towards the existence of a large number of neighborhoods composed of functionally-related gene modules that lie together in the genome. This local component in the distribution of functionality across chromosomes probably shapes chromosomal architecture itself by limiting the ways in which genes can be arranged and distributed across the genome. As a direct consequence, it is presumable that diseases such as cancer, which harbor DNA copy number alterations (CNAs), will have a symptomatology strongly dependent on modules of functionally-related genes rather than on a unique "important" gene. Methods We carried out a systematic analysis of more than 140,000 observations of CNAs in cancers and searched for enrichment of gene functional modules associated with high frequencies of loss or gain. Results The analysis of CNAs in cancers clearly demonstrates the existence of a significant pattern of loss of gene modules functionally related to cancer initiation and progression, along with the amplification of modules of genes related to unspecific defense against xenobiotics (probably chemotherapeutic agents). By extending this analysis to an Array-CGH dataset (glioblastomas from The Cancer Genome Atlas), we demonstrate the validity of this approach for investigating the functional impact of CNAs. Conclusions The presented results have promising clinical and therapeutic implications. Our findings also point directly to the necessity of adopting a function-centric, rather than a gene-centric, view in the understanding of phenotypes or diseases harboring CNAs.

  2. Three-Dimensional Interaction of a Large Number of Dense DEP Particles on a Plane Perpendicular to an AC Electrical Field

    Directory of Open Access Journals (Sweden)

    Chuanchuan Xie

    2017-01-01

    Full Text Available The interaction of dielectrophoresis (DEP) particles in an electric field has been observed in many experiments and is known as the "particle chains phenomenon". However, studies in 3D models (spherical particles) are rarely reported due to their complexity and significant computational cost. In this paper, we employed the iterative dipole moment (IDM) method to study the 3D interaction of a large number of dense DEP particles randomly distributed on a plane perpendicular to a uniform alternating current (AC) electric field in a bounded or unbounded space. The numerical results indicated that the particles cannot move out of the initial plane. Similar particles (either all positive or all negative DEP particles) always repelled each other and did not form a chain. Dissimilar particles (a mixture of positive and negative DEP particles) always attracted each other and formed particle chains consisting of alternately arranged positive and negative DEP particles. The particle chain patterns can vary widely depending on the initial particle distribution, the electric properties of the particles/fluid, the particle sizes and the number of particles. It is also found that the particle chain patterns can be effectively manipulated by tuning the frequency of the AC field, and an almost uniform distribution of particles in a bounded plane chip can be achieved when all of the particles are similar, which may have potential applications in particle manipulation in microfluidics.

  3. CD3+/CD16+CD56+ cell numbers in peripheral blood are correlated with higher tumor burden in patients with diffuse large B-cell lymphoma

    Directory of Open Access Journals (Sweden)

    Anna Twardosz

    2011-04-01

    Full Text Available Diffuse large B-cell lymphoma is the commonest histological type of malignant lymphoma, and remains incurable in many cases. Developing more efficient immunotherapy strategies will require better understanding of the disorders of immune responses in cancer patients. NKT (natural killer-like T) cells were originally described as a unique population of T cells with the co-expression of NK cell markers. Apart from their role in protecting against microbial pathogens and controlling autoimmune diseases, NKT cells have recently been revealed as one of the key players in the immune responses against tumors. The objective of this study was to evaluate the frequency of CD3+/CD16+CD56+ cells in the peripheral blood of 28 diffuse large B-cell lymphoma (DLBCL) patients in correlation with clinical and laboratory parameters. Median percentages of CD3+/CD16+CD56+ cells were significantly lower in patients with DLBCL compared to healthy donors (7.37% vs. 9.01%, p = 0.01; 4.60% vs. 5.81%, p = 0.03), although there were no differences in absolute counts. The frequency and the absolute numbers of CD3+/CD16+CD56+ cells were lower in advanced clinical stages than in earlier ones. The median percentage of CD3+/CD16+CD56+ cells in patients in Ann Arbor stages 1–2 was 5.55% vs. 3.15% in stages 3–4 (p = 0.02), with median absolute counts respectively 0.26 G/L vs. 0.41 G/L (p = 0.02). The percentage and absolute numbers of CD3+/CD16+CD56+ cells were significantly higher in DLBCL patients without B-symptoms compared to the patients with B-symptoms (5.51% vs. 2.46%, p = 0.04; 0.21 G/L vs. 0.44 G/L, p = 0.04). The percentage of CD3+/CD16+CD56+ cells correlated adversely with serum lactate dehydrogenase (R = –445; p < 0.05), which might influence NKT count. These figures suggest a relationship between higher tumor burden and more aggressive disease and decreased NKT numbers. But it remains to be explained whether low NKT cell counts in the peripheral blood of patients with DLBCL are the result

  4. EXTREMELY LARGE AND HOT MULTILAYER KEPLERIAN DISK AROUND THE O-TYPE PROTOSTAR W51N: THE PRECURSORS OF THE HCH II REGIONS?

    International Nuclear Information System (INIS)

    Zapata, Luis A.; Tang, Ya-Wen; Leurini, Silvia

    2010-01-01

    We present sensitive high angular resolution (0.″57-0.″78) SO, SO2, CO, C2H5OH, HC3N, and HCOCH2OH line observations at millimeter and submillimeter wavelengths of the young O-type protostar W51 North made with the Submillimeter Array. We report the presence of a large (about 8000 AU) and hot molecular circumstellar disk around this object, which connects the inner dusty disk with the molecular ring or toroid reported recently and confirms the existence of a single bipolar outflow emanating from this object. The molecular emission from the large disk is observed in layers, with the transitions characterized by high excitation temperatures in their lower energy states (up to 1512 K) being concentrated closer to the central massive protostar. The molecular emission from those transitions with low or moderate excitation temperatures is found in the outermost parts of the disk and exhibits an inner cavity with an angular size of around 0.″7. We modeled all lines with a local thermodynamic equilibrium (LTE) synthetic spectrum. A detailed study of the kinematics of the molecular gas, together with an LTE model of a circumstellar disk, shows that the innermost parts of the disk are Keplerian with an additional contracting velocity component. The emission of HCOCH2OH reveals the possible presence of a warm 'companion' located to the northeast of the disk; however, its nature is unclear. The emission of SO and SO2 is observed in the circumstellar disk as well as in the outflow. We suggest that the massive protostar W51 North appears to be in a phase before the presence of a hypercompact or an ultracompact H II (HC/UC H II) region, and propose a possible sequence for the formation of massive stars.

  5. Extreme air pollution events in Hokkaido, Japan, traced back to early snowmelt and large-scale wildfires over East Eurasia: Case studies.

    Science.gov (United States)

    Yasunari, Teppei J; Kim, Kyu-Myong; da Silva, Arlindo M; Hayasaki, Masamitsu; Akiyama, Masayuki; Murao, Naoto

    2018-04-25

    To identify the unusual climate conditions and their connections to air pollution in a remote area due to wildfires, we examine three anomalous large-scale wildfires in May 2003, April 2008, and July 2014 over East Eurasia, as well as how products of those wildfires reached an urban city, Sapporo, in the northern part of Japan (Hokkaido), significantly affecting the air quality. NASA's MERRA-2 (the Modern-Era Retrospective analysis for Research and Applications, Version 2) aerosol reanalysis data closely reproduced the PM2.5 variations in Sapporo for the case of smoke arrival in July 2014. Results show that all three cases featured unusually early snowmelt in East Eurasia, accompanied by warmer and drier surface conditions in the months leading up to the fires, inducing long-lasting soil dryness and producing climate and environmental conditions conducive to active wildfires. Due to prevailing anomalous synoptic-scale atmospheric motions, smoke from those fires eventually reached a remote area, Hokkaido, and worsened the air quality in Sapporo. In future studies, continuous monitoring of the timing of Eurasian snowmelt and of air quality from the source regions to remote regions, coupled with the analysis of atmospheric and surface conditions, may be essential for more accurately predicting the effects of wildfires on air quality.

  6. Quiescent Galaxies in the 3D-HST Survey: Spectroscopic Confirmation of a Large Number of Galaxies with Relatively Old Stellar Populations at z ~ 2

    Science.gov (United States)

    Whitaker, Katherine E.; van Dokkum, Pieter G.; Brammer, Gabriel; Momcheva, Ivelina G.; Skelton, Rosalind; Franx, Marijn; Kriek, Mariska; Labbé, Ivo; Fumagalli, Mattia; Lundgren, Britt F.; Nelson, Erica J.; Patel, Shannon G.; Rix, Hans-Walter

    2013-06-01

    Quiescent galaxies at z ~ 2 have been identified in large numbers based on rest-frame colors, but only a small number of these galaxies have been spectroscopically confirmed to show strong Balmer or metal absorption lines in their rest-frame optical spectra. Here, we median stack the rest-frame optical spectra of 171 photometrically quiescent galaxies at 1.4 < z < 2.2 from the 3D-HST grism survey. In addition to Hβ (λ4861 Å), we unambiguously identify metal absorption lines in the stacked spectrum, including the G band (λ4304 Å), Mg I (λ5175 Å), and Na I (λ5894 Å). This finding demonstrates that galaxies with relatively old stellar populations already existed when the universe was ~3 Gyr old, and that rest-frame color selection techniques can efficiently select them. We find an average age of 1.3 (+0.1/-0.3) Gyr when fitting a simple stellar population to the entire stack. We confirm our previous result from medium-band photometry that the stellar age varies with the colors of quiescent galaxies: the reddest 80% of galaxies are dominated by metal lines and have a relatively old mean age of 1.6 (+0.5/-0.4) Gyr, whereas the bluest (and brightest) galaxies have strong Balmer lines and a spectroscopic age of 0.9 (+0.2/-0.1) Gyr. Although the spectrum is dominated by an evolved stellar population, we also find [O III] and Hβ emission. Interestingly, this emission is more centrally concentrated than the continuum, with L([O III]) = (1.7 ± 0.3) × 10^40 erg s^-1, indicating residual central star formation or nuclear activity.

  7. Investigating NARCCAP Precipitation Extremes via Bivariate Extreme Value Theory (Invited)

    Science.gov (United States)

    Weller, G. B.; Cooley, D. S.; Sain, S. R.; Bukovsky, M. S.; Mearns, L. O.

    2013-12-01

    We introduce methodology from statistical extreme value theory to examine the ability of reanalysis-driven regional climate models to simulate past daily precipitation extremes. Going beyond a comparison of summary statistics such as 20-year return values, we study whether the most extreme precipitation events produced by climate model simulations exhibit correspondence to the most extreme events seen in observational records. The extent of this correspondence is formulated via the statistical concept of tail dependence. We examine several case studies of extreme precipitation events simulated by the six models of the North American Regional Climate Change Assessment Program (NARCCAP) driven by NCEP reanalysis. It is found that the NARCCAP models generally reproduce daily winter precipitation extremes along the Pacific coast quite well; in contrast, simulation of past daily summer precipitation extremes in a central US region is poor. Some differences in the strength of extremal correspondence are seen in the central region between models which employ spectral nudging and those which do not. We demonstrate how these techniques may be used to draw a link between extreme precipitation events and large-scale atmospheric drivers, as well as to downscale extreme precipitation simulated by a future run of a regional climate model. Specifically, we examine potential future changes in the nature of extreme precipitation along the Pacific coast produced by the pineapple express (PE) phenomenon. A link between extreme precipitation events and a "PE Index" derived from North Pacific sea-surface pressure fields is found. This link is used to study PE-influenced extreme precipitation produced by a future-scenario climate model run.
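
    The notion of tail dependence invoked above can be illustrated with a simple empirical estimator (a sketch only; the study itself uses formal bivariate extreme value methods): the chance that one series exceeds a high quantile given that the other does.

    ```python
    import numpy as np

    def chi_hat(x, y, u=0.95):
        """Empirical tail-dependence measure: P(Y > q_u(Y) | X > q_u(X))."""
        qx, qy = np.quantile(x, u), np.quantile(y, u)
        return np.mean(y[x > qx] > qy)

    rng = np.random.default_rng(0)
    n = 100_000
    z = rng.normal(size=n)
    chi_dep = chi_hat(z, z)                      # perfectly dependent pair
    chi_indep = chi_hat(z, rng.normal(size=n))   # independent pair
    print(chi_dep)    # 1.0: extremes always co-occur
    print(chi_indep)  # close to 1 - u = 0.05: no extremal correspondence
    ```

    Values near 1 mean the most extreme days of one series line up with those of the other; values near 1 − u mean no extremal correspondence beyond chance.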

  8. The use of mass spectrometry for analysing metabolite biomarkers in epidemiology: methodological and statistical considerations for application to large numbers of biological samples.

    Science.gov (United States)

    Lind, Mads V; Savolainen, Otto I; Ross, Alastair B

    2016-08-01

    Data quality is critical for epidemiology, and as scientific understanding expands, the range of data available for epidemiological studies and the types of tools used for measurement have also expanded. It is essential for the epidemiologist to have a grasp of the issues involved with different measurement tools. One tool that is increasingly being used for measuring biomarkers in epidemiological cohorts is mass spectrometry (MS), because of the high specificity and sensitivity of MS-based methods and the expanding range of biomarkers that can be measured. Further, the ability of MS to quantify many biomarkers simultaneously is advantageously compared to single biomarker methods. However, as with all methods used to measure biomarkers, there are a number of pitfalls to consider which may have an impact on results when used in epidemiology. In this review we discuss the use of MS for biomarker analyses, focusing on metabolites and their application and potential issues related to large-scale epidemiology studies, the use of MS "omics" approaches for biomarker discovery and how MS-based results can be used for increasing biological knowledge gained from epidemiological studies. Better understanding of the possibilities and possible problems related to MS-based measurements will help the epidemiologist in their discussions with analytical chemists and lead to the use of the most appropriate statistical tools for these data.

  9. The application of the central limit theorem and the law of large numbers to facial soft tissue depths: T-Table robustness and trends since 2008.

    Science.gov (United States)

    Stephan, Carl N

    2014-03-01

    By pooling independent study means (x¯), the T-Tables use the central limit theorem and law of large numbers to average out study-specific sampling bias and instrument errors and, in turn, triangulate upon human population means (μ). Since their first publication in 2008, new data from >2660 adults have been collected (c.30% of the original sample) making a review of the T-Table's robustness timely. Updated grand means show that the new data have negligible impact on the previously published statistics: maximum change = 1.7 mm at gonion; and ≤1 mm at 93% of all landmarks measured. This confirms the utility of the 2008 T-Table as a proxy to soft tissue depth population means and, together with updated sample sizes (8851 individuals at pogonion), earmarks the 2013 T-Table as the premier mean facial soft tissue depth standard for craniofacial identification casework. The utility of the T-Table, in comparison with shorths and 75-shormaxes, is also discussed. © 2013 American Academy of Forensic Sciences.
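
    The pooling idea behind the T-Table can be sketched as a sample-size-weighted grand mean over independent studies (hypothetical numbers for a single landmark; the actual T-Table statistics come from the cited publications):

    ```python
    # Hypothetical study means (mm) of soft tissue depth at one landmark.
    # Weighting by sample size lets study-specific biases average out
    # as the pooled n grows (law of large numbers / central limit theorem).
    studies = [
        {"mean_mm": 10.2, "n": 120},
        {"mean_mm": 11.1, "n": 310},
        {"mean_mm": 10.6, "n": 95},
    ]
    total_n = sum(s["n"] for s in studies)
    grand_mean = sum(s["mean_mm"] * s["n"] for s in studies) / total_n
    print(f"pooled mean over {total_n} individuals: {grand_mean:.2f} mm")
    ```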

  10. Improved estimation of the noncentrality parameter distribution from a large number of t-statistics, with applications to false discovery rate estimation in microarray data analysis.

    Science.gov (United States)

    Qu, Long; Nettleton, Dan; Dekkers, Jack C M

    2012-12-01

    Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations. © 2012, The International Biometric Society.
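
    For context, a minimal large-scale testing sketch using the standard Benjamini-Hochberg step-up procedure (not the NCP-density method of the paper) on simulated t-statistics, where a minority of tests carry a nonzero noncentrality parameter:

    ```python
    import numpy as np
    from scipy import stats

    def benjamini_hochberg(pvals, alpha=0.05):
        """Boolean mask of rejections under the BH step-up procedure."""
        p = np.asarray(pvals)
        m = len(p)
        order = np.argsort(p)
        below = p[order] <= alpha * np.arange(1, m + 1) / m
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.nonzero(below)[0])   # largest i with p_(i) <= i*alpha/m
            reject[order[:k + 1]] = True
        return reject

    # 1000 t-statistics with df = 10: 900 null (NCP = 0), 100 with NCP = 4.
    ncp = np.concatenate([np.zeros(900), np.full(100, 4.0)])
    t = stats.nct.rvs(df=10, nc=ncp, size=1000, random_state=1)
    pvals = 2 * stats.t.sf(np.abs(t), df=10)
    print(benjamini_hochberg(pvals).sum(), "rejections out of 1000 tests")
    ```

    The NCP-density approach reviewed in the abstract refines this kind of analysis by estimating how the nonzero noncentrality parameters are distributed, rather than treating all non-null tests alike.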

  11. Evaluation of extreme temperature events in northern Spain based on process control charts

    Science.gov (United States)

    Villeta, M.; Valencia, J. L.; Saá, A.; Tarquis, A. M.

    2018-02-01

    Extreme climate events have recently attracted the attention of a growing number of researchers because these events impose a large cost on agriculture and associated insurance planning. This study focuses on extreme temperature events and proposes a new method for their evaluation based on statistical process control tools, which are unusual in climate studies. A series of minimum and maximum daily temperatures for 12 geographical areas of a Spanish region between 1931 and 2009 were evaluated by applying statistical process control charts to statistically test whether evidence existed for an increase or a decrease of extreme temperature events. Specification limits were determined for each geographical area and used to define four types of extreme anomalies: lower and upper extremes for the minimum and maximum anomalies. A new binomial Markov extended process that considers the autocorrelation between extreme temperature events was generated for each geographical area and extreme anomaly type to establish the attribute control charts for the annual fraction of extreme days and to monitor the occurrence of annual extreme days. This method was used to assess the significance of changes and trends of extreme temperature events in the analysed region. The results demonstrate the effectiveness of an attribute control chart for evaluating extreme temperature events. For example, the evaluation of extreme maximum temperature events using the proposed statistical process control charts was consistent with the evidence of an increase in maximum temperatures during the last decades of the last century.
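
    As an illustration of the attribute-chart idea (a simplified p-chart sketch; the paper's method additionally models autocorrelation with a binomial Markov extended process), the annual fraction of extreme days can be monitored against three-sigma binomial control limits:

    ```python
    import numpy as np

    def p_chart_limits(counts, n_days=365):
        """Centre line and three-sigma limits for annual fractions of extreme days."""
        p = np.asarray(counts, dtype=float) / n_days
        p_bar = p.mean()                              # centre line
        sigma = np.sqrt(p_bar * (1 - p_bar) / n_days)
        return p_bar, max(p_bar - 3 * sigma, 0.0), p_bar + 3 * sigma

    # Toy series: ~5% extreme days per year, with two anomalous years (40, 42).
    counts = [18, 20, 17, 19, 21, 18, 40, 19, 20, 42]
    p_bar, lcl, ucl = p_chart_limits(counts)
    out_of_control = [i for i, c in enumerate(counts) if c / 365 > ucl]
    print(out_of_control)  # years 6 and 9 exceed the upper control limit
    ```

    Years whose fraction of extreme days falls outside the limits signal a change in the underlying process rather than ordinary sampling variation.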

  12. Identification of rare recurrent copy number variants in high-risk autism families and their prevalence in a large ASD population.

    Directory of Open Access Journals (Sweden)

    Nori Matsunami

    Full Text Available Structural variation is thought to play a major etiological role in the development of autism spectrum disorders (ASDs), and numerous studies documenting the relevance of copy number variants (CNVs) in ASD have been published since 2006. To determine if large ASD families harbor high-impact CNVs that may have broader impact in the general ASD population, we used the Affymetrix genome-wide human SNP array 6.0 to identify 153 putative autism-specific CNVs present in 55 individuals with ASD from 9 multiplex ASD pedigrees. To evaluate the actual prevalence of these CNVs as well as 185 CNVs reportedly associated with ASD from published studies, many of which are insufficiently powered, we designed a custom Illumina array and used it to interrogate these CNVs in 3,000 ASD cases and 6,000 controls. Additional single nucleotide variants (SNVs) on the array identified 25 CNVs that we did not detect in our family studies at the standard SNP array resolution. After molecular validation, our results demonstrated that 15 CNVs identified in high-risk ASD families were also found in two or more ASD cases with odds ratios greater than 2.0, strengthening their support as ASD risk variants. In addition, of the 25 CNVs identified using SNV probes on our custom array, 9 also had odds ratios greater than 2.0, suggesting that these CNVs also are ASD risk variants. Eighteen of the validated CNVs have not been reported previously in individuals with ASD and three have only been observed once. Finally, we confirmed the association of 31 of 185 published ASD-associated CNVs in our dataset with odds ratios greater than 2.0, suggesting they may be of clinical relevance in the evaluation of children with ASDs. Taken together, these data provide strong support for the existence and application of high-impact CNVs in the clinical genetic evaluation of children with ASD.

  13. Overview of the biology of extreme events

    Science.gov (United States)

    Gutschick, V. P.; Bassirirad, H.

    2008-12-01

    Extreme events have, variously, meteorological origins, as in heat waves or precipitation extremes, or biological origins, as in pest and disease eruptions (or tectonic, earth-orbital, or impact-body origins). Despite growing recognition that these events are changing in frequency and intensity, a universal model of ecological responses to these events is slow to emerge. Extreme events, negative and positive, contrast with normal events in terms of their effects on the physiology, ecology, and evolution of organisms, hence also on water, carbon, and nutrient cycles. They structure biogeographic ranges and biomes, almost surely more than the mean values often used to define biogeography. They are challenging to study for obvious reasons of field-readiness, but also because they are defined by sequences of driving variables such as temperature, not point events. As sequences, their statistics (return times, for example) are challenging to develop, also owing to the involvement of multiple environmental variables. These statistics are not captured well by climate models. They are expected to change with climate and land-use change, but our predictive capacity is currently limited. A number of tools for the description and analysis of extreme events are available, if not widely applied to date. Extremes for organisms are defined by their fitness effects on those organisms, and are specific to genotypes, making them major agents of natural selection. There is evidence that the effects of extreme events may be concentrated in an extended recovery phase. We review selected events covering ranges of time and magnitude, from Snowball Earth to leaf functional loss in weather events. A number of events, such as the 2003 European heat wave, show effects on water and carbon cycles over large regions. Rising CO2 is the recent extreme of note, for its climatic effects and consequences for growing seasons, transpiration, etc., but also directly in its action as a substrate of photosynthesis

  14. Experimental determination of Ramsey numbers.

    Science.gov (United States)

    Bian, Zhengbing; Chudak, Fabian; Macready, William G; Clark, Lane; Gaitan, Frank

    2013-09-27

    Ramsey theory is a highly active research area in mathematics that studies the emergence of order in large disordered structures. Ramsey numbers mark the threshold at which order first appears and are extremely difficult to calculate due to their explosive rate of growth. Recently, an algorithm that can be implemented using adiabatic quantum evolution has been proposed that calculates the two-color Ramsey numbers R(m,n). Here we present results of an experimental implementation of this algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(m,2) for 4≤m≤8. The R(8,2) computation used 84 qubits of which 28 were computational qubits. This computation is the largest experimental implementation of a scientifically meaningful adiabatic evolution algorithm that has been done to date.
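
    The explosive growth that makes these numbers so hard to compute can be felt even at the smallest scale: a brute-force check of R(3,3) = 6 already enumerates 2^15 two-colorings of the edges of K6 (a classical sketch, unrelated to the adiabatic quantum algorithm of the paper):

    ```python
    from itertools import combinations, product

    def has_mono_triangle(n, coloring):
        """True if some triangle has all three edges the same color."""
        return any(
            coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
            for a, b, c in combinations(range(n), 3)
        )

    def every_coloring_has_mono_triangle(n):
        """Exhaustively test all 2-colorings of the edges of K_n."""
        edges = list(combinations(range(n), 2))
        return all(
            has_mono_triangle(n, dict(zip(edges, colors)))
            for colors in product((0, 1), repeat=len(edges))
        )

    # R(3,3) = 6: K5 admits a monochromatic-triangle-free 2-coloring, K6 does not.
    print(every_coloring_has_mono_triangle(5))  # False
    print(every_coloring_has_mono_triangle(6))  # True
    ```

    Already at R(5,5), whose exact value is unknown, the same search would require checking on the order of 2^300 colorings, which is why such direct enumeration breaks down almost immediately.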

  15. Soft tissue masses of extremities: MR findings

    Energy Technology Data Exchange (ETDEWEB)

    Son, Seok Hyun; Yang, Seoung Oh; Choi, Jong Chul; Park, Byeong Ho; Lee, Ki Nam; Choi, Sun Seob; Chung, Duck Hwan [Dong-A University College of Medicine, Pusan (Korea, Republic of)

    1993-11-15

    To evaluate the MR findings of soft tissue masses in the extremities and to identify findings helpful in distinguishing benign from malignant masses, 28 soft tissue masses (22 benign and 6 malignant) in the extremities were reviewed. T1-weighted, proton density, T2-weighted and Gd-DTPA enhanced images were obtained. MR images allowed a specific diagnosis in a large number of benign masses, such as hemangioma (8/9), lipoma (2/2), angiolipoma (1/1), epidermoid cyst (2/2), myositis ossificans (1/1), synovial chondromatosis (1/1) and pigmented villonodular synovitis (1/2). A specific diagnosis was difficult for the rest of the masses, including the malignant ones. However, inhomogeneous signal intensities with necrosis and inhomogeneous enhancement may suggest malignant masses.

  16. On the calculation of line strengths, oscillator strengths and lifetimes for very large principal quantum numbers in hydrogenic atoms and ions by the McLean–Watson formula

    International Nuclear Information System (INIS)

    Hey, J D

    2014-01-01

    As a sequel to an earlier study (Hey 2009 J. Phys. B: At. Mol. Opt. Phys. 42 125701), we consider further the application of the line strength formula derived by Watson (2006 J. Phys. B: At. Mol. Opt. Phys. 39 L291) to transitions arising from states of very high principal quantum number in hydrogenic atoms and ions (Rydberg–Rydberg transitions, n > 1000). It is shown how apparent difficulties associated with the use of recurrence relations, derived (Hey 2006 J. Phys. B: At. Mol. Opt. Phys. 39 2641) by the ladder operator technique of Infeld and Hull (1951 Rev. Mod. Phys. 23 21), may be eliminated by a very simple numerical device, whereby this method may readily be applied up to n ≈ 10 000. Beyond this range, programming of the method may entail greater care and complexity. The use of the numerically efficient McLean–Watson formula for such cases is again illustrated by the determination of radiative lifetimes and comparison of present results with those from an asymptotic formula. The question of the influence on the results of the omission or inclusion of fine structure is considered by comparison with calculations based on the standard Condon–Shortley line strength formula. Interest in this work on the radial matrix elements for large n and n′ is related to measurements of radio recombination lines from tenuous space plasmas, e.g. Stepkin et al (2007 Mon. Not. R. Astron. Soc. 374 852), Bell et al (2011 Astrophys. Space Sci. 333 377), to the calculation of electron impact broadening parameters for such spectra (Watson 2006 J. Phys. B: At. Mol. Opt. Phys. 39 1889) and comparison with other theoretical methods (Peach 2014 Adv. Space Res. in press), to the modelling of physical processes in H II regions (Roshi et al 2012 Astrophys. J. 749 49), and to the evaluation of bound–bound transitions from states of high n during primordial cosmological recombination (Grin and Hirata 2010 Phys. Rev. D 81 083005, Ali-Haïmoud and Hirata 2010 Phys. Rev. D 82 063521

  17. SVA retrotransposon insertion-associated deletion represents a novel mutational mechanism underlying large genomic copy number changes with non-recurrent breakpoints

    NARCIS (Netherlands)

    J. Vogt (Julia); K. Bengesser (Kathrin); K.B.M. Claes (Kathleen B.M.); K. Wimmer (Katharina); V.-F. Mautner (Victor-Felix); R. van Minkelen (Rick); E. Legius (Eric); H. Brems (Hilde); M. Upadhyaya (Meena); J. Högel (Josef); C. Lazaro (Conxi); T. Rosenbaum (Thorsten); S. Bammert (Simone); L. Messiaen (Ludwine); D.N. Cooper (David); H. Kehrer-Sawatzki (Hildegard)

    2014-01-01

    textabstractBackground: Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The

  18. Heavy Tail Behavior of Rainfall Extremes across Germany

    Science.gov (United States)

    Castellarin, A.; Kreibich, H.; Vorogushyn, S.; Merz, B.

    2017-12-01

    Distributions are termed heavy-tailed if extreme values are more likely than would be predicted by probability distributions that have exponential asymptotic behavior. Heavy-tail behavior often leads to surprise, because historical observations can be a poor guide for the future. Heavy-tail behavior seems to be widespread for hydro-meteorological extremes, such as extreme rainfall and flood events. To date there have been only vague hints to explain under which conditions these extremes show heavy-tail behavior. We use an observational data set consisting of 11 climate variables at 1440 stations across Germany. This homogenized, gap-free data set covers 110 years (1901-2010) at daily resolution. We estimate the upper tail behavior, including its uncertainty interval, of daily precipitation extremes for the 1,440 stations at the annual and seasonal time scales. Different tail indicators are tested, including the shape parameter of the Generalized Extreme Value distribution, the upper tail ratio and the obesity index. In a further step, we explore to which extent the tail behavior can be explained by geographical and climate factors. A large number of characteristics is derived, such as station elevation, degree of continentality, aridity, measures for quantifying the variability of humidity and wind velocity, or event-triggering large-scale atmospheric situation. The link between the upper tail behavior and these characteristics is investigated via data mining methods capable of detecting non-linear relationships in large data sets. This exceptionally rich observational data set, in terms of number of stations, length of time series and number of explaining variables, allows insights into the upper tail behavior which is rarely possible given the typical observational data sets available.
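
    One of the tail indicators named above, the shape parameter of the Generalized Extreme Value distribution, can be estimated from block maxima. Below is a minimal sketch with simulated heavy-tailed "daily rainfall"; note that scipy parameterizes the GEV with shape c = -ξ, so a heavy (Fréchet-type) tail shows up as c < 0:

    ```python
    from scipy.stats import genextreme, pareto

    # 200 simulated "annual maxima", each the max of 365 daily values drawn
    # from a heavy-tailed Pareto law with tail index xi = 1/b = 0.5.
    daily = pareto.rvs(b=2, size=(200, 365), random_state=42)
    annual_max = daily.max(axis=1)

    # Maximum-likelihood GEV fit; scipy's shape c equals -xi.
    c_hat, loc_hat, scale_hat = genextreme.fit(annual_max)
    print(f"fitted shape c = {c_hat:.2f} (heavy tail expected: c near -0.5)")
    ```

    For light-tailed daily values (e.g. exponential), the same fit would give c near 0, the Gumbel case; the sign and magnitude of the fitted shape is what separates "surprising" heavy-tailed stations from exponentially bounded ones.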

  19. Rayleigh- and Prandtl-number dependence of the large-scale flow-structure in weakly-rotating turbulent thermal convection

    Science.gov (United States)

    Weiss, Stephan; Wei, Ping; Ahlers, Guenter

    2015-11-01

    Turbulent thermal convection under rotation shows a remarkable variety of different flow states. The Nusselt number (Nu) at slow rotation rates (expressed as the dimensionless inverse Rossby number 1/Ro), for example, is not a monotonic function of 1/Ro. Different 1/Ro-ranges can be observed with different slopes ∂Nu / ∂ (1 / Ro) . Some of these ranges are connected by sharp transitions where ∂Nu / ∂ (1 / Ro) changes discontinuously. We investigate different regimes in cylindrical samples of aspect ratio Γ = 1 by measuring temperatures at the sidewall of the sample for various Prandtl numbers in the range 3 Deutsche Forschungsgemeinschaft.

  20. Extremely Preterm Birth

    Science.gov (United States)

    ACOG patient FAQ on extremely preterm birth (FAQ173, June 2016; PDF format, also available in Spanish), answering questions such as when a baby is considered "preterm".

  1. Development of special machines for production of large number of superconducting coils for the spool correctors for the main dipole of LHC

    International Nuclear Information System (INIS)

    Puntambekar, A.M.; Karmarkar, M.G.

    2003-01-01

    Superconducting (Sc) spool correctors of different types, namely Sextupole (MCS), Decapole (MCD) and Octupole (MCO), are incorporated in each of the main dipoles of the Large Hadron Collider (LHC). In all, 2464 MCS and 1232 MCDO magnets are required to equip all 1232 dipoles of the LHC. The coils, wound from thin rectangular-section Sc wires, are the heart of the magnet assembly, and its performance in field quality and cold quench training largely depends on the precise and robust construction of these coils. Under the DAE-CERN collaboration, CAT was entrusted with the responsibility of making these magnets for the LHC. Starting with the development of manual fixtures and prototyping using soldering, more advanced special Automatic Coil Winding and Ultrasonic Welding (USW) systems for production of large numbers of coils and magnets were built at CAT. The paper briefly describes the various developments in this area. (author)

  2. Experimental observations of electron-backscatter effects from high-atomic-number anodes in large-aspect-ratio, electron-beam diodes

    Energy Technology Data Exchange (ETDEWEB)

    Cooperstein, G; Mosher, D; Stephanakis, S J; Weber, B V; Young, F C [Naval Research Laboratory, Washington, DC (United States); Swanekamp, S B [JAYCOR, Vienna, VA (United States)

    1997-12-31

    Backscattered electrons from anodes with high-atomic-number substrates cause early-time anode-plasma formation from the surface layer, leading to faster, more intense electron beam pinching and lower diode impedance. A simple derivation of the Child-Langmuir current from a thin hollow cathode shows the same dependence on the diode aspect ratio as the critical current. Using this fact, it is shown that the diode voltage and current follow relativistic Child-Langmuir theory until the anode plasma is formed, and then follow the critical current after the beam pinches. With thin hollow cathodes, electron beam pinching can be suppressed at low voltages (< 800 kV) even for high currents and high-atomic-number anodes. Electron beam pinching can also be suppressed at high voltages for low-atomic-number anodes as long as the electron current densities remain below the plasma turn-on threshold. (author). 8 figs., 2 refs.

  3. Large-scale studies of the HphI insulin gene variable-number-of-tandem-repeats polymorphism in relation to Type 2 diabetes mellitus and insulin release

    DEFF Research Database (Denmark)

    Hansen, S K; Gjesing, A P; Rasmussen, S K

    2004-01-01

    The class III allele of the variable-number-of-tandem-repeats polymorphism located 5' of the insulin gene (INS-VNTR) has been associated with Type 2 diabetes and altered birthweight. It has also been suggested, although inconsistently, that the class III allele plays a role in glucose-induced insulin…

  4. Planning Alternative Organizational Frameworks For a Large Scale Educational Telecommunications System Served by Fixed/Broadcast Satellites. Memorandum Number 73/3.

    Science.gov (United States)

    Walkmeyer, John

    Considerations relating to the design of organizational structures for development and control of large scale educational telecommunications systems using satellites are explored. The first part of the document deals with four issues of system-wide concern. The first is user accessibility to the system, including proximity to entry points, ability…

  5. The number of extranodal sites assessed by PET/CT scan is a powerful predictor of CNS relapse for patients with diffuse large B-cell lymphoma

    DEFF Research Database (Denmark)

    El-Galaly, Tarec Christoffer; Villa, Diego; Michaelsen, Thomas Yssing

    2017-01-01

    Purpose: Development of secondary central nervous system involvement (SCNS) in patients with diffuse large B-cell lymphoma is associated with poor outcomes. The CNS International Prognostic Index (CNS-IPI) has been proposed for identifying patients at greatest risk, but the optimal model is unknown…

  6. Handling large numbers of observation units in three-way methods for the analysis of qualitative and quantitative two-way data

    NARCIS (Netherlands)

    Kiers, Henk A.L.; Marchetti, G.M.

    1994-01-01

    Recently, a number of methods have been proposed for the exploratory analysis of mixtures of qualitative and quantitative variables. In these methods, for each variable an object-by-object similarity matrix is constructed, and these are subsequently analyzed by means of three-way methods like…

  7. A large increase of sour taste receptor cells in Skn-1-deficient mice does not alter the number of their sour taste signal-transmitting gustatory neurons.

    Science.gov (United States)

    Maeda, Naohiro; Narukawa, Masataka; Ishimaru, Yoshiro; Yamamoto, Kurumi; Misaka, Takumi; Abe, Keiko

    2017-05-01

    The connections between taste receptor cells (TRCs) and innervating gustatory neurons are formed in a mutually dependent manner during development. To investigate whether a change in the ratio of cell types that compose taste buds influences the number of innervating gustatory neurons, we analyzed the proportion of gustatory neurons that transmit sour taste signals in adult Skn-1a-/- mice in which the number of sour TRCs is greatly increased. We generated polycystic kidney disease 1 like 3-wheat germ agglutinin (pkd1l3-WGA)/Skn-1a+/+ and pkd1l3-WGA/Skn-1a-/- mice by crossing Skn-1a-/- mice and pkd1l3-WGA transgenic mice, in which neural pathways of sour taste signals can be visualized. The number of WGA-positive cells in the circumvallate papillae is 3-fold higher in taste buds of pkd1l3-WGA/Skn-1a-/- mice relative to pkd1l3-WGA/Skn-1a+/+ mice. Intriguingly, the ratio of WGA-positive neurons to P2X2-expressing gustatory neurons in nodose/petrosal ganglia was similar between pkd1l3-WGA/Skn-1a+/+ and pkd1l3-WGA/Skn-1a-/- mice. In conclusion, an alteration in the ratio of cell types that compose taste buds does not influence the number of gustatory neurons that transmit sour taste signals. Copyright © 2017. Published by Elsevier B.V.

  8. Multifractal Conceptualisation of Hydro-Meteorological Extremes

    Science.gov (United States)

    Tchiguirinskaia, I.; Schertzer, D.; Lovejoy, S.

    2009-04-01

    Hydrology, and more generally the sciences involved in water resources management and technological or operational developments, face a fundamental difficulty: the extreme variability of hydro-meteorological fields. It clearly appears today that this variability is a function of the observation scale and yields hydro-meteorological hazards. Throughout the world, the development of multifractal theory offers new techniques for handling such non-classical variability over wide ranges of time and space scales. The resulting stochastic simulations with a very limited number of parameters reproduce well the long-range dependencies and the clustering of rainfall extremes, often yielding fat-tailed (i.e., algebraic-type) probability distributions. The goal of this work was to investigate the ability to use very short or incomplete data records for reliable statistical predictions of the extremes. In particular we discuss how to evaluate the uncertainty in the empirical or semi-analytical multifractal outcomes. We consider three main aspects of the evaluation: the scaling adequacy, the multifractal parameter estimation error and the quantile estimation error. We first use the multiplicative cascade model to generate long series of multifractal data. The simulated samples had to cover the range of the universal multifractal parameters widely available in the scientific literature for rainfall and river discharges. Using these long multifractal series and their sub-samples, we defined a metric for parameter estimation error. Then, using the sets of estimated parameters, we obtained the quantile values for a range of exceedance probabilities from 5% to 0.01%. Plotting the error bars on a quantile plot enables an approximation of confidence intervals that would be particularly important for the predictions of multifractal extremes. We finally illustrate the efficiency of this concept by applying it to a large database (more than 16000 selected stations over USA and…
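A minimal discrete multiplicative cascade of the kind used above to generate multifractal series can be sketched as follows. Log-normal weights with unit mean are one common choice; the parameter values here are illustrative, not the universal multifractal parameters estimated in the study:

```python
import numpy as np

def multiplicative_cascade(levels: int, sigma: float = 0.4,
                           seed: int = 0) -> np.ndarray:
    """Discrete multiplicative cascade on 2**levels cells.

    At each level every cell is split in two and multiplied by i.i.d.
    log-normal weights with E[W] = 1, a common toy generator of
    multifractal (highly intermittent) series."""
    rng = np.random.default_rng(seed)
    field = np.ones(1)
    for _ in range(levels):
        field = np.repeat(field, 2)
        # mean of the underlying normal chosen so that E[W] = 1
        w = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=field.size)
        field = field * w
    return field

series = multiplicative_cascade(12)  # 4096 values after 12 halvings
print(series.size, round(series.max() / series.mean(), 1))
```

Sub-sampling such a series and re-estimating its parameters is the kind of experiment the study uses to quantify estimation error.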

  9. Spatial dependence of extreme rainfall

    Science.gov (United States)

    Radi, Noor Fadhilah Ahmad; Zakaria, Roslinazairimah; Satari, Siti Zanariah; Azman, Muhammad Az-zuhri

    2017-05-01

    This study aims to model the spatial extreme daily rainfall process using the max-stable model. The max-stable model is used to capture the dependence structure of spatial properties of extreme rainfall. Three max-stable models are considered, namely the Smith, Schlather and Brown-Resnick models. The methods are applied to 12 selected rainfall stations in Kelantan, Malaysia. Most of the extreme rainfall data occur during the wet season, from October to December, of 1971 to 2012. This period is chosen to ensure the available data are enough to satisfy the assumption of stationarity. The dependence parameters, including the range and smoothness, are estimated using the composite likelihood approach. Then, the bootstrap approach is applied to generate synthetic extreme rainfall data for all models using the estimated dependence parameters. The goodness of fit between the observed extreme rainfall and the synthetic data is assessed using the composite likelihood information criterion (CLIC). Results show that the Schlather model is the best, followed by the Brown-Resnick and Smith models, based on the smallest CLIC value. Thus, the max-stable model is suitable for modelling extreme rainfall in Kelantan. The study of spatial dependence in extreme rainfall modelling is important to reduce the uncertainties of the point estimates for the tail index. If the spatial dependency is estimated individually, the uncertainties will be large. Furthermore, when the joint return level is of interest, taking into account the spatial dependence properties will improve the estimation process.

  10. Forty-Five-Year Mortality Rate as a Function of the Number and Type of Psychiatric Diagnoses Found in a Large Danish Birth Cohort

    DEFF Research Database (Denmark)

    Madarasz, Wendy; Manzardo, Ann; Mortensen, Erik Lykke

    2012-01-01

    Objective: Psychiatric comorbidities are common among psychiatric patients and typically associated with poorer clinical prognoses. Subjects of a large Danish birth cohort were used to study the relation between mortality and co-occurring psychiatric diagnoses. Method: We searched the Danish Central Psychiatric Research Registry for 8109 birth cohort members aged 45 years. Lifetime psychiatric diagnoses (International Classification of Diseases, Revision 10, group F codes, Mental and Behavioural Disorders, and one Z code) for identified subjects were organized into 14 mutually exclusive…

  11. Load Frequency Control by use of a Number of Both Heat Pump Water Heaters and Electric Vehicles in Power System with a Large Integration of Renewable Energy Sources

    Science.gov (United States)

    Masuta, Taisuke; Shimizu, Koichiro; Yokoyama, Akihiko

    In Japan, from the viewpoints of global warming countermeasures and energy security, it is expected that a smart grid will be established: a power system into which a large amount of generation from renewable energy sources, such as wind power generation and photovoltaic generation, can be integrated. Measures for power system stability and reliability are necessary because a large integration of these renewable energy sources causes problems in power systems, e.g. frequency fluctuation and distribution voltage rise, and the Battery Energy Storage System (BESS) is one effective solution to these problems. Due to the high cost of the BESS, our research group has studied the application of controllable loads, such as Heat Pump Water Heaters (HPWH) and Electric Vehicles (EV), to power system control in order to reduce the required capacity of the BESS. This paper proposes a new coordinated Load Frequency Control (LFC) method for the conventional power plants, the BESS, the HPWHs, and the EVs. The performance of the proposed LFC method is evaluated by numerical simulations conducted on a power system model with a large integration of wind power generation and photovoltaic generation.
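The idea of letting controllable loads assist frequency regulation can be illustrated with a toy single-area swing-equation model. All names, parameter values and the simple proportional load response below are invented for illustration; the paper's coordinated LFC scheme is far more detailed:

```python
def simulate_frequency(k_load, h=4.0, d=1.0, dp_load=0.1,
                       dt=0.01, t_end=20.0):
    """Euler-integrated toy swing equation (all quantities per-unit):
    2H * d(df)/dt = -dp_load - D*df + dp_ctrl,
    where controllable loads provide dp_ctrl = -k_load * df."""
    df = 0.0
    for _ in range(int(t_end / dt)):
        dp_ctrl = -k_load * df           # HPWHs/EVs back off as frequency drops
        df += dt * (-dp_load - d * df + dp_ctrl) / (2 * h)
    return df

df_free = simulate_frequency(k_load=0.0)   # no controllable loads
df_ctrl = simulate_frequency(k_load=4.0)   # loads assist regulation
print(round(df_free, 3), round(df_ctrl, 3))
```

The steady-state deviation drops from -dp_load/D to -dp_load/(D + k_load), which is the qualitative benefit that responsive loads provide without adding BESS capacity.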

  12. Modelling of natural convection flows with large temperature differences: a benchmark problem for low Mach number solvers. Part. 1 reference solutions

    International Nuclear Information System (INIS)

    Le Quere, P.; Weisman, C.; Paillere, H.; Vierendeels, J.; Dick, E.; Becker, R.; Braack, M.; Locke, J.

    2005-01-01

    Heat transfer by natural convection and conduction in enclosures occurs in numerous practical situations, including the cooling of nuclear reactors. For large temperature differences, the flow becomes compressible, with a strong coupling between the continuity, momentum and energy equations through the equation of state, and the fluid properties (viscosity, heat conductivity) also vary with temperature, making the Boussinesq flow approximation inappropriate and inaccurate. There are very few reference solutions in the literature on non-Boussinesq natural convection flows. We propose here a test case problem which extends the well-known De Vahl Davis differentially heated square cavity problem to the case of large temperature differences for which the Boussinesq approximation is no longer valid. The paper is split into two parts: in this first part, we propose as yet unpublished reference solutions for cases characterized by a non-dimensional temperature difference of 0.6, Ra = 10^6 (constant property and variable property cases) and Ra = 10^7 (variable property case). These reference solutions were produced after a first international workshop organized by CEA and LIMSI in January 2000, in which the above authors volunteered to produce accurate numerical solutions from which the present reference solutions could be established. (authors)
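The governing parameter of these cases is the Rayleigh number, Ra = g·beta·dT·L^3/(nu·alpha). A quick back-of-envelope sketch shows how Ra = 10^6 maps to a physical cavity size; the property values below are assumed for air near ambient conditions and are not taken from the benchmark, which deliberately uses a large temperature difference:

```python
def rayleigh(g, beta, dT, L, nu, alpha):
    """Ra = g * beta * dT * L**3 / (nu * alpha)."""
    return g * beta * dT * L**3 / (nu * alpha)

# Illustrative values for air near 300 K (assumed, not the benchmark's):
g, beta = 9.81, 1.0 / 300.0     # gravity; expansion coefficient ~ 1/T for ideal gas
nu, alpha = 1.6e-5, 2.2e-5      # kinematic viscosity; thermal diffusivity (m^2/s)
dT = 2.0                        # a small, Boussinesq-like temperature difference

# Cavity size needed to reach Ra = 1e6 with these values:
L = (1e6 * nu * alpha / (g * beta * dT)) ** (1.0 / 3.0)
print(f"L = {L:.3f} m gives Ra = {rayleigh(g, beta, dT, L, nu, alpha):.2e}")
```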

  13. Prediction of the number of 14 MeV neutron elastically scattered from large sample of aluminium using Monte Carlo simulation method

    International Nuclear Information System (INIS)

    Husin Wagiran; Wan Mohd Nasir Wan Kadir

    1997-01-01

    In neutron scattering processes, the effect of multiple scattering is to cause an effective increase in the measured cross-sections, due to the increased probability of neutron scattering interactions in the sample. Analysis of how the effective cross-section varies with thickness is very complicated due to complicated sample geometries and the variation of scattering cross-sections with energy. The Monte Carlo method is one possible method for treating multiple scattering processes in an extended sample. In this method many approximations have to be made, and accurate data on microscopic cross-sections are needed at various angles. In the present work, a Monte Carlo simulation programme suitable for a small computer was developed. The programme was capable of predicting the number of neutrons scattered from various thicknesses of aluminium samples at all angles between 0° and 360° in 10° increments. In order to keep the programme simple and capable of being run on a microcomputer in reasonable time, the calculations were done in a two-dimensional coordinate system. The number of neutrons predicted by this model shows good agreement with previous experimental results.
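A stripped-down version of such a two-dimensional Monte Carlo transport loop can be sketched as follows. It assumes pure isotropic scattering with thickness measured in mean-free-path units; the geometry, tally and all parameters are invented for illustration and are much simpler than the programme described:

```python
import math
import random

def transmit_fraction(thickness_mfp, n_histories=20000, seed=1):
    """Minimal 2-D Monte Carlo: neutrons enter a purely scattering slab
    (thickness in mean-free-path units), scatter isotropically, and we
    tally the fraction leaving the far side."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_histories):
        x, mu = 0.0, 1.0                 # depth and direction cosine (normal entry)
        while True:
            x += mu * -math.log(1.0 - rng.random())  # exponential free path
            if x >= thickness_mfp:
                transmitted += 1
                break
            if x < 0.0:
                break                    # backscattered out of the slab
            mu = math.cos(rng.uniform(0.0, 2.0 * math.pi))  # isotropic in 2-D
    return transmitted / n_histories

t_thin, t_thick = transmit_fraction(0.5), transmit_fraction(3.0)
print(t_thin, t_thick)
```

The thicker slab transmits fewer neutrons, which is the multiple-scattering thickness dependence the programme was built to quantify.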

  14. Implementation of genomic recursions in single-step genomic best linear unbiased predictor for US Holsteins with a large number of genotyped animals.

    Science.gov (United States)

    Masuda, Y; Misztal, I; Tsuruta, S; Legarra, A; Aguilar, I; Lourenco, D A L; Fragomeni, B O; Lawlor, T J

    2016-03-01

    The objectives of this study were to develop and evaluate an efficient implementation of the computation of the inverse of the genomic relationship matrix with the recursion algorithm, called the algorithm for proven and young (APY), in single-step genomic BLUP. We validated genomic predictions for young bulls with more than 500,000 genotyped animals in final score for US Holsteins. Phenotypic data included 11,626,576 final scores on 7,093,380 US Holstein cows, and genotypes were available for 569,404 animals. Daughter deviations for young bulls with no classified daughters in 2009, but at least 30 classified daughters in 2014, were computed using all the phenotypic data. Genomic predictions for the same bulls were calculated with single-step genomic BLUP using phenotypes up to 2009. We calculated the inverse of the genomic relationship matrix, G_APY^(-1), based on a direct inversion of the genomic relationship matrix for a small subset of genotyped animals (core animals) and extended that information to noncore animals by recursion. We tested several sets of core animals, including 9,406 bulls with at least 1 classified daughter; 9,406 bulls and 1,052 classified dams of bulls; 9,406 bulls and 7,422 classified cows; and random samples of 5,000 to 30,000 animals. Validation reliability was assessed by the coefficient of determination from regression of daughter deviation on genomic predictions for the predicted young bulls. The reliabilities were 0.39 with 5,000 randomly chosen core animals, 0.45 with the 9,406 bulls and 7,422 cows as core animals, and 0.44 with the remaining sets. With phenotypes truncated in 2009 and the preconditioned conjugate gradient used to solve the mixed model equations, the number of rounds to convergence for core animals defined by bulls was 1,343; defined by bulls and cows, 2,066; and defined by 10,000 random animals, at most 1,629. With complete phenotype data, the number of rounds decreased to 858, 1,299, and at most 1,092, respectively. Setting up G_APY^(-1)…
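The APY construction can be sketched with a tiny block-matrix example (the sizes and the random test matrix are invented; numpy assumed). Only the core block is inverted directly; each noncore animal contributes only a diagonal element, which is what makes the inverse cheap and sparse:

```python
import numpy as np

rng = np.random.default_rng(7)
n_core, n_tail = 4, 6
Z = rng.standard_normal((n_core + n_tail, 3))
G = Z @ Z.T + np.eye(n_core + n_tail)     # SPD stand-in for a genomic matrix

Gcc = G[:n_core, :n_core]
Gcn = G[:n_core, n_core:]
Gnc = Gcn.T

Gcc_inv = np.linalg.inv(Gcc)              # only the core block is inverted
P = Gnc @ Gcc_inv                         # regression of noncore on core
# Conditional (Mendelian-sampling-like) variances for noncore animals:
m = np.diag(G[n_core:, n_core:]) - np.sum(P * Gnc, axis=1)
Mnn_inv = np.diag(1.0 / m)

# APY inverse assembled from the blocks:
G_apy_inv = np.block([
    [Gcc_inv + Gcc_inv @ Gcn @ Mnn_inv @ Gnc @ Gcc_inv, -Gcc_inv @ Gcn @ Mnn_inv],
    [-Mnn_inv @ Gnc @ Gcc_inv, Mnn_inv],
])

# It is the exact inverse of the APY-implied relationship matrix:
G_apy = np.block([[Gcc, Gcn], [Gnc, P @ Gcn + np.diag(m)]])
print(np.allclose(G_apy_inv @ G_apy, np.eye(n_core + n_tail)))  # True
```

Because only Gcc is inverted and Mnn is diagonal, storage and computation scale with the number of core animals rather than with all 569,404 genotyped animals.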

  15. A modification to linearized theory for prediction of pressure loadings on lifting surfaces at high supersonic Mach numbers and large angles of attack

    Science.gov (United States)

    Carlson, H. W.

    1979-01-01

    A new linearized-theory pressure-coefficient formulation was studied. The new formulation is intended to provide more accurate estimates of detailed pressure loadings for improved stability analysis and for analysis of critical structural design conditions. The approach is based on the use of oblique-shock and Prandtl-Meyer expansion relationships for accurate representation of the variation of pressures with surface slopes in two-dimensional flow and linearized-theory perturbation velocities for evaluation of local three-dimensional aerodynamic interference effects. The applicability and limitations of the modification to linearized theory are illustrated through comparisons with experimental pressure distributions for delta wings covering a Mach number range from 1.45 to 4.60 and angles of attack from 0 to 25 degrees.
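The Prandtl-Meyer expansion relationship used in the modified formulation has a standard closed form; a minimal sketch covering the Mach range of the comparisons (1.45 to 4.60):

```python
import math

def prandtl_meyer(mach: float, gamma: float = 1.4) -> float:
    """Prandtl-Meyer function nu(M) in degrees, for M >= 1."""
    g = (gamma + 1.0) / (gamma - 1.0)
    m2 = mach * mach - 1.0
    nu = math.sqrt(g) * math.atan(math.sqrt(m2 / g)) - math.atan(math.sqrt(m2))
    return math.degrees(nu)

for m in (1.0, 1.45, 2.0, 4.6):
    print(m, round(prandtl_meyer(m), 2))
```

Together with the oblique-shock relations, this gives the nonlinear pressure-slope variation that pure linearized theory misses at large angles of attack.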

  16. High Frequency Design Considerations for the Large Detector Number and Small Form Factor Dual Electron Spectrometer of the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    Science.gov (United States)

    Kujawski, Joseph T.; Gliese, Ulrik B.; Cao, N. T.; Zeuch, M. A.; White, D.; Chornay, D. J; Lobell, J. V.; Avanov, L. A.; Barrie, A. C.; Mariano, A. J.

    2015-01-01

    Each half of the Dual Electron Spectrometer (DES) of the Fast Plasma Investigation (FPI) on NASA's Magnetospheric MultiScale (MMS) mission utilizes a microchannel plate Chevron stack feeding 16 separate detection channels each with a dedicated anode and amplifier/discriminator chip. The desire to detect events on a single channel with a temporal spacing of 100 ns and a fixed dead-time drove our decision to use an amplifier/discriminator with a very fast (GHz class) front end. Since the inherent frequency response of each pulse in the output of the DES microchannel plate system also has frequency components above a GHz, this produced a number of design constraints not normally expected in electronic systems operating at peak speeds of 10 MHz. Additional constraints are imposed by the geometry of the instrument requiring all 16 channels along with each anode and amplifier/discriminator to be packaged in a relatively small space. We developed an electrical model for board level interactions between the detector channels to allow us to design a board topology which gave us the best detection sensitivity and lowest channel to channel crosstalk. The amplifier/discriminator output was designed to prevent the outputs from one channel from producing triggers on the inputs of other channels. A number of Radio Frequency design techniques were then applied to prevent signals from other subsystems (e.g. the high voltage power supply, command and data handling board, and Ultraviolet stimulation for the MCP) from generating false events. These techniques enabled us to operate the board at its highest sensitivity when operated in isolation and at very high sensitivity when placed into the overall system.

  17. THE MASS-LOSS RETURN FROM EVOLVED STARS TO THE LARGE MAGELLANIC CLOUD. IV. CONSTRUCTION AND VALIDATION OF A GRID OF MODELS FOR OXYGEN-RICH AGB STARS, RED SUPERGIANTS, AND EXTREME AGB STARS

    International Nuclear Information System (INIS)

    Sargent, Benjamin A.; Meixner, M.; Srinivasan, S.

    2011-01-01

    To measure the mass loss from dusty oxygen-rich (O-rich) evolved stars in the Large Magellanic Cloud (LMC), we have constructed a grid of models of spherically symmetric dust shells around stars with constant mass-loss rates using 2Dust. These models will constitute the O-rich model part of the 'Grid of Red supergiant and Asymptotic giant branch star ModelS' (GRAMS). This model grid explores four parameters: stellar effective temperature from 2100 K to 4700 K; luminosity from 10^3 to 10^6 L_sun; dust shell inner radii of 3, 7, 11, and 15 R_star; and 10.0 μm optical depth from 10^-4 to 26. From an initial grid of ∼1200 2Dust models, we create a larger grid of ∼69,000 models by scaling to cover the luminosity range required by the data. These models are available online to the public. The matching in color-magnitude diagrams and color-color diagrams to observed O-rich asymptotic giant branch (AGB) and red supergiant (RSG) candidate stars from the SAGE and SAGE-Spec LMC samples and a small sample of OH/IR stars is generally very good. The extreme AGB star candidates from SAGE are more consistent with carbon-rich (C-rich) than O-rich dust composition. Our model grid suggests lower limits to the mid-infrared colors of the dustiest AGB stars for which the chemistry could be O-rich. Finally, the fitting of GRAMS models to spectral energy distributions of sources fit by other studies provides additional verification of our grid and anticipates future, more expansive efforts.

  18. Climate Extreme Events over Northern Eurasia in Changing Climate

    Science.gov (United States)

    Bulygina, O.; Korshunova, N. N.; Razuvaev, V. N.; Groisman, P. Y.

    2014-12-01

    During the period of widespread instrumental observations in Northern Eurasia, the annual surface air temperature has increased by 1.5°C. Close to the north, in the Arctic Ocean, the late-summer sea ice extent has decreased by 40%, providing a near-infinite source of water vapor for the dry Arctic atmosphere in the early cold season months. The contemporary sea ice changes are especially visible in the Eastern Hemisphere. All these factors affect the occurrence of extreme events. Daily and sub-daily data from 940 stations were used to analyze variations in the space-time distribution of extreme temperatures, precipitation, and wind over Russia. Changes in the number of days with thaw over Russia were described. The total seasonal numbers of days when daily surface air temperatures (wind, precipitation) were found to be above (below) selected thresholds were used as indices of climate extremes. Changes in the difference between maximum and minimum temperature (DTR) may produce a variety of effects on biological systems. All values falling within the intervals ranging from the lowest percentile to the 5th percentile and from the 95th percentile to the highest percentile for the time period of interest were considered as daily extremes. The number of days, N, when daily temperatures (wind, precipitation, DTR) were within the above-mentioned intervals was determined for the seasons of each year. Linear trends in the number of days were calculated for each station and for quasi-homogeneous climatic regions. Regional analysis of extreme events was carried out using quasi-homogeneous climatic regions. Maps (climatology, trends) are presented mostly for visualization purposes. Differences in regional characteristics of extreme events are accounted for by the large extent of the Russian territory and the variety of its physical and geographical conditions. The number of days with maximum temperatures higher than the 95th percentile has increased in most of Russia and decreased in Siberia in…
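The percentile-threshold index described here, the seasonal count of days beyond the 95th percentile, is straightforward to compute. A sketch on synthetic data (all values invented, numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 30-year record of one 92-day season, degrees Celsius:
temps = rng.normal(loc=20.0, scale=5.0, size=(30, 92))

p95 = np.percentile(temps, 95)                    # threshold over the whole period
hot_days_per_season = (temps > p95).sum(axis=1)   # index N for each year

# Linear trend in the index, as computed per station in the study:
slope = np.polyfit(np.arange(30), hot_days_per_season, 1)[0]
print(hot_days_per_season.mean(), round(slope, 3))
```

By construction the mean count is about 5% of the season length; trends in the per-year counts are what reveal changes in extremes.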

  19. Large Diversity of Porcine Yersinia enterocolitica 4/O:3 in Eight European Countries Assessed by Multiple-Locus Variable-Number Tandem-Repeat Analysis.

    Science.gov (United States)

    Alakurtti, Sini; Keto-Timonen, Riikka; Virtanen, Sonja; Martínez, Pilar Ortiz; Laukkanen-Ninios, Riikka; Korkeala, Hannu

    2016-06-01

    A total of 253 multiple-locus variable-number tandem-repeat analysis (MLVA) types among 634 isolates were discovered while studying the genetic diversity of porcine Yersinia enterocolitica 4/O:3 isolates from eight different European countries. Six variable-number tandem-repeat (VNTR) loci V2A, V4, V5, V6, V7, and V9 were used to study the isolates from 82 farms in Belgium (n = 93, 7 farms), England (n = 41, 8 farms), Estonia (n = 106, 12 farms), Finland (n = 70, 13 farms), Italy (n = 111, 20 farms), Latvia (n = 66, 3 farms), Russia (n = 60, 10 farms), and Spain (n = 87, 9 farms). Cluster analysis revealed mainly country-specific clusters, and only one MLVA type consisting of two isolates was found from two countries: Russia and Italy. Farm-specific clusters were also discovered, but the same MLVA types could also be found on different farms. Analysis of multiple isolates originating either from the same tonsils (n = 4) or from the same farm, but 6 months apart, revealed both identical and different MLVA types. MLVA showed a very good discriminatory ability with a Simpson's discriminatory index (DI) of 0.989. DIs for VNTR loci V2A, V4, V5, V6, V7, and V9 were 0.916, 0.791, 0.901, 0.877, 0.912, and 0.785, respectively, when studying all isolates together, but variation was evident between isolates originating from different countries. Locus V4 in the Spanish isolates and locus V9 in the Latvian isolates did not differentiate (DI 0.000), and locus V9 in the English isolates showed very low discriminatory power (DI 0.049). The porcine Y. enterocolitica 4/O:3 isolates were diverse, but the variation in DI demonstrates that the well discriminating loci V2A, V5, V6, and V7 should be included in the MLVA protocol when maximal discriminatory power is needed.
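The Simpson's discriminatory index quoted throughout this record is the probability that two isolates sampled at random (without replacement) belong to different types, DI = 1 - sum n_j(n_j - 1) / (N(N - 1)). A minimal sketch (toy labels, not the study's data):

```python
from collections import Counter

def simpsons_di(type_labels) -> float:
    """Simpson's index of diversity: probability that two isolates drawn
    at random without replacement have different types."""
    n = len(type_labels)
    counts = Counter(type_labels).values()
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Toy example: 10 isolates spread over 5 hypothetical MLVA types
print(round(simpsons_di(list("AAABBBCCDE")), 3))
```

A locus where every isolate shares one type gives DI = 0.000 (no discrimination), while all-distinct types give DI = 1.0, matching the extremes reported above.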

  20. The One-carbon Carrier Methylofuran from Methylobacterium extorquens AM1 Contains a Large Number of α- and γ-Linked Glutamic Acid Residues*

    Science.gov (United States)

    Hemmann, Jethro L.; Saurel, Olivier; Ochsner, Andrea M.; Stodden, Barbara K.; Kiefer, Patrick; Milon, Alain; Vorholt, Julia A.

    2016-01-01

    Methylobacterium extorquens AM1 uses dedicated cofactors for one-carbon unit conversion. Based on the sequence identities of enzymes and activity determinations, a methanofuran analog was proposed to be involved in formaldehyde oxidation in Alphaproteobacteria. Here, we report the structure of the cofactor, which we termed methylofuran. Using an in vitro enzyme assay and LC-MS, methylofuran was identified in cell extracts and further purified. From the exact mass and MS-MS fragmentation pattern, the structure of the cofactor was determined to consist of a polyglutamic acid side chain linked to a core structure similar to the one present in archaeal methanofuran variants. NMR analyses showed that the core structure contains a furan ring. However, instead of the tyramine moiety that is present in methanofuran cofactors, a tyrosine residue is present in methylofuran, which was further confirmed by MS through the incorporation of a 13C-labeled precursor. Methylofuran was present as a mixture of different species with varying numbers of glutamic acid residues in the side chain ranging from 12 to 24. Notably, the glutamic acid residues were not solely γ-linked, as is the case for all known methanofurans, but were identified by NMR as a mixture of α- and γ-linked amino acids. Considering the unusual peptide chain, the elucidation of the structure presented here sets the basis for further research on this cofactor, which is probably the largest cofactor known so far. PMID:26895963

  2. Suspect screening of large numbers of emerging contaminants in environmental waters using artificial neural networks for chromatographic retention time prediction and high resolution mass spectrometry data analysis.

    Science.gov (United States)

    Bade, Richard; Bijlsma, Lubertus; Miller, Thomas H; Barron, Leon P; Sancho, Juan Vicente; Hernández, Felix

    2015-12-15

    The recent development of broad-scope high resolution mass spectrometry (HRMS) screening methods has resulted in a much improved capability for new compound identification in environmental samples. However, positive identifications at the ng/L concentration level rely on analytical reference standards for chromatographic retention time (tR) and mass spectral comparisons. Chromatographic tR prediction can play a role in increasing confidence in suspect screening efforts for new compounds in the environment, especially when standards are not available, but reliable methods are lacking. The current work focuses on the development of artificial neural networks (ANNs) for tR prediction in gradient reversed-phase liquid chromatography and their application, along with HRMS data, to suspect screening of wastewater and environmental surface water samples. Based on a compound tR dataset of >500 compounds, an optimized 4-layer back-propagation multi-layer perceptron model enabled predictions for 85% of all compounds to within 2 min of their measured tR for training (n=344) and verification (n=100) datasets. To evaluate the ANN ability for generalization to new data, the model was further tested using 100 randomly selected compounds and revealed 95% prediction accuracy within the 2-minute elution interval. Given the increasing concern about the presence of drug metabolites and other transformation products (TPs) in the aquatic environment, the model was applied along with HRMS data for preliminary identification of pharmaceutically-related compounds in real samples. Examples of compounds where reference standards were subsequently acquired and later confirmed are also presented. To our knowledge, this work presents, for the first time, the successful application of an accurate retention time predictor and HRMS data-mining using the largest number of compounds to preliminarily identify new or emerging contaminants in wastewater and surface waters.
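The headline accuracy figure above (the fraction of compounds predicted within a 2 min window of the measured tR) reduces to a simple calculation. A minimal sketch with hypothetical retention-time values, not data from the study:

```python
def fraction_within_window(predicted, measured, window=2.0):
    """Fraction of compounds whose predicted retention time (tR) lies
    within +/- `window` minutes of the measured tR."""
    hits = sum(1 for p, m in zip(predicted, measured) if abs(p - m) <= window)
    return hits / len(measured)

# Hypothetical predicted vs. measured retention times (minutes)
predicted = [4.1, 7.9, 12.3, 15.0, 21.7]
measured = [3.5, 8.4, 14.8, 15.2, 20.1]
print(fraction_within_window(predicted, measured))  # 0.8, i.e. 80% within 2 min
```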

  3. Extensive unusual lesions on a large number of immersed human victims found to be from cookiecutter sharks (Isistius spp.): an examination of the Yemenia plane crash.

    Science.gov (United States)

    Ribéreau-Gayon, Agathe; Rando, Carolyn; Schuliar, Yves; Chapenoire, Stéphane; Crema, Enrico R; Claes, Julien; Seret, Bernard; Maleret, Vincent; Morgan, Ruth M

    2017-03-01

    Accurate determination of the origin and timing of trauma is key in medicolegal investigations when the cause and manner of death are unknown. However, distinction between criminal and accidental perimortem trauma and postmortem modifications can be challenging when facing unidentified trauma. Postmortem examination of the immersed victims of the Yemenia airplane crash (Comoros, 2009) demonstrated the challenges in diagnosing extensive unusual circular lesions found on the corpses. The objective of this study was to identify the origin and timing of occurrence (peri- or postmortem) of the lesions. A retrospective multidisciplinary study using autopsy reports (n = 113) and postmortem digital photos (n = 3,579) was conducted. Of the 113 victims recovered from the crash, 62 (54.9 %) presented unusual lesions (n = 560) with a median number of 7 (IQR 3–13) and a maximum of 27 per corpse. The majority of lesions were elliptic (58 %) and had an area smaller than 10 cm² (82.1 %). Some lesions (6.8 %) also showed clear tooth notches on their edges. These findings identified most of the lesions as consistent with postmortem bite marks from cookiecutter sharks (Isistius spp.). It suggests that cookiecutter sharks were important agents in the degradation of the corpses and thus introduced potential cognitive bias in the investigation of the cause and manner of death. A novel set of evidence-based identification criteria for cookiecutter bite marks on human bodies is developed to facilitate more accurate medicolegal diagnosis of cookiecutter bites.

  4. Extreme Conditions Modeling Workshop Report

    Energy Technology Data Exchange (ETDEWEB)

    Coe, R. G.; Neary, V. S.; Lawson, M. J.; Yu, Y.; Weber, J.

    2014-07-01

    Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) hosted the Wave Energy Converter (WEC) Extreme Conditions Modeling (ECM) Workshop in Albuquerque, NM on May 13th-14th, 2014. The objective of the workshop was to review the current state of knowledge on how to model WECs in extreme conditions (e.g. hurricanes and other large storms) and to suggest how U.S. Department of Energy (DOE) and national laboratory resources could be used to improve ECM methods for the benefit of the wave energy industry.

  5. Analysis of the Latitudinal Variability of Tropospheric Ozone in the Arctic Using the Large Number of Aircraft and Ozonesonde Observations in Early Summer 2008

    Science.gov (United States)

    Ancellet, Gerard; Daskalakis, Nikos; Raut, Jean Christophe; Quennehen, Boris; Ravetta, Francois; Hair, Jonathan; Tarasick, David; Schlager, Hans; Weinheimer, Andrew J.; Thompson, Anne M.

    2016-01-01

    The goals of the paper are: (1) to present tropospheric ozone (O3) climatologies in summer 2008 based on a large amount of measurements, collected during the International Polar Year when the Polar Study using Aircraft, Remote Sensing, Surface Measurements, and Models of Climate Chemistry, Aerosols, and Transport (POLARCAT) campaigns were conducted; and (2) to investigate the processes that determine O3 concentrations in two different regions (Canada and Greenland) that were thoroughly studied using measurements from 3 aircraft and 7 ozonesonde stations. This paper provides an integrated analysis of these observations, and the latitudinal and vertical variability of tropospheric ozone north of 55°N during this period is discussed using a regional model (WRF-Chem). Ozone, CO and potential vorticity (PV) distributions are extracted from the simulation at the measurement locations. The model is able to reproduce the O3 latitudinal and vertical variability, but a negative O3 bias of 6-15 ppbv is found in the free troposphere above 4 km, especially over Canada. Ozone average concentrations are of the order of 65 ppbv at altitudes above 4 km both over Canada and Greenland, while they are less than 50 ppbv in the lower troposphere. The relative influence of stratosphere-troposphere exchange (STE) and of ozone production related to the local biomass burning (BB) emissions is discussed using differences between average values of O3, CO and PV for Southern and Northern Canada or Greenland and two vertical ranges in the troposphere: 0-4 km and 4-8 km. For Canada, the model CO distribution and the weak correlation (less than 30%) of O3 and PV suggest that stratosphere-troposphere exchange (STE) is not the major contribution to average tropospheric ozone at latitudes less than 70°N, due to the fact that local biomass burning (BB) emissions were significant during the 2008 summer period. Conversely over Greenland, significant STE is found according to the better O3 versus PV

  6. Legacies from extreme drought increase ecosystem sensitivity to future extremes

    Science.gov (United States)

    Smith, M. D.; Knapp, A.; Hoover, D. L.; Avolio, M. L.; Felton, A. J.; Wilcox, K. R.

    2016-12-01

    Climate extremes, such as drought, are increasing in frequency and intensity, and the ecological consequences of these extreme events can be substantial and widespread. Although there is still much to be learned about how ecosystems will respond to an intensification of drought, even less is known about the factors that determine post-drought recovery of ecosystem function. Such knowledge is particularly important because post-drought recovery periods can be protracted depending on the extent to which key plant populations, community structure and biogeochemical processes are affected. These drought legacies may alter ecosystem function for many years post-drought and may impact future sensitivity to climate extremes. We experimentally imposed two extreme growing season droughts in a central US grassland to assess the impacts of repeated droughts on ecosystem resistance (response) and resilience (recovery). We found that this grassland was not resistant to the first extreme drought due to reduced productivity and differential sensitivity of the co-dominant C4 grass (Andropogon gerardii) and C3 forb (Solidago canadensis) species. This differential sensitivity led to a reordering of species abundances within the plant community. Yet, despite this large shift in plant community composition, which persisted post-drought, the grassland was highly resilient post-drought, due to increased abundance of the dominant C4 grass. Because of this shift to increased C4 grass dominance, we expected that previously-droughted grassland would be more resistant to a second extreme drought. However, contrary to these expectations, previously droughted grassland was more sensitive to drought than grassland that had not experienced drought. Thus, our results suggest that legacies of drought (a shift in community composition) may increase ecosystem sensitivity to future extreme events.

  7. Extreme environment electronics

    CERN Document Server

    Cressler, John D

    2012-01-01

    Unfriendly to conventional electronic devices, circuits, and systems, extreme environments represent a serious challenge to designers and mission architects. The first truly comprehensive guide to this specialized field, Extreme Environment Electronics explains the essential aspects of designing and using devices, circuits, and electronic systems intended to operate in extreme environments, including across wide temperature ranges and in radiation-intense scenarios such as space. The Definitive Guide to Extreme Environment Electronics Featuring contributions by some of the world's foremost exp

  8. Seasonal temperature extremes in Potsdam

    Science.gov (United States)

    Kundzewicz, Zbigniew; Huang, Shaochun

    2010-12-01

    The awareness of global warming is well established and results from the observations made on thousands of stations. This paper complements the large-scale results by examining a long time-series of high-quality temperature data from the Secular Meteorological Station in Potsdam, where observation records over the last 117 years, i.e., from January 1893 are available. Tendencies of change in seasonal temperature-related climate extremes are demonstrated. "Cold" extremes have become less frequent and less severe than in the past, while "warm" extremes have become more frequent and more severe. Moreover, the interval of the occurrence of frost has been decreasing, while the interval of the occurrence of hot days has been increasing. However, many changes are not statistically significant, since the variability of temperature indices at the Potsdam station has been very strong.

  9. Extremal vectors and rectifiability | Enflo | Quaestiones Mathematicae

    African Journals Online (AJOL)

    Extremal vectors and rectifiability. ... The concept of extremal vectors of a linear operator with a dense range but not onto on a Hilbert space was introduced by P. Enflo in 1996 as a new approach to study invariant subspaces ... We show that in general curves that map numbers to backward minimal vectors are not rectifiable.

  10. Those fascinating numbers

    CERN Document Server

    Koninck, Jean-Marie De

    2009-01-01

    Who would have thought that listing the positive integers along with their most remarkable properties could end up being such an engaging and stimulating adventure? The author uses this approach to explore elementary and advanced topics in classical number theory. A large variety of numbers are contemplated: Fermat numbers, Mersenne primes, powerful numbers, sublime numbers, Wieferich primes, insolite numbers, Sastry numbers, voracious numbers, to name only a few. The author also presents short proofs of miscellaneous results and constantly challenges the reader with a variety of old and new n

  11. Extreme seismicity and disaster risks: Hazard versus vulnerability (Invited)

    Science.gov (United States)

    Ismail-Zadeh, A.

    2013-12-01

    Although the extreme nature of earthquakes has been known for millennia due to the resultant devastation from many of them, the vulnerability of our civilization to extreme seismic events is still growing. It is partly because of the increase in the number of high-risk objects and the clustering of populations and infrastructure in areas prone to seismic hazards. Today an earthquake may affect several hundred thousand lives and cause damage of up to a hundred billion dollars; it can trigger an ecological catastrophe if it occurs in close vicinity to a nuclear power plant. Two types of extreme natural events can be distinguished: (i) large-magnitude, low-probability events, and (ii) events leading to disasters. Although the first-type events may affect earthquake-prone countries directly or indirectly (as tsunamis, landslides etc.), the second-type events occur mainly in economically less-developed countries where the vulnerability is high and the resilience is low. Although earthquake hazards cannot be reduced, vulnerability to extreme events can be diminished by monitoring human systems and by relevant laws preventing an increase in vulnerability. Significant new knowledge should be gained on extreme seismicity through observations, monitoring, analysis, modeling, comprehensive hazard assessment, prediction, and interpretation to assist in disaster risk analysis. Advanced disaster risk communication skills should be developed to link scientists, emergency management authorities, and the public. Natural, social, economic, and political factors leading to disasters due to earthquakes will be discussed.

  12. Transcriptome and network changes in climbers at extreme altitudes.

    Directory of Open Access Journals (Sweden)

    Fang Chen

    Full Text Available Extreme altitude can induce a range of cellular and systemic responses. Although it is known that hypoxia underlies the major changes and that the physiological responses include hemodynamic changes and erythropoiesis, the molecular mechanisms and signaling pathways mediating such changes are largely unknown. To obtain a more complete picture of the transcriptional regulatory landscape and networks involved in the extreme altitude response, we followed four climbers on an expedition up Mount Xixiabangma (8,012 m) and collected blood samples at four stages during the climb for mRNA and miRNA expression assays. By analyzing dynamic changes of gene networks in response to extreme altitudes, we uncovered a highly modular network with 7 modules of various functions that changed in response to extreme altitudes. The erythrocyte differentiation module is the most prominently up-regulated, reflecting increased erythrocyte differentiation from hematopoietic stem cells, probably at the expense of differentiation into other cell lineages. These changes are accompanied by coordinated down-regulation of general translation. Network topology and flow analyses also uncovered regulators known to modulate hypoxia responses and erythrocyte development, as well as previously unknown regulators, such as the OCT4 gene, an important regulator in stem cells previously assumed to function only in stem cells. We predicted computationally and validated experimentally that increased OCT4 expression at extreme altitude can directly elevate the expression of hemoglobin genes. Our approach established a new framework for analyzing the transcriptional regulatory network from a very limited number of samples.

  13. Extreme value distributions

    CERN Document Server

    Ahsanullah, Mohammad

    2016-01-01

    The aim of the book is to give a thorough account of the basic theory of extreme value distributions. The book covers a wide range of materials available to date. The central ideas and results of extreme value distributions are presented. The book will be useful to applied statisticians as well as statisticians interested in working in the area of extreme value distributions. The monograph gives a self-contained account of the theory and applications of extreme value distributions.

  14. Are BALQSOs extreme accretors?

    Science.gov (United States)

    Yuan, M. J.; Wills, B. J.

    2002-12-01

    Broad Absorption Line (BAL) QSOs are QSOs with massive absorbing outflows at velocities up to 0.2c. Two hypotheses have been suggested in the past about the nature of BALQSOs: (i) every QSO might have a BAL outflow with some covering factor, and BALQSOs are those which happen to have the outflow along our line of sight; (ii) BALQSOs have intrinsically different physical properties than non-BALQSOs. Based on BALQSOs' optical emission properties and a large set of correlations linking many general QSO emission-line and continuum properties, it has been suggested that BALQSOs might accrete at near the Eddington limit with abundant fuel supplies. With new BALQSO Hβ region spectroscopic observations conducted at UKIRT and a re-analysis of literature data for low- and high-redshift non-BALQSOs, we confirm that BALQSOs have extreme Fe II and [O III] emission-line properties. Using results derived from the latest QSO Hβ region reverberation mapping, we calculated Eddington ratios (Ṁ/Ṁ_Edd) for our BAL and non-BALQSOs. The Fe II and [O III] strengths are strongly correlated with Eddington ratios. These correlations link the Eddington ratio to a large set of general QSO properties through the Boroson & Green Eigenvector 1. We find that BALQSOs have Eddington ratios close to 1. However, all high-redshift, high-luminosity QSOs have rather high Eddington ratios. We argue that this is a side effect of selecting the brightest objects. In fact, our high-redshift sample might constitute BALQSOs' high-Eddington-ratio orientation parent population.
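The Eddington ratio discussed above compares the bolometric luminosity with the Eddington luminosity, L_Edd ≈ 1.26×10^38 (M/M_⊙) erg/s. A minimal sketch of the calculation, using an illustrative mass and luminosity rather than the paper's measurements:

```python
# Standard Eddington limit per solar mass, in erg/s (for ionized hydrogen)
L_EDD_PER_SOLAR_MASS = 1.26e38

def eddington_ratio(l_bol_erg_s, m_bh_solar):
    """Ratio of bolometric luminosity to the Eddington luminosity
    of a black hole of mass m_bh_solar (in solar masses)."""
    return l_bol_erg_s / (L_EDD_PER_SOLAR_MASS * m_bh_solar)

# A hypothetical luminous QSO: M_BH = 1e9 M_sun, L_bol = 1e47 erg/s
print(round(eddington_ratio(1e47, 1e9), 2))  # 0.79, i.e. near-Eddington accretion
```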

  15. Gaming the Law of Large Numbers

    Science.gov (United States)

    Hoffman, Thomas R.; Snapp, Bart

    2012-01-01

    Many view mathematics as a rich and wonderfully elaborate game. In turn, games can be used to illustrate mathematical ideas. Fibber's Dice, an adaptation of the game Liar's Dice, is a fast-paced game that rewards gutsy moves and favors the underdog. It also brings to life concepts arising in the study of probability. In particular, Fibber's Dice…
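The probability concept the game brings to life, the law of large numbers, can be illustrated in a few lines: the running mean of repeated die rolls approaches the expected value 3.5 as the number of rolls grows. A plain simulation sketch, not the game itself:

```python
import random

def mean_of_rolls(n, seed=0):
    """Average of n rolls of a fair six-sided die (seeded for reproducibility)."""
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(n)) / n

# The sample mean drifts toward the expected value 3.5 as n grows
for n in (10, 1000, 100000):
    print(n, mean_of_rolls(n))
```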

  16. Hupa Numbers.

    Science.gov (United States)

    Bennett, Ruth, Ed.; And Others

    An introduction to the Hupa number system is provided in this workbook, one in a series of numerous materials developed to promote the use of the Hupa language. The book is written in English with Hupa terms used only for the names of numbers. The opening pages present the numbers from 1-10, giving the numeral, the Hupa word, the English word, and…

  17. Triangular Numbers

    Indian Academy of Sciences (India)

    Admin

    Keywords: triangular number, figurate number, rangoli, Brahmagupta–Pell equation, Jacobi triple product identity. Figure 1: The first four triangular numbers. Anuradha S Garge completed her PhD from Pune University in 2008 under the supervision of Prof. S A Katre. Her research interests include K-theory and number theory.
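For readers following along, the n-th triangular number is T_n = n(n+1)/2, and the Brahmagupta–Pell connection in the keywords arises when asking which triangular numbers are also perfect squares. A small illustrative sketch:

```python
from math import isqrt

def triangular(n):
    """n-th triangular number, T_n = n(n+1)/2."""
    return n * (n + 1) // 2

def is_square(x):
    return isqrt(x) ** 2 == x

print([triangular(n) for n in range(1, 5)])  # [1, 3, 6, 10]
# Square triangular numbers: solutions of the Brahmagupta-Pell-type equation
square_triangulars = [triangular(n) for n in range(1, 100) if is_square(triangular(n))]
print(square_triangulars)  # [1, 36, 1225]
```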

  18. Proth Numbers

    Directory of Open Access Journals (Sweden)

    Schwarzweller Christoph

    2015-02-01

    Full Text Available In this article we introduce Proth numbers and prove two theorems on such numbers being prime [3]. We also give revised versions of Pocklington’s theorem and of the Legendre symbol. Finally, we prove Pepin’s theorem and that the fifth Fermat number is not prime.
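The two results named in the abstract have compact computational forms. Proth's theorem states that a Proth number N = k·2^n + 1 (odd k < 2^n) is prime if a^((N−1)/2) ≡ −1 (mod N) for some a; Pepin's test is the special case for Fermat numbers F_m = 2^(2^m) + 1 with witness 3. A sketch (the witness range is an arbitrary illustrative choice):

```python
def proth_prime(N, witnesses=range(2, 50)):
    """True if some witness certifies N prime via Proth's theorem.
    Assumes N is a Proth number; failing to find a witness in the
    range does not by itself prove N composite."""
    return any(pow(a, (N - 1) // 2, N) == N - 1 for a in witnesses)

def pepin(m):
    """Pepin's test: F_m = 2^(2^m)+1 is prime iff 3^((F_m-1)/2) == -1 (mod F_m)."""
    F = 2 ** (2 ** m) + 1
    return pow(3, (F - 1) // 2, F) == F - 1

# F_1..F_4 are prime; F_5 = 4294967297 is not (as the article proves)
print([pepin(m) for m in range(1, 6)])  # [True, True, True, True, False]
```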

  19. Sagan numbers

    OpenAIRE

    Mendonça, J. Ricardo G.

    2012-01-01

    We define a new class of numbers based on the first occurrence of certain patterns of zeros and ones in the expansion of irrational numbers in a given base and call them Sagan numbers, since they were first mentioned, in a special case, by the North American astronomer Carl E. Sagan in his science-fiction novel "Contact." Sagan numbers hold connections with a wealth of mathematical ideas. We describe some properties of the newly defined numbers and indicate directions for further amusement.

  20. Promoting Exit from Violent Extremism

    DEFF Research Database (Denmark)

    Dalgaard-Nielsen, Anja

    2013-01-01

    A number of Western countries are currently adding exit programs targeting militant Islamists to their counterterrorism efforts. Drawing on research into voluntary exit from violent extremism, this article identifies themes and issues that seem to cause doubt, leading to exit. It then provides a ...... the influence attempt as subtle as possible, use narratives and self-affirmatory strategies to reduce resistance to persuasion, and consider the possibility to promote attitudinal change via behavioral change as an alternative to seek to influence beliefs directly....

  1. Characterization and prediction of extreme events in turbulence

    Science.gov (United States)

    Fonda, Enrico; Iyer, Kartik P.; Sreenivasan, Katepalli R.

    2017-11-01

    Extreme events in Nature such as tornadoes, large floods and strong earthquakes are rare but can have devastating consequences. The predictability of these events is very limited at present. Extreme events in turbulence are the very large events in small scales that are intermittent in character. We examine events in energy dissipation rate and enstrophy which are several tens to hundreds to thousands of times the mean value. To this end we use our DNS database of homogeneous and isotropic turbulence with Taylor Reynolds numbers spanning a decade, computed with different small scale resolutions and different box sizes, and study the predictability of these events using machine learning. We start with an aggressive data augmentation to virtually increase the number of these rare events by two orders of magnitude and train a deep convolutional neural network to predict their occurrence in an independent data set. The goal of the work is to explore whether extreme events can be predicted with greater assurance than can be done by conventional methods (e.g., D.A. Donzis & K.R. Sreenivasan, J. Fluid Mech. 647, 13-26, 2010).

  2. Changes in the number of nesting pairs and breeding success of the White Stork Ciconia ciconia in a large city and a neighbouring rural area in South-West Poland

    Directory of Open Access Journals (Sweden)

    Kopij Grzegorz

    2017-12-01

    Full Text Available During the years 1994–2009, the number of White Stork pairs breeding in the city of Wrocław (293 km²) fluctuated between 5 pairs in 1999 and 19 pairs in 2004. Most nests were clumped in two sites in the Odra river valley. Two nests were located only ca. 1 km from the city hall. The fluctuations in numbers can be linked to the availability of feeding grounds and weather. In years when grass was mowed in the Odra valley, the number of White Storks was higher than in years when the grass was left unattended. Overall, the mean number of fledglings per successful pair during the years 1995–2009 was slightly higher in the rural than in the urban area. Contrary to expectation, the mean number of fledglings per successful pair was the highest in the year of highest population density. In two rural counties adjacent to Wrocław, the number of breeding pairs was similar to that in the city in 1994/95 (15 vs. 13 pairs). However, in 2004 the number of breeding pairs in the city almost doubled compared to that in the neighboring counties (10 vs. 19 pairs). After a sharp decline between 2004 and 2008, populations in both areas were similar in 2009 (5 vs. 4 pairs), but much lower than in 1994–1995. Wrocław is probably the only large city (>100,000 people) in Poland where the White Stork has developed a sizeable, although fluctuating, breeding population. One of the most powerful roles the city-nesting White Storks may play is their ability to directly engage citizens with nature and in that way facilitate environmental education and awareness.

  3. Eulerian numbers

    CERN Document Server

    Petersen, T Kyle

    2015-01-01

    This text presents the Eulerian numbers in the context of modern enumerative, algebraic, and geometric combinatorics. The book first studies Eulerian numbers from a purely combinatorial point of view, then embarks on a tour of how these numbers arise in the study of hyperplane arrangements, polytopes, and simplicial complexes. Some topics include a thorough discussion of gamma-nonnegativity and real-rootedness for Eulerian polynomials, as well as the weak order and the shard intersection order of the symmetric group. The book also includes a parallel story of Catalan combinatorics, wherein the Eulerian numbers are replaced with Narayana numbers. Again there is a progression from combinatorics to geometry, including discussion of the associahedron and the lattice of noncrossing partitions. The final chapters discuss how both the Eulerian and Narayana numbers have analogues in any finite Coxeter group, with many of the same enumerative and geometric properties. There are four supplemental chapters throughout, ...

  4. How extreme is extreme hourly precipitation?

    Science.gov (United States)

    Papalexiou, Simon Michael; Dialynas, Yannis G.; Pappas, Christoforos

    2016-04-01

    The importance of an accurate representation of precipitation at fine time scales (e.g., hourly), directly associated with flash flood events, is crucial in hydrological design and prediction. The upper part of a probability distribution, known as the distribution tail, determines the behavior of extreme events. In general, and loosely speaking, tails can be categorized into two families: the subexponential and the hyperexponential family, with the former generating more intense and more frequent extremes compared to the latter. In past studies, the focus has been mainly on daily precipitation, with the Gamma distribution being the most popular model. Here, we investigate the behaviour of tails of hourly precipitation by comparing the upper part of empirical distributions of thousands of records with three general types of tails corresponding to the Pareto, Lognormal, and Weibull distributions. Specifically, we use thousands of hourly rainfall records from all over the USA. The analysis indicates that heavier-tailed distributions describe the observed hourly rainfall extremes better than lighter tails. Traditional representations of the marginal distribution of hourly rainfall may significantly deviate from the observed behaviour of extremes, with direct implications for hydroclimatic variable modelling and engineering design.
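The distinction between the two tail families can be made concrete by comparing survival functions S(x) = P(X > x): a power-law (Pareto) tail decays polynomially, while an exponential-type (here Weibull with shape k = 1) tail decays much faster, so the heavy tail assigns far more probability to extremes. A sketch with illustrative parameters, not fitted to any rainfall record:

```python
import math

def pareto_sf(x, xm=1.0, alpha=2.0):
    """Pareto survival function S(x) = (xm/x)^alpha for x >= xm (power-law tail)."""
    return (xm / x) ** alpha

def weibull_sf(x, lam=1.0, k=1.0):
    """Weibull survival function S(x) = exp(-(x/lam)^k); k = 1 is exponential."""
    return math.exp(-(x / lam) ** k)

# The power-law tail dominates the exponential-type tail at large x
for x in (5, 20, 50):
    print(x, pareto_sf(x), weibull_sf(x))
```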

  5. Extreme interplanetary rotational discontinuities at 1 AU

    Science.gov (United States)

    Lepping, R. P.; Wu, C.-C.

    2005-11-01

    This study is concerned with the identification and description of a special subset of four Wind interplanetary rotational discontinuities (from an earlier study of 134 directional discontinuities by Lepping et al. (2003)) with some "extreme" characteristics, in the sense that every case has (1) an almost planar current sheet surface, (2) a very large discontinuity angle (ω), (3) at least moderately strong normal field components (>0.8 nT), and (4) the overall set has a very broad range of transition layer thicknesses, with one being as thick as 50 RE and another at the other extreme being 1.6 RE, most being much thicker than are usually studied. Each example has a well-determined surface normal (n) according to minimum variance analysis and corroborated via time delay checking of the discontinuity with observations at IMP 8 by employing the local surface planarity. From the variance analyses, most of these cases had unusually large ratios of intermediate-to-minimum eigenvalues (λI/λmin), being on average 32 for three cases (with a fourth being much larger), indicating compact current sheet transition zones, another (the fifth) extreme property. For many years there has been a controversy as to the relative distribution of rotational (RDs) to tangential discontinuities (TDs) in the solar wind at 1 AU (and elsewhere, such as between the Sun and Earth), even to the point where some authors have suggested that RDs with large ∣Bn∣s are probably not generated or, if generated, are unstable and therefore very rare. Some of this disagreement apparently has been due to the different selection criteria used, e.g., some allowed eigenvalue ratios (λI/λmin) to be almost an order of magnitude lower than 32 in estimating n, usually introducing unacceptable error in n and therefore also in ∣Bn∣. However, we suggest that RDs may not be so rare at 1 AU, but good quality cases (where ∣Bn∣ confidently exceeds the error in ∣Bn∣) appear to be uncommon, and further

  6. Transfinite Numbers

    Indian Academy of Sciences (India)

    Transfinite Numbers. What is Infinity? S M Srivastava. In a series of revolutionary articles written during the last quarter of the nineteenth century, the great German mathematician Georg Cantor removed the age-old mistrust of infinity and created an exceptionally beautiful and useful theory of transfinite numbers. This is.

  7. Room for wind. An investigation into the possibilities for the erection of large numbers of wind turbines. Ruimte voor wind. Een studie naar de plaatsingsmogelijkheden van grote aantallen windturbines

    Energy Technology Data Exchange (ETDEWEB)

    Arkesteijn, L; Van Huis, G; Reckman, E

    1987-01-01

    The Dutch government aims to realize a wind power capacity in The Netherlands of 1000 MW in the year 2000. Environmental impacts of the erection of a large number of 200 kW and 1 MW wind turbines are studied. Four siting models have been developed in which attention is paid to environmental and economic aspects, the possibilities to introduce the electric power into the national power grid and the availability and reliability of enough wind. Noise pollution and danger for birds are to be avoided. The choice between the construction of wind parks where a number of wind turbines is concentrated in a small area or a more dispersed construction is somewhat difficult if all relevant factors are to be taken into consideration. Without government's interference the target of 1000 MW in the year 2000 will probably not be attained. It is therefore desirable to practise an active energy policy in favor of wind energy, for which many ways are possible.

  8. Classifying Returns as Extreme

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2014-01-01

    I consider extreme returns for the stock and bond markets of 14 EU countries using two classification schemes: One, the univariate classification scheme from the previous literature that classifies extreme returns for each market separately, and two, a novel multivariate classification scheme tha...
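A univariate classification scheme of the kind described can be sketched as flagging returns that fall below a low sample quantile. The 10% threshold and the return series here are illustrative choices, not the paper's:

```python
def lower_quantile(xs, q):
    """Value of the q-quantile from below (simple order-statistic estimate)."""
    s = sorted(xs)
    idx = max(0, int(q * len(s)) - 1)
    return s[idx]

def classify_extreme(returns, q=0.10):
    """Flag each return as extreme if it lies at or below the lower q-quantile."""
    thr = lower_quantile(returns, q)
    return [r <= thr for r in returns]

# Hypothetical daily returns for one market
returns = [0.01, -0.002, 0.004, -0.08, 0.003, 0.007, -0.01, 0.002, 0.005, -0.001,
           0.012, -0.003, 0.006, -0.004, 0.009, 0.001, -0.006, 0.008, 0.0, -0.05]
flags = classify_extreme(returns)
print([r for r, f in zip(returns, flags) if f])  # [-0.08, -0.05]
```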

  9. Beurling generalized numbers

    CERN Document Server

    Diamond, Harold G; Cheung, Man Ping

    2016-01-01

    "Generalized numbers" is a multiplicative structure introduced by A. Beurling to study how independent prime number theory is from the additivity of the natural numbers. The results and techniques of this theory apply to other systems having the character of prime numbers and integers; for example, it is used in the study of the prime number theorem (PNT) for ideals of algebraic number fields. Using both analytic and elementary methods, this book presents many old and new theorems, including several of the authors' results, and many examples of extremal behavior of g-number systems. Also, the authors give detailed accounts of the L^2 PNT theorem of J. P. Kahane and of the example created with H. L. Montgomery, showing that additive structure is needed for proving the Riemann hypothesis. Other interesting topics discussed are propositions "equivalent" to the PNT, the role of multiplicative convolution and Chebyshev's prime number formula for g-numbers, and how Beurling theory provides an interpretation of the ...

  10. Chocolate Numbers

    OpenAIRE

    Ji, Caleb; Khovanova, Tanya; Park, Robin; Song, Angela

    2015-01-01

    In this paper, we consider a game played on a rectangular $m \\times n$ gridded chocolate bar. Each move, a player breaks the bar along a grid line. Each move after that consists of taking any piece of chocolate and breaking it again along existing grid lines, until just $mn$ individual squares remain. This paper enumerates the number of ways to break an $m \\times n$ bar, which we call chocolate numbers, and introduces four new sequences related to these numbers. Using various techniques, we p...
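The counting described in this abstract admits a short first-break recurrence: the first break splits the bar in two, and the remaining breaks of the two pieces interleave freely. The sketch below is a reconstruction from the abstract's description, not code from the paper, and the recurrence itself is an assumption inferred from the game's rules:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def chocolate(m: int, n: int) -> int:
    """Ordered ways to break an m x n bar into its mn unit squares.

    First-break recurrence: a break splits the bar into two pieces, and
    the remaining breaks of the two pieces interleave freely, hence the
    binomial factor choosing which move-slots belong to the first piece.
    """
    if m * n == 1:
        return 1
    total = 0
    rest = m * n - 2  # breaks remaining after the first one
    for i in range(1, m):  # vertical first break -> i x n and (m-i) x n
        total += comb(rest, i * n - 1) * chocolate(i, n) * chocolate(m - i, n)
    for j in range(1, n):  # horizontal first break -> m x j and m x (n-j)
        total += comb(rest, m * j - 1) * chocolate(m, j) * chocolate(m, n - j)
    return total

print(chocolate(2, 2))  # -> 4
print(chocolate(1, 5))  # a 1 x n bar gives (n-1)! orderings -> 24
```

For a 1 x n bar every remaining break line is always a legal move, so the count reduces to (n-1)!; in two dimensions the interleaving factor does real work.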

  11. Number theory

    CERN Document Server

    Andrews, George E

    1994-01-01

    Although mathematics majors are usually conversant with number theory by the time they have completed a course in abstract algebra, other undergraduates, especially those in education and the liberal arts, often need a more basic introduction to the topic. In this book the author solves the problem of maintaining the interest of students at both levels by offering a combinatorial approach to elementary number theory. In studying number theory from such a perspective, mathematics majors are spared repetition and provided with new insights, while other students benefit from the consequent simpl

  12. Nice numbers

    CERN Document Server

    Barnes, John

    2016-01-01

    In this intriguing book, John Barnes takes us on a journey through aspects of numbers much as he took us on a geometrical journey in Gems of Geometry. Similarly originating from a series of lectures for adult students at Reading and Oxford University, this book touches on a variety of amusing and fascinating topics regarding numbers and their uses both ancient and modern. The author intrigues and challenges his audience with both fundamental number topics such as prime numbers and cryptography, and themes of daily needs and pleasures such as counting one's assets, keeping track of time, and enjoying music. Puzzles and exercises at the end of each lecture offer additional inspiration, and numerous illustrations accompany the reader. Furthermore, a number of appendices provide in-depth insights into diverse topics such as Pascal’s triangle, the Rubik cube, Mersenne’s curious keyboards, and many others. A theme running through is the thought of what is our favourite number. Written in an engaging and witty sty...

  13. Extreme Conditions Modeling Workshop Report

    Energy Technology Data Exchange (ETDEWEB)

    Coe, Ryan Geoffrey [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Neary, Vincent Sinclair [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Lawon, Michael J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Yu, Yi-Hsiang [National Renewable Energy Lab. (NREL), Golden, CO (United States); Weber, Jochem [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2014-07-01

    Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) hosted the Wave Energy Converter (WEC) Extreme Conditions Modeling (ECM) Workshop in Albuquerque, New Mexico on May 13–14, 2014. The objective of the workshop was to review the current state of knowledge on how to numerically and experimentally model WECs in extreme conditions (e.g. large ocean storms) and to suggest how national laboratory resources could be used to improve ECM methods for the benefit of the wave energy industry. More than 30 U.S. and European WEC experts from industry, academia, and national research institutes attended the workshop, which consisted of presentations from WEC developers, invited keynote presentations from subject matter experts, breakout sessions, and a final plenary session.

  14. On causality of extreme events

    Directory of Open Access Journals (Sweden)

    Massimiliano Zanin

    2016-06-01

    Full Text Available Multiple metrics have been developed to detect causality relations between data describing the elements constituting complex systems, all of them considering their evolution through time. Here we propose a metric able to detect causality within static data sets, by analysing how extreme events in one element correspond to the appearance of extreme events in a second one. The metric is able to detect non-linear causalities; to analyse both cross-sectional and longitudinal data sets; and to discriminate between real causalities and correlations caused by confounding factors. We validate the metric through synthetic data, dynamical and chaotic systems, and data representing the human brain activity in a cognitive task. We further show how the proposed metric is able to outperform classical causality metrics, provided non-linear relationships are present and large enough data sets are available.

  15. Extreme Nonlinear Optics An Introduction

    CERN Document Server

    Wegener, Martin

    2005-01-01

    Following the birth of the laser in 1960, the field of "nonlinear optics" rapidly emerged. Today, laser intensities and pulse durations are readily available, for which the concepts and approximations of traditional nonlinear optics no longer apply. In this regime of "extreme nonlinear optics," a large variety of novel and unusual effects arise, for example frequency doubling in inversion symmetric materials or high-harmonic generation in gases, which can lead to attosecond electromagnetic pulses or pulse trains. Other examples of "extreme nonlinear optics" cover diverse areas such as solid-state physics, atomic physics, relativistic free electrons in a vacuum and even the vacuum itself. This book starts with an introduction to the field based primarily on extensions of two famous textbook examples, namely the Lorentz oscillator model and the Drude model. Here the level of sophistication should be accessible to any undergraduate physics student. Many graphical illustrations and examples are given. The followi...

  16. Climate variations and changes in extreme climate events in Russia

    International Nuclear Information System (INIS)

    Bulygina, O N; Razuvaev, V N; Korshunova, N N; Groisman, P Ya

    2007-01-01

    Daily temperature (mean, minimum and maximum) and atmospheric precipitation data from 857 stations are used to analyze variations in the space-time distribution of extreme temperatures and precipitation across Russia during the past six decades. The seasonal numbers of days (N) when daily air temperatures (diurnal temperature range, precipitation) were higher or lower than selected thresholds are used as indices of climatic extremes. Linear trends in N are calculated for each station for the time period of interest. The seasonal number of days with maximum temperatures higher than the 95th percentile has increased over most of Russia, while the number of days with minimum temperatures lower than the 5th percentile has decreased. A tendency toward fewer days with an abnormally high diurnal temperature range is observed over most of Russia. In individual regions of Russia, however, a tendency for an increasing number of days with a large diurnal amplitude is found. The largest tendency for an increasing number of days with heavy precipitation is observed in winter in Western Siberia and Yakutia
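The exceedance-count index N described in this abstract is straightforward to compute; the following is a purely synthetic sketch with made-up station data (not the study's 857-station dataset), showing the percentile threshold, the per-season count, and a per-station linear trend:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical station record: 10 "years" of 90-day seasons (made-up data)
tmax = rng.normal(loc=20.0, scale=5.0, size=(10, 90))

threshold = np.percentile(tmax, 95)      # 95th percentile of the full record
n_days = (tmax > threshold).sum(axis=1)  # index N: exceedance days per season

# Linear trend in N (days per year), as fitted per station in the study
slope = np.polyfit(np.arange(n_days.size), n_days, 1)[0]
print(threshold, n_days, slope)
```

By construction about 5% of all days exceed the threshold; a real analysis would compute the percentile from a fixed base period rather than the whole record.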

  17. Neutrino number of the universe

    International Nuclear Information System (INIS)

    Kolb, E.W.

    1981-01-01

    The influence of grand unified theories on the lepton number of the universe is reviewed. A scenario is presented for the generation of a large (>> 1) lepton number and a small (<< 1) baryon number. 15 references

  18. Extreme river flow dependence in Northern Scotland

    Science.gov (United States)

    Villoria, M. Franco; Scott, M.; Hoey, T.; Fischbacher-Smith, D.

    2012-04-01

    Various methods for the spatial analysis of hydrologic data have been developed recently. Here we present results using the conditional probability approach proposed by Keef et al. [Appl. Stat. (2009): 58, 601-18] to investigate spatial interdependence in extreme river flows in Scotland. This approach does not require the specification of a correlation function, making it most suitable for relatively small geographical areas. The work is motivated by the Flood Risk Management (Scotland) Act 2009, which requires maps of flood risk that take account of spatial dependence in extreme river flow. The method is based on two conditional measures of spatial flood risk: firstly the conditional probability P_C(p) that a set of sites Y = (Y_1,...,Y_d) within a region C of interest exceed a flow threshold Q_p at time t (or any lag of t), given that the specified conditioning site X > Q_p; and secondly the expected number of sites within C that will exceed a flow Q_p on average (given that X > Q_p). The conditional probabilities are estimated using the conditional distribution of Y|X = x (for large x), which can be modeled using a semi-parametric approach (Heffernan and Tawn [Roy. Statist. Soc. Ser. B (2004): 66, 497-546]). Once the model is fitted, pseudo-samples can be generated to estimate functionals of the joint tails of the distribution of (Y,X). Conditional return level plots were directly compared to traditional return level plots, thus improving our understanding of the dependence structure of extreme river flow events. Confidence intervals were calculated using block bootstrapping methods (100 replicates). We report results from applying this approach to a set of four rivers (Dulnain, Lossie, Ewe and Ness) in Northern Scotland. These sites were chosen based on data quality, spatial location and catchment characteristics. The river Ness, being the largest (catchment size 1839.1 km2), was chosen as the conditioning river. Both the Ewe (441.1 km2) and Ness catchments have
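The conditional measure P_C(p) has a simple empirical analogue that conveys the idea without the semi-parametric Heffernan-Tawn machinery. The sketch below uses synthetic flows; the Gumbel margins and the 0.6/0.4 mixing weights are illustrative assumptions, not fitted Scottish data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
# Synthetic daily flows: X is the conditioning site, Y a dependent site
x = rng.gumbel(size=n)
y = 0.6 * x + 0.4 * rng.gumbel(size=n)  # dependence carried into the tail

p = 0.99
qx, qy = np.quantile(x, p), np.quantile(y, p)

# Empirical analogue of P_C(p): given that X exceeds its p-quantile,
# how often does Y exceed its own p-quantile?
cond = (y[x > qx] > qy).mean()
print(round(cond, 3), "vs", 1 - p, "under independence")
```

Under exact independence the conditional probability would equal 1 - p; values far above that signal tail dependence, which is what the conditional return level plots in the study quantify.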

  19. Number names and number understanding

    DEFF Research Database (Denmark)

    Ejersbo, Lisser Rye; Misfeldt, Morten

    2014-01-01

    This paper concerns the results from the first year of a three-year research project involving the relationship between Danish number names and their corresponding digits in the canonical base 10 system. The project aims to develop a system to help the students’ understanding of the base 10 syste...... the Danish number names are more complicated than in other languages. Keywords: A research project in grade 0 and 1th in a Danish school, Base-10 system, two-digit number names, semiotic, cognitive perspectives....

  20. Extremal surface barriers

    International Nuclear Information System (INIS)

    Engelhardt, Netta; Wall, Aron C.

    2014-01-01

    We present a generic condition for Lorentzian manifolds to have a barrier that limits the reach of boundary-anchored extremal surfaces of arbitrary dimension. We show that any surface with nonpositive extrinsic curvature is a barrier, in the sense that extremal surfaces cannot be continuously deformed past it. Furthermore, the outermost barrier surface has nonnegative extrinsic curvature. Under certain conditions, we show that the existence of trapped surfaces implies a barrier, and conversely. In the context of AdS/CFT, these barriers imply that it is impossible to reconstruct the entire bulk using extremal surfaces. We comment on the implications for the firewall controversy

  1. Funny Numbers

    Directory of Open Access Journals (Sweden)

    Theodore M. Porter

    2012-12-01

    Full Text Available The struggle over cure rate measures in nineteenth-century asylums provides an exemplary instance of how, when used for official assessments of institutions, these numbers become sites of contestation. The evasion of goals and corruption of measures tends to make these numbers “funny” in the sense of becoming dishonest, while the mismatch between boring, technical appearances and cunning backstage manipulations supplies dark humor. The dangers are evident in recent efforts to decentralize the functions of governments and corporations using incentives based on quantified targets.

  2. Transcendental numbers

    CERN Document Server

    Murty, M Ram

    2014-01-01

    This book provides an introduction to the topic of transcendental numbers for upper-level undergraduate and graduate students. The text is constructed to support a full course on the subject, including descriptions of both relevant theorems and their applications. While the first part of the book focuses on introducing key concepts, the second part presents more complex material, including applications of Baker’s theorem, Schanuel’s conjecture, and Schneider’s theorem. These later chapters may be of interest to researchers interested in examining the relationship between transcendence and L-functions. Readers of this text should possess basic knowledge of complex analysis and elementary algebraic number theory.

  3. Templates, Numbers & Watercolors.

    Science.gov (United States)

    Clemesha, David J.

    1990-01-01

    Describes how a second-grade class used large templates to draw and paint five-digit numbers. The lesson integrated artistic knowledge and vocabulary with their mathematics lesson in place value. Students learned how draftspeople use templates, and they studied number paintings by Charles Demuth and Jasper Johns. (KM)

  4. Transfinite Numbers

    Indian Academy of Sciences (India)

    This is a characteristic difference between finite and infinite sets, and Cantor created an immensely useful branch of mathematics based on this idea, which had a great impact on the whole of mathematics. For example, the question of what a number is (finite or infinite) is almost a philosophical one. However Cantor's work turned it ...

  5. Statistics of Extremes

    KAUST Repository

    Davison, Anthony C.; Huser, Raphaël

    2015-01-01

    Statistics of extremes concerns inference for rare events. Often the events have never yet been observed, and their probabilities must therefore be estimated by extrapolation of tail models fitted to available data. Because data concerning the event

  6. Analysis of extreme events

    CSIR Research Space (South Africa)

    Khuluse, S

    2009-04-01

    Full Text Available ... (ii) determination of the distribution of the damage and (iii) preparation of products that enable prediction of future risk events. The methodology provided by extreme value theory can also be a powerful tool in risk analysis...

  7. Acute lower extremity ischaemia

    African Journals Online (AJOL)

    Acute lower extremity ischaemia. Acute lower limb ischaemia is a surgical emergency. ... is ~1.5 cases per 10 000 persons per year. Acute ischaemia ... Table 2. Clinical features discriminating embolic from thrombotic ALEXI. Clinical features.

  8. Long term oscillations in Danish rainfall extremes

    DEFF Research Database (Denmark)

    Gregersen, Ida Bülow; Madsen, Henrik; Rosbjerg, Dan

    The frequent flooding of European cities within the last decade has motivated a vast number of studies, among others addressing the non-stationary behaviour of hydrological extremes driven by anthropogenic climate change. However, when considering future extremes it also becomes relevant to search...... for and understand natural variations on which the anthropogenic changes are imposed. This study identifies multi-decadal variations in six 137-years-long diurnal rainfall series from Denmark and southern Sweden, focusing on extremes with a recurrence level relevant for Danish drainage design. By means of a Peak...

  9. Preconditioned iterations to calculate extreme eigenvalues

    Energy Technology Data Exchange (ETDEWEB)

    Brand, C.W.; Petrova, S. [Institut fuer Angewandte Mathematik, Leoben (Austria)

    1994-12-31

    Common iterative algorithms to calculate a few extreme eigenvalues of a large, sparse matrix are Lanczos methods or power iterations. They converge at a rate proportional to the separation of the extreme eigenvalues from the rest of the spectrum. Appropriate preconditioning improves the separation of the eigenvalues. Davidson's method and its generalizations exploit this fact. The authors examine a preconditioned iteration that resembles a truncated version of Davidson's method with a different preconditioning strategy.
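Power iteration, the baseline the abstract compares against, fits in a few lines. This sketch uses a small synthetic symmetric matrix with a known spectrum so the convergence is easy to verify; the Davidson-style preconditioning discussed in the abstract is not shown:

```python
import numpy as np

rng = np.random.default_rng(1)
# Symmetric test matrix with a known spectrum: A = Q D Q^T, so lambda_max = 10
D = np.diag([10.0, 4.0, 3.0, 2.0, 1.0])
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = Q @ D @ Q.T

def power_iteration(A, iters=200):
    """Plain power iteration; the error shrinks like |lambda_2/lambda_1|^k."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v  # Rayleigh quotient estimate of the extreme eigenvalue

print(round(power_iteration(A), 6))  # -> 10.0
```

Here the ratio |lambda_2/lambda_1| = 0.4, so convergence is rapid; when the extreme eigenvalues are poorly separated from the rest of the spectrum, this ratio approaches 1, which is exactly the situation preconditioning is meant to improve.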

  10. Attribution of climate extreme events

    Science.gov (United States)

    Trenberth, Kevin E.; Fasullo, John T.; Shepherd, Theodore G.

    2015-08-01

    There is a tremendous desire to attribute causes to weather and climate events, which is often challenging from a physical standpoint. Headlines attributing an event solely to either human-induced climate change or natural variability can be misleading when both are invariably in play. The conventional attribution framework struggles with dynamically driven extremes because of the small signal-to-noise ratios and often uncertain nature of the forced changes. Here, we suggest that a different framing is desirable, which asks why such extremes unfold the way they do. Specifically, we suggest that it is more useful to regard the extreme circulation regime or weather event as being largely unaffected by climate change, and question whether known changes in the climate system's thermodynamic state affected the impact of the particular event. Some examples briefly illustrated include 'Snowmageddon' in February 2010, Superstorm Sandy in October 2012 and Supertyphoon Haiyan in November 2013, and, in more detail, the Boulder floods of September 2013, all of which were influenced by high sea surface temperatures that had a discernible human component.

  11. Extreme Programming: Maestro Style

    Science.gov (United States)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2009-01-01

    "Extreme Programming: Maestro Style" is the name of a computer programming methodology that has evolved as a custom version of a methodology, called extreme programming that has been practiced in the software industry since the late 1990s. The name of this version reflects its origin in the work of the Maestro team at NASA's Jet Propulsion Laboratory that develops software for Mars exploration missions. Extreme programming is oriented toward agile development of software resting on values of simplicity, communication, testing, and aggressiveness. Extreme programming involves use of methods of rapidly building and disseminating institutional knowledge among members of a computer-programming team to give all the members a shared view that matches the view of the customers for whom the software system is to be developed. Extreme programming includes frequent planning by programmers in collaboration with customers, continually examining and rewriting code in striving for the simplest workable software designs, a system metaphor (basically, an abstraction of the system that provides easy-to-remember software-naming conventions and insight into the architecture of the system), programmers working in pairs, adherence to a set of coding standards, collaboration of customers and programmers, frequent verbal communication, frequent releases of software in small increments of development, repeated testing of the developmental software by both programmers and customers, and continuous interaction between the team and the customers. The environment in which the Maestro team works requires the team to quickly adapt to changing needs of its customers. In addition, the team cannot afford to accept unnecessary development risk. Extreme programming enables the Maestro team to remain agile and provide high-quality software and service to its customers. However, several factors in the Maestro environment have made it necessary to modify some of the conventional extreme

  12. Asymptotic numbers: Pt.1

    International Nuclear Information System (INIS)

    Todorov, T.D.

    1980-01-01

    The set of asymptotic numbers A as a system of generalized numbers including the system of real numbers R, as well as infinitely small (infinitesimals) and infinitely large numbers, is introduced. The detailed algebraic properties of A, which are unusual as compared with the known algebraic structures, are studied. It is proved that the set of asymptotic numbers A cannot be isomorphically embedded as a subspace in any group, ring or field, but some particular subsets of asymptotic numbers are shown to be groups, rings, and fields. The algebraic operation, additive and multiplicative forms, and the algebraic properties are constructed in an appropriate way. It is shown that the asymptotic numbers give rise to a new type of generalized functions quite analogous to the distributions of Schwartz allowing, however, the operation multiplication. A possible application of these functions to quantum theory is discussed

  13. Extreme Weather and Climate: Workshop Report

    Science.gov (United States)

    Sobel, Adam; Camargo, Suzana; Debucquoy, Wim; Deodatis, George; Gerrard, Michael; Hall, Timothy; Hallman, Robert; Keenan, Jesse; Lall, Upmanu; Levy, Marc

    2016-01-01

    Extreme events are the aspects of climate to which human society is most sensitive. Due to both their severity and their rarity, extreme events can challenge the capacity of physical, social, economic and political infrastructures, turning natural events into human disasters. Yet, because they are low frequency events, the science of extreme events is very challenging. Among the challenges is the difficulty of connecting extreme events to longer-term, large-scale variability and trends in the climate system, including anthropogenic climate change. How can we best quantify the risks posed by extreme weather events, both in the current climate and in the warmer and different climates to come? How can we better predict them? What can we do to reduce the harm done by such events? In response to these questions, the Initiative on Extreme Weather and Climate has been created at Columbia University in New York City (extremeweather.columbia.edu). This Initiative is a University-wide activity focused on understanding the risks to human life, property, infrastructure, communities, institutions, ecosystems, and landscapes from extreme weather events, both in the present and future climates, and on developing solutions to mitigate those risks. In May 2015, the Initiative held its first science workshop, entitled Extreme Weather and Climate: Hazards, Impacts, Actions. The purpose of the workshop was to define the scope of the Initiative; the tremendously broad intellectual footprint of the topic is indicated by the titles of the presentations (see Table 1). The intent of the workshop was to stimulate thought across disciplinary lines by juxtaposing talks whose subjects differed dramatically. Each session concluded with question and answer panel sessions. Approximately 150 people were in attendance throughout the day. Below is a brief synopsis of each presentation. The synopses collectively reflect the variety and richness of the emerging extreme event research agenda.

  14. Extreme meteorological conditions

    International Nuclear Information System (INIS)

    Altinger de Schwarzkopf, M.L.

    1983-01-01

    Different meteorological variables which may reach significant extreme values, such as wind speed and, in particular, its occurrence through tornadoes and hurricanes, which necessarily must be taken into account when siting nuclear power plants, are analyzed. For this kind of study it is necessary to determine the design-basis phenomenon. Two criteria are applied to define the design-basis values for extreme meteorological variables. The first determines the expected extreme value: it is obtained by analyzing the recurrence of the phenomenon over an agreed period of time, generally 50 years. The second determines the extreme value of low probability, taking into account the nuclear power plant's operating life (e.g. 25 years) and considering, during that span, the occurrence probabilities of extreme meteorological phenomena. The values may be determined either by the deterministic method, which is based on knowledge of the fundamental physical characteristics of the phenomena, or by the probabilistic method, which relies on the analysis of historical statistical data. Brief comments are made on the subject in relation to the Argentine Republic. (R.J.S.) [es

  15. Acclimatization to extreme heat

    Science.gov (United States)

    Warner, M. E.; Ganguly, A. R.; Bhatia, U.

    2017-12-01

    Heat extremes throughout the globe, as well as in the United States, are expected to increase. These heat extremes have been shown to impact human health, causing some of the highest death tolls of any comparable natural disaster. But in order to inform decision makers and best understand future mortality and morbidity, adaptation and mitigation must be considered. Defined as the ability of individuals or society to change behavior and/or adapt physiologically, acclimatization encompasses the gradual adaptation that occurs over time. Therefore, this research aims to account for acclimatization to extreme heat by using a hybrid methodology that incorporates future air conditioning use and installation patterns with future temperature-related time series data. While previous studies have not accounted for energy usage patterns and market saturation scenarios, we integrate such factors to compare the impact of air conditioning as a tool for acclimatization, with a particular emphasis on mortality within vulnerable communities.

  16. Extremely deformable structures

    CERN Document Server

    2015-01-01

    Recently, a new research stimulus has derived from the observation that soft structures, such as biological systems, but also rubber and gel, may work in a post critical regime, where elastic elements are subject to extreme deformations, though still exhibiting excellent mechanical performances. This is the realm of ‘extreme mechanics’, to which this book is addressed. The possibility of exploiting highly deformable structures opens new and unexpected technological possibilities. In particular, the challenge is the design of deformable and bi-stable mechanisms which can reach superior mechanical performances and can have a strong impact on several high-tech applications, including stretchable electronics, nanotube serpentines, deployable structures for aerospace engineering, cable deployment in the ocean, but also sensors and flexible actuators and vibration absorbers. Readers are introduced to a variety of interrelated topics involving the mechanics of extremely deformable structures, with emphasis on ...

  17. Statistics of Extremes

    KAUST Repository

    Davison, Anthony C.

    2015-04-10

    Statistics of extremes concerns inference for rare events. Often the events have never yet been observed, and their probabilities must therefore be estimated by extrapolation of tail models fitted to available data. Because data concerning the event of interest may be very limited, efficient methods of inference play an important role. This article reviews this domain, emphasizing current research topics. We first sketch the classical theory of extremes for maxima and threshold exceedances of stationary series. We then review multivariate theory, distinguishing asymptotic independence and dependence models, followed by a description of models for spatial and spatiotemporal extreme events. Finally, we discuss inference and describe two applications. Animations illustrate some of the main ideas. © 2015 by Annual Reviews. All rights reserved.
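The threshold-exceedance extrapolation this review describes can be sketched minimally with the generalized Pareto moment estimators. Everything below is illustrative: synthetic unit-exponential data (so the true tail is known) and textbook moment estimators, not the refined inference methods the review covers:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(size=100_000)   # synthetic stationary series, known tail
u = np.quantile(x, 0.95)            # high threshold
exc = x[x > u] - u                  # threshold exceedances

# Standard moment estimators for the generalized Pareto shape (xi) and
# scale (sigma) of the excesses (valid for xi < 1/2)
m, s2 = exc.mean(), exc.var()
xi = 0.5 * (1.0 - m**2 / s2)
sigma = 0.5 * m * (1.0 + m**2 / s2)

# Extrapolate beyond the bulk of the data: P(X > q) = zeta_u * GPD survival
zeta = (x > u).mean()
q = 10.0
p_tail = zeta * (1.0 + xi * (q - u) / sigma) ** (-1.0 / xi)
print(xi, sigma, p_tail)  # true tail probability is exp(-10) ~ 4.5e-5
```

For exponential data the true shape is xi = 0, so the fitted survival function should recover a nearly exponential tail; the point of the method is that q lies far beyond most of the observed data, exactly the "never yet observed" regime the abstract describes.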

  18. Adventure and Extreme Sports.

    Science.gov (United States)

    Gomez, Andrew Thomas; Rao, Ashwin

    2016-03-01

    Adventure and extreme sports often involve unpredictable and inhospitable environments, high velocities, and stunts. These activities vary widely and include sports like BASE jumping, snowboarding, kayaking, and surfing. Increasing interest and participation in adventure and extreme sports warrants understanding by clinicians to facilitate prevention, identification, and treatment of injuries unique to each sport. This article covers alpine skiing and snowboarding, skateboarding, surfing, bungee jumping, BASE jumping, and whitewater sports with emphasis on epidemiology, demographics, general injury mechanisms, specific injuries, chronic injuries, fatality data, and prevention. Overall, most injuries are related to overuse, trauma, and environmental or microbial exposure. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Extremal graph theory

    CERN Document Server

    Bollobas, Bela

    2004-01-01

    The ever-expanding field of extremal graph theory encompasses a diverse array of problem-solving methods, including applications to economics, computer science, and optimization theory. This volume, based on a series of lectures delivered to graduate students at the University of Cambridge, presents a concise yet comprehensive treatment of extremal graph theory.Unlike most graph theory treatises, this text features complete proofs for almost all of its results. Further insights into theory are provided by the numerous exercises of varying degrees of difficulty that accompany each chapter. A

  20. Extremely low temperature properties of epoxy GFRP

    International Nuclear Information System (INIS)

    Kadotani, Kenzo; Nagai, Matao; Aki, Fumitake.

    1983-01-01

    The examination of fiber-reinforced plastics, that is, plastics such as epoxy, polyester and polyimide reinforced with high strength fibers such as glass, carbon, boron and steel, for extremely low temperature use began with the fuel tanks of rockets. Thereafter, the trial manufacture of superconducting generators and extremely low temperature transformers and the manufacture of superconducting magnets for nuclear fusion experimental setups became active, and high performance FRPs have been adopted, of which the extremely low temperature properties have been sufficiently grasped. Recently, cryostats made of FRPs have been developed, fully utilizing such features of FRPs as high strength, high rigidity, non-magnetic behavior, insulation, low heat conductivity, light weight and freedom of molding. In this paper, the mechanical properties at extremely low temperature of the plastic composite materials used as insulators and structural materials for extremely low temperature superconducting equipment are outlined, and in particular, glass fiber-reinforced epoxy laminates are described in some detail. The fracture strain of GFRP at extremely low temperature is about 1.3 times as large as that at room temperature, but at extremely low temperature, clear cracking occurred at 40% of the fracture strain. The linear thermal contraction of GFRP showed remarkable anisotropy. (Kako, I.)

  1. Atomic and electronic structures of an extremely fragile liquid.

    Science.gov (United States)

    Kohara, Shinji; Akola, Jaakko; Patrikeev, Leonid; Ropo, Matti; Ohara, Koji; Itou, Masayoshi; Fujiwara, Akihiko; Yahiro, Jumpei; Okada, Junpei T; Ishikawa, Takehiko; Mizuno, Akitoshi; Masuno, Atsunobu; Watanabe, Yasuhiro; Usuki, Takeshi

    2014-12-18

    The structure of high-temperature liquids is an important topic for understanding the fragility of liquids. Here we report the structure of a high-temperature non-glass-forming oxide liquid, ZrO2, at an atomistic and electronic level. The Bhatia-Thornton number-number structure factor of ZrO2 does not show a first sharp diffraction peak. The atomic structure comprises ZrO5, ZrO6 and ZrO7 polyhedra with a significant contribution of edge sharing of oxygen in addition to corner sharing. The variety of large oxygen coordination and polyhedral connections with short Zr-O bond lifetimes, induced by the relatively large ionic radius of zirconium, disturbs the evolution of intermediate-range ordering, which leads to a reduced electronic band gap and increased delocalization in the ionic Zr-O bonding. The details of the chemical bonding explain the extremely low viscosity of the liquid and the absence of a first sharp diffraction peak, and indicate that liquid ZrO2 is an extremely fragile liquid.

  2. Large deviations

    CERN Document Server

    Varadhan, S R S

    2016-01-01

    The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.
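The decay-rate idea can be made concrete with a small sketch (an illustration, not an example from the book): by Cramér's theorem, for fair coin flips the probability that the sample mean of n flips exceeds a level a decays like exp(-n·I(a)), with an explicit rate function I. The function names and the Monte Carlo check below are assumptions.

```python
import math
import random

def rate_function(a, p=0.5):
    """Cramer rate function I(a) for the mean of Bernoulli(p) variables."""
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

def empirical_decay(n, a, p=0.5, trials=50_000, seed=0):
    """Monte Carlo estimate of -(1/n) * log P(S_n / n >= a)."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if sum(rng.random() < p for _ in range(n)) >= a * n
    )
    return -math.log(hits / trials) / n if hits else float("inf")

print(round(rate_function(0.7), 4))  # 0.0823
print(empirical_decay(50, 0.7))      # same order; approaches I(0.7) as n grows
```

For n = 50 the Monte Carlo estimate already sits near I(0.7); the gap closes as n increases, which is exactly the large-deviation statement.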

  3. Extremal vacuum black holes in higher dimensions

    International Nuclear Information System (INIS)

    Figueras, Pau; Lucietti, James; Rangamani, Mukund; Kunduri, Hari K.

    2008-01-01

    We consider extremal black hole solutions to the vacuum Einstein equations in dimensions greater than five. We prove that the near-horizon geometry of any such black hole must possess an SO(2,1) symmetry in a special case where one has an enhanced rotational symmetry group. We construct examples of vacuum near-horizon geometries using the extremal Myers-Perry black holes and boosted Myers-Perry strings. The latter lead to near-horizon geometries of black ring topology, which in odd spacetime dimensions have the correct number of rotational symmetries to describe an asymptotically flat black object. We argue that a subset of these correspond to the near-horizon limit of asymptotically flat extremal black rings. Using this identification we provide a conjecture for the exact 'phase diagram' of extremal vacuum black rings with a connected horizon in odd spacetime dimensions greater than five.

  4. Prospect for extreme field science

    Energy Technology Data Exchange (ETDEWEB)

    Tajima, T. [Ludwig Maximilian Univ. and Max Planck Institute for Quantum Optics, Garching (Germany); Japan Atomic Energy Agency, Kyoto and KEK, Tsukuba (Japan)

    2009-11-15

    The kind of laser that the extreme light infrastructure (ELI) provides will usher in a class of experiments we have only dreamed of for years. The characteristics that ELI brings include: the highest intensity ever, large fluence, and a relatively high repetition rate. A personal view of the author on the prospect of harnessing this unprecedented opportunity for advancing the science of extreme fields is presented. The first characteristic of ELI, its intensity, will allow us to access, as many have stressed already, extreme fields that hover around the Schwinger field or at the very least the neighboring fields in which vacuum begins to behave as a nonlinear medium. In this sense, we are seriously probing the 'material' property of vacuum and thus the property that the theory of relativity itself described and entails. We will probe both the special and the general theory of relativity in regimes that have never been tested so far. We may see a glimpse into the reach of relativity, or even its breakdown, in some extreme regimes. We will learn from Einstein and may even go beyond Einstein, if our journey so leads us. The potential of laser-driven acceleration, both by the laser field itself and by the wakefield that is triggered in a plasma, is huge: the energies, if not the luminosity, we can access may be unprecedented, going far beyond TeV. The nice thing about ELI is that it has a relatively high repetition rate and average fluence as compared with other extreme lasers. This high fluence can be a key element that leads to applications in high energy physics, such as a gamma-gamma collider driver experiment, and some gamma-ray experiments that may be relevant at the frontier of photo-nuclear physics, and atomic energy applications. Needless to say, high fluence is one of the most important features that industrial and medical applications may need. If we are lucky, we may see a door open at the frontier of novel physics that may not be accessible by any other means. (authors)

  5. Stellar extreme ultraviolet astronomy

    International Nuclear Information System (INIS)

    Cash, W.C. Jr.

    1978-01-01

    The design, calibration, and launch of a rocket-borne imaging telescope for extreme ultraviolet astronomy are described. The telescope, which employed diamond-turned grazing incidence optics and a ranicon detector, was launched November 19, 1976, from the White Sands Missile Range. The telescope performed well and returned data on several potential stellar sources of extreme ultraviolet radiation. Upper limits ten to twenty times more sensitive than previously available were obtained for the extreme ultraviolet flux from the white dwarf Sirius B. These limits fall a factor of seven below the flux predicted for the star and demonstrate that the temperature of Sirius B is not 32,000 K as previously measured, but is below 30,000 K. The new upper limits also rule out the photosphere of the white dwarf as the source of the recently reported soft x-rays from Sirius. Two other white dwarf stars, Feige 24 and G191-B2B, were observed. Upper limits on the flux at 300 A were interpreted as lower limits on the interstellar hydrogen column densities to these stars. The lower limits indicate interstellar hydrogen densities of greater than 0.02 cm^-3. Four nearby stars (Sirius, Procyon, Capella, and Mirzam) were observed in a search for intense low temperature coronae or extended chromospheres. No extreme ultraviolet radiation from these stars was detected, and upper limits to their coronal emission measures are derived.

  6. Extremity x-ray

    Science.gov (United States)

    MedlinePlus medical encyclopedia entry (article 003461). Risks: there is low-level radiation exposure; x-rays are monitored and regulated to provide the ...

  7. Extremity perfusion for sarcoma

    NARCIS (Netherlands)

    Hoekstra, Harald Joan

    2008-01-01

    For more than 50 years, the technique of extremity perfusion has been explored in the limb salvage treatment of local, recurrent, and multifocal sarcomas. The "discovery" of tumor necrosis factor-α in combination with melphalan was a real breakthrough in the treatment of primarily irresectable

  8. Statistics of Local Extremes

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Bierbooms, W.; Hansen, Kurt Schaldemose

    2003-01-01

    A theoretical expression for the probability density function associated with local extremes of a stochastic process is presented. The expression is based on the lower four statistical moments and a bandwidth parameter. The theoretical expression is subsequently verified by comparison with simulated...

  9. Modeling, Forecasting and Mitigating Extreme Earthquakes

    Science.gov (United States)

    Ismail-Zadeh, A.; Le Mouel, J.; Soloviev, A.

    2012-12-01

    Recent earthquake disasters have highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations together with comprehensive modeling of earthquakes and forecasting extreme events. Extreme earthquakes (large magnitude and rare events) are manifestations of complex behavior of the lithosphere structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of extreme events comes from observations, measurements and modeling. A quantitative approach to simulate earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity, allow studying extreme events, influence of fault network properties on seismic patterns and seismic cycles, and assist, in a broader sense, in earthquake forecast modeling. Some aspects of predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities in mitigation of earthquake disasters (e.g., on 'inverse' forensic investigations of earthquake disasters).
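The frequency-magnitude relationship mentioned above is the Gutenberg-Richter law, log10 N(≥M) = a - b·M. As a hedged sketch (the synthetic catalog and function names are assumptions, not the authors' models), the b-value can be recovered from a magnitude catalog with Aki's maximum-likelihood estimator:

```python
import math
import random

def estimate_b_value(mags, m_min):
    """Aki's maximum-likelihood estimate of the Gutenberg-Richter b-value:
    b = 1 / (ln(10) * (mean(M) - m_min))."""
    mean_m = sum(mags) / len(mags)
    return 1.0 / (math.log(10) * (mean_m - m_min))

# Synthetic catalog: under Gutenberg-Richter with b = 1.0, magnitudes above
# the completeness magnitude m_min are exponential with rate b * ln(10).
rng = random.Random(42)
b_true, m_min = 1.0, 3.0
catalog = [m_min + rng.expovariate(b_true * math.log(10)) for _ in range(50_000)]

print(estimate_b_value(catalog, m_min))  # close to the true b = 1.0
```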

  10. Extremity salvage with a free musculocutaneous latissimus dorsi flap and free tendon transfer after resection of a large congenital fibrosarcoma in a 15-week-old infant. A case report.

    Science.gov (United States)

    Germann, G; Waag, K-L; Selle, B; Jester, A

    2006-01-01

    A case of complex microsurgical reconstruction of the dorsum of the foot, including tendon transfer following tumor resection, in a 15-week-old male infant is presented. After birth, a 5.5 x 4 cm tumor was observed on the dorsum of the right foot. Biopsy showed a congenital malignant fibrosarcoma. After initial chemotherapy, radical excision of the tumor followed at the age of 14 weeks. To cover the defect, a musculocutaneous latissimus dorsi flap was taken, the cutaneous part being large enough to cover the defect. Extensor tendons were reconstructed with free tendon transplants. Amputation is usually indicated in these cases. To the best of our knowledge, microsurgical reconstruction in infants at this age with congenital malignant tumors has not yet been reported. The case shows that plastic surgery can play an important role in pediatric oncology and should routinely be integrated into multi-modal treatment concepts. (c) 2006 Wiley-Liss, Inc. Microsurgery, 2006.

  11. Analyses of Observed and Anticipated Changes in Extreme Climate Events in the Northwest Himalaya

    Directory of Open Access Journals (Sweden)

    Dharmaveer Singh

    2016-02-01

    In this study, past (1970-2005) as well as future long-term (2011-2099) trends in various extreme events of temperature and precipitation have been investigated over selected hydro-meteorological stations in the Sutlej river basin. The ensembles of two Coupled Model Intercomparison Project (CMIP3) models, the third-generation Canadian Coupled Global Climate Model and the Hadley Centre Coupled Model, have been used for simulation of future daily time series of temperature (maximum and minimum) and precipitation under the A2 emission scenario. Large-scale atmospheric variables of both models and National Centre for Environmental Prediction/National Centre for Atmospheric Research reanalysis data sets have been downscaled using a statistical downscaling technique at individual stations. A total of 25 extreme indices of temperature (14) and precipitation (11), as specified by the Expert Team of the World Meteorological Organization and Climate Variability and Predictability, are derived for the past and future periods. Trends in extreme indices are detected over time using the modified Mann-Kendall test method. The stations which have shown either a decrease or no change in hot extreme events (i.e., maximum TMax, warm days, warm nights, maximum TMin, tropical nights, summer days and warm spell duration indicators) for 1970-2005 and an increase in cold extreme events (cool days, cool nights, frost days and cold spell duration indicators) are predicted to increase and decrease, respectively, in the future. In addition, an increase in frequency and intensity of extreme precipitation events is also predicted.
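Several of the ETCCDI-style indices used in studies like this are simple to compute from a daily precipitation series. A minimal sketch, assuming daily totals in millimetres; the function names and toy series are illustrative:

```python
def rx5day(precip):
    """ETCCDI Rx5day: maximum consecutive five-day precipitation total."""
    return max(sum(precip[i:i + 5]) for i in range(len(precip) - 4))

def heavy_rain_days(precip, threshold=10.0):
    """ETCCDI R10mm-style count: number of days with precipitation >= threshold."""
    return sum(1 for p in precip if p >= threshold)

daily_mm = [0, 2, 15, 40, 3, 0, 0, 12, 0, 5]
print(rx5day(daily_mm))           # 60 (the window 0+2+15+40+3)
print(heavy_rain_days(daily_mm))  # 3 (the days with 15, 40 and 12 mm)
```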

  12. Sequences of extremal radially excited rotating black holes.

    Science.gov (United States)

    Blázquez-Salcedo, Jose Luis; Kunz, Jutta; Navarro-Lérida, Francisco; Radu, Eugen

    2014-01-10

    In the Einstein-Maxwell-Chern-Simons theory the extremal Reissner-Nordström solution is no longer the single extremal solution with vanishing angular momentum, when the Chern-Simons coupling constant reaches a critical value. Instead a whole sequence of rotating extremal J=0 solutions arises, labeled by the node number of the magnetic U(1) potential. Associated with the same near horizon solution, the mass of these radially excited extremal solutions converges to the mass of the extremal Reissner-Nordström solution. On the other hand, not all near horizon solutions are also realized as global solutions.

  13. Extreme Networks' 10-Gigabit Ethernet enables

    CERN Multimedia

    2002-01-01

    " Extreme Networks, Inc.'s 10-Gigabit switching platform enabled researchers to transfer one Terabyte of information from Vancouver to Geneva across a single network hop, the world's first large-scale, end-to-end transfer of its kind" (1/2 page).

  14. Extremes in nature

    CERN Document Server

    Salvadori, Gianfausto; Kottegoda, Nathabandu T

    2007-01-01

    This book is about the theoretical and practical aspects of the statistics of Extreme Events in Nature. Most importantly, this is the first text in which Copulas are introduced and used in Geophysics. Several topics are fully original, and show how standard models and calculations can be improved by exploiting the opportunities offered by Copulas. In addition, new quantities useful for design and risk assessment are introduced.

  15. Rhabdomyosarcoma of the extremity

    International Nuclear Information System (INIS)

    Rao, Bhaskar N

    1997-01-01

    Rhabdomyosarcoma is the most common soft tissue sarcoma, accounting for almost 55%. These tumors arise from unsegmented mesoderm or primitive mesenchyme, which have the capacity to differentiate into muscle. Less than 5% occur in the first year of life. Extremity rhabdomyosarcomas are mainly seen in the adolescent years. The most common histologic subtype is the alveolar variant. Other characteristics of extremity rhabdomyosarcoma include a predilection for lymph node metastasis, a high local failure rate, and a relatively low survival rate. They often present as slow-growing painless masses; however, lesions in the hand and foot often present as painful masses, and imaging studies may show invasion of the bone. Initial diagnostic approaches include needle biopsy or incisional biopsy for larger lesions. Excisional biopsy is indicated preferably for lesions less than 2.5 cm. Following this, in most instances therapy is initiated with multi-agent chemotherapy; depending upon response, the next modality may be either surgery with intent to cure or radiation therapy. Amputation of an extremity for local control is not considered in most instances. Prognostic factors determined over the years to be significant by multivariate analysis have included age, tumor size, invasiveness, presence of either nodal or distant metastasis, and complete excision whenever feasible, with supplemental radiation therapy for local control.

  16. Coherence techniques at extreme ultraviolet wavelengths

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chang [Univ. of California, Berkeley, CA (United States)

    2002-01-01

    The renaissance of Extreme Ultraviolet (EUV) and soft x-ray (SXR) optics in recent years is mainly driven by the desire of printing and observing ever smaller features, as in lithography and microscopy. This attribute is complemented by the unique opportunity for element specific identification presented by the large number of atomic resonances, essentially for all materials in this range of photon energies. Together, these have driven the need for new short-wavelength radiation sources (e.g. third generation synchrotron radiation facilities), and novel optical components, that in turn permit new research in areas that have not yet been fully explored. This dissertation is directed towards advancing this new field by contributing to the characterization of spatial coherence properties of undulator radiation and, for the first time, introducing Fourier optical elements to this short-wavelength spectral region. The first experiment in this dissertation uses the Thompson-Wolf two-pinhole method to characterize the spatial coherence properties of the undulator radiation at Beamline 12 of the Advanced Light Source. High spatial coherence EUV radiation is demonstrated with appropriate spatial filtering. The effects of small vertical source size and beamline apertures are observed. The difference in the measured horizontal and vertical coherence profile evokes further theoretical studies on coherence propagation of an EUV undulator beamline. A numerical simulation based on the Huygens-Fresnel principle is performed.

  17. Detection strategies for extreme mass ratio inspirals

    International Nuclear Information System (INIS)

    Cornish, Neil J

    2011-01-01

    The capture of compact stellar remnants by galactic black holes provides a unique laboratory for exploring the near-horizon geometry of the Kerr spacetime, or possible departures from general relativity if the central cores prove not to be black holes. The gravitational radiation produced by these extreme mass ratio inspirals (EMRIs) encodes a detailed map of the black hole geometry, and the detection and characterization of these signals is a major scientific goal for the LISA mission. The waveforms produced are very complex, and the signals need to be coherently tracked for tens of thousands of cycles to produce a detection, making EMRI signals one of the most challenging data analysis problems in all of gravitational wave astronomy. Estimates for the number of templates required to perform an exhaustive grid-based matched-filter search for these signals are astronomically large, and far out of reach of current computational resources. Here I describe an alternative approach that employs a hybrid between genetic algorithms and Markov chain Monte Carlo techniques, along with several time-saving techniques for computing the likelihood function. This approach has proven effective at the blind extraction of relatively weak EMRI signals from simulated LISA data sets.

  18. Prevention of Lower Extremity Injuries in Basketball

    Science.gov (United States)

    Taylor, Jeffrey B.; Ford, Kevin R.; Nguyen, Anh-Dung; Terry, Lauren N.; Hegedus, Eric J.

    2015-01-01

    Context: Lower extremity injuries are common in basketball, yet it is unclear how prophylactic interventions affect lower extremity injury incidence rates. Objective: To analyze the effectiveness of current lower extremity injury prevention programs in basketball athletes, focusing on injury rates of (1) general lower extremity injuries, (2) ankle sprains, and (3) anterior cruciate ligament (ACL) tears. Data Sources: PubMed, MEDLINE, CINAHL, SPORTDiscus, and the Cochrane Register of Controlled Trials were searched in January 2015. Study Selection: Studies were included if they were randomized controlled or prospective cohort trials, contained a population of competitive basketball athletes, and reported lower extremity injury incidence rates specific to basketball players. In total, 426 individual studies were identified. Of these, 9 met the inclusion criteria. One other study was found during a hand search of the literature, resulting in 10 total studies included in this meta-analysis. Study Design: Systematic review and meta-analysis. Level of Evidence: Level 2. Data Extraction: Details of the intervention (eg, neuromuscular vs external support), size of control and intervention groups, and number of injuries in each group were extracted from each study. Injury data were classified into 3 groups based on the anatomic diagnosis reported (general lower extremity injury, ankle sprain, ACL rupture). Results: Meta-analyses were performed independently for each injury classification. Results indicate that prophylactic programs significantly reduced the incidence of general lower extremity injuries (odds ratio [OR], 0.69; 95% CI, 0.57-0.85) in basketball athletes. Conclusion: In basketball players, prophylactic programs may be effective in reducing the risk of general lower extremity injuries and ankle sprains, yet not ACL injuries. PMID:26502412

  19. From entanglement witness to generalized Catalan numbers

    Science.gov (United States)

    Cohen, E.; Hansen, T.; Itzhaki, N.

    2016-07-01

    Entangled states are extremely important resources in quantum information and computation, so it is vital to detect and properly characterize them efficiently. We analyze in this work the problem of entanglement detection for arbitrary spin systems. It is demonstrated how a single measurement of the squared total spin can probabilistically discern separable from entangled many-particle states. For achieving this goal, we construct a tripartite analogy between the degeneracy of entanglement witness eigenstates, tensor products of SO(3) representations and classical lattice walks with special constraints. Within this framework, degeneracies are naturally given by generalized Catalan numbers and determine the fraction of states that are decidedly entangled and also known to be somewhat protected against decoherence. In addition, we introduce the concept of a “sterile entanglement witness”, which for large enough systems detects entanglement without affecting much the system’s state. We discuss when our proposed entanglement witness can be regarded as a sterile one.
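The correspondence between constrained lattice walks and Catalan numbers mentioned above can be checked directly: walks of 2n steps of ±1 that start and end at height 0 and never go below 0 are counted by the n-th Catalan number. A small sketch (an illustration of the counting fact, not the paper's construction):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def walks(steps, height):
    """Count walks of `steps` +/-1 moves from `height` down to 0 that never
    go below 0 (a ballot-type constraint on the lattice path)."""
    if height < 0:
        return 0
    if steps == 0:
        return 1 if height == 0 else 0
    return walks(steps - 1, height + 1) + walks(steps - 1, height - 1)

def catalan(n):
    """Closed form C_n = (2n choose n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

print([walks(2 * n, 0) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
print([catalan(n) for n in range(6)])       # [1, 1, 2, 5, 14, 42]
```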

  20. Economics of extreme weather events: Terminology and regional impact models

    OpenAIRE

    Jahn, Malte

    2015-01-01

    Impacts of extreme weather events are relevant for regional (in the sense of subnational) economies, and in particular cities, in many respects. Cities are the cores of economic activity, and the number of people and the value of assets endangered by extreme weather events are large, even under the current climate. A changing climate with changing extreme weather patterns and the process of urbanization will make the whole issue even more relevant in the future. In this paper, definitions and terminology in th...

  1. Extreme Programming Pocket Guide

    CERN Document Server

    Chromatic

    2003-01-01

    Extreme Programming (XP) is a radical new approach to software development that has been accepted quickly because its core practices--the need for constant testing, programming in pairs, inviting customer input, and the communal ownership of code--resonate with developers everywhere. Although many developers feel that XP is rooted in common sense, its vastly different approach can bring challenges, frustrations, and constant demands on your patience. Unless you've got unlimited time (and who does these days?), you can't always stop to thumb through hundreds of pages to find the piece of info

  2. Upper extremity golf injuries.

    Science.gov (United States)

    Cohn, Michael A; Lee, Steven K; Strauss, Eric J

    2013-01-01

    Golf is a global sport enjoyed by an estimated 60 million people around the world. Despite the common misconception that the risk of injury during the play of golf is minimal, golfers are subject to a myriad of potential pathologies. While the majority of injuries in golf are attributable to overuse, acute traumatic injuries can also occur. As the body's direct link to the golf club, the upper extremities are especially prone to injury. A thorough appreciation of the risk factors and patterns of injury will afford accurate diagnosis, treatment, and prevention of further injury.

  3. Pushing the Envelope of Extreme Space Weather

    Science.gov (United States)

    Pesnell, W. D.

    2014-12-01

    Extreme Space Weather events are large solar flares or geomagnetic storms, which can cost billions of dollars to recover from. We have few examples of such events; the Carrington Event (the solar superstorm) is one of the few that had superlatives in three categories: size of solar flare, drop in Dst, and amplitude of aa. Kepler observations show that stars similar to the Sun can have flares releasing millions of times more energy than an X-class flare. These flares and the accompanying coronal mass ejections could strongly affect the atmosphere surrounding a planet. What level of solar activity would be necessary to strongly affect the atmosphere of the Earth? Can we map out the envelope of space weather along the evolution of the Sun? What would space weather look like if the Sun stopped producing a magnetic field? To what extreme should Space Weather go? These are the extremes of Space Weather explored in this talk.

  4. Attribution of extreme rainfall from Hurricane Harvey, August 2017

    NARCIS (Netherlands)

    Van Oldenborgh, Geert Jan; Van Der Wiel, Karin; Sebastian, A.G.; Singh, Roop; Arrighi, Julie; Otto, Friederike; Haustein, Karsten; Li, Sihan; Vecchi, Gabriel; Cullen, Heidi

    2017-01-01

    During August 25-30, 2017, Hurricane Harvey stalled over Texas and caused extreme precipitation, particularly over Houston and the surrounding area on August 26-28. This resulted in extensive flooding with over 80 fatalities and large economic costs. It was an extremely rare event: the return

  5. Extreme value theory and statistics for heavy tail data

    NARCIS (Netherlands)

    S. Caserta; C.G. de Vries (Casper)

    2003-01-01

    A scientific way of looking beyond the worst-case return is to employ statistical extreme value methods. Extreme Value Theory (EVT) shows that the probability of very large losses is eventually governed by a simple function, regardless of the specific distribution that underlies the return
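The "simple function" governing large losses is a power-law (Pareto) tail, and its index is commonly estimated with the Hill estimator. A minimal sketch on simulated Pareto data (an illustration, not the authors' method; names are assumptions):

```python
import math
import random

def hill_estimator(sample, k):
    """Hill estimator of the tail index alpha from the k largest observations."""
    tail = sorted(sample)[-k:]  # the k largest values, ascending
    x_k = tail[0]               # smallest of the k largest
    return k / sum(math.log(x / x_k) for x in tail)

# Pareto(alpha = 2) data via inverse transform: P(X > x) = x**(-2) for x >= 1.
rng = random.Random(1)
alpha = 2.0
data = [rng.random() ** (-1.0 / alpha) for _ in range(100_000)]

print(hill_estimator(data, k=2000))  # close to the true alpha = 2
```

The choice of k trades bias against variance; in practice one inspects the estimate over a range of k (a "Hill plot") rather than fixing it.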

  6. Estimation of extreme risk regions under multivariate regular variation

    NARCIS (Netherlands)

    Cai, J.; Einmahl, J.H.J.; de Haan, L.F.M.

    2011-01-01

    When considering d possibly dependent random variables, one is often interested in extreme risk regions with very small probability p. We consider risk regions of the form {z ∈ R^d : f(z) ≤ β}, where f is the joint density and β a small number. Estimation of such an extreme risk region is difficult

  7. Variability of extreme wet events over Malawi

    Directory of Open Access Journals (Sweden)

    Libanda Brigadier

    2017-01-01

    Adverse effects of extreme wet events are well documented by several studies around the world. These effects are exacerbated in developing countries like Malawi that have insufficient risk reduction strategies and capacity to cope with extreme wet weather. Ardent monitoring of the variability of extreme wet events over Malawi is therefore imperative. The use of the Expert Team on Climate Change Detection and Indices (ETCCDI) has been recommended by many studies as an effective way of quantifying extreme wet events. In this study, ETCCDI indices were used to examine the number of heavy, very heavy, and extremely heavy rainfall days; daily and five-day maximum rainfall; very wet and extremely wet days; annual wet days and simple daily intensity. The Standard Normal Homogeneity Test (SNHT) was employed at the 5% significance level before any statistical test was done. Trend analysis was done using the nonparametric Mann-Kendall statistical test. All stations were found to be homogeneous apart from Mimosa. Trend results show high temporal and spatial variability, with the only significant results being: an increase in daily maximum rainfall (Rx1day) over Karonga and Bvumbwe, and an increase in five-day maximum rainfall (Rx5day) over Bvumbwe. Mzimba and Chileka recorded a significant decrease in very wet days (R95p) while a significant increase was observed over Thyolo. Chileka was the only station which observed a significant (decreasing) trend in extremely wet rainfall (R99p). Mzimba was the only station that reported a significant (decreasing) trend in annual wet-day rainfall total (PRCPTOT), and Thyolo was the only station that reported a significant (increasing) trend in simple daily intensity (SDII). Furthermore, the findings of this study revealed that, during wet years, Malawi is characterised by an anomalous convergence of strong south-easterly and north-easterly winds. This convergence is the main rain-bringing mechanism for Malawi.
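The Mann-Kendall test used here is built on the statistic S, the number of concordant minus discordant pairs in the series. A minimal sketch of S (significance testing via its variance, and the tie/autocorrelation corrections of the modified test, are omitted):

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: sum of sign(x_j - x_i) over all pairs i < j.
    Positive S suggests an increasing trend, negative S a decreasing one."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s

print(mann_kendall_s([1, 2, 3, 4, 5]))  # 10, i.e. n*(n-1)/2: strictly increasing
print(mann_kendall_s([5, 4, 3, 2, 1]))  # -10: strictly decreasing
print(mann_kendall_s([2, 2, 2, 2, 2]))  # 0: no trend
```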

  8. Neurodevelopmental problems and extremes in BMI

    Directory of Open Access Journals (Sweden)

    Nóra Kerekes

    2015-07-01

    Background. Over the last few decades, an increasing number of studies have suggested a connection between neurodevelopmental problems (NDPs) and body mass index (BMI). Attention deficit/hyperactivity disorder (ADHD) and autism spectrum disorders (ASD) both seem to carry an increased risk for developing extreme BMI. However, the results are inconsistent, and there have been only a few studies of the general population of children. Aims. We had three aims with the present study: (1) to define the prevalence of extreme (low or high) BMI in the group of children with ADHD and/or ASD compared to the group of children without these NDPs; (2) to analyze whether extreme BMI is associated with the subdomains within the diagnostic categories of ADHD or ASD; and (3) to investigate the contribution of genetic and environmental factors to BMI in boys and girls at ages 9 and 12. Method. Parents of 9- or 12-year-old twins (n = 12,496) were interviewed using the Autism-Tics, ADHD and other Comorbidities (A-TAC) inventory as part of the Child and Adolescent Twin Study in Sweden (CATSS). Univariate and multivariate generalized estimating equation models were used to analyze associations between extremes in BMI and NDPs. Results. ADHD screen-positive cases followed BMI distributions similar to those of children without ADHD or ASD. A significant association was found between ADHD and BMI only among 12-year-old girls, where the inattention subdomain of ADHD was significantly associated with high extreme BMI. ASD scores were associated with both the low and the high extremes of BMI. Compared to children without ADHD or ASD, the prevalence of ASD screen-positive cases was three times greater in the high extreme BMI group and twice as great in the low extreme BMI group. Stereotyped and repetitive behaviors were significantly associated with high extreme BMIs. Conclusion. Children with ASD, with or without coexisting ADHD, are more prone to have low or high extreme BMIs than

  9. More Sets, Graphs and Numbers

    CERN Document Server

    Gyori, Ervin; Lovasz, Laszlo

    2006-01-01

    This volume honours the eminent mathematicians Vera Sos and Andras Hajnal. The book includes survey articles reviewing classical theorems, as well as new, state-of-the-art results. Also presented are cutting edge expository research papers with new theorems and proofs in the area of the classical Hungarian subjects, like extremal combinatorics, colorings, combinatorial number theory, etc. The open problems and the latest results in the papers are sure to inspire further research.

  10. Characterizing the Spatial Contiguity of Extreme Precipitation over the US in the Recent Past

    Science.gov (United States)

    Touma, D. E.; Swain, D. L.; Diffenbaugh, N. S.

    2016-12-01

    The spatial characteristics of extreme precipitation over an area can define the hydrologic response in a basin, subsequently affecting the flood risk in the region. Here, we examine the spatial extent of extreme precipitation in the US by defining its "footprint": a contiguous area of rainfall exceeding a certain threshold (e.g., the 90th percentile) on a given day. We first characterize the climatology of extreme rainfall footprint sizes across the US from 1980-2015 using Daymet, a high-resolution observational gridded rainfall dataset. We find that there are distinct regional and seasonal differences in average footprint sizes of extreme daily rainfall. In the winter, the Midwest shows footprints exceeding 500,000 sq. km while the Front Range exhibits footprints of 10,000 sq. km. In contrast, the summer average footprint size is generally smaller and more uniform across the US, ranging from 10,000 sq. km in the Southwest to 100,000 sq. km in Montana and North Dakota. Moreover, we find some significant increasing trends in average footprint size between 1980-2015, specifically in the Southwest in the winter and the Northeast in the spring. While gridded daily rainfall datasets provide a practical framework for calculating footprint size, the calculation depends heavily on the interpolation methods used in creating the dataset. Therefore, we assess footprint size using the GHCN-Daily station network and use geostatistical methods to define footprints of extreme rainfall directly from station data. Compared to the findings from Daymet, preliminary results using this method show fewer small daily footprint sizes over the US while large footprints are of similar number and magnitude to Daymet. Overall, defining the spatial characteristics of extreme rainfall as well as observed and expected changes in these characteristics allows us to better understand the hydrologic response to extreme rainfall and how to better characterize flood
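The "footprint" definition (a contiguous area of cells exceeding a threshold on a given day) amounts to connected-component labeling on the rainfall grid. A minimal sketch using 4-connectivity and breadth-first flood fill; the toy grid and names are illustrative:

```python
from collections import deque

def footprint_sizes(grid, threshold):
    """Sizes (in cells) of contiguous 4-connected regions exceeding `threshold`."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > threshold and not seen[r][c]:
                # Breadth-first flood fill of one footprint.
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] > threshold and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                sizes.append(size)
    return sizes

rain = [
    [5, 60, 70, 0],
    [0, 80, 0, 0],
    [0, 0, 0, 90],
]
print(sorted(footprint_sizes(rain, threshold=50)))  # [1, 3]
```

Multiplying each component's cell count by the grid-cell area converts these counts into footprint areas.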

  11. Genomics of an extreme psychrophile, Psychromonas ingrahamii

    Directory of Open Access Journals (Sweden)

    Hauser Loren J

    2008-05-01

    Full Text Available Abstract Background The genome sequence of the sea-ice bacterium Psychromonas ingrahamii 37, which grows exponentially at −12 °C, may reveal features that help to explain how this extreme psychrophile is able to grow at such low temperatures. Determination of the whole genome sequence allows comparison with genes of other psychrophiles and mesophiles. Results Correspondence analysis of the composition of all P. ingrahamii proteins showed that (1) there are 6 classes of proteins, at least one more than other bacteria, (2) integral inner membrane proteins are not sharply separated from bulk proteins, suggesting that, overall, they may have a lower hydrophobic character, (3) there is strong opposition between asparagine and the oxygen-sensitive amino acids methionine, arginine, cysteine and histidine, and (4) one of the previously unseen clusters of proteins has a high proportion of "orphan" hypothetical proteins, raising the possibility that these are cold-specific proteins. Based on annotation of proteins by sequence similarity, (1) P. ingrahamii has a large number (61) of regulators of cyclic GDP, suggesting that this bacterium produces an extracellular polysaccharide that may help sequester water or lower the freezing point in the vicinity of the cell. (2) P. ingrahamii has genes for production of the osmolyte, betaine choline, which may balance the osmotic pressure as sea ice freezes. (3) P. ingrahamii has a large number (11) of three-subunit TRAP systems that may play an important role in the transport of nutrients into the cell at low temperatures. (4) Chaperones and stress proteins may play a critical role in transforming nascent polypeptides into 3-dimensional configurations that permit low temperature growth. (5) Metabolic properties of P. ingrahamii were deduced. Finally, a few small sets of proteins of unknown function which may play a role in psychrophily have been singled out as worthy of future study. 
Conclusion The results of this genomic analysis
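    Correspondence analysis of protein composition, as described above, starts from a matrix of per-protein amino-acid frequencies. A hedged sketch of computing one row of that matrix is shown below; the sequence is a made-up toy, not a real P. ingrahamii protein.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(protein):
    """Fractional amino-acid composition of one protein sequence,
    i.e. one row of the matrix a correspondence analysis operates on."""
    counts = Counter(protein)
    total = sum(counts[a] for a in AMINO_ACIDS)
    return {a: counts[a] / total for a in AMINO_ACIDS}

# Hypothetical toy sequence, not an actual P. ingrahamii protein
seq = "MNKRCHISTEALLVVGG"
comp = aa_composition(seq)
print(round(comp["L"], 3))  # → 0.118
```

    Stacking such rows for every protein in the genome yields the composition matrix whose row profiles the correspondence analysis decomposes into the protein classes discussed above.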

  12. Detection and attribution of extreme weather disasters

    Science.gov (United States)

    Huggel, Christian; Stone, Dáithí; Hansen, Gerrit

    2014-05-01

    Single disasters related to extreme weather events have caused loss and damage of up to tens of billions of US dollars in recent years. Recent disasters have fueled the debate about whether, and to what extent, these events are related to climate change. In international climate negotiations, disaster loss and damage is now high on the agenda, and related policy mechanisms have been discussed or are being implemented. In view of funding allocation and effective risk reduction strategies, the detection of extreme weather events and disasters and their attribution to climate change is a key issue. Different avenues have so far been taken to address detection and attribution in this context. The physical climate sciences have developed approaches in which, among others, variables that are reasonably sampled over climatically relevant time periods and related to the meteorological characteristics of the extreme event are examined. Trends in these variables (e.g. air or sea surface temperatures) are compared between observations and climate simulations with and without anthropogenic forcing. Generally, progress has been made in recent years in attributing changes in the chance of some single extreme weather events to anthropogenic climate change, but important challenges remain. A different line of research is primarily concerned with losses related to extreme weather events over time, using disaster databases. A growing consensus is that increases in asset values and in exposure are the main drivers of the strong increase in economic losses over the past several decades, and only a limited number of studies have found trends consistent with expectations from climate change. Here we propose a better integration of these existing lines of research in the detection and attribution of extreme weather events and disasters by applying a risk framework, where risk is defined as a function of the probability of occurrence of an extreme weather event and the associated consequences.

  13. DIRECTIONS OF EXTREME TOURISM IN UKRAINE

    Directory of Open Access Journals (Sweden)

    L. V. Martseniuk

    2016-02-01

    Full Text Available Purpose. Extreme tourism is very popular in the world market of tourist services, as it does not require significant financial outlay and makes it possible to expand, year on year, the range of holiday packages associated with active travel. Ukraine has significant potential for the development of extreme forms of recreation, but this potential is underdeveloped. Forms of extreme tourism are unfamiliar to domestic tourists, who have therefore formed a negative attitude towards them. The aim of the article is to analyse the extreme-resort potential of Ukraine and to promote the development of extreme tourism destinations in the travel market. The theoretical and methodological basis of the research is a systems analysis of the problems of ensuring the competitiveness of the tourism industry, together with the theoretical principles of economic science regarding the effectiveness of extreme tourism and the management of tourist flows. Methodology. The author proposes directions for managing tourist flows that differ from the current expansion of services offered to tourists in Ukraine, including the development of extreme tourism through cooperation between the railways and sport federations. Findings. The research shows that implementing these tasks would promote: 1) increased budget revenues at all levels from domestic extreme tourism; 2) a better image of Ukraine and of Ukrainian Railways; 3) an increase in the share of tourism and resorts in gross domestic product to the level of developed countries; 4) bringing the number of employees in tourism and resorts to the level of developed countries; 5) the creation of an effective system for monitoring the quality of tourist services; 6) the creation of an attractive investment climate for attracting investment in the broad development of tourism, engineering, transport and municipal infrastructure; 7) improved safety of tourists, ensuring the effective protection of their rights and legitimate interests and

  14. Extreme morphologies of mantis shrimp larvae

    OpenAIRE

    Haug, Carolin; Ahyong, Shane T.; Wiethase, Joris H.; Olesen, Jørgen; Haug, Joachim T.

    2016-01-01

    ABSTRACT Larvae of stomatopods (mantis shrimps) are generally categorized into four larval types: antizoea, pseudozoea (both representing early larval stages), alima and erichthus (the latter two representing later larval stages). These categories, however, do not reflect the existing morphological diversity of stomatopod larvae, which is largely unstudied. We describe here four previously unknown larval types with extreme morphologies. All specimens were found in the collections of the Zoolo...

  15. A note on extreme sets

    Directory of Open Access Journals (Sweden)

    Radosław Cymer

    2017-10-01

    Full Text Available In decomposition theory, extreme sets have been studied extensively due to their connection to perfect matchings in a graph. In this paper, we first define extreme sets with respect to degree-matchings and then investigate some of their properties. In particular, we prove the generalized Decomposition Theorem and give a characterization of the set of all extreme vertices in a graph.

  16. Earthquake number forecasts testing

    Science.gov (United States)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. 
The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
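    The Poisson-versus-NBD question above comes down to overdispersion: a Poisson law forces the variance to equal the mean, while the NBD's second parameter lets the variance exceed it. The sketch below fits both by the method of moments; the counts are hypothetical, and the paper's actual tests are likelihood-based rather than moment-based.

```python
def fit_counts(counts):
    """Method-of-moments fit of Poisson vs negative-binomial to
    per-interval event counts. For the NBD parameterized by (r, p),
    mean = r(1-p)/p and variance = r(1-p)/p**2, so overdispersion
    (variance > mean) is required for the NBD to apply."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((x - mean) ** 2 for x in counts) / n
    if var <= mean:
        return {"mean": mean, "var": var, "model": "Poisson", "lambda": mean}
    p = mean / var                  # var/mean = 1/p
    r = mean * p / (1 - p)          # mean = r(1-p)/p
    return {"mean": mean, "var": var, "model": "NBD", "r": r, "p": p}

# Hypothetical annual counts for a clustered (overdispersed) catalogue
counts = [3, 0, 1, 12, 2, 0, 1, 15, 2, 4]
fit = fit_counts(counts)
print(fit["model"])  # → NBD
```

    In this toy series the clustered years (12 and 15 events) inflate the variance well above the mean, which is exactly the signature that leads the study to reject the Poisson assumption for real catalogues.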

  17. Climate change, climatic variation and extreme biological responses.

    Science.gov (United States)

    Palmer, Georgina; Platts, Philip J; Brereton, Tom; Chapman, Jason W; Dytham, Calvin; Fox, Richard; Pearce-Higgins, James W; Roy, David B; Hill, Jane K; Thomas, Chris D

    2017-06-19

    Extreme climatic events could be major drivers of biodiversity change, but it is unclear whether extreme biological changes are (i) individualistic (species- or group-specific), (ii) commonly associated with unusual climatic events and/or (iii) important determinants of long-term population trends. Using population time series for 238 widespread species (207 Lepidoptera and 31 birds) in England since 1968, we found that population 'crashes' (outliers in terms of species' year-to-year population changes) were 46% more frequent than population 'explosions'. (i) Every year, at least three species experienced extreme changes in population size, and in 41 of the 44 years considered, some species experienced population crashes while others simultaneously experienced population explosions. This suggests that, even within the same broad taxonomic groups, species are exhibiting individualistic dynamics, most probably driven by their responses to different, short-term events associated with climatic variability. (ii) Six out of 44 years showed a significant excess of species experiencing extreme population changes (5 years for Lepidoptera, 1 for birds). These 'consensus years' were associated with climatically extreme years, consistent with a link between extreme population responses and climatic variability, although not all climatically extreme years generated excess numbers of extreme population responses. (iii) Links between extreme population changes and long-term population trends were absent in Lepidoptera and modest (but significant) in birds. We conclude that extreme biological responses are individualistic, in the sense that the extreme population changes of most species are taking place in different years, and that long-term trends of widespread species have not, to date, been dominated by these extreme changes. This article is part of the themed issue 'Behavioural, ecological and evolutionary responses to extreme climatic events'. © 2017 The Authors.
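    A population 'crash' or 'explosion' above is an outlier in a species' year-to-year population change. The sketch below flags such outliers with a z-score rule on log changes; the series, the log-change metric and the z = 2 cutoff are illustrative assumptions, not the study's exact outlier definition.

```python
import math

def extreme_changes(series, z=2):
    """Flag 'crashes' and 'explosions' as outliers in year-to-year
    log population change. Returns two lists of 0-based indices into
    `series`, each marking the later year of a flagged change."""
    changes = [math.log(b / a) for a, b in zip(series, series[1:])]
    mean = sum(changes) / len(changes)
    sd = math.sqrt(sum((c - mean) ** 2 for c in changes) / len(changes))
    crashes = [i + 1 for i, c in enumerate(changes) if c < mean - z * sd]
    explosions = [i + 1 for i, c in enumerate(changes) if c > mean + z * sd]
    return crashes, explosions

# Hypothetical annual index for one species: a crash at index 5,
# followed by a rebound 'explosion' at index 6
pop = [100, 104, 98, 101, 99, 8, 95, 102, 100, 97]
print(extreme_changes(pop))  # → ([5], [6])
```

    Counting, per year, how many species land in either list is the kind of tally behind the 'consensus years' analysis described above.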

  18. Options with Extreme Strikes

    Directory of Open Access Journals (Sweden)

    Lingjiong Zhu

    2015-07-01

    Full Text Available In this short paper, we study the asymptotics for the price of call options for very large strikes and put options for very small strikes. The stock price is assumed to follow the Black–Scholes model. We analyze European, Asian, American, Parisian and perpetual options and conclude that the tail asymptotics for these option types fall into four scenarios.
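    The tail behaviour characterized above can be observed numerically from the standard Black–Scholes call formula, whose price decays rapidly as the strike grows. The parameter values below are arbitrary illustrations.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# As the strike K grows, the call price collapses toward zero
for K in (100, 200, 400, 800):
    print(K, bs_call(S=100, K=K, T=1.0, r=0.01, sigma=0.2))
```

    The asymptotic rate of this collapse, across option styles, is what the paper works out in closed form.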

  19. Prandtl-number Effects in High-Rayleigh-number Spherical Convection

    Science.gov (United States)

    Orvedahl, Ryan J.; Calkins, Michael A.; Featherstone, Nicholas A.; Hindman, Bradley W.

    2018-03-01

    Convection is the predominant mechanism by which energy and angular momentum are transported in the outer portion of the Sun. The resulting overturning motions are also the primary energy source for the solar magnetic field. An accurate solar dynamo model therefore requires a complete description of the convective motions, but these motions remain poorly understood. Studying stellar convection numerically remains challenging; it occurs within a parameter regime that is extreme by computational standards. The fluid properties of the convection zone are characterized in part by the Prandtl number Pr = ν/κ, where ν is the kinematic viscosity and κ is the thermal diffusivity; in stars, Pr is extremely low, Pr ≈ 10⁻⁷. The influence of Pr on the convective motions at the heart of the dynamo is not well understood, since most numerical studies are limited to Pr ≈ 1. We systematically vary Pr and the degree of thermal forcing, characterized through a Rayleigh number, to explore their influence on the convective dynamics. For sufficiently large thermal driving, the simulations reach a so-called convective free-fall state where diffusion no longer plays an important role in the interior dynamics. Simulations with a lower Pr generate faster convective flows and broader ranges of scales for equivalent levels of thermal forcing. Characteristics of the spectral distribution of the velocity remain largely insensitive to changes in Pr. Importantly, we find that Pr plays a key role in determining when the free-fall regime is reached by controlling the thickness of the thermal boundary layer.
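    The Prandtl number itself is just the ratio Pr = ν/κ. The trivial sketch below evaluates it with hypothetical diffusivities chosen only to land in the stellar regime Pr ≈ 10⁻⁷ quoted above.

```python
def prandtl(nu, kappa):
    """Prandtl number Pr = nu / kappa (viscous over thermal diffusivity)."""
    return nu / kappa

# Hypothetical diffusivities (m^2/s), picked solely to reproduce the
# quoted stellar regime; not measured solar values
nu, kappa = 1.0e4, 1.0e11
print(prandtl(nu, kappa))  # → 1e-07
```

    Laboratory fluids sit many orders of magnitude away from this regime, which is why simulations at Pr ≈ 1 may miss the boundary-layer behaviour described above.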

  20. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaë l; Davison, Anthony C.; Genton, Marc G.

    2015-01-01

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
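    The logistic model evaluated in the paper is the classic max-stable family with a single dependence parameter. The sketch below implements its bivariate distribution function with unit Fréchet margins, a standard textbook form; the α values are illustrative.

```python
import math

def logistic_cdf(z1, z2, alpha):
    """Bivariate extreme-value logistic model with unit Frechet margins:
    G(z1, z2) = exp(-(z1**(-1/alpha) + z2**(-1/alpha))**alpha), 0 < alpha <= 1.
    alpha -> 1 gives independence; alpha -> 0 gives complete dependence."""
    v = (z1 ** (-1.0 / alpha) + z2 ** (-1.0 / alpha)) ** alpha
    return math.exp(-v)

# Stronger dependence (smaller alpha) raises the joint probability
for alpha in (1.0, 0.5, 0.1):
    print(alpha, logistic_cdf(2.0, 2.0, alpha))
```

    At α = 1 the joint CDF factorizes into exp(-1/z1)·exp(-1/z2), the independent unit Fréchet case, which is one sanity check on any likelihood built from this family.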