WorldWideScience

Sample records for large-scale structural testing

  1. Large Scale Testing of Drystone Retaining Structures

    OpenAIRE

    Mundell, Chris

    2009-01-01

    Drystone walls have been used extensively around the world as earth retaining structures wherever suitable stone is found. Typically about 0.6 m thick irrespective of height, they account for some 9000 km of drystone retaining walls on the UK road network alone, mostly built in the 19th and early 20th centuries, with an estimated replacement value in excess of £1 billion [1]. Drystone wall design is traditionally empirical, based on local knowledge of what has worked in the past. Methods vary from re...

  2. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but its cost in particular limits its use. As computer models have grown in size (for example, in the number of degrees of freedom), the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is validation of the analytical methods and the underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  3. Testing Inflation with Large Scale Structure: Connecting Hopes with Reality

    International Nuclear Information System (INIS)

    Alvarez, Marcello; Baldauf, T.; Bond, J. Richard; Dalal, N.; Putter, R. D.; Dore, O.; Green, Daniel; Hirata, Chris; Huang, Zhiqi; Huterer, Dragan; Jeong, Donghui; Johnson, Matthew C.; Krause, Elisabeth; Loverde, Marilena; Meyers, Joel; Meeburg, Daniel; Senatore, Leonardo; Shandera, Sarah; Silverstein, Eva; Slosar, Anze; Smith, Kendrick; Zaldarriaga, Matias; Assassi, Valentin; Braden, Jonathan; Hajian, Amir; Kobayashi, Takeshi; Stein, George; Engelen, Alexander van

    2014-01-01

    The statistics of primordial curvature fluctuations are our window into the period of inflation, when these fluctuations were generated. To date, the cosmic microwave background has been the dominant source of information about these perturbations. Drastic improvements, however, should come from large-scale structure. In this paper, we explain the theoretical motivations for pursuing such measurements and the challenges that lie ahead. In particular, we discuss and identify theoretical targets regarding the measurement of primordial non-Gaussianity. We argue that when quantified in terms of the local (equilateral) template amplitude f_NL^loc ...

  4. Testing Inflation with Large Scale Structure: Connecting Hopes with Reality

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez, Marcello [Univ. of Toronto, ON (Canada); Baldauf, T. [Inst. of Advanced Studies, Princeton, NJ (United States); Bond, J. Richard [Univ. of Toronto, ON (Canada); Canadian Inst. for Advanced Research, Toronto, ON (Canada); Dalal, N. [Univ. of Illinois, Urbana-Champaign, IL (United States); Putter, R. D. [Jet Propulsion Lab., Pasadena, CA (United States); California Inst. of Technology (CalTech), Pasadena, CA (United States); Dore, O. [Jet Propulsion Lab., Pasadena, CA (United States); California Inst. of Technology (CalTech), Pasadena, CA (United States); Green, Daniel [Univ. of Toronto, ON (Canada); Canadian Inst. for Advanced Research, Toronto, ON (Canada); Hirata, Chris [The Ohio State Univ., Columbus, OH (United States); Huang, Zhiqi [Univ. of Toronto, ON (Canada); Huterer, Dragan [Univ. of Michigan, Ann Arbor, MI (United States); Jeong, Donghui [Pennsylvania State Univ., University Park, PA (United States); Johnson, Matthew C. [York Univ., Toronto, ON (Canada); Perimeter Inst., Waterloo, ON (Canada); Krause, Elisabeth [Stanford Univ., CA (United States); Loverde, Marilena [Univ. of Chicago, IL (United States); Meyers, Joel [Univ. of Toronto, ON (Canada); Meeburg, Daniel [Univ. of Toronto, ON (Canada); Senatore, Leonardo [Stanford Univ., CA (United States); Shandera, Sarah [Pennsylvania State Univ., University Park, PA (United States); Silverstein, Eva [Stanford Univ., CA (United States); Slosar, Anze [Brookhaven National Lab. (BNL), Upton, NY (United States); Smith, Kendrick [Perimeter Inst., Waterloo, Toronto, ON (Canada); Zaldarriaga, Matias [Univ. of Toronto, ON (Canada); Assassi, Valentin [Cambridge Univ. (United Kingdom); Braden, Jonathan [Univ. of Toronto, ON (Canada); Hajian, Amir [Univ. of Toronto, ON (Canada); Kobayashi, Takeshi [Perimeter Inst., Waterloo, Toronto, ON (Canada); Univ. of Toronto, ON (Canada); Stein, George [Univ. of Toronto, ON (Canada); Engelen, Alexander van [Univ. of Toronto, ON (Canada)

    2014-12-15

    The statistics of primordial curvature fluctuations are our window into the period of inflation, when these fluctuations were generated. To date, the cosmic microwave background has been the dominant source of information about these perturbations. Drastic improvements, however, should come from large-scale structure. In this paper, we explain the theoretical motivations for pursuing such measurements and the challenges that lie ahead. In particular, we discuss and identify theoretical targets regarding the measurement of primordial non-Gaussianity. We argue that when quantified in terms of the local (equilateral) template amplitude f_NL^loc (f_NL^eq), natural target levels of sensitivity are Δf_NL^(loc, eq) ≃ 1. We highlight that such levels are within reach of future surveys by measuring 2-, 3- and 4-point statistics of the galaxy spatial distribution. This paper summarizes a workshop held at CITA (University of Toronto) on October 23-24, 2014.

  5. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test, aimed at ensuring the safety of light water reactors, was started in fiscal 1976 under the special account act for power source development promotion measures, by entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents by joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April 1980, and the large-scale reflood test is now included in this program. It consists of two tests: one using a cylindrical core testing apparatus for examining the overall system effect, and one using a plate core testing apparatus for testing individual effects. Each apparatus is composed of mock-ups of the pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  6. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed on the large ZZ 8000 testing machine (maximum load 80 MN) at the SKODA WORKS. Results are described from tests of the material resistance to non-ductile fracture, covering both base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed with and without surface defects (15, 30 and 45 mm deep). During the cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  7. Large-scale seismic test for soil-structure interaction research in Hualien, Taiwan

    International Nuclear Information System (INIS)

    Ueshima, T.; Kokusho, T.; Okamoto, T.

    1995-01-01

    It is important to evaluate dynamic soil-structure interaction more accurately in the aseismic design of important facilities such as nuclear power plants. A large-scale model structure, at about one quarter the scale of a commercial nuclear power plant, was constructed on gravelly layers in seismically active Hualien, Taiwan. This international joint project is called 'the Hualien LSST Project', where 'LSST' is short for Large-Scale Seismic Test. This paper describes the research tasks and responsibilities, the progress of the construction work and research tasks along the time-line, and the main results obtained so far. (J.P.N.)

  8. A European collaboration research programme to study and test large scale base isolated structures

    International Nuclear Information System (INIS)

    Renda, V.; Verzeletti, G.; Papa, L.

    1995-01-01

    Improving the technology of innovative anti-seismic mechanisms, such as those for base isolation and energy dissipation, requires the capability to test large-scale models of structures integrated with these mechanisms. These kinds of experimental tests are of primary importance for validating design rules and establishing advanced earthquake engineering practice for civil constructions of relevant interest. The Joint Research Centre of the European Commission offers the European Laboratory for Structural Assessment, located at Ispra, Italy, as a focal point for an international European collaborative research programme to test large-scale models of structures making use of innovative anti-seismic mechanisms. A collaboration contract, open to further contributions, has been signed with the Italian national working group on seismic isolation (Gruppo di Lavoro sull'Isolamento Sismico, GLIS), which includes the national research centre ENEA, the national electricity board ENEL, the industrial research centre ISMES and the isolator manufacturer ALGA. (author). 3 figs

  9. TOPOLOGY OF A LARGE-SCALE STRUCTURE AS A TEST OF MODIFIED GRAVITY

    International Nuclear Information System (INIS)

    Wang Xin; Chen Xuelei; Park, Changbom

    2012-01-01

    The genus of the isodensity contours is a robust measure of the topology of a large-scale structure, and it is relatively insensitive to nonlinear gravitational evolution, galaxy bias, and redshift-space distortion. We show that the growth of density fluctuations is scale dependent even in the linear regime in some modified gravity theories, which opens a new possibility of testing the theories observationally. We propose to use the genus of the isodensity contours, an intrinsic measure of the topology of the large-scale structure, as a statistic to be used in such tests. In Einstein's general theory of relativity, density fluctuations grow at the same rate on all scales in the linear regime, and the genus per comoving volume is almost conserved as structures grow homologously, so we expect that the genus-smoothing-scale relation is basically time independent. However, in some modified gravity models where structures grow with different rates on different scales, the genus-smoothing-scale relation should change over time. This can be used to test the gravity models with large-scale structure observations. We study the cases of the f(R) theory, DGP braneworld theory as well as the parameterized post-Friedmann models. We also forecast how the modified gravity models can be constrained with optical/IR or redshifted 21 cm radio surveys in the near future.
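
    The genus-smoothing-scale relation described above can be made concrete with the Gaussian-field prediction for the genus curve. The sketch below is a minimal numerical illustration, not taken from the paper: the power spectrum shape, its normalisation, and the smoothing scales are all assumptions chosen only to show how the genus amplitude tracks the smoothing scale.

```python
import numpy as np

def spectral_moment(k, Pk, R, j):
    """sigma_j^2 = (1 / 2 pi^2) * Integral k^(2j+2) P(k) W^2(kR) dk,
    with a Gaussian smoothing window W^2(kR) = exp(-k^2 R^2)."""
    integrand = k**(2 * j + 2) * Pk * np.exp(-(k * R)**2)
    return np.trapz(integrand, k) / (2.0 * np.pi**2)

def genus_per_volume(nu, k, Pk, R):
    """Genus per unit volume of a Gaussian random field smoothed on scale R:
    g(nu) = A * (1 - nu^2) * exp(-nu^2 / 2),
    A = (1 / (2 pi)^2) * (sigma_1^2 / (3 sigma_0^2))^(3/2)."""
    s0 = spectral_moment(k, Pk, R, 0)
    s1 = spectral_moment(k, Pk, R, 1)
    amp = (s1 / (3.0 * s0))**1.5 / (2.0 * np.pi)**2
    return amp * (1.0 - nu**2) * np.exp(-nu**2 / 2.0)

# Toy linear power spectrum (assumed shape and normalisation, CDM-like: ~k at low k).
k = np.logspace(-3, 1, 2000)              # wavenumber, h/Mpc
Pk = 1.0e4 * k / (1.0 + (k / 0.02)**3)

# For scale-independent growth the genus curve keeps its shape; only the amplitude
# depends on the smoothing scale, which is what the proposed test monitors over time.
nu = np.linspace(-3.0, 3.0, 61)
for R in (5.0, 10.0, 20.0):               # smoothing scales, Mpc/h
    g = genus_per_volume(nu, k, Pk, R)
    print(f"R = {R:4.1f} Mpc/h : genus amplitude A = {g.max():.3e} (Mpc/h)^-3")
```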

  10. Testing the Big Bang: Light elements, neutrinos, dark matter and large-scale structure

    Science.gov (United States)

    Schramm, David N.

    1991-01-01

    Several experimental and observational tests of the standard cosmological model are examined. In particular, a detailed discussion is presented regarding: (1) nucleosynthesis, the light element abundances, and neutrino counting; (2) the dark matter problems; and (3) the formation of galaxies and large-scale structure. Comments are made on the possible implications of the recent solar neutrino experimental results for cosmology. An appendix briefly discusses the 17 keV thing and the cosmological and astrophysical constraints on it.

  11. Testing the big bang: Light elements, neutrinos, dark matter and large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Schramm, D.N. (Chicago Univ., IL (United States); Fermi National Accelerator Lab., Batavia, IL (United States))

    1991-06-01

    In this series of lectures, several experimental and observational tests of the standard cosmological model are examined. In particular, detailed discussion is presented regarding nucleosynthesis, the light element abundances and neutrino counting; the dark matter problems; and the formation of galaxies and large-scale structure. Comments will also be made on the possible implications of the recent solar neutrino experimental results for cosmology. An appendix briefly discusses the "17 keV thing" and the cosmological and astrophysical constraints on it. 126 refs., 8 figs., 2 tabs.

  12. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large-scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h⁻¹ Mpc in the distribution of the visible matter of the universe is provided. The possibility to generate a periodic distribution with the characteristic scale of 120 h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  13. The Hualien Large-Scale Seismic Test for soil-structure interaction research

    International Nuclear Information System (INIS)

    Tang, H.T.; Stepp, J.C.; Cheng, Y.H.

    1991-01-01

    A Large-Scale Seismic Test (LSST) Program at Hualien, Taiwan, has been initiated with the primary objective of obtaining earthquake-induced SSI data at a stiff soil site having soil conditions similar to those of a prototypical nuclear power plant. Preliminary soil borings, geophysical testing, and ambient and earthquake-induced ground motion monitoring have been conducted to understand the conditions at the experiment site. More refined field and laboratory tests, such as the state-of-the-art freezing sampling technique and the large penetration test (LPT) method, will be conducted to characterize the soil constitutive behavior. The test model to be constructed will be similar to the Lotung model. The instrumentation layout will be designed to provide data for studies of SSI, spatial incoherence, soil stability, foundation uplift, the ground motion wave field and structural response. A consortium consisting of EPRI, Taipower, CRIEPI, TEPCO, CEA, EdF and Framatome has been established to carry out the project. It is envisaged that the Hualien SSI array will be ready to record earthquakes by the middle of 1992, with recording scheduled to last five years. (author)

  14. Large scale cross hole testing

    International Nuclear Information System (INIS)

    Ball, J.K.; Black, J.H.; Doe, T.

    1991-05-01

    As part of the Site Characterisation and Validation programme, the results of the large-scale cross hole testing have been used to document hydraulic connections across the SCV block, to test conceptual models of fracture zones and to obtain hydrogeological properties of the major hydrogeological features. The SCV block is highly heterogeneous, and this heterogeneity is not smoothed out even over scales of hundreds of meters. Results of the interpretation validate the hypothesis of the major fracture zones A, B and H; little evidence of minor fracture zones is found. The uncertainty in the flow path through the fractured rock causes severe problems in interpretation. Derived values of hydraulic conductivity were found to lie in a narrow range of two to three orders of magnitude. The test design did not allow fracture zones to be tested individually; this could be improved by testing the high hydraulic conductivity regions specifically. The Piezomac and single hole equipment worked well. Few, if any, of the tests ran long enough to approach equilibrium. Many observation boreholes showed no response: either there is no hydraulic connection, or there is a connection but the response is not seen within the time scale of the pumping test. The fractional dimension analysis yielded credible results, and the sinusoidal testing procedure provided an effective means of identifying the dominant hydraulic connections. (10 refs.) (au)

  15. Large scale nuclear structure studies

    International Nuclear Information System (INIS)

    Faessler, A.

    1985-01-01

    Results of large scale nuclear structure studies are reported. The starting point is the Hartree-Fock-Bogoliubov solution with angular momentum and proton and neutron number projection after variation. This model for number- and spin-projected two-quasiparticle excitations with realistic forces yields results in sd-shell nuclei that are as good as those of 'exact' shell-model calculations. Here the authors present results for the pf-shell nucleus 46Ti and for the A=130 mass region, where they studied 58 different nuclei with the same single-particle energies and the same effective force derived from a meson exchange potential. They carried out a Hartree-Fock-Bogoliubov variation after mean-field projection in realistic model spaces. In this way, they determine for each yrast state the optimal mean Hartree-Fock-Bogoliubov field. They apply this method to 130Ce and 128Ba using the same effective nucleon-nucleon interaction. (Auth.)

  16. Single-field consistency relations of large scale structure part III: test of the equivalence principle

    Energy Technology Data Exchange (ETDEWEB)

    Creminelli, Paolo [Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, Trieste, 34151 (Italy); Gleyzes, Jérôme; Vernizzi, Filippo [CEA, Institut de Physique Théorique, Gif-sur-Yvette cédex, F-91191 France (France); Hui, Lam [Physics Department and Institute for Strings, Cosmology and Astroparticle Physics, Columbia University, New York, NY, 10027 (United States); Simonović, Marko, E-mail: creminel@ictp.it, E-mail: jerome.gleyzes@cea.fr, E-mail: lhui@astro.columbia.edu, E-mail: msimonov@sissa.it, E-mail: filippo.vernizzi@cea.fr [SISSA, via Bonomea 265, Trieste, 34136 (Italy)

    2014-06-01

    The recently derived consistency relations for Large Scale Structure do not hold if the Equivalence Principle (EP) is violated. We show it explicitly in a toy model with two fluids, one of which is coupled to a fifth force. We explore the constraints that galaxy surveys can set on EP violation looking at the squeezed limit of the 3-point function involving two populations of objects. We find that one can explore EP violations of order 10⁻³-10⁻⁴ on cosmological scales. Chameleon models are already very constrained by the requirement of screening within the Solar System and only a very tiny region of the parameter space can be explored with this method. We show that no violation of the consistency relations is expected in Galileon models.

  17. Seismic tests of a pile-supported structure in liquefiable sand using large-scale blast excitation

    International Nuclear Information System (INIS)

    Kamijo, Naotaka; Saito, Hideaki; Kusama, Kazuhiro; Kontani, Osamu; Nigbor, Robert

    2004-01-01

    Extensive, large-amplitude vibration tests of a pile-supported structure in a liquefiable sand deposit have been performed at a large-scale mining site. Ground motions from large-scale blasting operations were used as excitation forces for the vibration tests. A simple pile-supported structure was constructed in an excavated 3 m-deep pit, which was backfilled with fully water-saturated clean uniform sand. Accelerations were measured on the pile-supported structure, in the sand in the test pit, and in the adjacent free field. Excess pore water pressures in the test pit and strains in one pile were also measured. Vibration tests were performed with six different levels of input motion. The maximum horizontal acceleration recorded at the adjacent ground surface varied from 20 Gal to 1353 Gal, and these different levels of excitation produced different degrees of liquefaction in the test pit. Sand boiling was observed in the test pit for the larger input motions. This paper outlines the vibration tests and discusses the test results.

  18. Structural safety of HDR reactor building during large scale vibration tests

    International Nuclear Information System (INIS)

    Stangenberg, F.; Zinn, R.

    1985-01-01

    In the second phase of the HDR investigations, a high-level shaker excitation of the building is planned using a large shaker located on the operating floor; it will be brought up to speed in a balanced condition, then unbalanced and decoupled from the drive system. With decreasing speed the shaker comes into resonance with the building frequencies and its energy is transferred to the building. In this paper the structural safety of the reactor building during the projected shaker tests is analysed. Dynamic response calculations with coupling between building and shaker, obtained by simultaneously integrating the equilibrium equations of both building and shaker, are presented. The resulting building stresses, soil pressures, etc. are compared with allowable values. (orig.)
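
    The coast-down excitation described above is essentially the classical rotating-unbalance problem. The sketch below is a minimal single-degree-of-freedom illustration of the steady-state response near resonance; the modal mass, natural frequency, damping and unbalance moment are assumed values, not data from the HDR tests.

```python
import numpy as np

# Steady-state displacement amplitude of a single-DOF structure driven by a rotating
# unbalance m_u * e at frequency f (classical result):
#   X = (m_u * e / M) * r^2 / sqrt((1 - r^2)^2 + (2 * zeta * r)^2),  r = f / f_n
M = 2.0e7          # effective modal mass of the building [kg]      -- assumed
f_n = 1.5          # building natural frequency [Hz]                 -- assumed
zeta = 0.04        # modal damping ratio                             -- assumed
m_u_e = 2.0e4      # shaker unbalance moment m_u * e [kg*m]          -- assumed

for f in np.linspace(3.0, 0.5, 11):       # decreasing speed during the coast-down
    r = f / f_n
    X = (m_u_e / M) * r**2 / np.sqrt((1.0 - r**2)**2 + (2.0 * zeta * r)**2)
    print(f"f = {f:4.2f} Hz  ->  steady-state amplitude ~ {1e3 * X:6.2f} mm")
```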

  19. Soil-structure interaction analysis of large scale seismic test model at Hualien in Taiwan

    International Nuclear Information System (INIS)

    Jang, J. B.; Ser, Y. P.; Lee, J. L.

    2001-01-01

    The issue of SSI in the seismic analysis and design of NPPs is becoming more important, as it may be inevitable to build NPPs at sites with soft foundations owing to the ever-increasing difficulty of acquiring new construction sites. Improvement of seismic analysis techniques, including soil-structure interaction analysis, is essential to achieve a reasonable seismic design for the structures and equipment of NPPs. Therefore, among the existing SSI analysis programs, the most prevalent, SASSI, is verified in this study through comparison of numerical analysis results with the recorded responses of the Hualien project. As a result, SASSI accurately estimated the recorded fundamental frequency and peak acceleration of the structure and was shown to be reliable and useful for the seismic analysis and design of NPPs.
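
    As a back-of-the-envelope complement to a full SASSI analysis, the flexible-base (soil-structure) fundamental frequency is often estimated from a single-degree-of-freedom idealization with swaying and rocking soil springs. The sketch below uses that textbook combination rule; the spring constants, mass, height and fixed-base frequency are assumed order-of-magnitude values, not Hualien data.

```python
import numpy as np

def flexible_base_frequency(f_fixed, m, h, k_h, k_r):
    """Classical SDOF soil-structure interaction estimate:
    1/f_ssi^2 = 1/f_fixed^2 + 1/f_sway^2 + 1/f_rock^2,
    with f_sway = sqrt(k_h/m)/(2 pi) and f_rock = sqrt(k_r/(m h^2))/(2 pi)."""
    f_sway = np.sqrt(k_h / m) / (2.0 * np.pi)
    f_rock = np.sqrt(k_r / (m * h**2)) / (2.0 * np.pi)
    return 1.0 / np.sqrt(1.0 / f_fixed**2 + 1.0 / f_sway**2 + 1.0 / f_rock**2)

# Assumed values for a stiff containment-like model on medium soil.
m = 3.0e6        # effective mass [kg]
h = 10.0         # effective height [m]
f_fixed = 8.0    # fixed-base fundamental frequency [Hz]
k_h = 2.0e9      # horizontal (swaying) soil spring [N/m]
k_r = 4.0e11     # rocking soil spring [N*m/rad]

# SSI lowers the fundamental frequency relative to the fixed-base value.
print(f"flexible-base frequency ~ {flexible_base_frequency(f_fixed, m, h, k_h, k_r):.2f} Hz")
```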

  20. Testing the statistical isotropy of large scale structure with multipole vectors

    International Nuclear Information System (INIS)

    Zunckel, Caroline; Huterer, Dragan; Starkman, Glenn D.

    2011-01-01

    A fundamental assumption in cosmology is that of statistical isotropy - that the Universe, on average, looks the same in every direction in the sky. Statistical isotropy has recently been tested stringently using cosmic microwave background data, leading to intriguing results on large angular scales. Here we apply some of the same techniques used in the cosmic microwave background to the distribution of galaxies on the sky. Using the multipole vector approach, where each multipole in the harmonic decomposition of the galaxy density field is described by unit vectors and an amplitude, we lay out the basic formalism of how to reconstruct the multipole vectors and their statistics out of galaxy survey catalogs. We apply the algorithm to synthetic galaxy maps, and study the sensitivity of the multipole vector reconstruction accuracy to the density, depth, sky coverage, and pixelization of galaxy catalog maps.
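
    The first step of such an analysis, pixelizing a catalog into an overdensity map and performing its harmonic decomposition, can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the "catalog" is synthetic and isotropic, the resolution and lmax are arbitrary choices, and the extraction of the unit multipole vectors from the a_lm is a further step not shown here.

```python
import numpy as np
import healpy as hp

nside, lmax = 64, 8
rng = np.random.default_rng(0)

# Synthetic "catalog": isotropic positions on the sphere (placeholder for a real survey).
n_gal = 200_000
theta = np.arccos(rng.uniform(-1.0, 1.0, n_gal))
phi = rng.uniform(0.0, 2.0 * np.pi, n_gal)

# Pixelized number counts -> overdensity map delta = n / nbar - 1.
npix = hp.nside2npix(nside)
counts = np.bincount(hp.ang2pix(nside, theta, phi), minlength=npix).astype(float)
delta = counts / counts.mean() - 1.0

# Harmonic decomposition; each multipole l is described by its coefficients a_lm,
# from which l unit "multipole vectors" and an overall amplitude can be derived.
alm = hp.map2alm(delta, lmax=lmax)
cl = hp.alm2cl(alm)
for l in range(1, lmax + 1):
    print(f"l = {l}:  C_l = {cl[l]:.3e}")
```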

  1. Dipolar modulation of Large-Scale Structure

    Science.gov (United States)

    Yoon, Mijin

    For the last two decades, we have seen a drastic development of modern cosmology based on various observations such as the cosmic microwave background (CMB), type Ia supernovae, and baryonic acoustic oscillations (BAO). These observations have led to a broad consensus on the cosmological model, so-called LambdaCDM, and to tight constraints on the cosmological parameters constituting the model. On the other hand, the advancement of cosmology relies on the cosmological principle: the universe is isotropic and homogeneous on large scales. Testing these fundamental assumptions is crucial and will soon become possible given the planned observations ahead. Dipolar modulation is the largest angular anisotropy of the sky, which is quantified by its direction and amplitude. We have measured a large dipolar modulation in the CMB, which mainly originates from our solar system's motion relative to the CMB rest frame. However, we have not yet acquired consistent measurements of dipolar modulations in large-scale structure (LSS), as they require large sky coverage and a large number of well-identified objects. In this thesis, we explore measurement of the dipolar modulation in number counts of LSS objects as a test of statistical isotropy. This thesis is based on two papers that were published in peer-reviewed journals. In Chapter 2 [Yoon et al., 2014], we measured a dipolar modulation in number counts of WISE sources matched with 2MASS. In Chapter 3 [Yoon & Huterer, 2015], we investigated requirements for detection of the kinematic dipole in future surveys.
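
    For the l=1 case discussed in this thesis, the dipolar modulation of a number-count map is just its best-fit dipole. The sketch below is a minimal illustration on a synthetic map with an injected dipole; the amplitude, noise level and resolution are assumptions, and the analysis choices of the cited papers (masking, flux cuts, shot-noise treatment) are not reproduced.

```python
import numpy as np
import healpy as hp

nside = 32
npix = hp.nside2npix(nside)
rng = np.random.default_rng(1)

# Unit vectors of all pixel centres, shape (3, npix).
vec = np.array(hp.pix2vec(nside, np.arange(npix)))

# Toy number-count overdensity map: injected dipole of amplitude 0.01 toward +z, plus noise.
d_true = np.array([0.0, 0.0, 0.01])
delta = d_true @ vec + rng.normal(0.0, 0.02, npix)

# Recover the monopole and the dipole vector; amplitude and direction follow from the vector.
mono, dipole = hp.fit_dipole(delta)
theta, phi = hp.vec2ang(np.asarray(dipole))
print(f"recovered dipole amplitude = {np.linalg.norm(dipole):.4f} (input 0.0100)")
print("recovered direction (theta, phi) [rad]:", theta, phi)
```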

  2. The Expanded Large Scale Gap Test

    Science.gov (United States)

    1987-03-01

    NSWC TR 86-32 (DTIC): The Expanded Large Scale Gap Test, by T. P. Liddiard and D. Price, Research and Technology Department, March 1987; approved for public release. Excerpt: "... arises, to reduce the spread in the LSGT 50% gap value. The worst charges, such as those with the highest or lowest densities, the largest re-pressed ..."

  3. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
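
    The "overall finite difference" check mentioned above can be illustrated generically: perturb a design variable, re-solve, and compare the difference quotient with the analytic design sensitivity. The sketch below does this for a toy two-element axial bar model; it is not MSC/NASTRAN, and every value (material, load, areas, step size) is an assumption chosen for illustration.

```python
import numpy as np

def tip_displacement(area1, area2, E=70e9, L=1.0, F=1.0e4):
    """Tip displacement of two axial bar elements in series: u = F*L/(E*A1) + F*L/(E*A2)."""
    return F * L / (E * area1) + F * L / (E * area2)

A1, A2 = 4.0e-4, 2.0e-4   # cross-sectional areas [m^2] -- the "design variables"

# Analytic design sensitivity du/dA1 = -F*L/(E*A1^2)
analytic = -1.0e4 * 1.0 / (70e9 * A1**2)

# Central finite-difference sensitivity (the "overall finite difference" check)
h = 1.0e-8
fd = (tip_displacement(A1 + h, A2) - tip_displacement(A1 - h, A2)) / (2.0 * h)

print(f"analytic    du/dA1 = {analytic:.6e}")
print(f"finite-diff du/dA1 = {fd:.6e}")
```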

  4. The origin of large scale cosmic structure

    International Nuclear Information System (INIS)

    Jones, B.J.T.; Palmer, P.L.

    1985-01-01

    The paper concerns the origin of large scale cosmic structure. The evolution of density perturbations, the nonlinear regime (Zel'dovich's solution and others), the Gott and Rees clustering hierarchy, the spectrum of condensations, and biassed galaxy formation, are all discussed. (UK)

  5. Large-scale structure of the Universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1978-01-01

    The problems discussed at the "Large-scale Structure of the Universe" symposium are considered on a popular level. Described are the cell structure of the galaxy distribution in the Universe and the principles of mathematical modelling of the galaxy distribution. Images of cell structures obtained after reprocessing with the computer are given. Three hypotheses - vortical, entropic and adiabatic - suggesting various processes for the origin of galaxies and galaxy clusters are discussed, and a considerable advantage of the adiabatic hypothesis is recognized. The relict radiation is considered as a means of directly studying the processes taking place in the Universe: the large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the disturbance properties at the pre-galaxy stage. The discussion of problems pertaining to the hot gas contained in galaxy clusters and its interactions within galaxy clusters and with the inter-galaxy medium is recognized to be a notable contribution to the development of theoretical and observational cosmology.

  6. Challenges for Large Scale Structure Theory

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    I will describe some of the outstanding questions in Cosmology where answers could be provided by observations of the Large Scale Structure of the Universe at late times. I will discuss some of the theoretical challenges which will have to be overcome to extract this information from the observations. I will describe some of the theoretical tools that might be useful to achieve this goal.

  7. Conditional sampling technique to test the applicability of the Taylor hypothesis for the large-scale coherent structures

    Science.gov (United States)

    Hussain, A. K. M. F.

    1980-01-01

    Comparisons are presented between the spatial distributions of large-scale structures in turbulent flow and the distributions inferred, via the Taylor hypothesis, from time-dependent signals from stationary probes. The study investigated the near field of a 7.62 cm circular air jet at a Re of 32,000, with coherent structures produced through small-amplitude controlled excitation and stable vortex pairing in the jet column mode. Hot-wire and X-wire anemometry were employed to establish phase-averaged spatial distributions of longitudinal and lateral velocities, coherent Reynolds stress and vorticity, background turbulence intensities, streamlines and pseudo-stream functions. The Taylor hypothesis was used to calculate spatial distributions of the phase-averaged properties, with results indicating that use of the local time-average velocity or the streamwise velocity produces large distortions.
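
    The projection at issue is simple to state: the Taylor hypothesis maps a probe's time record to space via x = x_probe - U_c (t - t_ref), so the inferred geometry of the coherent structures scales directly with the convection velocity U_c adopted. The sketch below illustrates this on a synthetic periodic signal; the frequency and the two candidate velocities are assumptions, not values from the jet experiment.

```python
import numpy as np

# Synthetic probe record: coherent structures pass a stationary probe every 1/f seconds.
f = 50.0                                      # structure passage frequency [Hz] -- assumed
t = np.linspace(0.0, 0.1, 2001)
signal = np.cos(2.0 * np.pi * f * t)          # stand-in for a phase-averaged probe signal

# Locate the signal peaks (structure passage times) by a simple local-maximum test.
mask = np.zeros_like(signal, dtype=bool)
mask[1:-1] = (signal[1:-1] > signal[:-2]) & (signal[1:-1] > signal[2:])
peaks = t[mask]

# Taylor hypothesis: x = x_probe - U_c * (t - t_ref). The inferred spacing of the
# structures therefore depends directly on which convection velocity is chosen.
for name, Uc in (("local time-average velocity", 10.0), ("structure convection velocity", 6.0)):
    x = -Uc * (peaks - peaks[0])              # reconstructed positions of successive structures
    print(f"{name:30s}: mean inferred spacing = {abs(np.diff(x)).mean():.3f} m")
```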

  8. Testing Einstein's Gravity on Large Scales

    Science.gov (United States)

    Prescod-Weinstein, Chandra

    2011-01-01

    A little over a decade has passed since two teams studying high redshift Type Ia supernovae announced the discovery that the expansion of the universe was accelerating. After all this time, we're still not sure how cosmic acceleration fits into the theory that tells us about the large-scale universe: General Relativity (GR). As part of our search for answers, we have been forced to question GR itself. But how will we test our ideas? We are fortunate enough to be entering the era of precision cosmology, where the standard model of gravity can be subjected to more rigorous testing. Various techniques will be employed over the next decade or two in the effort to better understand cosmic acceleration and the theory behind it. In this talk, I will describe cosmic acceleration, current proposals to explain it, and weak gravitational lensing, an observational effect that allows us to do the necessary precision cosmology.

  9. Neutrinos and large-scale structure

    International Nuclear Information System (INIS)

    Eisenstein, Daniel J.

    2015-01-01

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  10. Puzzles of large scale structure and gravitation

    International Nuclear Information System (INIS)

    Sidharth, B.G.

    2006-01-01

    We consider the puzzle of cosmic voids bounded by two-dimensional structures of galactic clusters, as well as a puzzle pointed out by Weinberg: How can the mass of a typical elementary particle depend on a cosmic parameter like the Hubble constant? An answer to the first puzzle is proposed in terms of 'scaled' quantum-mechanics-like behaviour which appears at large scales. The second puzzle can be answered by showing that the gravitational mass of an elementary particle has a Machian character (see Ahmed N. Cantorian small world, Mach's principle and the universal mass network. Chaos, Solitons and Fractals 2004;21(4)).

  11. Neutrinos and large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Eisenstein, Daniel J. [Daniel J. Eisenstein, Harvard-Smithsonian Center for Astrophysics, 60 Garden St., MS #20, Cambridge, MA 02138 (United States)

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  12. Large-Scale Spacecraft Fire Safety Tests

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; hide

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on a 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  13. Testing for Measurement and Structural Equivalence in Large-Scale Cross-Cultural Studies: Addressing the Issue of Nonequivalence

    Science.gov (United States)

    Byrne, Barbara M.; van de Vijver, Fons J. R.

    2010-01-01

    A critical assumption in cross-cultural comparative research is that the instrument measures the same construct(s) in exactly the same way across all groups (i.e., the instrument is measurement and structurally equivalent). Structural equation modeling (SEM) procedures are commonly used in testing these assumptions of multigroup equivalence.…

  14. Mirror dark matter and large scale structure

    International Nuclear Information System (INIS)

    Ignatiev, A.Yu.; Volkas, R.R.

    2003-01-01

    Mirror matter is a dark matter candidate. In this paper, we reexamine the linear regime of density perturbation growth in a universe containing mirror dark matter. Taking adiabatic scale-invariant perturbations as the input, we confirm that the resulting processed power spectrum is richer than for the more familiar cases of cold, warm and hot dark matter. The new features include a maximum at a certain scale λ_max, collisional damping below a smaller characteristic scale λ_S', with oscillatory perturbations between the two. These scales are functions of the fundamental parameters of the theory. In particular, they decrease for decreasing x, the ratio of the mirror plasma temperature to that of the ordinary. For x ∼ 0.2, the scale λ_max becomes galactic. Mirror dark matter therefore leads to bottom-up large scale structure formation, similar to conventional cold dark matter, for x ≲ 0.2. Indeed, the smaller the value of x, the closer mirror dark matter resembles standard cold dark matter during the linear regime. The differences pertain to scales smaller than λ_S' in the linear regime, and generally in the nonlinear regime because mirror dark matter is chemically complex and to some extent dissipative. Lyman-α forest data and the early reionization epoch established by WMAP may hold the key to distinguishing mirror dark matter from WIMP-style cold dark matter

  15. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding gas flow processes through clay barriers in schemes of radioactive waste disposal, the Lasgit in situ experiment was planned and is currently in progress. Modelling of the experiment will permit a better understanding of the responses, confirm hypotheses about mechanisms and processes, and provide lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed on the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings, as gas penetrates once the gas entry pressure is reached and may produce deformations which in turn lead to permeability increases. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug

  16. Ultrasonic Nondestructive Evaluation of Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) During Large-Scale Load Testing and Rod Push-Out Testing

    Science.gov (United States)

    Johnston, Patrick H.; Juarez, Peter D.

    2016-01-01

    The Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) is a structural concept developed by the Boeing Company to address the complex structural design aspects associated with a pressurized hybrid wing body (HWB) aircraft configuration. The HWB has long been a focus of NASA's environmentally responsible aviation (ERA) project, following a building block approach to structures development, culminating with the testing of a nearly full-scale multi-bay box (MBB), representing a segment of the pressurized, non-circular fuselage portion of the HWB. PRSEUS is an integral structural concept wherein skins, frames, stringers and tear straps made of a variable number of layers of dry warp-knit carbon-fiber stacks are stitched together, then resin-infused and cured in an out-of-autoclave process. The PRSEUS concept has the potential for reducing the weight and cost and increasing the structural efficiency of transport aircraft structures. A key feature of PRSEUS is the damage-arresting nature of the stitches, which enables the use of fail-safe design principles. During the load testing of the MBB, ultrasonic nondestructive evaluation (NDE) was used to monitor several sites of intentional barely-visible impact damage (BVID) as well as to survey the areas surrounding the failure cracks after final loading to catastrophic failure. The damage-arresting ability of PRSEUS was confirmed by the results of NDE. In parallel with the large-scale structural testing of the MBB, mechanical tests of the PRSEUS rod-to-overwrap bonds were conducted by pushing the rod axially from a short length of stringer.

  17. The Large-Scale Structure of Scientific Method

    Science.gov (United States)

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  18. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)
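
    In the linear regime, the gravitational-instability framework referred to above rests on the scale-independent growth factor D(a). The sketch below is a minimal numerical evaluation of the standard growth-factor integral for a flat ΛCDM-like background; the density parameters are assumed illustrative values.

```python
import numpy as np
from scipy.integrate import quad

def E(a, Om=0.3, Ol=0.7):
    """Dimensionless Hubble rate H(a)/H0 for a flat LCDM-like background (parameters assumed)."""
    return np.sqrt(Om / a**3 + Ol)

def growth_factor(a, Om=0.3, Ol=0.7):
    """Linear growth factor of density perturbations,
    D(a) ~ H(a) * Integral_0^a da' / (a' H(a'))^3, normalised so that D(a=1) = 1."""
    integral, _ = quad(lambda ap: 1.0 / (ap * E(ap, Om, Ol))**3, 1e-8, a)
    integral_0, _ = quad(lambda ap: 1.0 / (ap * E(ap, Om, Ol))**3, 1e-8, 1.0)
    return E(a, Om, Ol) * integral / (E(1.0, Om, Ol) * integral_0)

for z in (0.0, 0.5, 1.0, 2.0, 5.0):
    a = 1.0 / (1.0 + z)
    print(f"z = {z:3.1f}:  D(a)/D(1) = {growth_factor(a):.3f}")
```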

  19. EPFM verification by a large scale test

    International Nuclear Information System (INIS)

    Okamura, H.; Yagawa, G.; Hidaka, T.; Sato, M.; Urabe, Y.; Iida, M.

    1993-01-01

    The step B test was carried out as part of the elastic-plastic fracture mechanics (EPFM) studies in the Japanese PTS integrity research project. In the step B test, a bending load was applied to a large flat specimen together with a thermal shock, while the tensile load was kept constant during the test. The stable crack growth estimated at the deepest point of the crack was 3 times larger than the experimental value in the previous analysis. In order to reduce this difference from the standpoint of FEM modeling, a more refined FEM mesh was introduced. With the new analysis the difference decreased considerably; that is, the stable crack growth evaluation was improved by adopting a refined FEM model near the crack tip, and the remaining difference was of the same order as that in MPA's analysis of the NKS4-1 test. 8 refs., 17 figs., 5 tabs

  20. Underground large scale test facility for rocks

    International Nuclear Information System (INIS)

    Sundaram, P.N.

    1981-01-01

    This brief note discusses two advantages of locating the facility for testing rock specimens of large dimensions in an underground space. Such an environment can be made to contribute part of the enormous axial load and stiffness requirements needed to get complete stress-strain behavior. The high pressure vessel may also be located below the floor level since the lateral confinement afforded by the rock mass may help to reduce the thickness of the vessel

  1. Large-scale fracture mechanics testing -- requirements and possibilities

    International Nuclear Information System (INIS)

    Brumovsky, M.

    1993-01-01

    Application of fracture mechanics to very important and/or complicated structures, like reactor pressure vessels, also raises questions about the reliability and precision of such calculations. These problems become more pronounced under elastic-plastic loading conditions and/or in parts with non-homogeneous materials (base metal and austenitic cladding, property gradients through the material thickness) or with non-homogeneous stress fields (nozzles, bolt threads, residual stresses, etc.). For such special cases some verification by large-scale testing is necessary and valuable. This paper discusses problems connected with the planning of such experiments with respect to their limitations and to the requirements for a good transfer of the results to an actual vessel. At the same time, an analysis of the possibilities of small-scale model experiments is also presented, mostly in connection with the transfer of results between standard, small-scale and large-scale experiments. Experience from 30 years of large-scale testing at SKODA is used as an example to support this analysis. 1 fig

  2. Optimization of Large-Scale Structural Systems

    DEFF Research Database (Denmark)

    Jensen, F. M.

    solutions to small problems with one or two variables to the optimization of large structures such as bridges, ships and offshore structures. The methods used for solving these problems have evolved from classical differential calculus and the calculus of variations to very advanced numerical techniques...

  3. Large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Alexandrov; Kotov, V.; Mineev, M.; Roumiantsev, V.; Wolters, H.; Amorim, A.; Pedro, L.; Ribeiro, A.; Badescu, E.; Caprini, M.; Burckhart-Chromek, D.; Dobson, M.; Jones, R.; Kazarov, A.; Kolos, S.; Liko, D.; Lucio, L.; Mapelli, L.; Nassiakou, M.; Schweiger, D.; Soloviev, I.; Hart, R.; Ryabov, Y.; Moneta, L.

    2001-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system, and the feedback is fed back into the development process. Studies of the system behaviour have been performed on a set of up to 111 PCs, a configuration that is getting closer to the final size. Large scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from other Trigger/DAQ sub-systems has been emulated. The authors present a brief overview of the online system structure, its components and the large scale integration tests and their results.
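
    To illustrate what "run control state transitions in a run control hierarchy" look like, the sketch below implements a generic, hypothetical hierarchical state machine: a root controller propagates commands down to child controllers. The state names, commands and propagation scheme are assumptions for illustration and do not reproduce the actual ATLAS Online Software components or API.

```python
# Hypothetical, simplified run-control hierarchy: a controller propagates state
# transitions (configure / start / stop / unconfigure) to its child controllers,
# mimicking the kind of state-transition chains exercised in scalability tests.
TRANSITIONS = {
    "configure": ("INITIAL", "CONFIGURED"),
    "start": ("CONFIGURED", "RUNNING"),
    "stop": ("RUNNING", "CONFIGURED"),
    "unconfigure": ("CONFIGURED", "INITIAL"),
}

class Controller:
    def __init__(self, name, children=()):
        self.name, self.children, self.state = name, list(children), "INITIAL"

    def command(self, cmd):
        before, after = TRANSITIONS[cmd]
        if self.state != before:
            raise RuntimeError(f"{self.name}: cannot '{cmd}' from state {self.state}")
        for child in self.children:          # propagate down the hierarchy first
            child.command(cmd)
        self.state = after
        print(f"{self.name}: {before} -> {after}")

# Three-level hierarchy: root controller, two segment controllers, four leaf applications.
leaves = [Controller(f"app{i}") for i in range(4)]
segments = [Controller("segment0", leaves[:2]), Controller("segment1", leaves[2:])]
root = Controller("root", segments)

for cmd in ("configure", "start", "stop", "unconfigure"):
    root.command(cmd)
```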

  4. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    Keywords: hybrid inflation; Higgs scalar field; structure formation; curvaton. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which ...

  5. Responses in large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Barreira, Alexandre; Schmidt, Fabian, E-mail: barreira@MPA-Garching.MPG.DE, E-mail: fabians@MPA-Garching.MPG.DE [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-06-01

    We introduce a rigorous definition of general power-spectrum responses as resummed vertices with two hard and n soft momenta in cosmological perturbation theory. These responses measure the impact of long-wavelength perturbations on the local small-scale power spectrum. The kinematic structure of the responses (i.e., their angular dependence) can be decomposed unambiguously through a "bias" expansion of the local power spectrum, with a fixed number of physical response coefficients, which are only a function of the hard wavenumber k. Further, the responses up to n-th order completely describe the (n+2)-point function in the squeezed limit, i.e. with two hard and n soft modes, which one can use to derive the response coefficients. This generalizes previous results, which relate the angle-averaged squeezed limit to isotropic response coefficients. We derive the complete expression of first- and second-order responses at leading order in perturbation theory, and present extrapolations to nonlinear scales based on simulation measurements of the isotropic response coefficients. As an application, we use these results to predict the non-Gaussian part of the angle-averaged matter power spectrum covariance Cov^NG_{ℓ=0}(k_1, k_2), in the limit where one of the modes, say k_2, is much smaller than the other. Without any free parameters, our model results are in very good agreement with simulations for k_2 ≲ 0.06 h Mpc⁻¹, and for any k_1 ≳ 2 k_2. The well-defined kinematic structure of the power spectrum response also permits a quick evaluation of the angular dependence of the covariance matrix. While we focus on the matter density field, the formalism presented here can be generalized to generic tracers such as galaxies.

  6. Responses in large-scale structure

    Science.gov (United States)

    Barreira, Alexandre; Schmidt, Fabian

    2017-06-01

    We introduce a rigorous definition of general power-spectrum responses as resummed vertices with two hard and n soft momenta in cosmological perturbation theory. These responses measure the impact of long-wavelength perturbations on the local small-scale power spectrum. The kinematic structure of the responses (i.e., their angular dependence) can be decomposed unambiguously through a "bias" expansion of the local power spectrum, with a fixed number of physical response coefficients, which are only a function of the hard wavenumber k. Further, the responses up to n-th order completely describe the (n+2)-point function in the squeezed limit, i.e. with two hard and n soft modes, which one can use to derive the response coefficients. This generalizes previous results, which relate the angle-averaged squeezed limit to isotropic response coefficients. We derive the complete expression of first- and second-order responses at leading order in perturbation theory, and present extrapolations to nonlinear scales based on simulation measurements of the isotropic response coefficients. As an application, we use these results to predict the non-Gaussian part of the angle-averaged matter power spectrum covariance Cov^NG_{ℓ=0}(k_1, k_2), in the limit where one of the modes, say k_2, is much smaller than the other. Without any free parameters, our model results are in very good agreement with simulations for k_2 ≲ 0.06 h Mpc⁻¹, and for any k_1 ≳ 2 k_2. The well-defined kinematic structure of the power spectrum response also permits a quick evaluation of the angular dependence of the covariance matrix. While we focus on the matter density field, the formalism presented here can be generalized to generic tracers such as galaxies.
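
    As a concrete illustration of a response coefficient, the sketch below evaluates a first-order power spectrum response at leading (tree-level) order, using the commonly quoted separate-universe form R1(k) = 47/21 - (1/3) dlnP_L/dlnk. That coefficient and the toy linear power spectrum are stated assumptions; the paper's own definitions, resummed vertices and simulation-calibrated nonlinear coefficients are not reproduced here.

```python
import numpy as np

# Toy linear matter power spectrum (shape only; an assumption for illustration).
k = np.logspace(-3, 0, 400)                    # h/Mpc
P_lin = 1.0e4 * k / (1.0 + (k / 0.02)**3)

# Tree-level (leading-order) first-order response of the local power spectrum to a
# long-wavelength overdensity, as quoted in the separate-universe/response literature:
#   R1(k) = 47/21 - (1/3) dlnP_L/dlnk
dlnP_dlnk = np.gradient(np.log(P_lin), np.log(k))
R1 = 47.0 / 21.0 - dlnP_dlnk / 3.0

for kk, rr in zip(k[::100], R1[::100]):
    print(f"k = {kk:8.4f} h/Mpc :  R1(k) = {rr:5.2f}")
```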

  7. Large-Scale Structure and Hyperuniformity of Amorphous Ices

    Science.gov (United States)

    Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto

    2017-09-01

    We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.
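
    Hyperuniformity is diagnosed by the suppression of the structure factor S(k) at small wavenumbers, S(k) -> 0 as k -> 0. The sketch below computes S(k) for two toy one-dimensional point patterns, an ideal-gas (Poisson) pattern and a randomly perturbed lattice, the latter being a standard example of a hyperuniform system. The system size and displacement amplitude are assumptions, and this is not the simulation analysis of the paper.

```python
import numpy as np

def structure_factor(x, L, n_modes=20):
    """S(k) = |sum_j exp(-i k x_j)|^2 / N at the allowed wavenumbers k = 2*pi*n/L."""
    n = np.arange(1, n_modes + 1)
    k = 2.0 * np.pi * n / L
    rho_k = np.exp(-1j * np.outer(k, x)).sum(axis=1)
    return k, np.abs(rho_k)**2 / len(x)

rng = np.random.default_rng(0)
N, L = 20000, 20000.0                              # points and box size (unit mean density)

poisson = rng.uniform(0.0, L, N)                   # ideal gas: S(k) -> 1 at small k
lattice = np.arange(N) + rng.normal(0.0, 0.2, N)   # randomly perturbed lattice: hyperuniform

for label, pts in (("Poisson", poisson), ("perturbed lattice", lattice)):
    k, S = structure_factor(pts % L, L)
    print(f"{label:18s}: S(k) averaged over the 5 smallest k = {S[:5].mean():.4f}")
```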

  8. Cooling pipeline disposing structure for large-scaled cryogenic structure

    International Nuclear Information System (INIS)

    Takahashi, Hiroyuki.

    1996-01-01

    The present invention concerns an electromagnetic force supporting structure for superconductive coils. As the size of a cryogenic structure increases, cooling takes longer, the temperature difference between the cooling pipelines and the cryogenic structure grows over a wide range, and the resulting difference in thermal shrinkage increases the thermal stresses. In the cooling pipelines for a large-scale cryogenic structure, therefore, the pipelines and the structure are connected by way of a thin metal plate made of a material whose thermal conductivity is higher than that of the structure's material by one order of magnitude or more, and the thin plate is bent. The displacement between the cryogenic structure and the cooling pipelines caused by thermal shrinkage is absorbed by the elongation and contraction of the bent thin plate, and the thermal stresses due to the displacement are reduced. In addition, the heat of the cryogenic structure is transferred by way of the thin metal plate. The cooling pipelines can thus be secured to the cryogenic structure in a way that permits cooling by heat conduction while absorbing the large three-dimensional displacements arising from the difference in temperature distribution between the enlarged, three-dimensional cryogenic structure and the cooling pipelines. (N.H.)

  9. Rock sealing - large scale field test and accessory investigations

    International Nuclear Information System (INIS)

    Pusch, R.

    1988-03-01

    The experience from the pilot field test and the basic knowledge extracted from the lab experiments have formed the basis of the planning of a Large Scale Field Test. The intention is to find out how the 'instrument of rock sealing' can be applied to a number of practical cases, where cutting-off and redirection of groundwater flow in repositories are called for. Five field subtests, which are integrated mutually or with other Stripa projects (3D), are proposed. One of them concerns 'near-field' sealing, i.e. sealing of tunnel floors hosting deposition holes, while two involve sealing of 'disturbed' rock around tunnels. The fourth concerns sealing of a natural fracture zone in the 3D area, and this latter test has the expected spin-off effect of obtaining additional information on the general flow pattern around the northeastern wing of the 3D cross. The fifth test is an option of sealing structures in the Validation Drift. The longevity of major grout types is focussed on as the most important part of the 'Accessory Investigations', and detailed plans have been worked out for that purpose. It is foreseen that the continuation of the project, as outlined in this report, will yield suitable methods and grouts for effective and long-lasting sealing of rock for use at strategic points in repositories. (author)

  10. Large-scale structure observables in general relativity

    International Nuclear Information System (INIS)

    Jeong, Donghui; Schmidt, Fabian

    2015-01-01

    We review recent studies that rigorously define several key observables of the large-scale structure of the Universe in a general relativistic context. Specifically, we consider (i) redshift perturbation of cosmic clock events; (ii) distortion of cosmic rulers, including weak lensing shear and magnification; and (iii) observed number density of tracers of the large-scale structure. We provide covariant and gauge-invariant expressions of these observables. Our expressions are given for a linearly perturbed flat Friedmann–Robertson–Walker metric including scalar, vector, and tensor metric perturbations. While we restrict ourselves to linear order in perturbation theory, the approach can be straightforwardly generalized to higher order. (paper)

  11. Large-Scale Seismic Test Program at Hualien, Taiwan

    International Nuclear Information System (INIS)

    Tang, H.T.; Graves, H.L.; Yeh, Y.S.

    1991-01-01

    The Large-Scale Seismic Test (LSST) Program at Hualien, Taiwan, is a follow-on to the soil-structure interaction (SSI) experiments at Lotung, Taiwan. The planned SSI studies will be performed at a stiff soil site in Hualien, Taiwan, that historically has had slightly more destructive earthquakes than Lotung. The objectives of the LSST project are as follows: To obtain earthquake-induced SSI data at a stiff soil site having similar prototypical nuclear power plant soil conditions. To confirm the findings and methodologies validated against the Lotung soft soil SSI data for prototypical plant condition applications. To further validate the technical basis of realistic SSI analysis approaches. To further support the resolution of the USI A-40 Seismic Design Criteria issue. These objectives will be accomplished through an integrated and carefully planned experimental program consisting of: soil characterization, test model design and field construction, instrumentation layout and deployment, in-situ geophysical information collection, forced vibration test, and synthesis of results and findings. The LSST is a joint effort among many interested parties. EPRI and Taipower are the organizers of the program and have the lead in planning and managing the program

  12. Large-Scale Seismic Test Program at Hualien, Taiwan

    International Nuclear Information System (INIS)

    Tang, H.T.; Graves, H.L.; Chen, P.C.

    1992-01-01

    The Large-Scale Seismic Test (LSST) Program at Hualien, Taiwan, is a follow-on to the soil-structure interaction (SSI) experiments at Lotung, Taiwan. The planned SSI studies will be performed at a stiff soil site in Hualien, Taiwan, that historically has had slightly more destructive earthquakes than Lotung. The LSST is a joint effort among many interested parties. Electric Power Research Institute (EPRI) and Taipower are the organizers of the program and have the lead in planning and managing the program. Other organizations participating in the LSST program are the US Nuclear Regulatory Commission, the Central Research Institute of Electric Power Industry, the Tokyo Electric Power Company, the Commissariat A L'Energie Atomique, Electricite de France and Framatome. The LSST was initiated in January 1990, and is envisioned to be five years in duration. Based on the assumption of stiff soil, and confirmed by soil boring and geophysical results, the test model was designed to provide data needed for SSI studies covering: free-field input, nonlinear soil response, non-rigid body SSI, torsional response, kinematic interaction, spatial incoherency and other effects. Taipower had the lead in design of the test model and received significant input from other LSST members. Questions raised by LSST members were on embedment effects, model stiffness, base shear, and openings for equipment. This paper describes progress in site preparation, design and construction of the model and development of an instrumentation plan

  13. Origin of large-scale cell structure in the universe

    International Nuclear Information System (INIS)

    Zel'dovich, Y.B.

    1982-01-01

    A qualitative explanation is offered for the characteristic global structure of the universe, wherein ''black'' regions devoid of galaxies are surrounded on all sides by closed, comparatively thin, ''bright'' layers populated by galaxies. The interpretation rests on some very general arguments regarding the growth of large-scale perturbations in a cold gas

  14. Fractals and the Large-Scale Structure in the Universe

    Indian Academy of Sciences (India)

    A K Mittal and T R Seshadri, "Fractals and the Large-Scale Structure in the Universe – Is the Cosmological Principle Valid?", General Article, Resonance – Journal of Science Education, Volume 7, Issue 4, April 2002, pp. 39-47.

  15. Test on large-scale seismic isolation elements, 2

    International Nuclear Information System (INIS)

    Mazda, T.; Moteki, M.; Ishida, K.; Shiojiri, H.; Fujita, T.

    1991-01-01

    The seismic isolation test program of the Central Research Institute of Electric Power Industry (CRIEPI), aimed at applying seismic isolation to Fast Breeder Reactor (FBR) plants, was started in 1987. In this test program, demonstration testing of seismic isolation elements was considered one of the most important research items. Facilities for testing seismic isolation elements were built at the Abiko Research Laboratory of CRIEPI, and various tests of large-scale seismic isolation elements have been conducted to date. Many important test data for developing design technical guidelines were obtained. (author)

  16. Some Statistics for Measuring Large-Scale Structure

    OpenAIRE

    Brandenberger, Robert H.; Kaplan, David M.; Ramsey, Stephen A.

    1993-01-01

    Good statistics for measuring large-scale structure in the Universe must be able to distinguish between different models of structure formation. In this paper, two and three dimensional ``counts in cell'' statistics and a new ``discrete genus statistic'' are applied to toy versions of several popular theories of structure formation: random phase cold dark matter model, cosmic string models, and global texture scenario. All three statistics appear quite promising in terms of differentiating betw...

  17. On soft limits of large-scale structure correlation functions

    International Nuclear Information System (INIS)

    Sagunski, Laura

    2016-08-01

    Large-scale structure surveys have the potential to become the leading probe for precision cosmology in the next decade. To extract valuable information on the cosmological evolution of the Universe from the observational data, it is of major importance to derive accurate theoretical predictions for the statistical large-scale structure observables, such as the power spectrum and the bispectrum of (dark) matter density perturbations. Hence, one of the greatest challenges of modern cosmology is to theoretically understand the non-linear dynamics of large-scale structure formation in the Universe from first principles. While analytic approaches to describe the large-scale structure formation are usually based on the framework of non-relativistic cosmological perturbation theory, we pursue another road in this thesis and develop methods to derive generic, non-perturbative statements about large-scale structure correlation functions. We study unequal- and equal-time correlation functions of density and velocity perturbations in the limit where one of their wavenumbers becomes small, that is, in the soft limit. In the soft limit, it is possible to link (N+1)-point and N-point correlation functions to non-perturbative 'consistency conditions'. These provide in turn a powerful tool to test fundamental aspects of the underlying theory at hand. In this work, we first rederive the (resummed) consistency conditions at unequal times by using the so-called eikonal approximation. The main appeal of the unequal-time consistency conditions is that they are solely based on symmetry arguments and thus are universal. Proceeding from this, we direct our attention to consistency conditions at equal times, which, on the other hand, depend on the interplay between soft and hard modes. We explore the existence and validity of equal-time consistency conditions within and beyond perturbation theory. For this purpose, we investigate the predictions for the soft limit of the

  18. On soft limits of large-scale structure correlation functions

    Energy Technology Data Exchange (ETDEWEB)

    Sagunski, Laura

    2016-08-15

    Large-scale structure surveys have the potential to become the leading probe for precision cosmology in the next decade. To extract valuable information on the cosmological evolution of the Universe from the observational data, it is of major importance to derive accurate theoretical predictions for the statistical large-scale structure observables, such as the power spectrum and the bispectrum of (dark) matter density perturbations. Hence, one of the greatest challenges of modern cosmology is to theoretically understand the non-linear dynamics of large-scale structure formation in the Universe from first principles. While analytic approaches to describe the large-scale structure formation are usually based on the framework of non-relativistic cosmological perturbation theory, we pursue another road in this thesis and develop methods to derive generic, non-perturbative statements about large-scale structure correlation functions. We study unequal- and equal-time correlation functions of density and velocity perturbations in the limit where one of their wavenumbers becomes small, that is, in the soft limit. In the soft limit, it is possible to link (N+1)-point and N-point correlation functions to non-perturbative 'consistency conditions'. These provide in turn a powerful tool to test fundamental aspects of the underlying theory at hand. In this work, we first rederive the (resummed) consistency conditions at unequal times by using the so-called eikonal approximation. The main appeal of the unequal-time consistency conditions is that they are solely based on symmetry arguments and thus are universal. Proceeding from this, we direct our attention to consistency conditions at equal times, which, on the other hand, depend on the interplay between soft and hard modes. We explore the existence and validity of equal-time consistency conditions within and beyond perturbation theory. For this purpose, we investigate the predictions for the soft limit of the

  19. Lagrangian space consistency relation for large scale structure

    International Nuclear Information System (INIS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-01-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space

  20. Results of Large-Scale Spacecraft Flammability Tests

    Science.gov (United States)

    Ferkul, Paul; Olson, Sandra; Urban, David L.; Ruff, Gary A.; Easton, John; T'ien, James S.; Liao, Ta-Ting T.; Fernandez-Pello, A. Carlos; Torero, Jose L.; Eigenbrand, Christian

    2017-01-01

    For the first time, a large-scale fire was intentionally set inside a spacecraft while in orbit. Testing in low gravity aboard spacecraft had been limited to samples of modest size: for thin fuels the longest samples burned were around 15 cm in length, and thick fuel samples have been even smaller. This is despite the fact that fire is a catastrophic hazard for spaceflight, and the spread and growth of a fire, combined with its interactions with the vehicle, cannot be expected to scale linearly. While every type of occupied structure on earth has been the subject of full-scale fire testing, this had never been attempted in space owing to the complexity, cost, risk and absence of a safe location. Thus, there is a gap in knowledge of fire behavior in spacecraft. The recent utilization of large, unmanned resupply craft has provided the needed capability: a habitable but unoccupied spacecraft in low earth orbit. One such vehicle was used to study the flame spread over a 94 x 40.6 cm thin charring solid (fiberglass-cotton fabric). The sample was an order of magnitude larger than anything studied to date in microgravity and was of sufficient scale that it consumed 1.5% of the available oxygen. The experiment, which is called Saffire, consisted of two tests: forward or concurrent flame spread (with the direction of flow) and opposed flame spread (against the direction of flow). The average forced air speed was 20 cm/s. For the concurrent flame spread test, the flame size remained constrained after the ignition transient, which is not the case in 1-g. These results were qualitatively different from those on earth, where an upward-spreading flame on a sample of this size accelerates and grows. In addition, a curious effect of the chamber size is noted. Compared to previous microgravity work in smaller tunnels, the flame in the larger tunnel spread more slowly, even for a wider sample. This is attributed to the effect of flow acceleration in the smaller tunnels as a result of hot

  1. Large scale sodium interactions. Part 1. Test facility design

    International Nuclear Information System (INIS)

    King, D.L.; Smaardyk, J.E.; Sallach, R.A.

    1977-01-01

    During the design of the test facility for large scale sodium interaction testing, an attempt was made to keep the system as simple and yet versatile as possible; therefore, a once-through design was employed as opposed to any type of conventional sodium ''loop.'' The initial series of tests conducted at the facility call for rapidly dropping from 20 kg to 225 kg of sodium at temperatures from 825 K to 1125 K into concrete crucibles. The basic system layout is described. A commercial drum heater is used to melt the sodium which is in 55 gallon drums and then a slight argon pressurization is used to force the liquid sodium through a metallic filter and into a dump tank. Then the sodium dump tank is heated to the desired temperature. A diaphragm is mechanically ruptured and the sodium is dumped into a crucible that is housed inside a large steel test chamber

  2. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters

  3. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results (a) confirmed, in a general way, the procedures for application to pulsed burning, (b) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur and (c) indicated that steam can terminate continuous burning. Future actions recommended include (a) modification of the code to perform continuous-burn analyses, which is demonstrated, (b) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (c) changes to the models for estimating burn parameters

  4. Using Large Scale Test Results for Pedagogical Purposes

    DEFF Research Database (Denmark)

    Dolin, Jens

    2012-01-01

    The use and influence of large scale tests (LST), both national and international, has increased dramatically within the last decade. This process has revealed a tension between the legitimate need for information about the performance of the educational system and teachers to inform policy......, and the teachers’ and students’ use of this information for pedagogical purposes in the classroom. We know well how the policy makers interpret and use the outcomes of such tests, but we know less about how teachers make use of LSTs to inform their pedagogical practice. An important question is whether...... there is a contradiction between the political system’s use of LST and teachers’ (possible) pedagogical use of LST. And if yes: What is a contradiction based on? This presentation will give some results from a systematic review on how tests have influenced the pedagogical practice. The research revealed many of the fatal...

  5. Nonlinear evolution of large-scale structure in the universe

    International Nuclear Information System (INIS)

    Frenk, C.S.; White, S.D.M.; Davis, M.

    1983-01-01

    Using N-body simulations we study the nonlinear development of primordial density perturbation in an Einstein--de Sitter universe. We compare the evolution of an initial distribution without small-scale density fluctuations to evolution from a random Poisson distribution. These initial conditions mimic the assumptions of the adiabatic and isothermal theories of galaxy formation. The large-scale structures which form in the two cases are markedly dissimilar. In particular, the correlation function xi(r) and the visual appearance of our adiabatic (or ''pancake'') models match better the observed distribution of galaxies. This distribution is characterized by large-scale filamentary structure. Because the pancake models do not evolve in a self-similar fashion, the slope of xi(r) steepens with time; as a result there is a unique epoch at which these models fit the galaxy observations. We find the ratio of cutoff length to correlation length at this time to be λ_min/r_0 = 5.1; its expected value in a neutrino dominated universe is 4(Ωh)^-1 (H_0 = 100h km s^-1 Mpc^-1). At early epochs these models predict a negligible amplitude for xi(r) and could explain the lack of measurable clustering in the Lyα absorption lines of high-redshift quasars. However, large-scale structure in our models collapses after z = 2. If this collapse precedes galaxy formation as in the usual pancake theory, galaxies formed uncomfortably recently. The extent of this problem may depend on the cosmological model used; the present series of experiments should be extended in the future to include models with Ω<1

  6. Inflationary tensor fossils in large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Dimastrogiovanni, Emanuela [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Fasiello, Matteo [Department of Physics, Case Western Reserve University, Cleveland, OH 44106 (United States); Jeong, Donghui [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Kamionkowski, Marc, E-mail: ema@physics.umn.edu, E-mail: mrf65@case.edu, E-mail: duj13@psu.edu, E-mail: kamion@jhu.edu [Department of Physics and Astronomy, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218 (United States)

    2014-12-01

    Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid-inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator probes directly relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure but that this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale that corresponds to the transition from the non-attractor to attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossils signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.

  7. Large-scale structure in the universe: Theory vs observations

    International Nuclear Information System (INIS)

    Kashlinsky, A.; Jones, B.J.T.

    1990-01-01

    A variety of observations constrain models of the origin of large scale cosmic structures. We review here the elements of current theories and comment in detail on which of the current observational data provide the principal constraints. We point out that enough observational data have accumulated to constrain (and perhaps determine) the power spectrum of primordial density fluctuations over a very large range of scales. We discuss the theories in the light of observational data and focus on the potential of future observations in providing even (and ever) tighter constraints. (orig.)

  8. Cosmological parameters from large scale structure - geometric versus shape information

    CERN Document Server

    Hamann, Jan; Lesgourgues, Julien; Rampf, Cornelius; Wong, Yvonne Y Y

    2010-01-01

    The matter power spectrum as derived from large scale structure (LSS) surveys contains two important and distinct pieces of information: an overall smooth shape and the imprint of baryon acoustic oscillations (BAO). We investigate the separate impact of these two types of information on cosmological parameter estimation, and show that for the simplest cosmological models, the broad-band shape information currently contained in the SDSS DR7 halo power spectrum (HPS) is by far superseded by geometric information derived from the baryonic features. An immediate corollary is that contrary to popular beliefs, the upper limit on the neutrino mass m_ν ...

  9. Geophysical mapping of complex glaciogenic large-scale structures

    DEFF Research Database (Denmark)

    Høyer, Anne-Sophie

    2013-01-01

    This thesis presents the main results of a four year PhD study concerning the use of geophysical data in geological mapping. The study is related to the Geocenter project, “KOMPLEKS”, which focuses on the mapping of complex, large-scale geological structures. The study area is approximately 100 km2...... data types and co-interpret them in order to improve our geological understanding. However, in order to perform this successfully, methodological considerations are necessary. For instance, a structure indicated by a reflection in the seismic data is not always apparent in the resistivity data...... information) can be collected. The geophysical data are used together with geological analyses from boreholes and pits to interpret the geological history of the hill-island. The geophysical data reveal that the glaciotectonic structures truncate at the surface. The directions of the structures were mapped...

  10. Iodine oxides in large-scale THAI tests

    International Nuclear Information System (INIS)

    Funke, F.; Langrock, G.; Kanzleiter, T.; Poss, G.; Fischer, K.; Kühnel, A.; Weber, G.; Allelein, H.-J.

    2012-01-01

    Highlights: ► Iodine oxide particles were produced from gaseous iodine and ozone. ► Ozone replaced the effect of ionizing radiation in the large-scale THAI facility. ► The mean diameter of the iodine oxide particles was about 0.35 μm. ► Particle formation was faster than the chemical reaction between iodine and ozone. ► Deposition of iodine oxide particles was slow in the absence of other aerosols. - Abstract: The conversion of gaseous molecular iodine into iodine oxide aerosols has significant relevance in the understanding of the fission product iodine volatility in a LWR containment during severe accidents. In containment, the high radiation field caused by fission products released from the reactor core induces radiolytic oxidation into iodine oxides. To study the characteristics and the behaviour of iodine oxides in large scale, two THAI tests Iod-13 and Iod-14 were performed, simulating radiolytic oxidation of molecular iodine by reaction of iodine with ozone, with ozone injected from an ozone generator. The observed iodine oxides form submicron particles with mean volume-related diameters of about 0.35 μm and show low deposition rates in the THAI tests performed in the absence of other nuclear aerosols. Formation of iodine aerosols from gaseous precursors iodine and ozone is fast as compared to their chemical interaction. The current approach in empirical iodine containment behaviour models in severe accidents, including the radiolytic production of I2-oxidizing agents followed by the I2 oxidation itself, is confirmed by these THAI tests.

  11. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated ... a microscope, and we show how the method can handle transparent particles with significant glare points. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation ...
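
    A minimal sketch of the segmentation-as-outlier-detection idea, assuming a Gaussian background model estimated robustly from the image itself (this is an illustration, not the authors' implementation):

        import numpy as np
        from scipy import stats

        # Estimate the background intensity distribution, then flag pixels that
        # are too unlikely under it at significance level alpha (the outliers).
        def segment_by_hypothesis_test(image, alpha=1e-3):
            mu = np.median(image)                              # robust background mean
            sigma = 1.4826 * np.median(np.abs(image - mu))     # robust std via MAD
            p_values = 2.0 * stats.norm.sf(np.abs(image - mu) / sigma)
            return p_values < alpha                            # boolean segmentation mask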

  12. Large scale indenter test program to measure sub gouge displacements

    Energy Technology Data Exchange (ETDEWEB)

    Been, Ken; Lopez, Juan [Golder Associates Inc, Houston, TX (United States); Sancio, Rodolfo [MMI Engineering Inc., Houston, TX (United States)

    2011-07-01

    The production of submarine pipelines in an offshore environment covered with ice is very challenging. Several precautions must be taken such as burying the pipelines to protect them from ice movement caused by gouging. The estimation of the subgouge displacements is a key factor in pipeline design for ice gouged environments. This paper investigated a method to measure subgouge displacements. An experimental program was implemented in an open field to produce large scale idealized gouges on engineered soil beds (sand and clay). The horizontal force required to produce the gouge, the subgouge displacements in the soil and the strain imposed by these displacements were monitored on a buried model pipeline. The results showed that for a given keel, the gouge depth was inversely proportional to undrained shear strength in clay. The subgouge displacements measured did not show a relationship with the gouge depth, width or soil density in sand and clay tests.

  13. Large-scale field testing on flexible shallow landslide barriers

    Science.gov (United States)

    Bugnion, Louis; Volkwein, Axel; Wendeler, Corinna; Roth, Andrea

    2010-05-01

    Open shallow landslides occur regularly in a wide range of natural terrains. Generally, they are difficult to predict and result in damage to property and disruption of transportation systems. In order to improve the knowledge about the physical process itself and to develop new protection measures, large-scale field experiments were conducted in Veltheim, Switzerland. Material was released down a 30° inclined test slope into a flexible barrier. The flow as well as the impact into the barrier was monitored using various measurement techniques. Laser devices recording flow heights, a special force plate measuring normal and shear basal forces, as well as load cells for impact pressures, were installed along the test slope. In addition, load cells were built into the support and retaining cables of the barrier to provide data for detailed back-calculation of the load distribution during impact. For the last test series an additional guiding wall in the flow direction was installed on both sides of the barrier to achieve higher impact pressures in the middle of the barrier. With these guiding walls the flow is not able to spread out before hitting the barrier. A specially constructed release mechanism simulating the sudden failure of the slope was designed such that about 50 m3 of mixed earth and gravel saturated with water can be released in an instant. Analysis of cable forces combined with impact pressures and velocity measurements during a test series now allows us to develop a load model for the barrier design. First numerical simulations with the software tool FARO, originally developed for rockfall barriers and afterwards calibrated for debris flow impacts, have already led to structural improvements in the barrier design. Decisive for the barrier design are the initial dynamic impact pressure, which depends on the flow velocity, and afterwards the hydrostatic pressure of the complete retained material behind the barrier. Therefore volume estimation of open shallow landslides by assessing
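
    The load model outlined above separates a dynamic impact term governed by the flow velocity from the hydrostatic pressure of the retained material. A hedged illustration in generic textbook form (the coefficient c_d is a placeholder, not a value fitted to the Veltheim tests):

        # Dynamic impact pressure of the incoming flow (momentum-flux form) and
        # hydrostatic pressure of the material retained behind the barrier.
        def dynamic_impact_pressure(density, velocity, c_d=1.0):
            return c_d * density * velocity**2      # Pa

        def hydrostatic_pressure(density, depth, g=9.81):
            return density * g * depth              # Pa at the given retained depth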

  14. In situ vitrification large-scale operational acceptance test analysis

    International Nuclear Information System (INIS)

    Buelt, J.L.; Carter, J.G.

    1986-05-01

    A thermal treatment process is currently under study to provide possible enhancement of in-place stabilization of transuranic and chemically contaminated soil sites. The process is known as in situ vitrification (ISV). In situ vitrification is a remedial action process that destroys solid and liquid organic contaminants and incorporates radionuclides into a glass-like material that renders contaminants substantially less mobile and less likely to impact the environment. A large-scale operational acceptance test (LSOAT) was recently completed in which more than 180 t of vitrified soil were produced in each of three adjacent settings. The LSOAT demonstrated that the process conforms to the functional design criteria necessary for the large-scale radioactive test (LSRT) to be conducted following verification of the performance capabilities of the process. The energy requirements and vitrified block size, shape, and mass are sufficiently equivalent to those predicted by the ISV mathematical model to confirm its usefulness as a predictive tool. The LSOAT demonstrated an electrode replacement technique, which can be used if an electrode fails, and techniques have been identified to minimize air oxidation, thereby extending electrode life. A statistical analysis was employed during the LSOAT to identify graphite collars and an insulative surface as successful cold cap subsidence techniques. The LSOAT also showed that even under worst-case conditions, the off-gas system exceeds the flow requirements necessary to maintain a negative pressure on the hood covering the area being vitrified. The retention of simulated radionuclides and chemicals in the soil and off-gas system exceeds requirements so that projected emissions are one to two orders of magnitude below the maximum permissible concentrations of contaminants at the stack

  15. Solving large scale structure in ten easy steps with COLA

    Energy Technology Data Exchange (ETDEWEB)

    Tassev, Svetlin [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08544 (United States); Zaldarriaga, Matias [School of Natural Sciences, Institute for Advanced Study, Olden Lane, Princeton, NJ 08540 (United States); Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu [Center for Astrophysics, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States)

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
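
    The essential trick described above is to integrate only the residual displacement about the analytically known LPT trajectory. The following is a purely schematic kick-drift-kick step; x_lpt and the force routine grad_phi are assumed to be supplied, and the scale-factor dependent prefactors of the actual method are deliberately omitted.

        # Schematic COLA-style leapfrog step on the residual displacement x_res
        # about the LPT trajectory x_lpt(a); grad_phi(x, a) is an assumed force
        # routine evaluated at the full particle position.
        def cola_step(x_res, p_res, a, da, x_lpt, grad_phi):
            p_res = p_res - 0.5 * da * grad_phi(x_lpt(a) + x_res, a)                # kick
            x_res = x_res + da * p_res                                              # drift
            p_res = p_res - 0.5 * da * grad_phi(x_lpt(a + da) + x_res, a + da)      # kick
            return x_res, p_res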

  16. Analysis of the forced vibration test of the Hualien large scale soil-structure interaction model using a flexible volume substructuring method

    International Nuclear Information System (INIS)

    Tang, H.T.; Nakamura, N.

    1995-01-01

    A 1/4-scale cylindrical reactor containment model was constructed in Hualien, Taiwan for soil-structure interaction (SSI) effect evaluation and SSI analysis procedure verification. Forced vibration tests were executed before backfill (FVT-1) and after backfill (FVT-2) to characterize soil-structure system characteristics under low excitations. A number of organizations participated in the pre-test blind prediction and post-test correlation analyses of the forced vibration test using various industry familiar methods. In the current study, correlation analyses were performed using a three-dimensional flexible volume substructuring method. The results are reported and soil property sensitivities are evaluated in the paper. (J.P.N.)

  17. Complex modular structure of large-scale brain networks

    Science.gov (United States)

    Valencia, M.; Pastor, M. A.; Fernández-Seara, M. A.; Artieda, J.; Martinerie, J.; Chavez, M.

    2009-06-01

    Modular structure is ubiquitous among real-world networks from related proteins to social groups. Here we analyze the modular organization of brain networks at a large scale (voxel level) extracted from functional magnetic resonance imaging signals. By using a random-walk-based method, we unveil the modularity of brain webs and show modules with a spatial distribution that matches anatomical structures with functional significance. The functional role of each node in the network is studied by analyzing its patterns of inter- and intramodular connections. Results suggest that the modular architecture constitutes the structural basis for the coexistence of functional integration of distant and specialized brain areas during normal brain activities at rest.

  18. Structural Quality of Service in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup

    Digitalization has created the base for co-existence and convergence in communications, leading to an increasing use of multi service networks. This is for example seen in the Fiber To The Home implementations, where a single fiber is used for virtually all means of communication, including TV, telephony and data. To meet the requirements of the different applications, and to handle the increased vulnerability to failures, the ability to design robust networks providing good Quality of Service is crucial. However, most planning of large-scale networks today is ad-hoc based, leading to highly complex networks lacking predictability and global structural properties. The thesis applies the concept of Structural Quality of Service to formulate desirable global properties, and it shows how regular graph structures can be used to obtain such properties.
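
    As an illustration of the kind of regular structure advocated here, the snippet below builds a small 4-regular circulant (chordal-ring-like) topology with networkx and reports its node connectivity and diameter; the topology and library choice are examples, not taken from the thesis.

        import networkx as nx

        # Every node of the circulant graph C_n(1, 4) has degree 4, so the
        # topology is fully regular; node connectivity counts the disjoint paths
        # between the worst-case pair of nodes, a predictable global property.
        def regular_backbone(n_nodes=16, offsets=(1, 4)):
            g = nx.circulant_graph(n_nodes, offsets)
            return g, nx.node_connectivity(g), nx.diameter(g)

        g, connectivity, diameter = regular_backbone()
        print(connectivity, diameter)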

  19. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom; Femiani, John; Wonka, Peter; Mitra, Niloy J.

    2017-01-01

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.

  20. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom

    2017-11-22

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.

  1. Large-Scale Spray Releases: Additional Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  2. Large scale high strain-rate tests of concrete

    Directory of Open Access Journals (Sweden)

    Kiefer R.

    2012-08-01

    This work presents the stages of development of some innovative equipment, based on Hopkinson bar techniques, for performing large scale dynamic tests of concrete specimens. The activity is centered at the recently upgraded HOPLAB facility, which is basically a split Hopkinson bar with a total length of approximately 200 m and with bar diameters of 72 mm. Through pre-tensioning and suddenly releasing a steel cable, force pulses of up to 2 MN, 250 μs rise time and 40 ms duration can be generated and applied to the specimen tested. The dynamic compression loading has first been treated and several modifications in the basic configuration have been introduced. Twin incident and transmitter bars have been installed with strong steel plates at their ends where large specimens can be accommodated. A series of calibration and qualification tests has been conducted and the first real tests on concrete cylindrical specimens of 20 cm diameter and up to 40 cm length have commenced. Preliminary results from the analysis of the recorded signals indicate proper Hopkinson bar testing conditions and reliable functioning of the facility.
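
    The recorded incident, reflected and transmitted pulses feed into the classical one-dimensional Hopkinson bar data reduction, summarized below in textbook form under the usual force-equilibrium assumption (generic split-Hopkinson-bar processing, not HOPLAB-specific code; eps_refl and eps_trans are the reflected and transmitted strain histories sampled at interval dt).

        import numpy as np

        # One-wave analysis: specimen stress from the transmitted pulse, strain
        # rate from the reflected pulse, strain by integrating in time.
        def shpb_analysis(eps_refl, eps_trans, dt, E_bar, A_bar, A_spec, L_spec, c0):
            stress = E_bar * (A_bar / A_spec) * eps_trans      # specimen stress
            strain_rate = -2.0 * c0 * eps_refl / L_spec        # specimen strain rate
            strain = np.cumsum(strain_rate) * dt               # specimen strain history
            return stress, strain_rate, strain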

  3. The effective field theory of cosmological large scale structures

    Energy Technology Data Exchange (ETDEWEB)

    Carrasco, John Joseph M. [Stanford Univ., Stanford, CA (United States); Hertzberg, Mark P. [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States); Senatore, Leonardo [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s^2 ≈ 10^-6 c^2 and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)^4. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≃ 0.24 h Mpc^-1.
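
    At the order quoted above, the leading effect of the effective fluid on the power spectrum is commonly written as a speed-of-sound counterterm. The sketch below uses a standard schematic form of this counterterm; normalization conventions vary between papers and are assumed here rather than taken from this abstract.

        # Leading EFT-of-LSS counterterm added to the standard one-loop result:
        #   P_EFT(k) = P_lin(k) + P_1loop(k) - 2 c_s^2 k^2 P_lin(k),
        # with c_s^2 fitted to simulations (of order 1e-6 c^2 per the abstract).
        def eft_power(k, P_lin, P_1loop, cs2_eff):
            return P_lin + P_1loop - 2.0 * cs2_eff * k**2 * P_lin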

  4. Systematic renormalization of the effective theory of Large Scale Structure

    International Nuclear Information System (INIS)

    Abolhasani, Ali Akbar; Mirbabayi, Mehrdad; Pajer, Enrico

    2016-01-01

    A perturbative description of Large Scale Structure is a cornerstone of our understanding of the observed distribution of matter in the universe. Renormalization is an essential and defining step to make this description physical and predictive. Here we introduce a systematic renormalization procedure, which neatly associates counterterms to the UV-sensitive diagrams order by order, as it is commonly done in quantum field theory. As a concrete example, we renormalize the one-loop power spectrum and bispectrum of both density and velocity. In addition, we present a series of results that are valid to all orders in perturbation theory. First, we show that while systematic renormalization requires temporally non-local counterterms, in practice one can use an equivalent basis made of local operators. We give an explicit prescription to generate all counterterms allowed by the symmetries. Second, we present a formal proof of the well-known general argument that the contribution of short distance perturbations to large scale density contrast δ and momentum density π(k) scale as k^2 and k, respectively. Third, we demonstrate that the common practice of introducing counterterms only in the Euler equation when one is interested in correlators of δ is indeed valid to all orders.

  5. DEMNUni: massive neutrinos and the bispectrum of large scale structures

    Science.gov (United States)

    Ruggeri, Rossana; Castorina, Emanuele; Carbone, Carmelita; Sefusatti, Emiliano

    2018-03-01

    The main effect of massive neutrinos on the large-scale structure consists in a few percent suppression of matter perturbations on all scales below their free-streaming scale. Such an effect is of particular importance as it allows one to constrain the value of the sum of neutrino masses from measurements of the galaxy power spectrum. In this work, we present the first measurements of the next higher-order correlation function, the bispectrum, from N-body simulations that include massive neutrinos as particles. This is the simplest statistic characterising the non-Gaussian properties of the matter and dark matter halo distributions. We investigate, in the first place, the suppression due to massive neutrinos on the matter bispectrum, comparing our measurements with the simplest perturbation theory predictions, finding the approximation of neutrinos contributing at quadratic order in perturbation theory to provide a good fit to the measurements in the simulations. On the other hand, as expected, a linear approximation for neutrino perturbations would lead to O(f_ν) errors on the total matter bispectrum at large scales. We then attempt an extension of previous results on the universality of linear halo bias in neutrino cosmologies to non-linear and non-local corrections, finding results consistent with the power spectrum analysis.
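
    A common rule of thumb behind such suppression statements (quoted here as background, not a result of the DEMNUni analysis itself; the cosmological numbers below are illustrative assumptions) relates the neutrino mass fraction to the linear-theory power suppression:

        # Neutrino density from the sum of masses, Omega_nu h^2 = sum(m_nu)/93.14 eV,
        # and the approximate linear-theory suppression Delta P / P ~ -8 f_nu.
        def neutrino_fraction(sum_mnu_eV, omega_m, h):
            omega_nu = sum_mnu_eV / (93.14 * h**2)
            return omega_nu / omega_m

        def linear_power_suppression(sum_mnu_eV=0.06, omega_m=0.31, h=0.67):
            return -8.0 * neutrino_fraction(sum_mnu_eV, omega_m, h)

        print(linear_power_suppression())   # roughly -4% for the minimal mass sum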

  6. Divergence of perturbation theory in large scale structures

    Science.gov (United States)

    Pajer, Enrico; van der Woude, Drian

    2018-05-01

    We make progress towards an analytical understanding of the regime of validity of perturbation theory for large scale structures and the nature of some non-perturbative corrections. We restrict ourselves to 1D gravitational collapse, for which exact solutions before shell crossing are known. We review the convergence of perturbation theory for the power spectrum, recently proven by McQuinn and White [1], and extend it to non-Gaussian initial conditions and the bispectrum. In contrast, we prove that perturbation theory diverges for the real space two-point correlation function and for the probability density function (PDF) of the density averaged in cells and all the cumulants derived from it. We attribute these divergences to the statistical averaging intrinsic to cosmological observables, which, even on very large and "perturbative" scales, gives non-vanishing weight to all extreme fluctuations. Finally, we discuss some general properties of non-perturbative effects in real space and Fourier space.

  7. Towards a 'standard model' of large scale structure formation

    International Nuclear Information System (INIS)

    Shafi, Q.

    1994-01-01

    We explore constraints on inflationary models employing data on large scale structure mainly from COBE temperature anisotropies and IRAS selected galaxy surveys. In models where the tensor contribution to the COBE signal is negligible, we find that the spectral index of density fluctuations n must exceed 0.7. Furthermore the COBE signal cannot be dominated by the tensor component, implying n > 0.85 in such models. The data favors cold plus hot dark matter models with n equal or close to unity and Ω_HDM ∼ 0.2-0.35. Realistic grand unified theories, including supersymmetric versions, which produce inflation with these properties are presented. (author). 46 refs, 8 figs

  8. Cosmological perturbations from quantum fluctuations to large scale structure

    International Nuclear Information System (INIS)

    Bardeen, J.M.

    1988-01-01

    Classical perturbation theory is developed from the 3 + 1 form of the Einstein equations. A somewhat unusual form of the perturbation equations in the synchronous gauge is recommended for carrying out computations, but interpretation is based on certain hypersurface-invariant combinations of the variables. The formalism is used to analyze the origin of density perturbations from quantum fluctuations during inflation, with particular emphasis on dealing with 'double inflation' and deviations from the Zel'dovich spectrum. The evolution of the density perturbation to the present gives the final density perturbation power spectrum, whose relationship to observed large scale structure is discussed in the context of simple cold-dark-matter biasing schemes. 86 refs

  9. Large-Scale Spray Releases: Initial Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and

  10. Large-Scale Structure Behind The Milky Way with ALFAZOA

    Science.gov (United States)

    Sanchez Barrantes, Monica; Henning, Patricia A.; Momjian, Emmanuel; McIntyre, Travis; Minchin, Robert F.

    2018-06-01

    The region of the sky behind the Milky Way (the Zone of Avoidance; ZOA) is not well studied due to high obscuration from gas and dust in our galaxy as well as stellar confusion, which results in a low detection rate of galaxies in this region. Because of this, little is known about the distribution of galaxies in the ZOA, and other all-sky redshift surveys have incomplete maps (e.g. the 2MASS Redshift Survey in the NIR has a gap of 5-8 deg around the Galactic plane). There is still controversy about the dipole anisotropy calculated from the comparison between the CMB and galaxy and redshift surveys, in part due to the incomplete sky mapping and redshift depth of these surveys. Fortunately, there is no ZOA at radio wavelengths because such wavelengths can pass unimpeded through dust and are not affected by stellar confusion. Therefore, we can detect and make a map of the distribution of obscured galaxies that contain the 21 cm neutral hydrogen emission line, and trace the large-scale structure across the Galactic plane. The Arecibo L-Band Feed Array Zone of Avoidance (ALFAZOA) survey is a blind HI survey for galaxies behind the Milky Way that covers more than 1000 square degrees of the sky, conducted in two phases: shallow (completed) and deep (ongoing). We show the results of the finished shallow phase of the survey, which mapped a region between galactic longitude l = 30-75 deg and latitude |b| < 10 deg, and detected 418 galaxies out to about 12,000 km/s; galaxy properties and the mapped large-scale structure are presented. We do the same for new results from the deep phase, which is ongoing and covers 30 < l < 75 deg and |b| < 2 deg for the inner galaxy, and 175 < l < 207 deg, with -2 < b < 1 deg, for the outer galaxy.
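
    For reference, the conversion from an observed 21 cm line frequency to the recessional velocities quoted above is straightforward; the helper below is generic and not part of the ALFAZOA pipeline.

        # Convert an observed HI 21 cm frequency (MHz) to redshift and to the
        # optical-convention recessional velocity cz in km/s.
        C_KM_S = 299792.458
        NU_HI_MHZ = 1420.405751768

        def hi_cz(nu_obs_MHz):
            z = NU_HI_MHZ / nu_obs_MHz - 1.0
            return C_KM_S * z

        print(hi_cz(1365.0))   # about 12,000 km/s, the depth of the shallow phase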

  11. Properties Important To Mixing For WTP Large Scale Integrated Testing

    International Nuclear Information System (INIS)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-01-01

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i

  12. PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING

    Energy Technology Data Exchange (ETDEWEB)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-04-26

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i

  13. Large scale structures in liquid crystal/clay colloids

    Science.gov (United States)

    van Duijneveldt, Jeroen S.; Klein, Susanne; Leach, Edward; Pizzey, Claire; Richardson, Robert M.

    2005-04-01

    Suspensions of three different clays in K15, a thermotropic liquid crystal, have been studied by optical microscopy and small angle x-ray scattering. The three clays were claytone AF, a surface treated natural montmorillonite, laponite RD, a synthetic hectorite, and mined sepiolite. The claytone and laponite were sterically stabilized whereas sepiolite formed a relatively stable suspension in K15 without any surface treatment. Micrographs of the different suspensions revealed that all three suspensions contained large scale structures. The nature of these aggregates was investigated using small angle x-ray scattering. For the clays with sheet-like particles, claytone and laponite, the flocs contain a mixture of stacked and single platelets. The basal spacing in the stacks was independent of particle concentration in the suspension and the phase of the solvent. The number of platelets in the stack and their percentage in the suspension varied with concentration and the aspect ratio of the platelets. The lath shaped sepiolite did not show any tendency to organize into ordered structures. Here the aggregates are networks of randomly oriented single rods.

  14. Large scale structures in liquid crystal/clay colloids

    International Nuclear Information System (INIS)

    Duijneveldt, Jeroen S van; Klein, Susanne; Leach, Edward; Pizzey, Claire; Richardson, Robert M

    2005-01-01

    Suspensions of three different clays in K15, a thermotropic liquid crystal, have been studied by optical microscopy and small angle x-ray scattering. The three clays were claytone AF, a surface treated natural montmorillonite, laponite RD, a synthetic hectorite, and mined sepiolite. The claytone and laponite were sterically stabilized whereas sepiolite formed a relatively stable suspension in K15 without any surface treatment. Micrographs of the different suspensions revealed that all three suspensions contained large scale structures. The nature of these aggregates was investigated using small angle x-ray scattering. For the clays with sheet-like particles, claytone and laponite, the flocs contain a mixture of stacked and single platelets. The basal spacing in the stacks was independent of particle concentration in the suspension and the phase of the solvent. The number of platelets in the stack and their percentage in the suspension varied with concentration and the aspect ratio of the platelets. The lath shaped sepiolite did not show any tendency to organize into ordered structures. Here the aggregates are networks of randomly oriented single rods

  15. Origin of the large scale structures of the universe

    International Nuclear Information System (INIS)

    Oaknin, David H.

    2004-01-01

    We revise the statistical properties of the primordial cosmological density anisotropies that, at the time of matter-radiation equality, seeded the gravitational development of large scale structures in the otherwise homogeneous and isotropic Friedmann-Robertson-Walker flat universe. Our analysis shows that random fluctuations of the density field at the same instant of equality and with comoving wavelength shorter than the causal horizon at that time can naturally account, when globally constrained to conserve the total mass (energy) of the system, for the observed scale invariance of the anisotropies over cosmologically large comoving volumes. Statistical systems with similar features are generically known as glasslike or latticelike. Obviously, these conclusions conflict with the widely accepted understanding of the primordial structures reported in the literature, which requires an epoch of inflationary cosmology to precede the standard expansion of the universe. The origin of the conflict must be found in the widespread, but unjustified, claim that scale invariant mass (energy) anisotropies at the instant of equality over comoving volumes of cosmological size, larger than the causal horizon at the time, must be generated by fluctuations in the density field with comparably large comoving wavelength

  16. Reconstructing Information in Large-Scale Structure via Logarithmic Mapping

    Science.gov (United States)

    Szapudi, Istvan

    We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking the inverse error bar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear Dark Matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear, or non-linear, deterministic, or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out
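
    A minimal sketch of the kind of operation described above: a pointwise logarithmic mapping of an overdensity field followed by an ordinary two-point (power spectrum) measurement. The lognormal mock field, the grid size, and the crude spherically averaged estimator below are illustrative assumptions only, not part of the proposal.

    import numpy as np

    def log_transform(delta):
        """Pointwise logarithmic mapping of an overdensity field delta = rho/rho_bar - 1,
        re-centred to zero mean before measuring two-point statistics."""
        mapped = np.log(1.0 + delta)          # requires delta > -1 everywhere
        return mapped - mapped.mean()

    def power_spectrum(field, box_size, n_bins=20):
        """Crude spherically averaged power spectrum of a periodic 3D field."""
        n = field.shape[0]
        fk = np.fft.rfftn(field) * (box_size / n) ** 3
        pk3d = np.abs(fk) ** 2 / box_size ** 3
        kx = np.fft.fftfreq(n, d=box_size / n) * 2 * np.pi
        kz = np.fft.rfftfreq(n, d=box_size / n) * 2 * np.pi
        kmag = np.sqrt(kx[:, None, None] ** 2 + kx[None, :, None] ** 2 + kz[None, None, :] ** 2)
        edges = np.linspace(0.0, kmag.max(), n_bins + 1)
        idx = np.digitize(kmag.ravel(), edges)
        pk = np.array([pk3d.ravel()[idx == i].mean() if np.any(idx == i) else 0.0
                       for i in range(1, n_bins + 1)])
        return 0.5 * (edges[1:] + edges[:-1]), pk

    # Toy usage with a lognormal mock field on a 64^3 grid.
    rng = np.random.default_rng(0)
    gauss = rng.normal(0.0, 0.5, (64, 64, 64))
    delta = np.exp(gauss - 0.125) - 1.0       # lognormal overdensity with mean close to zero
    k, pk_raw = power_spectrum(delta, box_size=100.0)
    k, pk_log = power_spectrum(log_transform(delta), box_size=100.0)

    Comparing pk_raw and pk_log (and their covariances over many mock realizations) is the kind of exercise the proposal describes; the information-content analysis itself requires the full Fisher machinery, which is beyond this sketch.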

  17. Alignment between galaxies and large-scale structure

    International Nuclear Information System (INIS)

    Faltenbacher, A.; Li Cheng; White, Simon D. M.; Jing, Yi-Peng; Mao Shude; Wang Jie

    2009-01-01

    Based on the Sloan Digital Sky Survey DR6 (SDSS) and the Millennium Simulation (MS), we investigate the alignment between galaxies and large-scale structure. For this purpose, we develop two new statistical tools, namely the alignment correlation function and the cos(2θ)-statistic. The former is a two-dimensional extension of the traditional two-point correlation function and the latter is related to the ellipticity correlation function used for cosmic shear measurements. Both are based on the cross correlation between a sample of galaxies with orientations and a reference sample which represents the large-scale structure. We apply the new statistics to the SDSS galaxy catalog. The alignment correlation function reveals an overabundance of reference galaxies along the major axes of red, luminous (L ∼ L*) galaxies out to projected separations of 60 h⁻¹ Mpc. The signal increases with central galaxy luminosity. No alignment signal is detected for blue galaxies. The cos(2θ)-statistic yields very similar results. Starting from a MS semi-analytic galaxy catalog, we assign an orientation to each red, luminous and central galaxy, based on that of the central region of the host halo (with size similar to that of the stellar galaxy). As an alternative, we use the orientation of the host halo itself. We find a mean projected misalignment between a halo and its central region of ∼ 25 deg. The misalignment decreases slightly with increasing luminosity of the central galaxy. Using the orientations and luminosities of the semi-analytic galaxies, we repeat our alignment analysis on mock surveys of the MS. Agreement with the SDSS results is good if the central orientations are used. Predictions using the halo orientations as proxies for central galaxy orientations overestimate the observed alignment by more than a factor of 2. Finally, the large volume of the MS allows us to generate a two-dimensional map of the alignment correlation function, which shows the reference
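
    As a rough illustration of the cos(2θ)-type statistic described above, the sketch below averages cos(2θ) over pairs formed by galaxies with measured position angles and a reference sample, for a single bin of projected separation. The flat-sky geometry, the array layout, and the absence of any pair weighting or estimator corrections are simplifying assumptions, not the estimator used in the paper.

    import numpy as np

    def cos2theta_statistic(pos_o, phi_o, pos_r, r_min, r_max):
        """Mean cos(2*theta) between the major-axis direction of 'oriented' galaxies
        and the direction towards reference galaxies, for projected pair
        separations in [r_min, r_max).

        pos_o : (N, 2) projected positions of galaxies with measured orientations
        phi_o : (N,)   position angles of their major axes, in radians
        pos_r : (M, 2) projected positions of the reference sample
        """
        values = []
        for (x, y), phi in zip(pos_o, phi_o):
            dx, dy = pos_r[:, 0] - x, pos_r[:, 1] - y
            r = np.hypot(dx, dy)
            sel = (r >= r_min) & (r < r_max)
            theta = np.arctan2(dy[sel], dx[sel]) - phi   # pair direction relative to major axis
            values.append(np.cos(2.0 * theta))
        return np.concatenate(values).mean()

    The statistic vanishes for an isotropic reference distribution and is positive when reference galaxies pile up along the major axes of the oriented sample, which is the sense of the alignment signal reported above for red, luminous galaxies.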

  18. Comparison of vibration test results for Atucha II NPP and large scale concrete block models

    International Nuclear Information System (INIS)

    Iizuka, S.; Konno, T.; Prato, C.A.

    2001-01-01

    In order to study the soil-structure interaction of a reactor building that could be constructed on Quaternary soil, the soil-structure interaction springs derived from full-scale vibration tests of the Atucha II NPP were compared with those derived from vibration tests of large-scale concrete block models constructed on Quaternary soil. This comparison provides case data on soil-structure interaction springs on Quaternary soil for different foundation sizes and stiffnesses. (author)

  19. Characterizing unknown systematics in large scale structure surveys

    International Nuclear Information System (INIS)

    Agarwal, Nishant; Ho, Shirley; Myers, Adam D.; Seo, Hee-Jong; Ross, Ashley J.; Bahcall, Neta; Brinkmann, Jonathan; Eisenstein, Daniel J.; Muna, Demitri; Palanque-Delabrouille, Nathalie; Yèche, Christophe; Pâris, Isabelle; Petitjean, Patrick; Schneider, Donald P.; Streblyanska, Alina; Weaver, Benjamin A.

    2014-01-01

    Photometric large scale structure (LSS) surveys probe the largest volumes in the Universe, but are inevitably limited by systematic uncertainties. Imperfect photometric calibration leads to biases in our measurements of the density fields of LSS tracers such as galaxies and quasars, and as a result in cosmological parameter estimation. Earlier studies have proposed using cross-correlations between different redshift slices or cross-correlations between different surveys to reduce the effects of such systematics. In this paper we develop a method to characterize unknown systematics. We demonstrate that while we do not have sufficient information to correct for unknown systematics in the data, we can obtain an estimate of their magnitude. We define a parameter to estimate contamination from unknown systematics using cross-correlations between different redshift slices and propose discarding bins in the angular power spectrum that lie outside a certain contamination tolerance level. We show that this method improves estimates of the bias using simulated data and further apply it to photometric luminous red galaxies in the Sloan Digital Sky Survey as a case study
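
    A schematic of the cross-correlation idea described above (not the specific contamination parameter defined in the paper): for two widely separated redshift slices whose true densities should be essentially uncorrelated, any measured cross power is attributed to shared systematics and used to flag angular power spectrum bins. The normalization by the auto spectra and the tolerance value are illustrative choices.

    import numpy as np

    def contamination_level(cl_cross, cl_auto_i, cl_auto_j):
        """Per-bin contamination indicator: measured cross power between two widely
        separated redshift slices, expressed as a fraction of the geometric mean
        of their auto power spectra."""
        return np.abs(cl_cross) / np.sqrt(cl_auto_i * cl_auto_j)

    def bins_to_keep(cl_cross, cl_auto_i, cl_auto_j, tolerance=0.1):
        """Boolean mask of angular power spectrum bins within the chosen
        contamination tolerance; the remaining bins would be discarded."""
        return contamination_level(cl_cross, cl_auto_i, cl_auto_j) < tolerance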

  20. Inflation and large scale structure formation after COBE

    International Nuclear Information System (INIS)

    Schaefer, R.K.; Shafi, Q.

    1992-06-01

    The simplest realizations of the new inflationary scenario typically give rise to primordial density fluctuations which deviate logarithmically from the scale free Harrison-Zeldovich spectrum. We consider a number of such examples and, in each case we normalize the amplitude of the fluctuations with the recent COBE measurement of the microwave background anisotropy. The predictions for the bulk velocities as well as anisotropies on smaller (1-2 degrees) angular scales are compared with the Harrison-Zeldovich case. Deviations from the latter range from a few to about 15 percent. We also estimate the redshift beyond which the quasars would not be expected to be seen. The inflationary quasar cutoff redshifts can vary by as much as 25% from the Harrison-Zeldovich case. We find that the inflationary scenario provides a good starting point for a theory of large scale structure in the universe provided the dark matter is a combination of cold plus (10-30%) hot components. (author). 27 refs, 1 fig., 1 tab

  1. EFT of large scale structures in redshift space

    Science.gov (United States)

    Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; Zhao, Cheng; Chuang, Chia-Hsun

    2018-03-01

    We further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach—or, equivalently, the precision for a given k—depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ∼ 0.13 h Mpc⁻¹ or k ∼ 0.18 h Mpc⁻¹, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.

  2. Characterizing unknown systematics in large scale structure surveys

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Nishant; Ho, Shirley [McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213 (United States); Myers, Adam D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Seo, Hee-Jong [Berkeley Center for Cosmological Physics, LBL and Department of Physics, University of California, Berkeley, CA 94720 (United States); Ross, Ashley J. [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth, PO1 3FX (United Kingdom); Bahcall, Neta [Princeton University Observatory, Peyton Hall, Princeton, NJ 08544 (United States); Brinkmann, Jonathan [Apache Point Observatory, P.O. Box 59, Sunspot, NM 88349 (United States); Eisenstein, Daniel J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Muna, Demitri [Department of Astronomy, Ohio State University, Columbus, OH 43210 (United States); Palanque-Delabrouille, Nathalie; Yèche, Christophe [CEA, Centre de Saclay, Irfu/SPP, F-91191 Gif-sur-Yvette (France); Pâris, Isabelle [Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago (Chile); Petitjean, Patrick [Université Paris 6 et CNRS, Institut d' Astrophysique de Paris, 98bis blvd. Arago, 75014 Paris (France); Schneider, Donald P. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Streblyanska, Alina [Instituto de Astrofisica de Canarias (IAC), E-38200 La Laguna, Tenerife (Spain); Weaver, Benjamin A., E-mail: nishanta@andrew.cmu.edu [Center for Cosmology and Particle Physics, New York University, New York, NY 10003 (United States)

    2014-04-01

    Photometric large scale structure (LSS) surveys probe the largest volumes in the Universe, but are inevitably limited by systematic uncertainties. Imperfect photometric calibration leads to biases in our measurements of the density fields of LSS tracers such as galaxies and quasars, and as a result in cosmological parameter estimation. Earlier studies have proposed using cross-correlations between different redshift slices or cross-correlations between different surveys to reduce the effects of such systematics. In this paper we develop a method to characterize unknown systematics. We demonstrate that while we do not have sufficient information to correct for unknown systematics in the data, we can obtain an estimate of their magnitude. We define a parameter to estimate contamination from unknown systematics using cross-correlations between different redshift slices and propose discarding bins in the angular power spectrum that lie outside a certain contamination tolerance level. We show that this method improves estimates of the bias using simulated data and further apply it to photometric luminous red galaxies in the Sloan Digital Sky Survey as a case study.

  3. Soft-Pion theorems for large scale structure

    International Nuclear Information System (INIS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2014-01-01

    Consistency relations — which relate an N-point function to a squeezed (N+1)-point function — are useful in large scale structure (LSS) because of their non-perturbative nature: they hold even if the N-point function is deep in the nonlinear regime, and even if they involve astrophysically messy galaxy observables. The non-perturbative nature of the consistency relations is guaranteed by the fact that they are symmetry statements, in which the velocity plays the role of the soft pion. In this paper, we address two issues: (1) how to derive the relations systematically using the residual coordinate freedom in the Newtonian gauge, and relate them to known results in ζ-gauge (often used in studies of inflation); (2) under what conditions the consistency relations are violated. In the non-relativistic limit, our derivation reproduces the Newtonian consistency relation discovered by Kehagias and Riotto and Peloso and Pietroni. More generally, there is an infinite set of consistency relations, as is known in ζ-gauge. There is a one-to-one correspondence between symmetries in the two gauges; in particular, the Newtonian consistency relation follows from the dilation and special conformal symmetries in ζ-gauge. We probe the robustness of the consistency relations by studying models of galaxy dynamics and biasing. We give a systematic list of conditions under which the consistency relations are violated; violations occur if the galaxy bias is non-local in an infrared divergent way. We emphasize the relevance of the adiabatic mode condition, as distinct from symmetry considerations. As a by-product of our investigation, we discuss a simple fluid Lagrangian for LSS

  4. A testing facility for large scale models at 100 bar and 300°C to 1000°C

    International Nuclear Information System (INIS)

    Zemann, H.

    1978-07-01

    A testing facility for large scale model tests is under construction with the support of Austrian industry. It will contain a Prestressed Concrete Pressure Vessel (PCPV) with a hot liner (300 °C at 100 bar), an electrical heating system (1.2 MW, 1000 °C), a gas supply system, and a cooling system for the testing space. The components themselves are models for advanced high temperature applications. The first main component tested successfully was the PCPV. Basic investigations of the building materials, improvements of concrete gauges, large scale model tests, and measurements within the structural concrete and on the liner have been made from the beginning of construction through the period of prestressing, the period of stabilization and the final pressurizing tests. On the basis of these investigations a computer controlled safety surveillance system for long term high pressure, high temperature tests has been developed. (author)

  5. Large scale vibration tests on pile-group effects using blast-induced ground motion

    International Nuclear Information System (INIS)

    Katsuichirou Hijikata; Hideo Tanaka; Takayuki Hashimoto; Kazushige Fujiwara; Yuji Miyamoto; Osamu Kontani

    2005-01-01

    Extensive vibration tests have been performed on pile-supported structures at a large-scale mining site. Ground motions induced by large-scale blasting operations were used as excitation forces for vibration tests. The main objective of this research is to investigate the dynamic behavior of pile-supported structures, in particular, pile-group effects. Two test structures were constructed in an excavated 4 m deep pit. Their superstructures were exactly the same. One structure had 25 steel piles and the other had 4 piles. The test pit was backfilled with sand of appropriate grain size distributions to obtain good compaction, especially between the 25 piles. Accelerations were measured at the structures, in the test pit and in the adjacent free field, and pile strains were measured. Dynamic modal tests of the pile-supported structures and PS measurements of the test pit were performed before and after the vibration tests to detect changes in the natural frequencies of the soil-pile-structure systems and the soil stiffness. The vibration tests were performed six times with different levels of input motions. The maximum horizontal acceleration recorded at the adjacent ground surface varied from 57 cm/s² to 1,683 cm/s² according to the distances between the test site and the blast areas. (authors)

  6. Measuring Cosmic Expansion and Large Scale Structure with Destiny

    Science.gov (United States)

    Benford, Dominic J.; Lauer, Tod R.

    2007-01-01

    Destiny is a simple, direct, low cost mission to determine the properties of dark energy by obtaining a cosmologically deep supernova (SN) type Ia Hubble diagram and by measuring the large-scale mass power spectrum over time. Its science instrument is a 1.65m space telescope, featuring a near-infrared survey camera/spectrometer with a large field of view. During its first two years, Destiny will detect, observe, and characterize 23000 SN Ia events over the redshift interval 0.4 < z < 1.7. Destiny will be used in its third year as a high resolution, wide-field imager to conduct a weak lensing survey covering >1000 square degrees to measure the large-scale mass power spectrum. The combination of surveys is much more powerful than either technique on its own, and will have over an order of magnitude greater sensitivity than will be provided by ongoing ground-based projects.

  7. Vibration tests on pile-group foundations using large-scale blast excitation

    International Nuclear Information System (INIS)

    Tanaka, Hideo; Hijikata, Katsuichirou; Hashimoto, Takayuki; Fujiwara, Kazushige; Kontani, Osamu; Miyamoto, Yuji; Suzuki, Atsushi

    2005-01-01

    Extensive vibration tests have been performed on pile-supported structures at a large-scale mining site. Ground motions induced by large-scale blasting operations were used as excitation forces for vibration tests. The main objective of this research is to investigate the dynamic behavior of pile-supported structures, in particular, pile-group effects. Two test structures were constructed in an excavated 4 m deep pit. One structure had 25 steel tubular piles and the other had 4 piles. The super-structures were exactly the same. The test pit was backfilled with sand of appropriate grain size distributions in order to obtain good compaction, especially between the 25 piles. Accelerations were measured at the structures, in the test pit and in the adjacent free field, and pile strains were measured. The vibration tests were performed six times with different levels of input motions. The maximum horizontal acceleration recorded at the adjacent ground surface varied from 57 cm/s² to 1683 cm/s² according to the distances between the test site and the blast areas. Maximum strains of 13,400 micro-strain were recorded at the pile top of the 4-pile structure, which means that these piles were subjected to yielding.

  8. Isolating relativistic effects in large-scale structure

    Science.gov (United States)

    Bonvin, Camille

    2014-12-01

    We present a fully relativistic calculation of the observed galaxy number counts in the linear regime. We show that besides the density fluctuations and redshift-space distortions, various relativistic effects contribute to observations at large scales. These effects all have the same physical origin: they result from the fact that our coordinate system, namely the galaxy redshift and the incoming photons’ direction, is distorted by inhomogeneities in our Universe. We then discuss the impact of the relativistic effects on the angular power spectrum and on the two-point correlation function in configuration space. We show that the latter is very well adapted to isolate the relativistic effects since it naturally makes use of the symmetries of the different contributions. In particular, we discuss how the Doppler effect and the gravitational redshift distortions can be isolated by looking for a dipole in the cross-correlation function between a bright and a faint population of galaxies.

  9. Vibration phenomena in large scale pressure suppression tests

    International Nuclear Information System (INIS)

    Aust, E.; Boettcher, G.; Kolb, M.; Sattler, P.; Vollbrandt, J.

    1982-01-01

    Structure and fluid vibration phenomena (acceleration, strain; pressure, level) were observed during blow-down experiments simulating a LOCA in the GKSS full scale multivent pressure suppression test facility. The paper first describes the source-related excitations during the two regimes of condensation oscillation and chugging, and then deals with the response vibrations of the facility's wetwell. Modal analyses of the wetwell were run using excitation by hammer and by shaker in order to separate phenomena that are particular to the GKSS facility from more general ones, i.e. phenomena specific only to the fluid-related parameters of blowdown and to the geometry of the vent pipes. The lowest periodicities, at about 12 and 16 Hz, stem from the vent acoustics. A frequency of about 36 to 38 Hz, prominent during chugging, seems to result from the lowest local modes of two of the wetwell's walls when coupled by the wetwell pool. Further peaks found during blowdown in the spectra of signals at higher frequencies correspond to global vibration modes of the wetwell. (orig.)

  10. Test on large-scale seismic isolation elements

    International Nuclear Information System (INIS)

    Mazda, T.; Shiojiri, H.; Oka, Y.; Fujita, T.; Seki, M.

    1989-01-01

    Demonstration testing of seismic isolation elements is considered one of the most important items in the application of seismic isolation systems to fast breeder reactor (FBR) plants. Facilities for testing seismic isolation elements have been built. This paper reports on tests conducted on a full-scale laminated rubber bearing and on reduced-scale models. The test results show that the laminated rubber bearings satisfy the specification. Their basic characteristics are confirmed by the tests with full-scale and reduced-scale models, and the ultimate capacity of the bearings at ordinary temperature is evaluated.

  11. Hierarchical Cantor set in the large scale structure with torus geometry

    Energy Technology Data Exchange (ETDEWEB)

    Murdzek, R. [Physics Department, ' Al. I. Cuza' University, Blvd. Carol I, Nr. 11, Iassy 700506 (Romania)], E-mail: rmurdzek@yahoo.com

    2008-12-15

    The formation of large scale structures is considered within a model with a string on a toroidal space-time. First, the space-time geometry is presented. In this geometry, the Universe is represented by a string describing a torus surface. The large scale structure of the Universe is then derived from the string oscillations. The results are in agreement with the cellular structure of the large scale distribution and with the theory of a Cantorian space-time.

  12. The three-point function as a probe of models for large-scale structure

    International Nuclear Information System (INIS)

    Frieman, J.A.; Gaztanaga, E.

    1993-01-01

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ∼ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  13. A large scale test of the gaming-enhancement hypothesis

    Directory of Open Access Journals (Sweden)

    Andrew K. Przybylski

    2016-11-01

    Full Text Available A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people’s gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses were compared. Results provided no substantive evidence supporting the idea that having preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support for the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.

  14. A large scale test of the gaming-enhancement hypothesis.

    Science.gov (United States)

    Przybylski, Andrew K; Wang, John C

    2016-01-01

    A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis , has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses were compared. Results provided no substantive evidence supporting the idea that having preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support for the null hypothesis over what was predicted. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.

  15. Goethite Bench-scale and Large-scale Preparation Tests

    Energy Technology Data Exchange (ETDEWEB)

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate (⁹⁹TcO₄⁻) can be reduced and captured into a solid solution of α-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for ⁹⁹Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO₄⁻) to Tc(IV) by reaction with the

  16. Testing of valves and associated systems in large scale experiments

    International Nuclear Information System (INIS)

    Becker, M.

    1985-01-01

    The system examples dealt with are selected so that they cover a wide spectrum of technical tasks and limits. The flowing medium therefore varies from pure steam flow, via a mixed flow of steam and water, to pure water flow. The valves concerned include those whose main function is opening and also those whose main function is secure closing. There is a certain limitation in that the examples are taken from Boiling Water Reactor technology. The main procedure of valve and system testing described is, of course, not limited to the selected examples, but applies generally in power station and process technology. (orig./HAG)

  17. Signatures of non-universal large scales in conditional structure functions from various turbulent flows

    International Nuclear Information System (INIS)

    Blum, Daniel B; Voth, Greg A; Bewley, Gregory P; Bodenschatz, Eberhard; Gibert, Mathieu; Xu Haitao; Gylfason, Ármann; Mydlarski, Laurent; Yeung, P K

    2011-01-01

    We present a systematic comparison of conditional structure functions in nine turbulent flows. The flows studied include forced isotropic turbulence simulated on a periodic domain, passive grid wind tunnel turbulence in air and in pressurized SF₆, active grid wind tunnel turbulence (in both synchronous and random driving modes), the flow between counter-rotating discs, oscillating grid turbulence and the flow in the Lagrangian exploration module (in both constant and random driving modes). We compare longitudinal Eulerian second-order structure functions conditioned on the instantaneous large-scale velocity in each flow to assess the ways in which the large scales affect the small scales in a variety of turbulent flows. Structure functions are shown to have larger values when the large-scale velocity significantly deviates from the mean in most flows, suggesting that dependence on the large scales is typical in many turbulent flows. The effects of the large-scale velocity on the structure functions can be quite strong, with the structure function varying by up to a factor of 2 when the large-scale velocity deviates from the mean by ±2 standard deviations. In several flows, the effects of the large-scale velocity are similar at all the length scales we measured, indicating that the large-scale effects are scale independent. In a few flows, the effects of the large-scale velocity are larger on the smallest length scales. (paper)
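
    The conditional structure functions discussed above can be illustrated with a one-dimensional sketch. Here a single velocity record stands in for the longitudinal velocity, and the mean of the two sampled velocities is used as a simple proxy for the instantaneous large-scale velocity; the actual conditioning variable, binning, and normalization used in the study may differ.

    import numpy as np

    def conditional_structure_function(u, r, n_bins=5):
        """Second-order structure function of a 1D velocity record u at separation
        r (in samples), conditioned on a proxy for the large-scale velocity
        (here simply the mean of the two sampled velocities).

        Returns the bin centres of the conditioning variable, in units of its
        standard deviation, and the conditional structure function in each bin.
        """
        du = u[r:] - u[:-r]                    # velocity increments at separation r
        ul = 0.5 * (u[r:] + u[:-r])            # large-scale velocity proxy
        ul = (ul - ul.mean()) / ul.std()       # express in standard deviations
        edges = np.linspace(-2.5, 2.5, n_bins + 1)
        centres = 0.5 * (edges[1:] + edges[:-1])
        d2 = np.full(n_bins, np.nan)
        for i in range(n_bins):
            sel = (ul >= edges[i]) & (ul < edges[i + 1])
            if sel.any():
                d2[i] = np.mean(du[sel] ** 2)
        return centres, d2

    # Example with a white-noise surrogate signal and a separation of 10 samples.
    rng = np.random.default_rng(1)
    u = rng.normal(size=100000)
    centres, d2 = conditional_structure_function(u, r=10)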

  18. Large-scale structures in turbulent Couette flow

    Science.gov (United States)

    Kim, Jung Hoon; Lee, Jae Hwa

    2016-11-01

    Direct numerical simulation of fully developed turbulent Couette flow is performed with a large computational domain in the streamwise and spanwise directions (40 πh and 6 πh) to investigate streamwise-scale growth mechanism of the streamwise velocity fluctuating structures in the core region, where h is the channel half height. It is shown that long streamwise-scale structures (> 3 h) are highly energetic and they contribute to more than 80% of the turbulent kinetic energy and Reynolds shear stress, compared to previous studies in canonical Poiseuille flows. Instantaneous and statistical analysis show that negative-u' structures on the bottom wall in the Couette flow continuously grow in the streamwise direction due to mean shear, and they penetrate to the opposite moving wall. The geometric center of the log layer is observed in the centerline with a dominant outer peak in streamwise spectrum, and the maximum streamwise extent for structure is found in the centerline, similar to previous observation in turbulent Poiseuille flows at high Reynolds number. Further inspection of time-evolving instantaneous fields clearly exhibits that adjacent long structures combine to form a longer structure in the centerline. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2014R1A1A2057031).

  19. An Evaluation Framework for Large-Scale Network Structures

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas Phillip; Madsen, Ole Brun

    2004-01-01

    structure is a matter of trade-offs between different desired properties, and given a specific case with specific known or expected demands and constraints, the parameters presented will be weighted differently. The decision of such a weighting is supported by a discussion of each parameter. The paper...

  20. LARGE-SCALE FILAMENTARY STRUCTURES AROUND THE VIRGO CLUSTER REVISITED

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Suk; Rey, Soo-Chang; Lee, Youngdae; Lee, Woong; Chung, Jiwon [Department of Astronomy and Space Science, Chungnam National University, 99 Daehak-ro, Daejeon 305-764 (Korea, Republic of); Bureau, Martin [Sub-department of Astrophysics, Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Yoon, Hyein; Chung, Aeree [Department of Astronomy and Yonsei University Observatory, Yonsei University, Seoul 120-749 (Korea, Republic of); Jerjen, Helmut [Research School of Astronomy and Astrophysics, The Australian National University, Cotter Road, Weston, ACT 2611 (Australia); Lisker, Thorsten [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg (ZAH), Mönchhofstraße 12-14, D-69120 Heidelberg (Germany); Jeong, Hyunjin; Sung, Eon-Chang, E-mail: screy@cnu.ac.kr, E-mail: star4citizen@kasi.re.kr [Korea Astronomy and Space Science institute, 776 Daedeokdae-ro, Daejeon 305-348 (Korea, Republic of)

    2016-12-20

    We revisit the filamentary structures of galaxies around the Virgo cluster, exploiting a larger data set, based on the HyperLeda database, than previous studies. In particular, this includes a large number of low-luminosity galaxies, resulting in better sampled individual structures. We confirm seven known structures in the distance range 4 h⁻¹ Mpc < SGY < 16 h⁻¹ Mpc, now identified as filaments, where SGY is the axis of the supergalactic coordinate system roughly along the line of sight. The Hubble diagram of the filament galaxies suggests they are infalling toward the main body of the Virgo cluster. We propose that the collinear distribution of giant elliptical galaxies along the fundamental axis of the Virgo cluster is smoothly connected to two of these filaments (Leo II A and B). Behind the Virgo cluster (16 h⁻¹ Mpc < SGY < 27 h⁻¹ Mpc), we also identify a new filament elongated toward the NGC 5353/4 group (“NGC 5353/4 filament”) and confirm a sheet that includes galaxies from the W and M clouds of the Virgo cluster (“W–M sheet”). In the Hubble diagram, the NGC 5353/4 filament galaxies show infall toward the NGC 5353/4 group, whereas the W–M sheet galaxies do not show hints of gravitational influence from the Virgo cluster. The filamentary structures identified can now be used to better understand the generic role of filaments in the build-up of galaxy clusters at z ≈ 0.

  1. Electronic Structure of Large-Scale Graphene Nanoflakes

    OpenAIRE

    Hu, Wei; Lin, Lin; Yang, Chao; Yang, Jinlong

    2014-01-01

    With the help of the recently developed SIESTA-PEXSI method [J. Phys.: Condens. Matter 26, 305503 (2014)], we perform Kohn-Sham density functional theory (DFT) calculations to study the stability and electronic structure of hexagonal graphene nanoflakes (GNFs) with up to 11,700 atoms. We find the electronic properties of GNFs, including their cohesive energy, HOMO-LUMO energy gap, edge states and aromaticity, depend sensitively on the type of edges (ACGNFs and ZZGNFs), size and the n...

  2. On Soft Limits of Large-Scale Structure Correlation Functions

    OpenAIRE

    Ben-Dayan, Ido; Konstandin, Thomas; Porto, Rafael A.; Sagunski, Laura

    2014-01-01

    We study soft limits of correlation functions for the density and velocity fields in the theory of structure formation. First, we re-derive the (resummed) consistency conditions at unequal times using the eikonal approximation. These are solely based on symmetry arguments and are therefore universal. Then, we explore the existence of equal-time relations in the soft limit which, on the other hand, depend on the interplay between soft and hard modes. We scrutinize two approaches in the literat...

  3. Reliability analysis of large scaled structures by optimization technique

    International Nuclear Information System (INIS)

    Ishikawa, N.; Mihara, T.; Iizuka, M.

    1987-01-01

    This paper presents a reliability analysis for highly redundant structures with a large number of collapse modes, based on an optimization technique that incorporates the PNET (Probabilistic Network Evaluation Technique) method. This approach exploits the merits of the optimization technique while making use of the PNET idea. The analytical process involves minimizing the safety index of the representative mode, subject to satisfaction of the mechanism condition and of positive external work. The procedure entails solving a series of NLP (Nonlinear Programming) problems sequentially, where the PNET correlation condition pertaining to the representative mode is taken as an additional constraint in the next analysis. Iteration terminates when the collapse probability of the subsequent mode is much smaller than that of the first mode. The approximate collapse probability of the structure is defined as the sum of the collapse probabilities of the representative modes classified by the extent of correlation. To confirm the validity of the proposed method, a conventional Monte Carlo simulation is also performed using collapse load analysis. Finally, two fairly large structures were analyzed to illustrate the scope and application of the approach. (orig./HP)
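
    A schematic reading of the mode-grouping step described above, not the authors' implementation: given safety indices and a correlation matrix for the identified collapse modes, modes whose correlation with an already chosen representative exceeds a demarcating value are represented by it, and the system collapse probability is approximated by the sum over representative modes. The demarcating correlation value is an illustrative assumption.

    import numpy as np
    from scipy.stats import norm

    def pnet_failure_probability(betas, corr, rho0=0.8):
        """PNET-style approximation of the system collapse probability.

        betas : (n,) safety indices of the identified collapse modes
        corr  : (n, n) correlation matrix between the modes
        rho0  : demarcating correlation; a mode correlated above this value with
                an already chosen representative is represented by it
        """
        order = np.argsort(betas)              # most critical mode (smallest beta) first
        representatives = []
        for i in order:
            if all(abs(corr[i, j]) < rho0 for j in representatives):
                representatives.append(i)
        # Sum of the representative-mode probabilities approximates the system value.
        p_sys = sum(norm.cdf(-betas[j]) for j in representatives)
        return p_sys, representatives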

  4. Large-Scale Unsupervised Hashing with Shared Structure Learning.

    Science.gov (United States)

    Liu, Xianglong; Mu, Yadong; Zhang, Danchen; Lang, Bo; Li, Xuelong

    2015-09-01

    Hashing methods are effective in generating compact binary signatures for images and videos. This paper addresses an important open issue in the literature, i.e., how to learn compact hash codes by enhancing the complementarity among different hash functions. Most of prior studies solve this problem either by adopting time-consuming sequential learning algorithms or by generating the hash functions which are subject to some deliberately-designed constraints (e.g., enforcing hash functions orthogonal to one another). We analyze the drawbacks of past works and propose a new solution to this problem. Our idea is to decompose the feature space into a subspace shared by all hash functions and its complementary subspace. On one hand, the shared subspace, corresponding to the common structure across different hash functions, conveys most relevant information for the hashing task. Similar to data de-noising, irrelevant information is explicitly suppressed during hash function generation. On the other hand, in case that the complementary subspace also contains useful information for specific hash functions, the final form of our proposed hashing scheme is a compromise between these two kinds of subspaces. To make hash functions not only preserve the local neighborhood structure but also capture the global cluster distribution of the whole data, an objective function incorporating spectral embedding loss, binary quantization loss, and shared subspace contribution is introduced to guide the hash function learning. We propose an efficient alternating optimization method to simultaneously learn both the shared structure and the hash functions. Experimental results on three well-known benchmarks CIFAR-10, NUS-WIDE, and a-TRECVID demonstrate that our approach significantly outperforms state-of-the-art hashing methods.
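
    A toy illustration of the shared-plus-complementary subspace structure described above (it only mimics that structure; the paper's actual optimization of spectral embedding, quantization, and shared-subspace losses is not reproduced). The use of principal components for the shared part, random directions for the complementary part, and the mixing weight alpha are all illustrative assumptions.

    import numpy as np

    def learn_hash_functions(X, n_bits=16, shared_dim=8, alpha=0.5, seed=0):
        """Linear hash projections built from a subspace shared by all bits plus
        per-bit directions in the complementary subspace."""
        rng = np.random.default_rng(seed)
        Xc = X - X.mean(axis=0)
        # Shared structure: top principal directions of the centred data.
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        W_shared = vt[:shared_dim].T                          # (d, shared_dim)
        # Complementary, per-bit directions, projected off the shared subspace.
        W_specific = rng.normal(size=(X.shape[1], n_bits))
        W_specific -= W_shared @ (W_shared.T @ W_specific)
        # Each bit mixes a shared component with its own complementary component.
        mix = rng.normal(size=(shared_dim, n_bits))
        W = alpha * (W_shared @ mix) + (1.0 - alpha) * W_specific
        return W, X.mean(axis=0)

    def hash_codes(X, W, mean):
        """Binary codes: sign of the learned linear projections."""
        return ((X - mean) @ W > 0).astype(np.uint8)

    # Example: 16-bit codes for 1000 random 64-dimensional feature vectors.
    X = np.random.default_rng(2).normal(size=(1000, 64))
    W, mean = learn_hash_functions(X)
    codes = hash_codes(X, W, mean)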

  5. Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation

    International Nuclear Information System (INIS)

    Hoshi, T; Fujiwara, T

    2009-01-01

    An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.

  6. The build up of the correlation between halo spin and the large-scale structure

    Science.gov (United States)

    Wang, Peng; Kang, Xi

    2018-01-01

    Both simulations and observations have confirmed that the spin of haloes/galaxies is correlated with the large-scale structure (LSS), with a mass dependence such that the spin of low-mass haloes/galaxies tends to be parallel to the LSS, while that of massive haloes/galaxies tends to be perpendicular to it. It is still unclear how this mass dependence is built up over time. We use N-body simulations to trace the evolution of the halo spin-LSS correlation and find that at early times the spin of all halo progenitors is parallel to the LSS. As time goes on, the mass collapsing around massive haloes becomes more isotropic; in particular, the recent mass accretion along the slowest collapsing direction is significant and turns the halo spin perpendicular to the LSS. Adopting the fractional anisotropy (FA) parameter to describe the degree of anisotropy of the large-scale environment, we find that the spin-LSS correlation is a strong function of the environment, such that a higher FA (a more anisotropic environment) leads to an aligned signal, while a lower anisotropy leads to a misaligned signal. In general, our results show that the spin-LSS correlation is a combined consequence of mass flow and halo growth within the cosmic web. Our predicted environmental dependence between spin and large-scale structure can be further tested using galaxy surveys.
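
    The fractional anisotropy (FA) parameter mentioned above is computed from the eigenvalues of a local environment tensor (e.g. the tidal or velocity shear tensor). The sketch below uses the standard FA definition; the exact tensor and normalization convention adopted in the paper may differ.

    import numpy as np

    def fractional_anisotropy(l1, l2, l3):
        """Fractional anisotropy of a 3x3 symmetric tensor with eigenvalues l1, l2, l3.

        FA -> 0 for an isotropic environment and -> 1 for a maximally
        anisotropic one.
        """
        num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
        den = 2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2)
        return np.sqrt(num / den)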

  7. On the origin of large-scale cosmological structure

    International Nuclear Information System (INIS)

    Fry, J.N.

    1987-01-01

    It should be emphasized that the authors do not know at this point with any certainty what the ultimate origin of cosmological structure is. There is a collection of assumptions that make up a more or less standard model, wherein a broad spectrum of quantum fluctuations from an early epoch, modulated by physical effects that depend on the nature of the dominant component of the mass of the universe, provides the seeds that are amplified by gravitational attraction into the structures seen today. This at least allows some statement about what this origin is not. Although all of the individual choices involved are relatively plausible, there are many steps along the way, and the resulting construct should by no means be taken to be the only possible version of the truth. The author summarizes the more commonly held beliefs and outlines what has come to be the standard model. This paper outlines the main points, with most details left to the references (which also contain some visual representations of the results of numerical simulations).

  8. Large-Scale Structure of the Carina Nebula.

    Science.gov (United States)

    Smith; Egan; Carey; Price; Morse; Price

    2000-04-01

    Observations obtained with the Midcourse Space Experiment (MSX) satellite reveal for the first time the complex mid-infrared morphology of the entire Carina Nebula (NGC 3372). On the largest size scale of approximately 100 pc, the thermal infrared emission from the giant H ii region delineates one coherent structure: a (somewhat distorted) bipolar nebula with the major axis perpendicular to the Galactic plane. The Carina Nebula is usually described as an evolved H ii region that is no longer actively forming stars, clearing away the last vestiges of its natal molecular cloud. However, the MSX observations presented here reveal numerous embedded infrared sources that are good candidates for sites of current star formation. Several compact infrared sources are located at the heads of dust pillars or in dark globules behind ionization fronts. Because their morphology suggests a strong interaction with the peculiar collection of massive stars in the nebula, we speculate that these new infrared sources may be sites of triggered star formation in NGC 3372.

  9. The large-scale structure of the universe

    International Nuclear Information System (INIS)

    Silk, J.

    1999-01-01

    The Big Bang is a highly predictive theory, and one that has been systematically refined as the observational data base grows. We assume that the laws and constants of physics are unchanged throughout cosmic time. Einstein's theory of gravitation and the Planck-inspired quantum theory tell us all that we need to know to describe space and time. The local universe is observed to be highly inhomogeneous. Yet if one filters the observed structure, homogeneity appears once the filter bandpass exceeds a few tens of Mpc. The universe is approximately homogeneous. It is also isotropic, there being no apparent preferred direction. Of course, these observations are made from our vantage point. The cosmological principle generalizes the appearance of homogeneity and isotropy to a set of observers distributed through the universe. One motivation behind the cosmological principle is the need to dethrone us as privileged observers at the vantage point of the Earth. The universe is assumed to be statistically isotropic at all times for sets of fundamental observers. One consequence is that the universe must be statistically homogeneous. Observations of the cosmic microwave background have vindicated the cosmological principle, originally applied by Einstein in his first derivation of a static universe. The cosmic microwave background is isotropic to approximately 1 part in 10⁵. It originates from the early universe, and demonstrates that the matter distribution satisfied a similar level of homogeneity during the first million years of cosmic history. (author)

  10. On soft limits of large-scale structure correlation functions

    International Nuclear Information System (INIS)

    Ben-Dayan, Ido; Konstandin, Thomas; Porto, Rafael A.; Sagunski, Laura

    2014-11-01

    We study soft limits of correlation functions for the density and velocity fields in the theory of structure formation. First, we rederive the (resummed) consistency conditions at unequal times using the eikonal approximation. These are solely based on symmetry arguments and are therefore universal. Then, we explore the existence of equal-time relations in the soft limit which, on the other hand, depend on the interplay between soft and hard modes. We scrutinize two approaches in the literature: the time-flow formalism, and a background method where the soft mode is absorbed into a locally curved cosmology. The latter has been recently used to set up (angular averaged) 'equal-time consistency relations'. We explicitly demonstrate that the time-flow relations and 'equal-time consistency conditions' are only fulfilled at the linear level, and fail at next-to-leading order for an Einstein-de Sitter universe. While both proposals break down beyond leading order when applied to the velocities, we find that the 'equal-time consistency conditions' quantitatively approximate the perturbative results for the density contrast. Thus, we generalize the background method to properly incorporate the effect of curvature in the density and velocity fluctuations on short scales, and discuss the reasons behind this discrepancy. We conclude with a few comments on practical implementations and future directions.
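
    For orientation, the unequal-time consistency condition discussed in this line of work is commonly written as a statement about the squeezed limit of an (n+1)-point correlator of the density contrast; the sign and normalization conventions below are the standard ones from the literature, quoted here for illustration rather than as this paper's exact notation:

        \lim_{q \to 0} \frac{\langle \delta_{\vec q}(\tau)\, \delta_{\vec k_1}(\tau_1) \cdots \delta_{\vec k_n}(\tau_n) \rangle'}{P_\delta(q,\tau)}
            = - \sum_{a=1}^{n} \frac{D(\tau_a)}{D(\tau)}\, \frac{\vec k_a \cdot \vec q}{q^2}\, \langle \delta_{\vec k_1}(\tau_1) \cdots \delta_{\vec k_n}(\tau_n) \rangle'

    Here D is the linear growth factor and the prime denotes that the momentum-conserving delta function has been stripped. At equal times (all \tau_a = \tau) momentum conservation makes the sum collapse and the infrared-enhanced 1/q contribution cancels, which is why equal-time relations probe the interplay of soft and hard modes rather than symmetry alone.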

  11. Auxiliary basis expansions for large-scale electronic structure calculations.

    Science.gov (United States)

    Jung, Yousung; Sodt, Alex; Gill, Peter M W; Head-Gordon, Martin

    2005-05-10

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.
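
    As background to the metric choice discussed above, the generic density-fitting (resolution-of-the-identity) scheme approximates four-center two-electron integrals in terms of two- and three-center ones; in standard textbook notation (quoted for illustration, not necessarily the authors' exact conventions),

        (\mu\nu|\lambda\sigma) \approx \sum_{P,Q} (\mu\nu|P)\, [\mathbf{J}^{-1}]_{PQ}\, (Q|\lambda\sigma), \qquad J_{PQ} = (P|Q)

    where P and Q label auxiliary basis functions and the two-index metric (P|Q) is evaluated with the Coulomb kernel 1/r_{12}. The attenuated variant referred to in the abstract replaces the 1/r_{12} kernel in the fitting metric by a short-range one such as erfc(\omega r_{12})/r_{12}, which is what suppresses the slow 1/r decay of the fit coefficients and restores sparsity.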

  12. PANDA: a Large Scale Multi-Purpose Test Facility for LWR Safety Research

    Energy Technology Data Exchange (ETDEWEB)

    Dreier, Joerg; Paladino, Domenico; Huggenberger, Max; Andreani, Michele [Laboratory for Thermal-Hydraulics, Nuclear Energy and Safety Research Department, Paul Scherrer Institut (PSI), CH-5232 Villigen PSI (Switzerland); Yadigaroglu, George [ETH Zuerich, Technoparkstrasse 1, Einstein 22- CH-8005 Zuerich (Switzerland)

    2008-07-01

    PANDA is a large-scale, multi-purpose thermal-hydraulics test facility built and operated by PSI. Due to its modular structure, PANDA provides flexibility for a variety of applications, ranging from integral containment system investigations and primary system tests to component experiments and large-scale separate-effects tests. For many applications the experimental results are directly used, for example for concept demonstrations or for the characterisation of phenomena or components, but all the experimental data generated in the various test campaigns is unique and was and/or will be widely used for the validation and improvement of a variety of computer codes, including codes with 3D capabilities, for reactor safety analysis. The paper provides an overview of the completed and on-going research programs performed in the PANDA facility in the different areas of application, including the main results and conclusions of the investigations. In particular, the advanced passive containment cooling system concept investigations of the SBWR, ESBWR as well as of the SWR1000 in relation to various aspects are presented and the main findings are summarised. Finally, the goals, planned investigations and expected results of the on-going OECD project SETH-2 are presented. (authors)

  13. PANDA: a Large Scale Multi-Purpose Test Facility for LWR Safety Research

    International Nuclear Information System (INIS)

    Dreier, Joerg; Paladino, Domenico; Huggenberger, Max; Andreani, Michele; Yadigaroglu, George

    2008-01-01

    PANDA is a large-scale, multi-purpose thermal-hydraulics test facility built and operated by PSI. Due to its modular structure, PANDA provides flexibility for a variety of applications, ranging from integral containment system investigations and primary system tests to component experiments and large-scale separate-effects tests. For many applications the experimental results are directly used, for example for concept demonstrations or for the characterisation of phenomena or components, but all the experimental data generated in the various test campaigns is unique and was and/or will be widely used for the validation and improvement of a variety of computer codes, including codes with 3D capabilities, for reactor safety analysis. The paper provides an overview of the completed and on-going research programs performed in the PANDA facility in the different areas of application, including the main results and conclusions of the investigations. In particular, the advanced passive containment cooling system concept investigations of the SBWR, ESBWR as well as of the SWR1000 in relation to various aspects are presented and the main findings are summarised. Finally, the goals, planned investigations and expected results of the on-going OECD project SETH-2 are presented. (authors)

  14. The New Era of Precision Cosmology: Testing Gravity at Large Scales

    Science.gov (United States)

    Prescod-Weinstein, Chanda

    2011-01-01

    Cosmic acceleration may be the biggest phenomenological mystery in cosmology today. Various explanations for its cause have been proposed, including the cosmological constant, dark energy and modified gravities. Structure formation provides a strong test of any cosmic acceleration model because a successful dark energy model must not inhibit the development of observed large-scale structures. Traditional approaches to studies of structure formation in the presence of dark energy or modified gravity implement the Press & Schechter formalism (PGF). However, does the PGF apply in all cosmologies? The search is on for a better understanding of universality in the PGF. In this talk, I explore the potential for universality and talk about what dark matter haloes may be able to tell us about cosmology. I will also discuss the implications of this and new cosmological experiments for better understanding our theory of gravity.
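
    For reference, the Press & Schechter formalism invoked above predicts the comoving number density of dark matter haloes of mass M from the statistics of the linear density field alone; in its standard form (quoted as generally found in the literature, not as this talk's specific notation),

        \frac{dn}{dM} = \sqrt{\frac{2}{\pi}}\, \frac{\bar\rho_m}{M^2}\, \frac{\delta_c}{\sigma(M)} \left| \frac{d\ln\sigma}{d\ln M} \right| \exp\!\left( -\frac{\delta_c^2}{2\sigma^2(M)} \right)

    where \bar\rho_m is the mean matter density, \sigma(M) the rms linear fluctuation smoothed on mass scale M, and \delta_c \simeq 1.686 the critical collapse threshold. The "universality" question raised in the abstract is whether this dependence on \sigma(M) alone survives in dark energy and modified-gravity cosmologies.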

  15. Test of large-scale specimens and models as applied to NPP equipment materials

    International Nuclear Information System (INIS)

    Timofeev, B.T.; Karzov, G.P.

    1993-01-01

    The paper presents test results on low-cycle fatigue, crack growth rate and fracture toughness of large-scale specimens and structures manufactured from steels widely applied in the power engineering industry and used for the production of NPP equipment with VVER-440 and VVER-1000 reactors. The obtained results are compared with available test results for standard specimens and with the calculation relations accepted in the "Calculation Norms on Strength". At the fatigue crack initiation stage the experiments were performed on large-scale specimens of various geometries and configurations, which permitted the fracture initiation resistance of 15X2MFA steel to be determined under elastic-plastic deformation of a large material volume in homogeneous and inhomogeneous stress states. Besides the above-mentioned specimen tests under low-cycle loading, tests of models with nozzles were performed, and good agreement of the results on the fatigue crack initiation criterion was obtained both with calculated data and with standard low-cycle fatigue tests. It was noted that, on the Paris part of the fatigue fracture diagram, an increase in specimen thickness does not influence fatigue crack growth resistance in tests in air at both 20 and 350 degrees C. The comparability of the results obtained on specimens and models was also assessed for this stage of fracture. At the stage of unstable crack growth under static loading, the experiments were conducted on specimens of various thicknesses of 15X2MFA and 15X2NMFA steels and their welded joints, produced by submerged arc welding, in the as-produced state (the beginning of service) and after embrittling heat treatment simulating neutron fluence attack (the end of service). The obtained results give evidence that brittle fracture of structural elements can be reliably predicted using fracture toughness test results from relatively small standard specimens. 35 refs., 23 figs
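
    The "Paris part" of the fatigue fracture diagram mentioned above is the stable crack growth regime, conventionally described by the Paris law, given here in its generic textbook form rather than with the specific constants measured in this work:

        \frac{da}{dN} = C\, (\Delta K)^{m}

    where da/dN is the crack growth per load cycle, \Delta K the stress intensity factor range, and C and m empirically fitted material constants. The observation that thickness does not affect growth resistance in this regime implies that C and m measured on standard specimens carry over to thick vessel sections.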

  16. Co-Cure-Ply Resins for High Performance, Large-Scale Structures

    Data.gov (United States)

    National Aeronautics and Space Administration — Large-scale composite structures are commonly joined by secondary bonding of molded-and-cured thermoset components. This approach may result in unpredictable joint...

  17. Using landscape ecology to test hypotheses about large-scale abundance patterns in migratory birds

    Science.gov (United States)

    Flather, C.H.; Sauer, J.R.

    1996-01-01

    The hypothesis that Neotropical migrant birds may be undergoing widespread declines due to land use activities on the breeding grounds has been examined primarily by synthesizing results from local studies. Growing concern for the cumulative influence of land use activities on ecological systems has heightened the need for large-scale studies to complement what has been observed at local scales. We investigated possible landscape effects on Neotropical migrant bird populations for the eastern United States by linking two large-scale inventories designed to monitor breeding-bird abundances and land use patterns. The null hypothesis of no relation between landscape structure and Neotropical migrant abundance was tested by correlating measures of landscape structure with bird abundance, while controlling for the geographic distance among samples. Neotropical migrants as a group were more 'sensitive' to landscape structure than either temperate migrants or permanent residents. Neotropical migrants tended to be more abundant in landscapes with a greater proportion of forest and wetland habitats, fewer edge habitats, large forest patches, and with forest habitats well dispersed throughout the scene. Permanent residents showed few correlations with landscape structure and temperate migrants were associated with habitat diversity and edge attributes rather than with the amount, size, and dispersion of forest habitats. The association between Neotropical migrant abundance and forest fragmentation differed among physiographic strata, suggesting that landscape context affects observed relations between bird abundance and landscape structure. Finally, associations between landscape structure and temporal trends in Neotropical migrant abundance were negatively correlated with forest habitats. These results suggest that extrapolation of patterns observed in some landscapes is not likely to hold regionally, and that conservation policies must consider the variation in landscape

  18. Manufacturing test of large scale hollow capsule and long length cladding in the large scale oxide dispersion strengthened (ODS) martensitic steel

    International Nuclear Information System (INIS)

    Narita, Takeshi; Ukai, Shigeharu; Kaito, Takeji; Ohtsuka, Satoshi; Fujiwara, Masayuki

    2004-04-01

    Mass production capability of oxide dispersion strengthened (ODS) martensitic steel cladding (9Cr) has been evaluated in Phase II of the Feasibility Studies on Commercialized Fast Reactor Cycle System. The cost of manufacturing the mother tube (raw material powder production, mechanical alloying (MA) by ball mill, canning, hot extrusion, and machining) is a dominant factor in the total cost of manufacturing ODS ferritic steel cladding. In this study, a large-scale 9Cr-ODS martensitic steel mother tube, made with a large-scale hollow capsule, and long-length claddings were manufactured, and the applicability of these processes was evaluated. The following results were obtained. (1) A large-scale mother tube with dimensions of 32 mm OD, 21 mm ID, and 2 m length was successfully manufactured using a large-scale hollow capsule. This mother tube has a high degree of dimensional accuracy. (2) The chemical composition and microstructure of the manufactured mother tube are similar to those of the existing mother tube manufactured from a small-scale can, and no remarkable difference between the bottom and top ends of the manufactured mother tube was observed. (3) Long-length cladding was successfully manufactured from the large-scale mother tube made using a large-scale hollow capsule. (4) For reducing the manufacturing cost of ODS steel claddings, a mother-tube manufacturing process using large-scale hollow capsules is promising. (author)

  19. Double inflation: A possible resolution of the large-scale structure problem

    International Nuclear Information System (INIS)

    Turner, M.S.; Villumsen, J.V.; Vittorio, N.; Silk, J.; Juszkiewicz, R.

    1986-11-01

    A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Ω = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of ∼100 Mpc, while the small-scale structure over ≤ 10 Mpc resembles that in a low density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations. 38 refs., 6 figs

  20. ORNL Pre-test Analyses of A Large-scale Experiment in STYLE

    International Nuclear Information System (INIS)

    Williams, Paul T.; Yin, Shengjun; Klasky, Hilda B.; Bass, Bennett Richard

    2011-01-01

    Oak Ridge National Laboratory (ORNL) is conducting a series of numerical analyses to simulate a large scale mock-up experiment planned within the European Network for Structural Integrity for Lifetime Management non-RPV Components (STYLE). STYLE is a European cooperative effort to assess the structural integrity of (non-reactor pressure vessel) reactor coolant pressure boundary components relevant to ageing and life-time management and to integrate the knowledge created in the project into mainstream nuclear industry assessment codes. ORNL contributes work-in-kind support to STYLE Work Package 2 (Numerical Analysis/Advanced Tools) and Work Package 3 (Engineering Assessment Methods/LBB Analyses). This paper summarizes the current status of ORNL analyses of the STYLE Mock-Up3 large-scale experiment to simulate and evaluate crack growth in a cladded ferritic pipe. The analyses are being performed in two parts. In the first part, advanced fracture mechanics models are being developed and performed to evaluate several experiment designs taking into account the capabilities of the test facility while satisfying the test objectives. Then these advanced fracture mechanics models will be utilized to simulate the crack growth in the large scale mock-up test. For the second part, the recently developed ORNL SIAM-PFM open-source, cross-platform, probabilistic computational tool will be used to generate an alternative assessment for comparison with the advanced fracture mechanics model results. The SIAM-PFM probabilistic analysis of the Mock-Up3 experiment will utilize fracture modules that are installed into a general probabilistic framework. The probabilistic results of the Mock-Up3 experiment obtained from SIAM-PFM will be compared to those results generated using the deterministic 3D nonlinear finite-element modeling approach. The objective of the probabilistic analysis is to provide uncertainty bounds that will assist in assessing the more detailed 3D finite-element results.

  1. Decoupling local mechanics from large-scale structure in modular metamaterials

    Science.gov (United States)

    Yang, Nan; Silverberg, Jesse L.

    2017-04-01

    A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such “inverse design” is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module’s design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.

  2. The Rights and Responsibility of Test Takers When Large-Scale Testing Is Used for Classroom Assessment

    Science.gov (United States)

    van Barneveld, Christina; Brinson, Karieann

    2017-01-01

    The purpose of this research was to identify conflicts in the rights and responsibility of Grade 9 test takers when some parts of a large-scale test are marked by teachers and used in the calculation of students' class marks. Data from teachers' questionnaires and students' questionnaires from a 2009-10 administration of a large-scale test of…

  3. Testing, development and demonstration of large scale solar district heating systems

    DEFF Research Database (Denmark)

    Furbo, Simon; Fan, Jianhua; Perers, Bengt

    2015-01-01

    In 2013-2014 the project “Testing, development and demonstration of large scale solar district heating systems” was carried out within the Sino-Danish Renewable Energy Development Programme, the so-called RED programme jointly developed by the Chinese and Danish governments. In the project, Danish know-how on solar heating plants and solar heating test technology has been transferred from Denmark to China, large solar heating systems have been promoted in China, test capabilities on solar collectors and large scale solar heating systems have been improved in China, and Danish-Chinese cooperation...

  4. Hypersingular integral equations, waveguiding effects in Cantorian Universe and genesis of large scale structures

    International Nuclear Information System (INIS)

    Iovane, G.; Giordano, P.

    2005-01-01

    In this work we introduce hypersingular integral equations and analyze a realistic model of gravitational waveguides on a Cantorian space-time. A waveguiding effect is considered with respect to the large scale structure of the Universe, where structure formation appears as if it were a classically self-similar random process at all astrophysical scales. The result is that it seems we live in El Naschie's ε^(∞) Cantorian space-time, where gravitational lensing and waveguiding effects can explain the appearing Universe. In particular, we consider filamentary and planar large scale structures as possible refraction channels for electromagnetic radiation coming from cosmological structures. From this vision the Universe appears as a large set of self-similar adaptive mirrors, as illustrated by three numerical simulations. Consequently, an infinite Universe is just an optical illusion produced by mirroring effects connected with the large scale structure of a finite and not so large Universe.

  5. Most experiments done so far with limited plants. Large-scale testing ...

    Indian Academy of Sciences (India)

    Most experiments done so far with limited plants. Large-scale testing needs to be done with objectives such as: Apart from primary transformants, their progenies must be tested. Experiments on segregation, production of homozygous lines, analysis of expression levels in ...

  6. Role of optometry school in single day large scale school vision testing

    Science.gov (United States)

    Anuradha, N; Ramani, Krishnakumar

    2015-01-01

    Background: School vision testing aims at identification and management of refractive errors. Large-scale school vision testing using conventional methods is time-consuming and demands a lot of chair time from the eye care professionals. A new strategy involving a school of optometry in single day large scale school vision testing is discussed. Aim: The aim was to describe a new approach of performing vision testing of school children on a large scale in a single day. Materials and Methods: A single day vision testing strategy was implemented wherein 123 members (20 teams comprising optometry students and headed by optometrists) conducted vision testing for children in 51 schools. School vision testing included basic vision screening, refraction, frame measurements, frame choice and referrals for other ocular problems. Results: A total of 12448 children were screened, among whom 420 (3.37%) were identified to have refractive errors. 28 (1.26%) children belonged to the primary, 163 to middle (9.80%), 129 (4.67%) to secondary and 100 (1.73%) to the higher secondary levels of education respectively. 265 (2.12%) children were referred for further evaluation. Conclusion: Single day large scale school vision testing can be adopted by schools of optometry to reach a higher number of children within a short span. PMID:25709271

  7. Partially acoustic dark matter, interacting dark radiation, and large scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Chacko, Zackaria [Maryland Center for Fundamental Physics, Department of Physics, University of Maryland,Stadium Dr., College Park, MD 20742 (United States); Cui, Yanou [Maryland Center for Fundamental Physics, Department of Physics, University of Maryland,Stadium Dr., College Park, MD 20742 (United States); Department of Physics and Astronomy, University of California-Riverside,University Ave, Riverside, CA 92521 (United States); Perimeter Institute, 31 Caroline Street, North Waterloo, Ontario N2L 2Y5 (Canada); Hong, Sungwoo [Maryland Center for Fundamental Physics, Department of Physics, University of Maryland,Stadium Dr., College Park, MD 20742 (United States); Okui, Takemichi [Department of Physics, Florida State University,College Avenue, Tallahassee, FL 32306 (United States); Tsai, Yuhsinz [Maryland Center for Fundamental Physics, Department of Physics, University of Maryland,Stadium Dr., College Park, MD 20742 (United States)

    2016-12-21

    The standard paradigm of collisionless cold dark matter is in tension with measurements on large scales. In particular, the best fit values of the Hubble rate H₀ and the matter density perturbation σ₈ inferred from the cosmic microwave background seem inconsistent with the results from direct measurements. We show that both problems can be solved in a framework in which dark matter consists of two distinct components, a dominant component and a subdominant component. The primary component is cold and collisionless. The secondary component is also cold, but interacts strongly with dark radiation, which itself forms a tightly coupled fluid. The growth of density perturbations in the subdominant component is inhibited by dark acoustic oscillations due to its coupling to the dark radiation, solving the σ₈ problem, while the presence of tightly coupled dark radiation ameliorates the H₀ problem. The subdominant component of dark matter and dark radiation continue to remain in thermal equilibrium until late times, inhibiting the formation of a dark disk. We present an example of a simple model that naturally realizes this scenario in which both constituents of dark matter are thermal WIMPs. Our scenario can be tested by future stage-IV experiments designed to probe the CMB and large scale structure.

  8. Partially acoustic dark matter, interacting dark radiation, and large scale structure

    International Nuclear Information System (INIS)

    Chacko, Zackaria; Cui, Yanou; Hong, Sungwoo; Okui, Takemichi; Tsai, Yuhsinz

    2016-01-01

    The standard paradigm of collisionless cold dark matter is in tension with measurements on large scales. In particular, the best fit values of the Hubble rate H₀ and the matter density perturbation σ₈ inferred from the cosmic microwave background seem inconsistent with the results from direct measurements. We show that both problems can be solved in a framework in which dark matter consists of two distinct components, a dominant component and a subdominant component. The primary component is cold and collisionless. The secondary component is also cold, but interacts strongly with dark radiation, which itself forms a tightly coupled fluid. The growth of density perturbations in the subdominant component is inhibited by dark acoustic oscillations due to its coupling to the dark radiation, solving the σ₈ problem, while the presence of tightly coupled dark radiation ameliorates the H₀ problem. The subdominant component of dark matter and dark radiation continue to remain in thermal equilibrium until late times, inhibiting the formation of a dark disk. We present an example of a simple model that naturally realizes this scenario in which both constituents of dark matter are thermal WIMPs. Our scenario can be tested by future stage-IV experiments designed to probe the CMB and large scale structure.

  9. Inflation Physics from the Cosmic Microwave Background and Large Scale Structure

    Science.gov (United States)

    Abazajian, K.N.; Arnold, K.; Austermann, J.; Benson, B.A.; Bischoff, C.; Bock, J.; Bond, J.R.; Borrill, J.; Buder, I.; Burke, D.L.; et al.

    2013-01-01

    Fluctuations in the intensity and polarization of the cosmic microwave background (CMB) and the large-scale distribution of matter in the universe each contain clues about the nature of the earliest moments of time. The next generation of CMB and large-scale structure (LSS) experiments are poised to test the leading paradigm for these earliest moments---the theory of cosmic inflation---and to detect the imprints of the inflationary epoch, thereby dramatically increasing our understanding of fundamental physics and the early universe. A future CMB experiment with sufficient angular resolution and frequency coverage that surveys at least 1% of the sky to a depth of 1 μK-arcmin can deliver a constraint on the tensor-to-scalar ratio that will either result in a 5-sigma measurement of the energy scale of inflation or rule out all large-field inflation models, even in the presence of foregrounds and the gravitational lensing B-mode signal. LSS experiments, particularly spectroscopic surveys such as the Dark Energy Spectroscopic Instrument, will complement the CMB effort by improving current constraints on running of the spectral index by up to a factor of four, improving constraints on curvature by a factor of ten, and providing non-Gaussianity constraints that are competitive with the current CMB bounds.
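
    The link between the tensor-to-scalar ratio r and the energy scale of inflation invoked here is, in the standard single-field slow-roll convention (quoted for orientation, not from this particular report),

        V^{1/4} \simeq 1.06 \times 10^{16}\ \mathrm{GeV} \left( \frac{r}{0.01} \right)^{1/4}

    so an experiment sensitive to r of order 10^{-3} to 10^{-2} probes potential energies near the grand-unification scale, which is why such a measurement would either pin down the inflationary energy scale or exclude large-field models.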

  10. On the universal character of the large scale structure of the universe

    International Nuclear Information System (INIS)

    Demianski, M.; International Center for Relativistic Astrophysics; Rome Univ.; Doroshkevich, A.G.

    1991-01-01

    We review different theories of formation of the large scale structure of the Universe. Special emphasis is put on the theory of inertial instability. We show that for a large class of initial spectra the resulting two point correlation functions are similar. We also discuss the adhesion theory, which uses the Burgers equation, the Navier-Stokes equation or a coagulation process. We review the Zeldovich theory of gravitational instability and discuss the internal structure of pancakes. Finally, we discuss the role of the velocity potential in determining the global characteristics of large scale structures (distribution of caustics, scale of voids, etc.). In the last chapter we list the main unsolved problems and main successes of the theory of formation of large scale structure. (orig.)
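
    For orientation, the Zeldovich approximation and the adhesion model named above are usually written as follows (standard notation, not tied to this review's own conventions). In the Zeldovich approximation, particles move ballistically from their Lagrangian positions q,

        \vec{x}(\vec{q}, t) = \vec{q} + D(t)\, \nabla_q \Phi_0(\vec{q})

    with D(t) the linear growth factor and \Phi_0 the (suitably rescaled) initial velocity potential, while the adhesion model regularizes the resulting caustics by evolving the velocity field with Burgers' equation in the growth factor D,

        \partial_D \vec{u} + (\vec{u} \cdot \nabla)\, \vec{u} = \nu \nabla^2 \vec{u}, \qquad \nu \to 0^+

    whose vanishing-viscosity limit sticks matter into the walls, filaments and nodes that form the skeleton of the large scale structure.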

  11. A Combined Ethical and Scientific Analysis of Large-scale Tests of Solar Climate Engineering

    Science.gov (United States)

    Ackerman, T. P.

    2017-12-01

    Our research group recently published an analysis of the combined ethical and scientific issues surrounding large-scale testing of stratospheric aerosol injection (SAI; Lenferna et al., 2017, Earth's Future). We are expanding this study in two directions. The first is extending this same analysis to other geoengineering techniques, particularly marine cloud brightening (MCB). MCB has substantial differences to SAI in this context because MCB can be tested over significantly smaller areas of the planet and, following injection, has a much shorter lifetime of weeks as opposed to years for SAI. We examine issues such as the role of intent, the lesser of two evils, and the nature of consent. In addition, several groups are currently considering climate engineering governance tools such as a code of ethics and a registry. We examine how these tools might influence climate engineering research programs and, specifically, large-scale testing. The second direction of expansion is asking whether ethical and scientific issues associated with large-scale testing are so significant that they effectively preclude moving ahead with climate engineering research and testing. Some previous authors have suggested that no research should take place until these issues are resolved. We think this position is too draconian and consider a more nuanced version of this argument. We note, however, that there are serious questions regarding the ability of the scientific research community to move to the point of carrying out large-scale tests.

  12. Experimental and numerical modelling of ductile crack propagation in large-scale shell structures

    DEFF Research Database (Denmark)

    Simonsen, Bo Cerup; Törnquist, R.

    2004-01-01

    This paper presents a combined experimental-numerical procedure for development and calibration of macroscopic crack propagation criteria in large-scale shell structures. A novel experimental set-up is described in which a mode-I crack can be driven 400 mm through a 20(+) mm thick plate under fully plastic and controlled conditions. The test specimen can be deformed either in combined in-plane bending and extension or in pure extension. Experimental results are described for 5 and 10 mm thick aluminium and steel plates. By performing an inverse finite-element analysis of the experimental results, crack propagation criteria are derived for steel and aluminium plates, mainly as curves showing the critical element deformation versus the shell element size. These derived crack propagation criteria are then validated against a separate set of experiments considering centre crack specimens (CCS) which have a different crack-tip constraint...

  13. Computational Cosmology: from the Early Universe to the Large Scale Structure

    Directory of Open Access Journals (Sweden)

    Peter Anninos

    1998-09-01

    Full Text Available In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations addressing specific issues in cosmology is reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusively, is on those calculations designed to test different models of cosmology against the observed Universe.

  14. Computational Cosmology: from the Early Universe to the Large Scale Structure

    Directory of Open Access Journals (Sweden)

    Anninos Peter

    2001-01-01

    Full Text Available In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations (and numerical methods) applied to specific issues in cosmology is reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusively, is on those calculations designed to test different models of cosmology against the observed Universe.

  15. Computational Cosmology: From the Early Universe to the Large Scale Structure.

    Science.gov (United States)

    Anninos, Peter

    2001-01-01

    In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations (and numerical methods) applied to specific issues in cosmology is reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusively, is on those calculations designed to test different models of cosmology against the observed Universe.

  16. Initial condition effects on large scale structure in numerical simulations of plane mixing layers

    Science.gov (United States)

    McMullan, W. A.; Garrett, S. J.

    2016-01-01

    In this paper, Large Eddy Simulations are performed on the spatially developing plane turbulent mixing layer. The simulated mixing layers originate from initially laminar conditions. The focus of this research is on the effect of the nature of the imposed fluctuations on the large-scale spanwise and streamwise structures in the flow. Two simulations are performed; one with low-level three-dimensional inflow fluctuations obtained from pseudo-random numbers, the other with physically correlated fluctuations of the same magnitude obtained from an inflow generation technique. Where white-noise fluctuations provide the inflow disturbances, no spatially stationary streamwise vortex structure is observed, and the large-scale spanwise turbulent vortical structures grow continuously and linearly. These structures are observed to have a three-dimensional internal geometry with branches and dislocations. Where physically correlated fluctuations provide the inflow disturbances, a "streaky" streamwise structure that is spatially stationary is observed, with the large-scale turbulent vortical structures growing with the square-root of time. These large-scale structures are quasi-two-dimensional, on top of which the secondary structure rides. The simulation results are discussed in the context of the varying interpretations of mixing layer growth that have been postulated. Recommendations are made concerning the data required from experiments in order to produce accurate numerical simulation recreations of real flows.

  17. Large scale electronic structure calculations in the study of the condensed phase

    NARCIS (Netherlands)

    van Dam, H.J.J.; Guest, M.F.; Sherwood, P.; Thomas, J.M.H.; van Lenthe, J.H.; van Lingen, J.N.J.; Bailey, C.L.; Bush, I.J.

    2006-01-01

    We consider the role that large-scale electronic structure computations can now play in the modelling of the condensed phase. To structure our analysis, we consider four distinct ways in which today's scientific targets can be re-scoped to take advantage of advances in computing resources: 1. time to

  18. Semi-Automated Air-Coupled Impact-Echo Method for Large-Scale Parkade Structure

    Directory of Open Access Journals (Sweden)

    Tyler Epp

    2018-03-01

    Full Text Available Structural Health Monitoring (SHM) has moved to data-dense systems, utilizing numerous sensor types to monitor infrastructure, such as bridges and dams, more regularly. One of the issues faced in this endeavour is the scale of the inspected structures and the time it takes to carry out testing. Installing automated systems that can provide measurements in a timely manner is one way of overcoming these obstacles. This study proposes an Artificial Neural Network (ANN) application that determines intact and damaged locations from a small training sample of impact-echo data, using air-coupled microphones from a reinforced concrete beam in lab conditions and data collected from a field experiment in a parking garage. The impact-echo testing in the field is carried out in a semi-autonomous manner to expedite the front end of the in situ damage detection testing. The use of an ANN removes the need for a user-defined cutoff value for the classification of intact and damaged locations when a least-square distance approach is used. It is postulated that this may contribute significantly to testing time reduction when monitoring large-scale civil Reinforced Concrete (RC) structures.
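
    The classification step described above, labelling each impact-echo test point as intact or damaged from its air-coupled acoustic response, can be sketched in a few lines; the spectral features, file names and network size below are assumptions for illustration, not the authors' actual implementation.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def spectral_features(signal):
            # Normalised magnitude spectrum of one windowed impact-echo record
            spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
            return spectrum / spectrum.max()

        # Hypothetical data files: rows of microphone signals and 0/1 (intact/damaged) labels
        X_train = np.load("train_signals.npy")
        y_train = np.load("train_labels.npy")
        F_train = np.array([spectral_features(s) for s in X_train])

        # Small feed-forward ANN replacing a user-defined least-squares cutoff
        model = make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(32, 16),
                                            max_iter=2000, random_state=0))
        model.fit(F_train, y_train)

        # Classify new field measurements
        X_field = np.load("field_signals.npy")
        print(model.predict(np.array([spectral_features(s) for s in X_field])))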

  19. Primordial Non-Gaussianity in the Large-Scale Structure of the Universe

    Directory of Open Access Journals (Sweden)

    Vincent Desjacques

    2010-01-01

    generated the cosmological fluctuations observed today. Any detection of significant non-Gaussianity would thus have profound implications for our understanding of cosmic structure formation. The large-scale mass distribution in the Universe is a sensitive probe of the nature of initial conditions. Recent theoretical progress together with rapid developments in observational techniques will enable us to critically confront predictions of inflationary scenarios and set constraints as competitive as those from the Cosmic Microwave Background. In this paper, we review past and current efforts in the search for primordial non-Gaussianity in the large-scale structure of the Universe.

  20. Development of a vacuum leak test method for large-scale superconducting magnet test facilities

    International Nuclear Information System (INIS)

    Kawano, Katsumi; Hamada, Kazuya; Okuno, Kiyoshi; Kato, Takashi

    2006-01-01

    Japan Atomic Energy Agency (JAEA) has developed leak detection technology for liquid helium temperature experiments in large-scale superconducting magnet test facilities. In JAEA, a cryosorption pump that uses an absorbent cooled by liquid nitrogen, together with a conventional helium leak detector, is used to detect helium gas leaking from pressurized welded joints of pipes and valves in a vacuum chamber. The cryosorption pump plays the role of decreasing air components, such as water, nitrogen and oxygen, to increase the sensitivity of helium leak detection. The established detection sensitivity for helium leak testing is 10⁻¹⁰ to 10⁻⁹ Pa·m³/s. A total of 850 welded and mechanical joints inside the cryogenic test facility for the ITER Central Solenoid Model Coil (CSMC) experiments have been tested. In the test facility, 73 units of glass fiber-reinforced plastic (GFRP) insulation break are used. The amount of helium permeation through the GFRP was recorded during helium leak testing. To distinguish helium leaks from insulation-break permeation, the helium permeation characteristic of the GFRP part was measured as a function of the time of helium charging. Helium permeation was observed at 6 h after helium charging, and the detected permeation is around 10⁻⁷ Pa·m³/s. Using the helium leak test method developed, CSMC experiments have been successfully completed. (author)

  1. Fluid-structure interaction in non-rigid pipeline systems - large scale validation experiments

    International Nuclear Information System (INIS)

    Heinsbroek, A.G.T.J.; Kruisbrink, A.C.H.

    1993-01-01

    The fluid-structure interaction computer code FLUSTRIN, developed by DELFT HYDRAULICS, enables the user to determine dynamic fluid pressures, structural stresses and displacements in a liquid-filled pipeline system under transient conditions. As such, the code is a useful tool for process and mechanical engineers in the safe design and operation of pipeline systems in nuclear power plants. To validate FLUSTRIN, experiments have been performed in a large scale 3D test facility. The test facility consists of a flexible pipeline system which is suspended by wires, bearings and anchors. Pressure surges, which excite the system, are generated by a fast acting shut-off valve. Dynamic pressures, structural displacements and strains (in total 70 signals) have been measured under well determined initial and boundary conditions. The experiments have been simulated with FLUSTRIN, which solves the acoustic equations using the method of characteristics (fluid) and the finite element method (structure). The agreement between experiments and simulations is shown to be good: frequencies, amplitudes and wave phenomena are well predicted by the numerical simulations. It is demonstrated that an uncoupled water hammer computation would render unreliable and useless results. (author)
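
    The pressure surges generated by the fast-acting valve are of the classical water hammer type. As a rough orientation (this is the textbook estimate, not part of the FLUSTRIN model itself), the maximum surge for an instantaneous valve closure is given by the Joukowsky relation

        \Delta p = \rho\, c\, \Delta v

    where ρ is the liquid density, c the pressure wave speed in the pipe (lowered by pipe wall elasticity), and Δv the change in flow velocity. Fluid-structure interaction matters because pipe motion feeds back on the effective wave speed and pressure amplitudes, which is why the uncoupled water hammer computation mentioned above proves unreliable for such a flexible, non-rigidly supported system.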

  2. RELAP5 choked flow model and application to a large scale flow test

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1980-01-01

    The RELAP5 code was used to simulate a large scale choked flow test. The fluid system used in the test was modeled in RELAP5 using a uniform, but coarse, nodalization. The choked mass discharge rate was calculated using the RELAP5 choked flow model. The calculations were in good agreement with the test data, and the flow was calculated to be near thermal equilibrium

  3. Large scale identification and categorization of protein sequences using structured logistic regression.

    Directory of Open Access Journals (Sweden)

    Bjørn P Pedersen

    Full Text Available BACKGROUND: Structured Logistic Regression (SLR) is a newly developed machine learning tool first proposed in the context of text categorization. Current availability of extensive protein sequence databases calls for an automated method to reliably classify sequences and SLR seems well-suited for this task. The classification of P-type ATPases, a large family of ATP-driven membrane pumps transporting essential cations, was selected as a test-case that would generate important biological information as well as provide a proof-of-concept for the application of SLR to a large scale bioinformatics problem. RESULTS: Using SLR, we have built classifiers to identify and automatically categorize P-type ATPases into one of 11 pre-defined classes. The SLR-classifiers are compared to a Hidden Markov Model approach and shown to be highly accurate and scalable. Representing the bulk of currently known sequences, we analysed 9.3 million sequences in the UniProtKB and attempted to classify a large number of P-type ATPases. To examine the distribution of pumps on organisms, we also applied SLR to 1,123 complete genomes from the Entrez genome database. Finally, we analysed the predicted membrane topology of the identified P-type ATPases. CONCLUSIONS: Using the SLR-based classification tool we are able to run a large scale study of P-type ATPases. This study provides proof-of-concept for the application of SLR to a bioinformatics problem and the analysis of P-type ATPases pinpoints new and interesting targets for further biochemical characterization and structural analysis.
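
    As a rough illustration of the kind of pipeline involved (plain multinomial logistic regression on k-mer counts, not the authors' SLR method, and with hypothetical sequences and class labels), a protein-sequence classifier can be sketched as follows.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Toy training data: (amino acid sequence, P-type ATPase subclass) pairs; labels are hypothetical
        train = [("MGKETLAKLLKDSAVTRMSDE", "P2A"),
                 ("MSDKPLVSHEGGKLAWRTNNA", "P4"),
                 ("MARNKDSGSKQMTEHLVACGG", "P1B")]
        seqs, labels = zip(*train)

        # Represent each sequence by overlapping 3-mer counts (one simple featurisation choice)
        kmerize = CountVectorizer(analyzer="char", ngram_range=(3, 3), lowercase=False)

        clf = make_pipeline(kmerize, LogisticRegression(max_iter=5000))
        clf.fit(seqs, list(labels))

        # Predicted subclass for a new sequence
        print(clf.predict(["MGKDTLAKWLKESAVSRMTDE"]))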

  4. Analysis of the applicability of fracture mechanics on the basis of large scale specimen testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Polachova, H.; Sulc, J.; Anikovskij, V.; Dragunov, Y.; Rivkin, E.; Filatov, V.

    1988-01-01

    The verification of fracture mechanics calculations for WWER reactor pressure vessels by large-scale model testing, performed on the large testing machine ZZ 8000 (maximum load of 80 MN) in the Skoda Concern, is dealt with. The results of testing a large set of large-scale test specimens with surface crack-type defects are presented. The nominal thickness of the specimens was 150 mm with defect depths between 15 and 100 mm, the testing temperature varying between -30 and +80 degC (i.e., in the temperature interval of T_k0 ± 50 degC). Specimens with a scale of 1:8 and 1:12 were also tested, as well as standard (CT and TPB) specimens. Comparisons of test results and calculations suggest some conservatism of calculations (especially for small defects) based on Linear Elastic Fracture Mechanics, according to the Nuclear Reactor Pressure Vessel Codes which use the fracture mechanics values from J_IC testing. On the basis of the large-scale tests the "Defect Analysis Diagram" was constructed and recommended for brittle fracture assessment of reactor pressure vessels. (author). 7 figs., 2 tabs., 3 refs
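
    The linear elastic fracture mechanics assessment referred to here compares an applied stress intensity factor with the material fracture toughness; in generic handbook form (illustrative only, not the exact code formula used in the paper), a surface crack of depth a under stress σ is assessed through

        K_I = Y\, \sigma \sqrt{\pi a} \le K_{IC}

    where Y is a geometry factor of order unity that depends on the crack shape and component geometry, and K_IC (or the toughness derived from J_IC testing) is the fracture toughness. The reported conservatism for small defects means that the code-calculated margins on K_I are larger than those actually observed in the large-scale tests.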

  5. Model design for Large-Scale Seismic Test Program at Hualien, Taiwan

    International Nuclear Information System (INIS)

    Tang, H.T.; Graves, H.L.; Chen, P.C.

    1991-01-01

    The Large-Scale Seismic Test (LSST) Program at Hualien, Taiwan, is a follow-on to the soil-structure interaction (SSI) experiments at Lotung, Taiwan. The planned SSI studies will be performed at a stiff soil site in Hualien, Taiwan, that historically has had slightly more destructive earthquakes than Lotung. The LSST is a joint effort among many interested parties. Electric Power Research Institute (EPRI) and Taipower are the organizers of the program and have the lead in planning and managing the program. Other organizations participating in the LSST program are US Nuclear Regulatory Commission (NRC), the Central Research Institute of Electric Power Industry (CRIEPI), the Tokyo Electric Power Company (TEPCO), the Commissariat A L'Energie Atomique (CEA), Electricite de France (EdF) and Framatome. The LSST was initiated in January 1990, and is envisioned to be five years in duration. Based on the assumption of stiff soil and confirmed by soil boring and geophysical results, the test model was designed to provide data needed for SSI studies covering: free-field input, nonlinear soil response, non-rigid body SSI, torsional response, kinematic interaction, spatial incoherency and other effects. Taipower had the lead in design of the test model and received significant input from other LSST members. Questions raised by LSST members were on embedment effects, model stiffness, base shear, and openings for equipment. This paper describes progress in site preparation, design and construction of the model and development of an instrumentation plan

  6. Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods

    KAUST Repository

    Wang, Cheng

    2018-05-17

    Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chromosome Conformation Capture technologies, especially the Hi-C assays, provide us an opportunity to study how the 3D structures of the chromatin are organized. Based on the data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and no robust chromatin structure alignment algorithms to evaluate their performance. In our work, we first made a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and to compare the performance of these methods, we proposed a novel data simulation method, which combined the population Hi-C data and single-cell Hi-C data without ad hoc parameters. Also, we designed global and local alignment algorithms to measure the similarity between the templates and the chromatin structures predicted by different modeling methods. Finally, the results from large-scale comparative tests indicated that our alignment algorithms significantly outperform the algorithms in the literature.

  7. Time and frequency domain analyses of the Hualien Large-Scale Seismic Test

    International Nuclear Information System (INIS)

    Kabanda, John; Kwon, Oh-Sung; Kwon, Gunup

    2015-01-01

    Highlights: • Time- and frequency-domain analysis methods are verified against each other. • The two analysis methods are validated against Hualien LSST. • The nonlinear time domain (NLTD) analysis resulted in a more realistic response. • The frequency domain (FD) analysis shows amplification at resonant frequencies. • The NLTD analysis requires significant modeling and computing time. - Abstract: In the nuclear industry, the equivalent-linear frequency domain analysis method has been the de facto standard procedure primarily due to the method's computational efficiency. This study explores the feasibility of applying the nonlinear time domain analysis method for the soil–structure-interaction analysis of nuclear power facilities. As a first step, the equivalency of the time and frequency domain analysis methods is verified through a site response analysis of one-dimensional soil, a dynamic impedance analysis of a soil–foundation system, and a seismic response analysis of the entire soil–structure system. For the verifications, an idealized elastic soil–structure system is used to minimize variables in the comparison of the two methods. Then, the verified analysis methods are used to develop time and frequency domain models of the Hualien Large-Scale Seismic Test. The predicted structural responses are compared against field measurements. The models are also analyzed with an amplified ground motion to evaluate discrepancies of the time and frequency domain analysis methods when the soil–structure system behaves beyond the elastic range. The analysis results show that the equivalent-linear frequency domain analysis method amplifies certain frequency bands and tends to result in higher structural acceleration than the nonlinear time domain analysis method. A comparison with field measurements shows that the nonlinear time domain analysis method better captures the frequency distribution of recorded structural responses than the frequency domain analysis method.

  8. 2MASS Constraints on the Local Large-Scale Structure: A Challenge to LCDM?

    OpenAIRE

    Frith, W. J.; Shanks, T.; Outram, P. J.

    2004-01-01

    We investigate the large-scale structure of the local galaxy distribution using the recently completed 2 Micron All Sky Survey (2MASS). First, we determine the K-band number counts over the 4000 sq.deg. APM survey area where evidence for a large-scale `local hole' has previously been detected and compare them to a homogeneous prediction. Considering a LCDM form for the 2-point angular correlation function, the observed deficiency represents a 5 sigma fluctuation in the galaxy distribution. We...

  9. Hierarchical fiber-optic-based sensing system: impact damage monitoring of large-scale CFRP structures

    International Nuclear Information System (INIS)

    Minakuchi, Shu; Banshoya, Hidehiko; Takeda, Nobuo; Tsukamoto, Haruka

    2011-01-01

    This study proposes a novel fiber-optic-based hierarchical sensing concept for monitoring randomly induced damage in large-scale composite structures. In a hierarchical system, several kinds of specialized devices are hierarchically combined to form a sensing network. Specifically, numerous three-dimensionally structured sensor devices are distributed throughout the whole structural area and connected with an optical fiber network through transducing mechanisms. The distributed devices detect damage, and the fiber-optic network gathers the damage signals and transmits the information to a measuring instrument. This study began by discussing the basic concept of a hierarchical sensing system through comparison with existing fiber-optic-based systems, and an impact damage detection system was then proposed to validate the new concept. The sensor devices were developed based on comparative vacuum monitoring (CVM), and Brillouin-based distributed strain measurement was utilized to identify damaged areas. Verification tests were conducted step-by-step, beginning with a basic test using a single sensor unit, and, finally, the proposed monitoring system was successfully verified using a carbon fiber reinforced plastic (CFRP) fuselage demonstrator. It was clearly confirmed that the hierarchical system has better repairability, higher robustness, and a wider monitorable area compared to existing systems

  10. Large-scale structural alteration of brain in epileptic children with SCN1A mutation

    Directory of Open Access Journals (Sweden)

    Yun-Jeong Lee

    2017-01-01

    Significance: This study showed large-scale developmental brain changes in patients with epilepsy and SCN1A gene mutation, which may be associated with the core symptoms of the patients. Further longitudinal MRI studies with larger cohorts are required to confirm the effect of SCN1A gene mutation on structural brain development.

  11. Hierarchical formation of large scale structures of the Universe: observations and models

    International Nuclear Information System (INIS)

    Maurogordato, Sophie

    2003-01-01

    In this report for an Accreditation to Supervise Research (HDR), the author presents an overview of her research work in cosmology. This work notably addressed the large scale distribution of the Universe (with constraints on the scenario of formation, on the bias relationship, and on the structuring of clusters), the analysis of galaxy clusters during coalescence, and the mass distribution within relaxed clusters [fr

  12. Thermal anchoring of wires in large scale superconducting coil test experiment

    International Nuclear Information System (INIS)

    Patel, Dipak; Sharma, A.N.; Prasad, Upendra; Khristi, Yohan; Varmora, Pankaj; Doshi, Kalpesh; Pradhan, S.

    2013-01-01

    Highlights: • We address how thermal anchoring in a large scale coil test differs from that in small cryogenic apparatus. • We precisely estimate the thermal anchoring lengths at the 77 K and 4.2 K heat sinks in a large scale superconducting coil test experiment. • We address the quality of anchoring achieved without covering the entire wires with Kapton/Teflon tape. • We obtained excellent temperature measurement results without using GE varnish by doubling the estimated anchoring length. -- Abstract: Effective and precise thermal anchoring of wires in a cryogenic experiment is mandatory to measure temperature with millikelvin accuracy and to avoid unnecessary cooling power due to additional heat conduction from room temperature (RT) to operating temperature (OT) through the potential, field, displacement and stress measurement instrumentation wires. Instrumentation wires used in large scale superconducting coil test experiments differ from those in small cryogenic apparatus in construction and in overall diameter/area, because error-free measurement in a large time-varying magnetic field often requires shielded wires. Hence, along with other variables, the anchoring techniques and the required thermal anchoring lengths in this experiment are entirely different from those in small cryogenic apparatus. In the present paper, the estimation of the thermal anchoring lengths of five different types of instrumentation wires used in the coil test campaign at the Institute for Plasma Research (IPR), India is discussed and some temperature measurement results of the coil test campaign are presented
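
    For orientation only, a common textbook fin-model estimate of the required thermal anchoring length is sketched below; it is not necessarily the estimation procedure used in the coil test campaign, and every material property and geometry value in the example is an assumed placeholder.

      import math

      def anchoring_length(k_wire, a_wire, k_ins, t_ins, perimeter,
                           dt_incoming, dt_allowed):
          """Fin-model estimate of the wire length that must be thermally anchored
          to a heat sink.  The temperature excess carried along the wire decays
          roughly as exp(-L/lam) with lam = sqrt(k_wire*a_wire*t_ins/(k_ins*perimeter)),
          so the anchoring length is L = lam * ln(dt_incoming / dt_allowed).

          k_wire, k_ins : thermal conductivities [W/m/K] of wire and insulation
          a_wire        : conductor cross-section [m^2]
          t_ins         : insulation thickness between wire and sink [m]
          perimeter     : contact width per unit length of wire [m]
          dt_incoming   : temperature excess of the incoming wire above the sink [K]
          dt_allowed    : acceptable residual temperature error [K]
          """
          lam = math.sqrt(k_wire * a_wire * t_ins / (k_ins * perimeter))
          return lam * math.log(dt_incoming / dt_allowed)

      # Illustrative numbers only: a 0.2 mm diameter copper wire anchored through
      # 50 um of Kapton to a 4.2 K sink, arriving 70 K warmer, 1 mK target error.
      a = math.pi * (0.1e-3) ** 2
      print(f"L ~ {anchoring_length(500.0, a, 0.05, 50e-6, 0.2e-3, 70.0, 1e-3):.2f} m")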

  13. Galaxies distribution in the universe: large-scale statistics and structures

    International Nuclear Information System (INIS)

    Maurogordato, Sophie

    1988-01-01

    This research thesis addresses the distribution of galaxies in the Universe, and more particularly large-scale statistics and structures. Based on an assessment of the main statistical techniques in use, the author outlines the need for tools complementary to correlation functions in order to characterise the distribution. She introduces a new indicator: the probability that a volume placed at random in the distribution is empty. This allows void properties to be characterised at the scales of interest (up to 10 h⁻¹ Mpc) in the Harvard-Smithsonian Center for Astrophysics Redshift Survey (CfA catalog). A systematic analysis of the statistical properties of different sub-samples has then been performed with respect to size and location, luminosity class, and morphological type. This analysis is then extended to different scenarios of structure formation. A programme of radial velocity measurements based on observations allows possible relationships between apparent structures to be determined. The author also presents results of the search for southern extensions of the Perseus supercluster [fr
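
    The void-probability indicator described here can be estimated by Monte Carlo sphere placement; the sketch below does this for a toy periodic cube rather than the CfA survey geometry, so the catalogue, box size and radii are illustrative assumptions.

      import numpy as np
      from scipy.spatial import cKDTree

      def void_probability(points, radii, box_size, n_spheres=20000, seed=0):
          """Monte-Carlo estimate of the void probability function P0(r): the
          probability that a randomly placed sphere of radius r contains no galaxy.
          `points` is an (N, 3) array of positions in a periodic cube of side
          `box_size` (toy geometry, no survey mask)."""
          rng = np.random.default_rng(seed)
          centres = rng.uniform(0.0, box_size, size=(n_spheres, 3))
          tree = cKDTree(points, boxsize=box_size)
          # distance from each trial centre to its nearest galaxy
          d_nearest, _ = tree.query(centres, k=1)
          # a sphere of radius r is empty iff its nearest galaxy is farther than r
          return np.array([(d_nearest > r).mean() for r in radii])

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          galaxies = rng.uniform(0.0, 100.0, size=(5000, 3))   # toy Poisson catalogue
          radii = np.array([1.0, 2.0, 4.0, 8.0])
          p0 = void_probability(galaxies, radii, box_size=100.0)
          # For a Poisson process P0(r) = exp(-n * 4/3*pi*r^3); clustering raises it.
          expected = np.exp(-5000 / 100.0**3 * 4.0 / 3.0 * np.pi * radii**3)
          print(np.round(p0, 3), np.round(expected, 3))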

  14. Group Centric Networking: Large Scale Over the Air Testing of Group Centric Networking

    Science.gov (United States)

    2016-11-01

    Large Scale Over-the-Air Testing of Group Centric Networking. Logan Mercer, Greg Kuperman, Andrew Hunter, Brian Proulx (MIT Lincoln Laboratory). ...performance of Group Centric Networking (GCN), a networking protocol developed for robust and scalable communications in lossy networks where users are...devices, and the ad-hoc nature of the network. Group Centric Networking (GCN) is a proposed networking protocol that addresses challenges specific to

  15. Test methods of total dose effects in very large scale integrated circuits

    International Nuclear Information System (INIS)

    He Chaohui; Geng Bin; He Baoping; Yao Yujuan; Li Yonghong; Peng Honglun; Lin Dongsheng; Zhou Hui; Chen Yusheng

    2004-01-01

    A test method for total dose effects (TDE) in very large scale integrated circuits (VLSI) is presented. The consumption current of the devices is measured at the same time as the functional parameters of the devices (or circuits). The relation between data errors and consumption current can then be analyzed and a mechanism for TDE in VLSI proposed. Experimental results of ⁶⁰Co γ TDEs are given for SRAMs, EEPROMs, FLASH ROMs and a CPU

  16. Applicability of laboratory data to large scale tests under dynamic loading conditions

    International Nuclear Information System (INIS)

    Kussmaul, K.; Klenk, A.

    1993-01-01

    The analysis of dynamic loading and subsequent fracture must be based on reliable data for loading and deformation history. This paper describes an investigation to examine the applicability of parameters which are determined by means of small-scale laboratory tests to large-scale tests. The following steps were carried out: (1) Determination of crack initiation by means of strain gauges applied in the crack tip field of compact tension specimens. (2) Determination of dynamic crack resistance curves of CT-specimens using a modified key-curve technique. The key curves are determined by dynamic finite element analyses. (3) Determination of strain-rate-dependent stress-strain relationships for the finite element simulation of small-scale and large-scale tests. (4) Analysis of the loading history for small-scale tests with the aid of experimental data and finite element calculations. (5) Testing of dynamically loaded tensile specimens taken as strips from ferritic steel pipes with thicknesses of 13 mm and 18 mm, respectively. The strips contained slits and surface cracks. (6) Fracture mechanics analyses of the above-mentioned tests and of wide plate tests. The wide plates (960 × 608 × 40 mm³) had been tested in a propellant-driven 12 MN dynamic testing facility. For calculating the fracture mechanics parameters of both tests, a dynamic finite element simulation accounting for the dynamic material behaviour was employed. The finite element analyses showed good agreement with the tests. On this basis, critical J-integral values could be obtained. Generally the results of the large-scale tests were conservative. 19 refs., 20 figs., 4 tabs

  17. Large scale sodium-water reaction tests for Monju steam generators

    International Nuclear Information System (INIS)

    Sato, M.; Hiroi, H.; Hori, M.

    1976-01-01

    To demonstrate the safe design of the steam generator system of the prototype fast reactor Monju against the postulated large leak sodium-water reaction, a large scale test facility SWAT-3 was constructed. SWAT-3 is a 1/2.5 scale model of the Monju secondary loop on the basis of the iso-velocity modeling. Two tests have been conducted in SWAT-3 since its construction. The test items using SWAT-3 are discussed, and the description of the facility and the test results are presented

  18. Active self-testing noise measurement sensors for large-scale environmental sensor networks.

    Science.gov (United States)

    Domínguez, Federico; Cuong, Nguyen The; Reinoso, Felipe; Touhafi, Abdellah; Steenhaut, Kris

    2013-12-13

    Large-scale noise pollution sensor networks consist of hundreds of spatially distributed microphones that measure environmental noise. These networks provide historical and real-time environmental data to citizens and decision makers and are therefore a key technology to steer environmental policy. However, the high cost of certified environmental microphone sensors renders large-scale environmental networks prohibitively expensive. Several environmental network projects have started using off-the-shelf low-cost microphone sensors to reduce their costs, but these sensors have higher failure rates and produce lower quality data. To offset this disadvantage, we developed a low-cost noise sensor that actively checks its condition and, indirectly, the integrity of the data it produces. The main design concept is to embed a 13 mm speaker in the noise sensor casing and, by regularly scheduling a frequency sweep, estimate the evolution of the microphone's frequency response over time. This paper presents our noise sensor's hardware and software design together with the results of a test deployment in a large-scale environmental network in Belgium. Our mid-priced sensor (around €50) effectively detected all malfunctions encountered, in laboratory tests and outdoor deployments, with a few false positives. Future improvements could further lower the cost of our sensor below €10.
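
    A minimal sketch of the self-test idea, under the assumption that the response is estimated from a logarithmic sweep with a Welch-type cross-spectral estimate; the sample rate, sweep band, stand-in acoustic path and 3 dB tolerance are illustrative assumptions, not the authors' actual firmware.

      import numpy as np
      from scipy import signal

      FS = 16000            # sample rate [Hz]; illustrative values throughout
      DURATION = 2.0

      def measure_response(played, recorded, fs=FS):
          """Estimate the transfer function H(f) = Pxy(f)/Pxx(f) between the sweep
          sent to the built-in speaker and the signal picked up by the microphone."""
          f, pxy = signal.csd(played, recorded, fs=fs, nperseg=2048)
          _, pxx = signal.welch(played, fs=fs, nperseg=2048)
          return f, np.abs(pxy / pxx)

      def health_check(reference_db, current_db, tol_db=3.0):
          """Flag the sensor if its response drifted more than tol_db anywhere."""
          return np.max(np.abs(current_db - reference_db)) > tol_db

      if __name__ == "__main__":
          t = np.arange(0, DURATION, 1.0 / FS)
          sweep = signal.chirp(t, f0=100.0, t1=DURATION, f1=6000.0, method="logarithmic")
          # Stand-in for the acoustic path: a band-pass filter plus noise.
          b, a = signal.butter(2, [200.0, 5000.0], btype="band", fs=FS)
          recorded = signal.lfilter(b, a, sweep) + 0.01 * np.random.randn(t.size)
          f, h = measure_response(sweep, recorded)
          h_db = 20.0 * np.log10(h + 1e-12)
          print("degraded?", health_check(h_db, h_db - 0.5))   # False: drift < 3 dB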

  19. The use of test scores from large-scale assessment surveys: psychometric and statistical considerations

    Directory of Open Access Journals (Sweden)

    Henry Braun

    2017-11-01

    Full Text Available Abstract Background Economists are making increasing use of measures of student achievement obtained through large-scale survey assessments such as NAEP, TIMSS, and PISA. The construction of these measures, employing plausible value (PV methodology, is quite different from that of the more familiar test scores associated with assessments such as the SAT or ACT. These differences have important implications both for utilization and interpretation. Although much has been written about PVs, it appears that there are still misconceptions about whether and how to employ them in secondary analyses. Methods We address a range of technical issues, including those raised in a recent article that was written to inform economists using these databases. First, an extensive review of the relevant literature was conducted, with particular attention to key publications that describe the derivation and psychometric characteristics of such achievement measures. Second, a simulation study was carried out to compare the statistical properties of estimates based on the use of PVs with those based on other, commonly used methods. Results It is shown, through both theoretical analysis and simulation, that under fairly general conditions appropriate use of PV yields approximately unbiased estimates of model parameters in regression analyses of large scale survey data. The superiority of the PV methodology is particularly evident when measures of student achievement are employed as explanatory variables. Conclusions The PV methodology used to report student test performance in large scale surveys remains the state-of-the-art for secondary analyses of these databases.
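
    As a minimal sketch of the recommended use of plausible values, the snippet below applies Rubin's combining rules to M per-PV estimates; in real survey analyses the per-PV sampling variances would normally come from replicate weights, which is omitted here, and the numbers are toy values.

      import numpy as np

      def combine_plausible_values(estimates, variances):
          """Combine per-PV analysis results with Rubin's rules.

          estimates : length-M array of the statistic computed separately on each
                      set of plausible values (e.g. a regression slope)
          variances : length-M array of the corresponding sampling variances
          Returns the pooled estimate and its total variance
          (within-imputation + (1 + 1/M) * between-imputation)."""
          estimates = np.asarray(estimates, dtype=float)
          variances = np.asarray(variances, dtype=float)
          m = estimates.size
          pooled = estimates.mean()
          within = variances.mean()
          between = estimates.var(ddof=1)
          total_var = within + (1.0 + 1.0 / m) * between
          return pooled, total_var

      # Toy illustration with five plausible values of a regression coefficient.
      est = [0.42, 0.45, 0.40, 0.44, 0.43]
      var = [0.004, 0.005, 0.004, 0.005, 0.004]
      beta, v = combine_plausible_values(est, var)
      print(f"beta = {beta:.3f} +/- {np.sqrt(v):.3f}")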

  20. Analysis of ground response data at Lotung large-scale soil- structure interaction experiment site

    International Nuclear Information System (INIS)

    Chang, C.Y.; Mok, C.M.; Power, M.S.

    1991-12-01

    The Electric Power Research Institute (EPRI), in cooperation with the Taiwan Power Company (TPC), constructed two models (1/4-scale and 1/2-scale) of a nuclear plant containment structure at a site in Lotung (Tang, 1987), a seismically active region in northeast Taiwan. The models were constructed to gather data for the evaluation and validation of soil-structure interaction (SSI) analysis methodologies. Extensive instrumentation was deployed to record both structural and ground responses at the site during earthquakes. The experiment is generally referred to as the Lotung Large-Scale Seismic Test (LSST). As part of the LSST, two downhole arrays were installed at the site to record ground motions at depth as well as at the ground surface. Structural response and ground response have been recorded for a number of earthquakes (a total of 18 earthquakes in the period of October 1985 through November 1986) at the LSST site since the completion of the installation of the downhole instruments in October 1985. These data include those from earthquakes having magnitudes ranging from M_L 4.5 to M_L 7.0 and epicentral distances ranging from 4.7 km to 77.7 km. Peak ground surface accelerations range from 0.03 g to 0.21 g for the horizontal component and from 0.01 g to 0.20 g for the vertical component. The objectives of the study were: (1) to obtain empirical data on variations of earthquake ground motion with depth; (2) to examine field evidence of nonlinear soil response due to earthquake shaking and to determine the degree of soil nonlinearity; (3) to assess the ability of ground response analysis techniques, including techniques to approximate nonlinear soil response, to estimate ground motions due to earthquake shaking; and (4) to analyze earth pressures recorded beneath the basemat and on the side wall of the 1/4-scale model structure during selected earthquakes

  1. Thermal interaction in crusted melt jets with large-scale structures

    Energy Technology Data Exchange (ETDEWEB)

    Sugiyama, Ken-ichiro; Sotome, Fuminori; Ishikawa, Michio [Hokkaido Univ., Sapporo (Japan). Faculty of Engineering

    1998-01-01

    The objective of the present study is to experimentally observe thermal interaction capable of being triggered by entrainment or entrapment in crusted melt jets with 'large-scale structure'. The experiment was carried out by dropping 100 grams of molten zinc or molten tin, a mass sufficient to generate large-scale structures in the melt jets. The experimental results show that entrapment-type thermal interaction occurs only rarely in molten zinc jets, whereas entrainment-type thermal interaction occurs with high probability in molten tin jets. The difference in thermal interaction between molten zinc and molten tin may be attributed to the differences in kinematic viscosity and melting point between them. (author)

  2. On the Soft Limit of the Large Scale Structure Power Spectrum: UV Dependence

    CERN Document Server

    Garny, Mathias; Porto, Rafael A; Sagunski, Laura

    2015-01-01

    We derive a non-perturbative equation for the large scale structure power spectrum of long-wavelength modes. To this end, we use an operator product expansion together with relations between the three-point function and the power spectrum in the soft limit. The resulting equation encodes the coupling to ultraviolet (UV) modes in two time-dependent coefficients, which may be obtained from response functions to (anisotropic) parameters, such as spatial curvature, in a modified cosmology. We argue that both depend weakly on fluctuations deep in the UV. As a byproduct, this implies that the renormalized leading order coefficient(s) in the effective field theory (EFT) of large scale structures receive most of their contribution from modes close to the non-linear scale. Consequently, the UV dependence found in explicit computations within standard perturbation theory stems mostly from counter-term(s). We confront a simplified version of our non-perturbative equation with existing numerical simulations, and find good agr...

  3. Identification of Large-Scale Structure Fluctuations in IC Engines using POD-Based Conditional Averaging

    Directory of Open Access Journals (Sweden)

    Buhl Stefan

    2016-01-01

    Full Text Available Cycle-to-Cycle Variations (CCV) in IC engines are a well-known phenomenon and their definition and quantification are well-established for global quantities such as the mean pressure. On the other hand, the definition of CCV for local quantities, e.g. the velocity or the mixture distribution, is less straightforward. This paper proposes a new method to identify and calculate cyclic variations of the flow field in IC engines emphasizing the different contributions from large-scale energetic (coherent) structures, identified by a combination of Proper Orthogonal Decomposition (POD) and conditional averaging, and small-scale fluctuations. Suitable subsets required for the conditional averaging are derived from combinations of the POD coefficients of the second and third mode. Within each subset, the velocity is averaged and these averages are compared to the ensemble-averaged velocity field, which is based on all cycles. The resulting difference between the subset average and the global average is identified as a cyclic fluctuation of the coherent structures. Then, within each subset, remaining fluctuations are obtained from the difference between the instantaneous fields and the corresponding subset average. The proposed methodology is tested for two data sets obtained from scale resolving engine simulations. For the first test case, the numerical database consists of 208 independent samples of a simplified engine geometry. For the second case, 120 cycles for the well-established Transparent Combustion Chamber (TCC) benchmark engine are considered. For both applications, the suitability of the method to identify the two contributions to CCV is discussed and the results are directly linked to the observed flow field structures.
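
    A compact sketch of the decomposition described above, assuming snapshot POD via an SVD and a simple phase-angle binning of the second and third mode coefficients; the exact subset definition used by the authors may differ, and the data below are random stand-ins for engine cycles.

      import numpy as np

      def pod_conditional_average(snapshots, n_bins=4):
          """Split cycle-to-cycle variation into a coherent (large-scale) part and a
          residual part via POD plus conditional averaging.  `snapshots` is an
          (n_cycles, n_points) array of velocity fields, one row per cycle.  The
          binning (phase angle of the 2nd/3rd POD coefficients) is an illustrative
          choice of subsets."""
          mean = snapshots.mean(axis=0)
          fluct = snapshots - mean

          # Snapshot POD via SVD: rows of vt are spatial modes, coefficients a = u*s.
          u, s, vt = np.linalg.svd(fluct, full_matrices=False)
          coeffs = u * s                                 # (n_cycles, n_modes)

          # Condition on the phase of modes 2 and 3 (0-based indices 1 and 2).
          phase = np.arctan2(coeffs[:, 2], coeffs[:, 1])
          labels = np.digitize(phase, np.linspace(-np.pi, np.pi, n_bins + 1)[1:-1])

          subset_means = np.array([snapshots[labels == b].mean(axis=0)
                                   for b in range(n_bins)])
          coherent = subset_means[labels] - mean         # large-scale CCV per cycle
          residual = snapshots - subset_means[labels]    # small-scale fluctuations
          return mean, coherent, residual, labels

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          data = rng.standard_normal((120, 500))         # stand-in for 120 cycles
          mean, coh, res, lab = pod_conditional_average(data)
          print(coh.shape, res.shape, np.bincount(lab, minlength=4))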

  4. Evidence for non-Abelian dark matter from large scale structure?

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    If dark matter multiplicity arises from a weakly coupled non-Abelian dark gauge group, the corresponding "dark gluons" can have interesting signatures in cosmology, which I will review: (1) the "dark gluons" contribute to the radiation content of the universe, and (2) gluon interactions with the dark matter may explain the >3 sigma discrepancy between precision fits to the CMB from Planck and direct measurements of large scale structure in the universe.

  5. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    Science.gov (United States)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10⁴, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = −0.0909 to Rot = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase for increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, “over-damped” LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
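
    For reference, the static Smagorinsky closure mentioned above (c_s = 0.1) computes a subgrid eddy viscosity from the resolved strain rate; the sketch below evaluates that relation on a toy velocity gradient, with "over-damped" LES corresponding simply to a larger constant. The filter width and shear value are illustrative assumptions.

      import numpy as np

      def smagorinsky_viscosity(grad_u, delta, cs=0.1):
          """Static Smagorinsky subgrid viscosity nu_t = (cs*delta)^2 * |S|, with
          |S| = sqrt(2*S_ij*S_ij) and S the symmetric part of the resolved velocity
          gradient.  grad_u has shape (..., 3, 3) with grad_u[..., i, j] = du_i/dx_j;
          delta is the filter width."""
          s_ij = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))
          s_mag = np.sqrt(2.0 * np.einsum('...ij,...ij->...', s_ij, s_ij))
          return (cs * delta) ** 2 * s_mag

      # Toy check: a pure shear du/dy = 10 1/s gives |S| = 10 and
      # nu_t = (0.1 * 0.01)^2 * 10 = 1e-5 m^2/s for a 1 cm filter width.
      g = np.zeros((3, 3)); g[0, 1] = 10.0
      print(smagorinsky_viscosity(g, delta=0.01))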

  6. How CMB and large-scale structure constrain chameleon interacting dark energy

    International Nuclear Information System (INIS)

    Boriero, Daniel; Das, Subinoy; Wong, Yvonne Y.Y.

    2015-01-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H_0 tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H_0 value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys

  7. Topology of Large-Scale Structure by Galaxy Type: Hydrodynamic Simulations

    Science.gov (United States)

    Gott, J. Richard, III; Cen, Renyue; Ostriker, Jeremiah P.

    1996-07-01

    The topology of large-scale structure is studied as a function of galaxy type using the genus statistic. In hydrodynamical cosmological cold dark matter simulations, galaxies form on caustic surfaces (Zeldovich pancakes) and then slowly drain onto filaments and clusters. The earliest forming galaxies in the simulations (defined as "ellipticals") are thus seen at the present epoch preferentially in clusters (tending toward a meatball topology), while the latest forming galaxies (defined as "spirals") are seen currently in a spongelike topology. The topology is measured by the genus (number of "doughnut" holes minus number of isolated regions) of the smoothed density-contour surfaces. The measured genus curve for all galaxies as a function of density obeys approximately the theoretical curve expected for random-phase initial conditions, but the early-forming elliptical galaxies show a shift toward a meatball topology relative to the late-forming spirals. Simulations using standard biasing schemes fail to show such an effect. Large observational samples separated by galaxy type could be used to test for this effect.
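
    The genus curve of a Gaussian (random-phase) field follows the analytic form g(ν) ∝ (1 − ν²) exp(−ν²/2); one common way to quantify a meatball or bubble tendency is to fit a shifted version of this curve to the measured genus. The sketch below fits such a shift to synthetic data; the amplitude, noise level and sign convention for the shift are illustrative assumptions, not the values or convention of the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian_genus(nu, amplitude, shift):
          """Shifted random-phase genus curve
          g(nu) = A * (1 - (nu - shift)^2) * exp(-(nu - shift)^2 / 2).
          A nonzero shift relative to the Gaussian curve is used to quantify
          departures toward meatball or bubble topologies (sign convention varies
          with how nu is defined)."""
          x = nu - shift
          return amplitude * (1.0 - x ** 2) * np.exp(-0.5 * x ** 2)

      if __name__ == "__main__":
          # Fit amplitude and shift to a synthetic "measured" genus curve.
          nu = np.linspace(-3, 3, 25)
          measured = gaussian_genus(nu, 120.0, 0.2)                        # toy data
          measured += np.random.default_rng(3).normal(0.0, 3.0, nu.size)   # noise
          (amp, dnu), _ = curve_fit(gaussian_genus, nu, measured, p0=[100.0, 0.0])
          print(f"fitted amplitude = {amp:.1f}, shift dnu = {dnu:.2f}")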

  8. Hierarchical system for autonomous sensing-healing of delamination in large-scale composite structures

    International Nuclear Information System (INIS)

    Minakuchi, Shu; Sun, Denghao; Takeda, Nobuo

    2014-01-01

    This study combines our hierarchical fiber-optic-based delamination detection system with a microvascular self-healing material to develop the first autonomous sensing-healing system applicable to large-scale composite structures. In this combined system, embedded vascular modules are connected through check valves to a surface-mounted supply tube of a pressurized healing agent while fiber-optic-based sensors monitor the internal pressure of these vascular modules. When delamination occurs, the healing agent flows into the vascular modules breached by the delamination and infiltrates the damage for healing. At the same time, the pressure sensors identify the damaged modules by detecting internal pressure changes. This paper begins by describing the basic concept of the combined system and by discussing the advantages that arise from its hierarchical nature. The feasibility of the system is then confirmed through delamination infiltration tests. Finally, the hierarchical system is validated in a plate specimen by focusing on the detection and infiltration of the damage. Its self-diagnostic function is also demonstrated. (paper)

  9. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Full Text Available Abstract Background Large-scale protein structure alignment, an indispensable tool in structural bioinformatics, poses a tremendous challenge to computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign can incorporate many existing methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues arising from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massively parallel computing power of GPUs.

  10. Simultaneous effect of modified gravity and primordial non-Gaussianity in large scale structure observations

    International Nuclear Information System (INIS)

    Mirzatuny, Nareg; Khosravi, Shahram; Baghram, Shant; Moshafi, Hossein

    2014-01-01

    In this work we study the simultaneous effect of primordial non-Gaussianity and of modified gravity in the f(R) framework on large scale structure observations. We show that non-Gaussianity and modified gravity introduce scale-dependent bias and growth rate functions. The deviation from ΛCDM occurs on large scales for primordial non-Gaussian models, while the growth rate deviates from ΛCDM on small scales for modified gravity theories. We show that redshift space distortions can be used to distinguish positive and negative f_NL in a standard background, while in f(R) theories they are not easily distinguishable. The galaxy power spectrum is generally enhanced in the presence of non-Gaussianity and modified gravity. We also obtain the scale dependence of this enhancement. Finally we define the galaxy growth rate and the galaxy growth rate bias as new observational parameters to constrain cosmology
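
    For the non-Gaussian part only, one commonly quoted form of the scale-dependent bias correction (of the Dalal et al. 2008 type) is sketched below; growth-factor and transfer-function normalisation conventions differ between papers, so the cosmological numbers and the resulting amplitudes are illustrative rather than the values used in this work.

      import numpy as np

      C_KM_S = 299792.458  # speed of light [km/s]

      def delta_b_local_fnl(k, f_nl, b1, omega_m=0.31, h=0.67, growth=0.79,
                            transfer=None, delta_c=1.686):
          """One commonly quoted form of the scale-dependent halo-bias correction
          from local primordial non-Gaussianity:

              delta_b(k) = 3 * f_nl * (b1 - 1) * delta_c * Omega_m * H0^2
                           / (c^2 * k^2 * T(k) * D(z))

          k in h/Mpc; T(k) -> 1 on large scales; D(z) is the linear growth factor
          (here normalised so D ~ a in matter domination).  Treat the numbers as
          illustrative only, since normalisation conventions differ."""
          k = np.asarray(k, dtype=float)
          h0 = 100.0 * h                       # H0 in km/s/Mpc
          t_k = np.ones_like(k) if transfer is None else transfer(k)
          k_mpc = k * h                        # convert h/Mpc -> 1/Mpc
          return (3.0 * f_nl * (b1 - 1.0) * delta_c * omega_m * h0**2
                  / (C_KM_S**2 * k_mpc**2 * t_k * growth))

      if __name__ == "__main__":
          k = np.array([0.001, 0.01, 0.1])     # h/Mpc
          print(delta_b_local_fnl(k, f_nl=10.0, b1=2.0))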

  11. Energetics and Structural Characterization of the large-scale Functional Motion of Adenylate Kinase

    Science.gov (United States)

    Formoso, Elena; Limongelli, Vittorio; Parrinello, Michele

    2015-02-01

    Adenylate Kinase (AK) is a signal transducing protein that regulates cellular energy homeostasis balancing between different conformations. An alteration of its activity can lead to severe pathologies such as heart failure, cancer and neurodegenerative diseases. A comprehensive elucidation of the large-scale conformational motions that rule the functional mechanism of this enzyme is of great value to guide rationally the development of new medications. Here using a metadynamics-based computational protocol we elucidate the thermodynamics and structural properties underlying the AK functional transitions. The free energy estimation of the conformational motions of the enzyme allows characterizing the sequence of events that regulate its action. We reveal the atomistic details of the most relevant enzyme states, identifying residues such as Arg119 and Lys13, which play a key role during the conformational transitions and represent druggable spots to design enzyme inhibitors. Our study offers tools that open new areas of investigation on large-scale motion in proteins.

  12. Observing the temperature of the big bang through large scale structure

    Science.gov (United States)

    Ferreira, Pedro G.; Magueijo, João

    2008-09-01

    It is an interesting possibility that the Universe underwent a period of thermal equilibrium at very early times. One expects a residue of this primordial state to be imprinted on the large scale structure of space time. In this paper, we study the morphology of this thermal residue in a universe whose early dynamics is governed by a scalar field. We calculate the amplitude of fluctuations on large scales and compare it with the imprint of vacuum fluctuations. We then use the observed power spectrum of fluctuations on the cosmic microwave background to place a constraint on the temperature of the Universe before and during inflation. We also present an alternative scenario, where the fluctuations are predominantly thermal and near scale-invariant.

  13. Probing cosmology with the homogeneity scale of the Universe through large scale structure surveys

    International Nuclear Information System (INIS)

    Ntelis, Pierros

    2017-01-01

    This thesis presents my contribution to the measurement of the homogeneity scale using galaxies, together with the cosmological interpretation of the results. In physics, any model is characterized by a set of principles. Most models in cosmology are based on the Cosmological Principle, which states that the universe is statistically homogeneous and isotropic on large scales. Today, this principle is considered to be true since it is respected by those cosmological models that accurately describe the observations. However, while the isotropy of the universe is now confirmed by many experiments, this is not the case for homogeneity. To study cosmic homogeneity, we propose not only to test a model but to test directly one of the postulates of modern cosmology. Since the 1998 measurements of cosmic distances using type Ia supernovae, we have known that the universe is in a phase of accelerated expansion. This phenomenon can be explained by the addition of an unknown energy component, called dark energy. Since dark energy is responsible for the expansion of the universe, we can study this mysterious fluid by measuring the rate of expansion of the universe. The universe has imprinted in its matter distribution a standard ruler, the Baryon Acoustic Oscillation (BAO) scale. By measuring this scale at different times in the evolution of our universe, it is possible to measure the rate of expansion of the universe and thus to characterize this dark energy. Alternatively, we can use the homogeneity scale to study this dark energy. Studying homogeneity and the BAO scale requires the statistical study of the matter distribution of the universe on large scales, larger than tens of megaparsecs. Galaxies and quasars form in the vast overdensities of matter and they are very luminous: these sources trace the distribution of matter. By measuring the emission spectra of these sources using large spectroscopic surveys, such as BOSS and eBOSS, we can measure their positions
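
    A bare-bones sketch of a counts-in-spheres homogeneity-scale estimate, assuming a toy periodic cube and a 1%-of-unity convergence criterion (a convention used in several homogeneity studies); the real analysis in this thesis uses survey masks, weights and covariance estimates that are not reproduced here.

      import numpy as np
      from scipy.spatial import cKDTree

      def scaled_counts_in_spheres(galaxies, randoms, radii):
          """Scaled counts-in-spheres N(<r) = <N_gal(<r)> / <N_ran(<r)>, with the
          random catalogue standing in for a homogeneous distribution with the same
          geometry.  Centres are taken on the galaxies themselves."""
          gal_tree = cKDTree(galaxies)
          ran_tree = cKDTree(randoms)
          ratio_norm = len(randoms) / len(galaxies)
          out = []
          for r in radii:
              n_gal = gal_tree.query_ball_point(galaxies, r, return_length=True) - 1
              n_ran = ran_tree.query_ball_point(galaxies, r, return_length=True)
              out.append(n_gal.mean() * ratio_norm / n_ran.mean())
          return np.array(out)

      def homogeneity_scale(radii, scaled_n, threshold=0.01):
          """First radius in the list at which the scaled counts are within
          `threshold` of unity (a 1% criterion of this kind is a common convention)."""
          ok = np.abs(scaled_n - 1.0) < threshold
          return radii[np.argmax(ok)] if ok.any() else None

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          gal = rng.uniform(0, 500, size=(20000, 3))      # toy homogeneous cube
          ran = rng.uniform(0, 500, size=(40000, 3))
          radii = np.array([10.0, 20.0, 40.0, 80.0])
          n_scaled = scaled_counts_in_spheres(gal, ran, radii)
          print(n_scaled, homogeneity_scale(radii, n_scaled))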

  14. LARGE-SCALE STRUCTURE OF THE UNIVERSE AS A COSMIC STANDARD RULER

    International Nuclear Information System (INIS)

    Park, Changbom; Kim, Young-Rae

    2010-01-01

    We propose to use the large-scale structure (LSS) of the universe as a cosmic standard ruler. This is possible because the pattern of large-scale distribution of matter is scale-dependent and does not change in comoving space during the linear-regime evolution of structure. By examining the pattern of LSS in several redshift intervals it is possible to reconstruct the expansion history of the universe, and thus to measure the cosmological parameters governing the expansion of the universe. The features of the large-scale matter distribution that can be used as standard rulers include the topology of LSS and the overall shapes of the power spectrum and correlation function. The genus, being an intrinsic topology measure, is insensitive to systematic effects such as the nonlinear gravitational evolution, galaxy biasing, and redshift-space distortion, and thus is an ideal cosmic ruler when galaxies in redshift space are used to trace the initial matter distribution. The genus remains unchanged as far as the rank order of density is conserved, which is true for linear and weakly nonlinear gravitational evolution, monotonic galaxy biasing, and mild redshift-space distortions. The expansion history of the universe can be constrained by comparing the theoretically predicted genus corresponding to an adopted set of cosmological parameters with the observed genus measured by using the redshift-comoving distance relation of the same cosmological model.

  15. Planetary Structures And Simulations Of Large-scale Impacts On Mars

    Science.gov (United States)

    Swift, Damian; El-Dasher, B.

    2009-09-01

    The impact of large meteoroids is a possible cause of isolated orogeny on bodies devoid of tectonic activity. On Mars, there is a significant, but not perfect, correlation between large, isolated volcanoes and antipodal impact craters. On Mercury and the Moon, brecciated terrain and other unusual surface features can be found at the antipodes of large impact sites. On Earth, there is a moderate correlation between long-lived mantle hotspots on opposite sides of the planet, with meteoroid impact suggested as a possible cause. If induced by impacts, the mechanisms of orogeny and volcanism thus appear to vary between these bodies, presumably because of differences in internal structure. Continuum mechanics (hydrocode) simulations have been used to investigate the response of planetary bodies to impacts, requiring assumptions about the structure of the body: its composition and temperature profile, and the constitutive properties (equation of state, strength, viscosity) of the components. We are able to predict theoretically and test experimentally the constitutive properties of matter under planetary conditions, with reasonable accuracy. To provide a reference series of simulations, we have constructed self-consistent planetary structures using simplified compositions (Fe core and basalt-like mantle), which turn out to agree surprisingly well with the moments of inertia. We have performed simulations of large-scale impacts, studying the transmission of energy to the antipodes. For Mars, significant antipodal heating to depths of a few tens of kilometers was predicted from compression waves transmitted through the mantle. Such heating is a mechanism for volcanism on Mars, possibly in conjunction with crustal cracking induced by surface waves. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  16. Correlation analysis for forced vibration test of the Hualien large scale seismic test (LSST) program

    International Nuclear Information System (INIS)

    Sugawara, Y.; Sugiyama, T.; Kobayashi, T.; Yamaya, H.; Kitamura, E.

    1995-01-01

    The correlation analysis for a forced vibration test of a 1/4-scale containment SSI test model constructed in Hualien, Taiwan was carried out for the post-backfill condition. Prior to this correlation analysis, the structural properties were revised so that the calculated fundamental frequency in the fixed-base condition matched that derived from the test results. The correlation analysis was carried out using the Lattice Model, which is able to represent the soil-structure interaction effects of an embedded structure. The analysis results agree well with the test results, and it is concluded that the mathematical soil-structure interaction model established by the correlation analysis is effective in estimating the dynamic soil-structure interaction effect with embedment. This mathematical model will be used as the basic model for simulation analyses of earthquake observation records. (author). 3 refs., 12 figs., 2 tabs

  17. Large-scale tests of aqueous scrubber systems for LMFBR vented containment

    International Nuclear Information System (INIS)

    McCormack, J.D.; Hilliard, R.K.; Postma, A.K.

    1980-01-01

    Six large-scale air cleaning tests performed in the Containment Systems Test Facility (CSTF) are described. The test conditions simulated those postulated for hypothetical accidents in an LMFBR involving containment venting to control hydrogen concentration and containment overpressure. Sodium aerosols were generated by continuously spraying sodium into air and adding steam and/or carbon dioxide to create the desired Na₂O₂, Na₂CO₃ or NaOH aerosol. Two air cleaning systems were tested: (a) spray quench chamber, eductor venturi scrubber and high efficiency fibrous scrubber in series; and (b) the same except with the spray quench chamber eliminated. The gas flow rates ranged up to 0.8 m³/s (1700 acfm) at temperatures up to 313 °C (600 °F). Quantities of aerosol removed from the gas stream ranged up to 700 kg per test. The systems performed very satisfactorily, with overall aerosol mass removal efficiencies exceeding 99.9% in each test

  18. Large scale FCI experiments in subassembly geometry. Test facility and model experiments

    International Nuclear Information System (INIS)

    Beutel, H.; Gast, K.

    A program is outlined for the study of fuel/coolant interaction under SNR conditions. The program consists of (a) underwater explosion experiments with full-size models of the SNR core, in which the fuel/coolant system is simulated by a pyrotechnic mixture, and (b) large scale fuel/coolant interaction experiments with up to 5 kg of molten UO₂ interacting with liquid sodium at 300 °C to 600 °C in a highly instrumented test facility simulating an SNR subassembly. The experimental results will be compared with theoretical models under development at Karlsruhe. The experiments are expected to commence at the beginning of 1975

  19. Large scale reflood test with cylindrical core test facility (CCTF). Core I. FY 1979 tests

    International Nuclear Information System (INIS)

    Murao, Yoshio; Akimoto, Hajime; Okubo, Tsutomu; Sudoh, Takashi; Hirano, Kenmei

    1982-03-01

    This report presents, as an interim report, the results of the analysis of the data obtained in the CCTF Core I test series (19 tests) in FY 1979. The analysis of the test results showed that: (1) the present safety evaluation model for the reflood phenomena during a LOCA conservatively represents the phenomena observed in the tests, except for the downcomer thermohydrodynamic behavior; (2) the downcomer liquid level rose slowly and it took a long time for the water to reach a terminal level or the spill-over level. It is presumed that this result was due to an overly conservative selection of the ECC flow rate. This presumption will be checked against a future test with an increased flow rate. The loop-seal water filling test was unsuccessful due to a premature power shutdown by the core protection circuit. The test will be conducted again. The tests to be performed in the future are summarized. Tests for investigation of the refill phenomena are also proposed. (author)

  20. On the renormalization of the effective field theory of large scale structures

    International Nuclear Information System (INIS)

    Pajer, Enrico; Zaldarriaga, Matias

    2013-01-01

    Standard perturbation theory (SPT) for large-scale matter inhomogeneities is unsatisfactory for at least three reasons: there is no clear expansion parameter since the density contrast is not small on all scales; it does not fully account for deviations at large scales from a perfect pressureless fluid induced by short-scale non-linearities; for generic initial conditions, loop corrections are UV-divergent, making predictions cutoff dependent and hence unphysical. The Effective Field Theory of Large Scale Structures successfully addresses all three issues. Here we focus on the third one and show explicitly that the terms induced by integrating out short scales, neglected in SPT, have exactly the right scale dependence to cancel all UV-divergences at one loop, and this should hold at all loops. A particularly clear example is an Einstein-de Sitter universe with no-scale initial conditions P_in ∼ k^n. After renormalizing the theory, we use self-similarity to derive a very simple result for the final power spectrum for any n, excluding two-loop corrections and higher. We show how the relative importance of different corrections depends on n. For n ∼ −1.5, relevant for our universe, pressure and dissipative corrections are more important than the two-loop corrections

  1. On the renormalization of the effective field theory of large scale structures

    Energy Technology Data Exchange (ETDEWEB)

    Pajer, Enrico [Department of Physics, Princeton University, Princeton, NJ 08544 (United States); Zaldarriaga, Matias, E-mail: enrico.pajer@gmail.com, E-mail: matiasz@ias.edu [Institute for Advanced Study, Princeton, NJ 08544 (United States)

    2013-08-01

    Standard perturbation theory (SPT) for large-scale matter inhomogeneities is unsatisfactory for at least three reasons: there is no clear expansion parameter since the density contrast is not small on all scales; it does not fully account for deviations at large scales from a perfect pressureless fluid induced by short-scale non-linearities; for generic initial conditions, loop corrections are UV-divergent, making predictions cutoff dependent and hence unphysical. The Effective Field Theory of Large Scale Structures successfully addresses all three issues. Here we focus on the third one and show explicitly that the terms induced by integrating out short scales, neglected in SPT, have exactly the right scale dependence to cancel all UV-divergences at one loop, and this should hold at all loops. A particularly clear example is an Einstein-de Sitter universe with no-scale initial conditions P_in ∼ k^n. After renormalizing the theory, we use self-similarity to derive a very simple result for the final power spectrum for any n, excluding two-loop corrections and higher. We show how the relative importance of different corrections depends on n. For n ∼ −1.5, relevant for our universe, pressure and dissipative corrections are more important than the two-loop corrections.

  2. Large scale gas injection test (Lasgit): Results from two gas injection tests

    International Nuclear Information System (INIS)

    Cuss, R. J.; Harrington, J. F.; Noy, D. J.; Wikman, A.; Sellin, P.

    2011-01-01

    This paper describes the initial results from a large scale gas injection test (Lasgit) performed at the Aespoe Hard Rock Laboratory (Sweden). Lasgit is a full-scale field experiment based on the Swedish KBS-3V repository concept, examining the processes controlling gas and water flow in compact buffer bentonite. The first 2 years of the test focused on the artificial hydration of the bentonite buffer. This was followed by a programme of hydraulic and gas injection tests which ran from day 843 to 1110. A further period of artificial hydration occurred from day 1110 to 1385, followed by a more complex programme of gas injection testing which remains ongoing (day 1385+). After 2 years of hydration, hydraulic conductivity and specific storage values in the lower filter array were found to range from 9 × 10⁻¹⁴ to 1.6 × 10⁻¹³ m/s and 5.5 × 10⁻⁵ to 4.4 × 10⁻⁴ m⁻¹ respectively, with the injection filter FL903 yielding values of 7.5 × 10⁻¹⁴ m/s and 2.5 × 10⁻⁵ m⁻¹. A second set of hydraulic measurements was performed over a year and a half later, yielding similar values in the range 7.8 × 10⁻¹⁴ to 1.3 × 10⁻¹³ m/s. The hydraulic conductivity of FL903 had reduced slightly to 5.3 × 10⁻¹⁴ m/s while specific storage had increased to 4.0 × 10⁻⁵ m⁻¹. Both datasets agree with laboratory values measured on small-scale saturated samples. Two sets of gas injection tests were performed over a 3 year period. During the course of testing, gas entry pressure was found to increase from around 650 kPa to approximately 1.3 MPa, indicative of the maturation of the clay. The sequential reduction in volumetric flow rate and the lack of correlation between the rate of gas inflow and the gas pressure gradient observed during constant pressure steps prior to major gas entry are suggestive of a reduction in the gas permeability of the buffer and indicate that only limited quantities of gas can be injected into the clay without interacting with the continuum stress field. Major gas

  3. In situ vitrification: Preliminary results from the first large-scale radioactive test

    International Nuclear Information System (INIS)

    Buelt, J.L.; Westsik, J.H.

    1988-02-01

    The first large-scale radioactive test (LSRT) of In Situ Vitrification (ISV) has been completed. In Situ Vitrification is a process whereby joule heating immobilizes contaminated soil in place by converting it to a durable glass and crystalline waste form. The LSRT was conducted at an actual transuranic contaminated soil site on the Department of Energy's Hanford Site. The test had two objectives: (1) determine large-scale processing performance and (2) produce a waste form that can be fully evaluated as a potential technique for the final disposal of transuranic-contaminated soil sites at Hanford. This accomplishment has provided technical data to evaluate the ISV process for its potential in the final disposition of transuranic-contaminated soil sites at Hanford. Because of the test's successful completion, within a year technical data on the vitrified soil will be available to determine how well the process incorporates transuranics into the waste form and how well the form resists leaching of transuranics. Preliminary results available include retention of transuranics and other elements within the waste form during processing and the efficiency of the off-gas treatment system in removing contaminants from the gaseous effluents. 13 refs., 10 figs., 5 tabs

  4. Novel material and structural design for large-scale marine protective devices

    International Nuclear Information System (INIS)

    Qiu, Ang; Lin, Wei; Ma, Yong; Zhao, Chengbi; Tang, Youhong

    2015-01-01

    Highlights: • Large-scale protective devices with different structural designs have been optimized. • Large-scale protective devices with novel material designs have been optimized. • Protective devices constructed of sandwich panels have the best anti-collision performance. • Protective devices with novel material design can reduce weight and construction cost. - Abstract: Large-scale protective devices must endure the impact of severe forces, large structural deformation, increased stress and strain-rate effects, and multiple coupling effects. In evaluating the safety of the conceptual design through simulation, the key parameters considered in this research are the maximum impact force, the energy dissipated by the impactor (e.g. a ship), the energy absorbed by the device, and the impactor stroke. During impact, the main function of the ring beam structure is to resist and buffer the impact force between ship and bridge pile caps, which should guarantee that the magnitude of the impact force meets the corresponding requirements. Anti-collision performance can be improved by increasing the strength of the beam section or by replacing the steel with novel fiber reinforced polymer laminates. The main function of the buoyancy tank is to absorb and transfer the ship’s kinetic energy through large plastic deformation, damage, or friction occurring within itself. The energy absorption effect can be improved by structural optimization or by the use of new sandwich panels. Structural and material optimization schemes are proposed on the basis of the conceptual design in this research, and protective devices constructed of sandwich panels prove to have the best anti-collision performance

  5. Proceedings of the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing

    International Nuclear Information System (INIS)

    1993-10-01

    This report provides the proceedings of a Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing that was held in Oak Ridge, Tennessee, on October 23-25, 1992. The meeting was jointly sponsored by the International Atomic Energy Agency (IAEA) and the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development. In particular, the International Working Group (IWG) on Life Management of Nuclear Power Plants (LMNPP) was the IAEA sponsor, and the Principal Working Group 3 (PWG-3) (Primary System Component Integrity) of the Committee for the Safety of Nuclear Installations (CSNI) was the NEA's sponsor. This meeting was preceded by two prior international activities that were designed to examine the state of the art in fracture analysis capabilities, with emphasis on applications to the safety evaluation of nuclear power facilities. The first of those two activities was an IAEA Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing that was held at the Staatliche Materialprüfungsanstalt (MPA) in Stuttgart, Germany, on May 25-27, 1988; the proceedings of that meeting were published in 1991. The second activity was the CSNI/PWG-3's Fracture Assessment Group's Project FALSIRE (Fracture Analyses of Large-Scale International Reference Experiments). The proceedings of the FALSIRE workshop that was held in Boston, Massachusetts, U.S.A., on May 8-10, 1990, were recently published by the Oak Ridge National Laboratory (ORNL). Those previous activities identified capabilities and shortcomings of various fracture analysis methods based on analyses of six available large-scale experiments. Different modes of fracture behavior, which ranged from brittle to ductile, were considered. In addition, geometry, size, constraint and multiaxial effects were considered. While generally good predictive capabilities were demonstrated for brittle fracture, issues were identified relative to predicting fracture behavior at higher

  6. ROSA-IV Large Scale Test Facility (LSTF) system description for second simulated fuel assembly

    International Nuclear Information System (INIS)

    1990-10-01

    The ROSA-IV Program's Large Scale Test Facility (LSTF) is a test facility for integral simulation of thermal-hydraulic response of a pressurized water reactor (PWR) during small break loss-of-coolant accidents (LOCAs) and transients. In this facility, the PWR core nuclear fuel rods are simulated using electric heater rods. The simulated fuel assembly which was installed during the facility construction was replaced with a new one in 1988. The first test with this second simulated fuel assembly was conducted in December 1988. This report describes the facility configuration and characteristics as of this date (December 1988) including the new simulated fuel assembly design and the facility changes which were made during the testing with the first assembly as well as during the renewal of the simulated fuel assembly. (author)

  7. In Situ Vitrification preliminary results from the first large-scale radioactive test

    International Nuclear Information System (INIS)

    Buelt, J.L.; Westsik, J.H.

    1988-01-01

    The first large-scale radioactive test (LSRT) of In Situ Vitrification (ISV) has been completed. In Situ Vitrification is a process whereby joule heating immobilizes contaminated soil in place by converting it to a durable glass and crystalline waste form. The LSRT was conducted at an actual transuranic contaminated soil site on the Department of Energy's Hanford Site. The test had two objectives: (1) determine large-scale processing performance and (2) produce a waste form that can be fully evaluated as a potential technique for the final disposal of transuranic-contaminated soil sites at Hanford. This accomplishment has provided technical data to evaluate the ISV process for its potential in the final disposition of transuranic-contaminated soil sites at Hanford. The LSRT was completed in June 1987 after 295 hours of operation and 460 MWh of electrical energy dissipated to the molten soil. This resulted in a minimum of a 450-t block of vitrified soil extending to a depth of 7.3 m (24 ft). The primary contaminants vitrified during the demonstration were the transuranics Pu and Am, but the soil also included up to 26,000 ppm fluorides. Preliminary data show that their retention in the vitrified product exceeded predictions, meaning that fewer contaminants needed to be removed from the gaseous effluents by the processing equipment. The gaseous effluents were contained and treated throughout the run; that is, no radioactive or hazardous chemical releases were detected

  8. Re-evaluation of the 1995 Hanford Large Scale Drum Fire Test Results

    International Nuclear Information System (INIS)

    Yang, J M

    2007-01-01

    A large-scale drum performance test was conducted at the Hanford Site in June 1995, in which over one hundred (100) 55-gal drums in each of two storage configurations were subjected to severe fuel pool fires. The two storage configurations in the test were pallet storage and rack storage. The description and results of the large-scale drum test at the Hanford Site were reported in WHC-SD-WM-TRP-246, "Solid Waste Drum Array Fire Performance," Rev. 0, 1995. This was one of the main references used to develop the analytical methodology to predict drum failures in WHC-SD-SQA-ANAL-501, "Fire Protection Guide for Waste Drum Storage Array," September 1996. Three drum failure modes were observed from the test reported in WHC-SD-WM-TRP-246. They consisted of seal failure, lid warping, and catastrophic lid ejection. There was no discernible failure criterion that distinguished one failure mode from another. Hence, all three failure modes were treated equally for the purpose of determining the number of failed drums. General observations from the results of the test are as follows: • Trash expulsion was negligible. • Flame impingement was identified as the main cause for failure. • The range of drum temperatures at failure was 600 °C to 800 °C. This is above the yield strength temperature for steel, approximately 540 °C (1,000 °F). • The critical heat flux required for failure is above 45 kW/m². • Fire propagation from one drum to the next was not observed. The statistical evaluation of the test results using, for example, the Student's t-distribution, will demonstrate that the failure criteria for TRU waste drums currently employed at nuclear facilities are very conservative relative to the large-scale test results. Hence, the safety analysis utilizing the general criteria described in the five bullets above will lead to a technically robust and defensible product that bounds the potential consequences from postulated

  9. Testing and qualification of CIRCE venturi-nozzle flow meter for large scale experiments

    International Nuclear Information System (INIS)

    Ambrosini, W.; Forgione, N.; Oriolo, F.; Tarantino, M.; Agostini, P.; Benamati, G.; Bertacci, G.; Elmi, N.; Alemberti, A.; Cinotti, L.; Scaddozzo, G.

    2005-01-01

    This paper focuses on the tests carried out at the ENEA Brasimone Centre for the qualification of a large Venturi-nozzle flow meter operating in Lead Bismuth Eutectic (LBE). This flow meter has been selected to provide flow rate measurements during the thermal-hydraulic tests that will be performed on the experimental facility CIRCE. This large-scale facility is installed at the ENEA Brasimone Centre for studying the fluid dynamics and operating behaviour of ADS reactor plants, as well as for qualifying several components intended to be used in LBE technology. The Venturi-nozzle flow meter has been supplied by Euromisure s.r.l., together with its calculated theoretical characteristic equation. The test results made it possible to qualify the theoretical curve supplied by the manufacturer, which shows very good agreement, especially at high flow rates. (authors)
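
    For context, a generic ISO 5167-type Venturi characteristic has the form below; the discharge coefficient, expansibility factor, LBE density and geometry in the example are assumed placeholders, not the calibrated curve supplied by Euromisure for the CIRCE meter.

      import math

      def venturi_mass_flow(dp, rho, d_throat, d_pipe, c=0.98, eps=1.0):
          """Generic Venturi/nozzle characteristic (ISO 5167 form):
              qm = C/sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2*dp*rho)
          dp   : differential pressure [Pa]
          rho  : fluid density [kg/m^3]
          d_throat, d_pipe : throat and pipe diameters [m]
          c    : discharge coefficient (0.98 is a typical textbook value,
                 not the calibrated CIRCE coefficient)
          eps  : expansibility factor, ~1 for liquids such as LBE."""
          beta = d_throat / d_pipe
          area = math.pi / 4.0 * d_throat ** 2
          return c / math.sqrt(1.0 - beta ** 4) * eps * area * math.sqrt(2.0 * dp * rho)

      # Illustrative numbers: LBE density ~10200 kg/m^3, 50 kPa differential,
      # 60 mm throat in a 120 mm pipe (all assumed, not CIRCE geometry).
      print(f"{venturi_mass_flow(5.0e4, 10200.0, 0.06, 0.12):.1f} kg/s")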

  10. Isocurvature modes and Baryon Acoustic Oscillations II: gains from combining CMB and Large Scale Structure

    International Nuclear Information System (INIS)

    Carbone, Carmelita; Mangilli, Anna; Verde, Licia

    2011-01-01

    We consider cosmological parameter estimation in the presence of a non-zero isocurvature contribution in the primordial perturbations. A previous analysis showed that even a tiny amount of isocurvature perturbation, if not accounted for, could affect standard ruler calibration from Cosmic Microwave Background observations such as those provided by the Planck mission, affect the interpretation of Baryon Acoustic Oscillations, and introduce biases in the recovered dark energy properties that are larger than forecasted statistical errors from future surveys. Extending this work, here we adopt a general fiducial cosmology which includes a varying dark energy equation of state parameter and curvature. Besides Baryon Acoustic Oscillation measurements, we include the information from the shape of the galaxy power spectrum and consider a joint analysis of a Planck-like Cosmic Microwave Background probe and a future, space-based, Large Scale Structure probe not too dissimilar from recently proposed surveys. We find that this allows one to break the degeneracies that affect the Cosmic Microwave Background and Baryon Acoustic Oscillations combination. As a result, most of the cosmological parameter systematic biases arising from an incorrect assumption on the isocurvature fraction parameter f_iso become negligible with respect to the statistical errors. We find that the Cosmic Microwave Background and Large Scale Structure combination gives a statistical error σ(f_iso) ∼ 0.008, even when curvature and a varying dark energy equation of state are included, which is smaller than the error obtained from the Cosmic Microwave Background alone when flatness and a cosmological constant are assumed. These results confirm the synergy and complementarity between the Cosmic Microwave Background and Large Scale Structure, and the great potential of future and planned galaxy surveys

  11. A new method to determine large scale structure from the luminosity distance

    International Nuclear Information System (INIS)

    Romano, Antonio Enea; Chiang, Hsu-Wen; Chen, Pisin

    2014-01-01

    The luminosity distance can be used to determine the properties of the large scale structure around the observer. To this purpose we develop a new inversion method to map the luminosity distance to a Lemaitre–Tolman–Bondi (LTB) metric, based on the use of the exact analytical solution of the Einstein equations. The main advantages of this approach are improved numerical accuracy and stability, an exact analytical setting of the initial conditions for the differential equations which need to be solved, and validity for any sign of the functions determining the LTB geometry. Given the fully analytical form of the differential equations, this method also simplifies the calculation of the redshift expansion around the apparent horizon point, where the numerical solution becomes unstable. We test the method by inverting the supernovae Ia luminosity distance function corresponding to the best-fit ΛCDM model. We find that only a limited range of initial conditions is compatible with observations; otherwise a transition from redshift to blueshift can occur at relatively low redshift. Although LTB solutions without a cosmological constant have been shown not to be compatible with the full set of available observational data, those studies normally fit the data assuming a special functional ansatz for the inhomogeneity profile, which often depends only on a few parameters. Inversion methods, on the contrary, are able to fully explore the freedom in fixing the functions which determine an LTB solution. Another important application concerns LTB solutions not as cosmological models, but rather as tools to study the effects on observations made by a generic observer located in an inhomogeneous region of the Universe, where a fully non-perturbative treatment involving exact solutions of the Einstein equations is required. (paper)

  12. LUMINOUS RED GALAXY HALO DENSITY FIELD RECONSTRUCTION AND APPLICATION TO LARGE-SCALE STRUCTURE MEASUREMENTS

    International Nuclear Information System (INIS)

    Reid, Beth A.; Spergel, David N.; Bode, Paul

    2009-01-01

    The nontrivial relationship between observations of galaxy positions in redshift space and the underlying matter field complicates our ability to determine the linear theory power spectrum and extract cosmological information from galaxy surveys. The Sloan Digital Sky Survey (SDSS) luminous red galaxy (LRG) catalog has the potential to place powerful constraints on cosmological parameters. LRGs are bright, highly biased tracers of large-scale structure. However, because they are highly biased, the nonlinear contribution of satellite galaxies to the galaxy power spectrum is large and fingers-of-God (FOGs) are significant. The combination of these effects leads to a ∼10% correction in the underlying power spectrum at k = 0.1 h Mpc⁻¹ and ∼40% correction at k = 0.2 h Mpc⁻¹ in the LRG P(k) analysis of Tegmark et al., thereby compromising the cosmological constraints when this potentially large correction is left as a free parameter. We propose an alternative approach to recovering the matter field from galaxy observations. Our approach is to use halos rather than galaxies to trace the underlying mass distribution. We identify FOGs and replace each FOG with a single halo object. This removes the nonlinear contribution of satellite galaxies, the one-halo term. We test our method on a large set of high-fidelity mock SDSS LRG catalogs and find that the power spectrum of the reconstructed halo density field deviates from the underlying matter power spectrum at the ≤1% level for k ≤ 0.1 h Mpc⁻¹ and ≤4% at k = 0.2 h Mpc⁻¹. The reconstructed halo density field also removes the bias in the measurement of the redshift space distortion parameter β induced by the FOG smearing of the linear redshift space distortions.

  13. How CMB and large-scale structure constrain chameleon interacting dark energy

    Energy Technology Data Exchange (ETDEWEB)

    Boriero, Daniel [Fakultät für Physik, Universität Bielefeld, Universitätstr. 25, Bielefeld (Germany); Das, Subinoy [Indian Institute of Astrophysics, Bangalore, 560034 (India); Wong, Yvonne Y.Y., E-mail: boriero@physik.uni-bielefeld.de, E-mail: subinoy@iiap.res.in, E-mail: yvonne.y.wong@unsw.edu.au [School of Physics, The University of New South Wales, Sydney NSW 2052 (Australia)

    2015-07-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H₀ tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H₀ value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.

  14. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    1997-01-01

    This book deals with special relativity theory and its application to cosmology. It presents Einstein's theory of space and time in detail, and describes the large scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of space in the early universe is derived, both from the cosmological transformation. The book will be of interest to cosmologists, astrophysicists, theoretical

  15. Cosmological special relativity the large scale structure of space, time and velocity

    CERN Document Server

    Carmeli, Moshe

    2002-01-01

    This book presents Einstein's theory of space and time in detail, and describes the large-scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of space in the early universe is derived, both from the cosmological transformation. The relationship between cosmic velocity, acceleration and distances is given. In the appendices gravitation is added in the form of a cosmological g

  16. Measures of large-scale structure in the CfA redshift survey slices

    International Nuclear Information System (INIS)

    De Lapparent, V.; Geller, M.J.; Huchra, J.P.

    1991-01-01

    Variations of the counts-in-cells with cell size are used here to define two statistical measures of large-scale clustering in three 6 deg slices of the CfA redshift survey. A percolation criterion is used to estimate the filling factor f, which measures the fraction of the total volume in the survey occupied by the large-scale structures. For the full 18 deg slice of the CfA redshift survey, f is about 0.25 ± 0.05. After removing groups with more than five members from two of the slices, variations of the counts in occupied cells with cell size have a power-law behavior with a slope β of about 2.2 on scales from 1 to 10 h⁻¹ Mpc. Application of both this statistic and the percolation analysis to simulations suggests that a network of two-dimensional structures is a better description of the geometry of the clustering in the CfA slices than a network of one-dimensional structures. Counts-in-cells are also used to estimate the average galaxy surface density in sheets like the Great Wall at about 0.3 h² galaxies Mpc⁻². 46 refs
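    As a toy illustration of the counts-in-cells statistic described above, the snippet below bins a random point catalogue into cubic cells of increasing size and reports the mean, variance and occupied-cell fraction (a crude analogue of the filling factor); it is not the CfA analysis itself.

```python
# Toy counts-in-cells measurement on a random catalogue (NOT the CfA survey data).
import numpy as np

rng = np.random.default_rng(1)
positions = rng.uniform(0.0, 100.0, size=(5000, 3))   # toy "galaxy" positions, h^-1 Mpc

def counts_in_cells(pos, box_size, cell_size):
    """Histogram the points into cubic cells and return the per-cell counts."""
    n_cells = int(box_size // cell_size)
    idx = np.clip(np.floor(pos / cell_size).astype(int), 0, n_cells - 1)
    flat = np.ravel_multi_index(idx.T, (n_cells,) * 3)
    return np.bincount(flat, minlength=n_cells**3)

for cell in (1.0, 2.0, 5.0, 10.0):
    counts = counts_in_cells(positions, 100.0, cell)
    occupied = counts[counts > 0]
    occupied_fraction = occupied.size / counts.size    # crude analogue of the filling factor
    print(cell, occupied.mean(), occupied.var(), round(occupied_fraction, 3))
```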

  17. A correlation between the cosmic microwave background and large-scale structure in the Universe.

    Science.gov (United States)

    Boughn, Stephen; Crittenden, Robert

    2004-01-01

    Observations of distant supernovae and the fluctuations in the cosmic microwave background (CMB) indicate that the expansion of the Universe may be accelerating under the action of a 'cosmological constant' or some other form of 'dark energy'. This dark energy now appears to dominate the Universe and not only alters its expansion rate, but also affects the evolution of fluctuations in the density of matter, slowing down the gravitational collapse of material (into, for example, clusters of galaxies) in recent times. Additional fluctuations in the temperature of CMB photons are induced as they pass through large-scale structures and these fluctuations are necessarily correlated with the distribution of relatively nearby matter. Here we report the detection of correlations between recent CMB data and two probes of large-scale structure: the X-ray background and the distribution of radio galaxies. These correlations are consistent with those predicted by dark energy, indicating that we are seeing the imprint of dark energy on the growth of structure in the Universe.

  18. Structure of exotic nuclei by large-scale shell model calculations

    International Nuclear Information System (INIS)

    Utsuno, Yutaka; Otsuka, Takaharu; Mizusaki, Takahiro; Honma, Michio

    2006-01-01

    An extensive large-scale shell-model study is conducted for unstable nuclei around N = 20 and N = 28, aiming to investigate how the shell structure evolves from stable to unstable nuclei and affects the nuclear structure. The structure around N = 20 including the disappearance of the magic number is reproduced systematically, exemplified in the systematics of the electromagnetic moments in the Na isotope chain. As a key ingredient dominating the structure/shell evolution in the exotic nuclei from a general viewpoint, we pay attention to the tensor force. Including a proper strength of the tensor force in the effective interaction, we successfully reproduce the proton shell evolution ranging from N = 20 to 28 without any arbitrary modifications in the interaction and predict the ground state of ⁴²Si to contain a large deformed component

  19. A large-scale radiometric micro-quantitative complement fixation test for serum antibody titration

    International Nuclear Information System (INIS)

    Bengali, Z.H.; Levine, P.H.; Das, S.R.

    1980-01-01

    A micro-quantitative complement fixation (CF) procedure based on ⁵¹Cr release is described. The method employs 50% hemolysis as the end point and the alternation equation to calculate the amount of complement involved in the hemolytic reaction. Compared to conventional CF tests, the radiometric procedure described here is very precise and consistently reproducible. Also, since only three 4-fold dilutions of sera are used for the titration of antibodies over a wide range of concentrations, the test is very concise and economical to perform. Its format is amenable to automation and computerization. This radiometric CF procedure is thus most useful for large-scale immunological research and epidemiological surveillance studies. (Auth.)

  20. Test of the CLAS12 RICH large-scale prototype in the direct proximity focusing configuration

    Energy Technology Data Exchange (ETDEWEB)

    Anefalos Pereira, S.; Lucherini, V.; Mirazita, M.; Orlandi, A.; Orecchini, D.; Pisano, S.; Tomassini, S.; Viticchie, A. [Laboratori Nazionali di Frascati, INFN, Frascati (Italy); Baltzell, N.; El Alaoui, A.; Hafidi, K. [Physics Division, Argonne National Laboratory, Argonne, IL (United States); Barion, L.; Contalbrigo, M.; Malaguti, R.; Movsisyan, A.; Pappalardo, L.L.; Squerzanti, S. [INFN, Ferrara (Italy); Benmokhtar, F. [Department of Physics, Duquesne University, Pittsburgh, PA (United States); Brooks, W. [Universidad Tecnica Federico Santa Maria, Valparaiso (Chile); Cisbani, E. [Gruppo Sanita and Istituto Superiore di Sanita, INFN, Rome (Italy); Hoek, M.; Phillips, J. [School of Physics and Astronomy, Kelvin Building, University of Glasgow, Scotland (United Kingdom); Kubarovsky, V. [Thomas Jefferson National Accelerator Facility, Jefferson Laboratory, Newport News, VA (United States); Lagamba, L.; Perrino, R. [INFN, Bari (Italy); Montgomery, R.A. [Laboratori Nazionali di Frascati, INFN, Frascati (Italy); School of Physics and Astronomy, Kelvin Building, University of Glasgow, Scotland (United Kingdom); Musico, P. [INFN, Genova (Italy); Rossi, P. [Laboratori Nazionali di Frascati, INFN, Frascati (Italy); Thomas Jefferson National Accelerator Facility, Jefferson Laboratory, Newport News, VA (United States); Turisini, M. [INFN, Ferrara (Italy); Universidad Tecnica Federico Santa Maria, Valparaiso (Chile)

    2016-02-15

    A large-area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c up to 8 GeV/c for the CLAS12 experiment at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Laboratory. The adopted solution foresees a novel hybrid optics design based on an aerogel radiator, composite mirrors, and highly packed and highly segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large-angle tracks). We report here the results of the tests of a large-scale prototype of the RICH detector performed with the hadron beam of the CERN T9 experimental hall for the direct detection configuration. The tests demonstrated that the proposed design provides the required pion-to-kaon rejection factor of 1:500 in the whole momentum range. (orig.)
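    The momentum range quoted above can be connected to simple Cherenkov kinematics. The snippet below evaluates the pion and kaon Cherenkov angles in an aerogel radiator with an assumed refractive index of n = 1.05 (a typical aerogel value, not necessarily the exact CLAS12 one) to show the angular separation the detector has to resolve.

```python
# Cherenkov angles for pions and kaons in aerogel; n = 1.05 is an assumed, typical value.
import math

M_PION, M_KAON = 0.13957, 0.49368   # masses in GeV/c^2

def cherenkov_angle_mrad(p_GeV, mass_GeV, n=1.05):
    """Cherenkov emission angle in mrad, or None below threshold."""
    beta = p_GeV / math.sqrt(p_GeV**2 + mass_GeV**2)
    if n * beta <= 1.0:
        return None
    return 1e3 * math.acos(1.0 / (n * beta))

for p in (3.0, 6.0, 8.0):           # momenta in GeV/c
    th_pi = cherenkov_angle_mrad(p, M_PION)
    th_k = cherenkov_angle_mrad(p, M_KAON)
    sep = None if (th_pi is None or th_k is None) else th_pi - th_k
    print(p, th_pi, th_k, sep)      # momentum, pion angle, kaon angle, separation
```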

  1. The prospect of modern thermomechanics in structural integrity calculations of large-scale pressure vessels

    Science.gov (United States)

    Fekete, Tamás

    2018-05-01

    Structural integrity calculations play a crucial role in designing large-scale pressure vessels. Used in the electric power generation industry, these vessels undergo extensive safety analyses and certification procedures before being deemed feasible for future long-term operation. The calculations are nowadays directed and supported by international standards and guides based on state-of-the-art results of applied research and technical development. However, their ability to predict a vessel's behavior under accidental circumstances after long-term operation is largely limited by the strong dependence of the analysis methodology on empirical models that are correlated to the behavior of structural materials and their changes during material aging. Recently a new scientific-engineering paradigm, structural integrity, has been developing; it is essentially a synergistic collaboration between a number of scientific and engineering disciplines, modeling, experiments and numerics. Although the application of the structural integrity paradigm has contributed greatly to improving the accuracy of safety evaluations of large-scale pressure vessels, the predictive power of the analysis methodology has not yet improved significantly. This is due to the fact that existing structural integrity calculation methodologies are based on the widespread and commonly accepted 'traditional' engineering thermal stress approach, which is essentially based on the weakly coupled model of thermomechanics and fracture mechanics. Recently, research has been initiated at MTA EK with the aim of reviewing and evaluating current methodologies and models applied in structural integrity calculations, including their scope of validity. The research intends to come to a better understanding of the physical problems that are inherently present in the pool of structural integrity problems of reactor pressure vessels, and to ultimately find a theoretical framework that could serve as a well

  2. Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation

    Energy Technology Data Exchange (ETDEWEB)

    Hoshi, T [Department of Applied Mathematics and Physics, Tottori University, Tottori 680-8550 (Japan); Fujiwara, T [Core Research for Evolutional Science and Technology, Japan Science and Technology Agency (CREST-JST) (Japan)

    2009-02-11

    An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.

  3. On the hydroelastic response of box-shaped floating structure with shallow draft. Tank test with large scale model; Senkitsusui hakogata futai no harochu dansei oto ni tsuite. Ogata mokei ni yoru suiso shiken

    Energy Technology Data Exchange (ETDEWEB)

    Yago, K; Endo, H [Ship Research Inst., Tokyo (Japan)

    1997-12-31

    The hydroelastic response test was carried out in waves using an approximately 10 m long model, and the numerical analysis was done by the direct method, for a commercial-size (300 m long) box-shaped floating structure with shallow draft. The scale ratio of the model is 1/30.8, and the minimum wave period is around 0.7 s, set by the wave-making capacity of the tank, which corresponds to 4 to 14 s for the commercial-size structure. Elastic displacement and bending strain were measured. The results calculated by the direct method are in good agreement with the observed results. The fluid-dynamic mutual interference effects between elements are weak in added mass but strong in damping force, and the range of mutual interference is strongly related to the rolling period. Wave pressure on the bottom of the floating structure is high on the up-wave side and damps greatly towards the down-wave side. However, the response amplitude of elastic displacement tends to increase at the ends, on both the up-wave and down-wave sides. For the floating structure studied, the 0th to 4th mode components are predominant in longitudinal waves, and the 6th and higher mode components are negligibly small. 21 refs., 15 figs., 2 tabs.

  5. Time-sliced perturbation theory for large scale structure I: general formalism

    Energy Technology Data Exchange (ETDEWEB)

    Blas, Diego; Garny, Mathias; Sibiryakov, Sergey [Theory Division, CERN, CH-1211 Genève 23 (Switzerland); Ivanov, Mikhail M., E-mail: diego.blas@cern.ch, E-mail: mathias.garny@cern.ch, E-mail: mikhail.ivanov@cern.ch, E-mail: sergey.sibiryakov@cern.ch [FSB/ITP/LPPC, École Polytechnique Fédérale de Lausanne, CH-1015, Lausanne (Switzerland)

    2016-07-01

    We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.

  6. On the soft limit of the large scale structure power spectrum. UV dependence

    International Nuclear Information System (INIS)

    Garny, Mathias

    2015-08-01

    We derive a non-perturbative equation for the large scale structure power spectrum of long-wavelength modes. Thereby, we use an operator product expansion together with relations between the three-point function and the power spectrum in the soft limit. The resulting equation encodes the coupling to ultraviolet (UV) modes in two time-dependent coefficients, which may be obtained from response functions to (anisotropic) parameters, such as spatial curvature, in a modified cosmology. We argue that both depend weakly on fluctuations deep in the UV. As a byproduct, this implies that the renormalized leading order coefficient(s) in the effective field theory (EFT) of large scale structures receive most of their contribution from modes close to the non-linear scale. Consequently, the UV dependence found in explicit computations within standard perturbation theory stems mostly from the counter-term(s). We confront a simplified version of our non-perturbative equation with existing numerical simulations, and find good agreement within the expected uncertainties. Our approach can in principle be used to precisely infer the relevance of the leading order EFT coefficient(s) using small-volume simulations in an 'anisotropic separate universe' framework. Our results suggest that the importance of these coefficient(s) is a ∼10% effect, and plausibly smaller.

  7. Biased Tracers in Redshift Space in the EFT of Large-Scale Structure

    Energy Technology Data Exchange (ETDEWEB)

    Perko, Ashley [Stanford U., Phys. Dept.; Senatore, Leonardo [KIPAC, Menlo Park; Jennings, Elise [Chicago U., KICP; Wechsler, Risa H. [Stanford U., Phys. Dept.

    2016-10-28

    The Effective Field Theory of Large-Scale Structure (EFTofLSS) provides a novel formalism that is able to accurately predict the clustering of large-scale structure (LSS) in the mildly non-linear regime. Here we provide the first computation of the power spectrum of biased tracers in redshift space at one loop order, and we make the associated code publicly available. We compare the multipoles $\ell=0,2$ of the redshift-space halo power spectrum, together with the real-space matter and halo power spectra, with data from numerical simulations at $z=0.67$. For the samples we compare to, which have number densities of $\bar n=3.8 \cdot 10^{-2}\,(h\,{\rm Mpc}^{-1})^3$ and $\bar n=3.9 \cdot 10^{-4}\,(h\,{\rm Mpc}^{-1})^3$, we find that the calculation at one-loop order matches numerical measurements to within a few percent up to $k\simeq 0.43\ h\,{\rm Mpc}^{-1}$, a significant improvement with respect to former techniques. By performing the so-called IR-resummation, we find that the Baryon Acoustic Oscillation peak is accurately reproduced. Based on the results presented here, long-wavelength statistics that are routinely observed in LSS surveys can finally be computed in the EFTofLSS. This formalism thus is ready to start to be compared directly to observational data.

  8. Towards a Gravity Dual for the Large Scale Structure of the Universe

    CERN Document Server

    Kehagias, A.

    2016-01-01

    The dynamics of the large-scale structure of the universe enjoys at all scales, even in the highly non-linear regime, a Lifshitz symmetry during the matter-dominated period. In this paper we propose a general class of six-dimensional spacetimes which could be a gravity dual to the four-dimensional large-scale structure of the universe. In this set-up, the Lifshitz symmetry manifests itself as an isometry in the bulk and our universe is a four-dimensional brane moving in such six-dimensional bulk. After finding the correspondence between the bulk and the brane dynamical Lifshitz exponents, we find the intriguing result that the preferred value of the dynamical Lifshitz exponent of our observed universe, at both linear and non-linear scales, corresponds to a fixed point of the RGE flow of the dynamical Lifshitz exponent in the dual system where the symmetry is enhanced to the Schrodinger group containing a non-relativistic conformal symmetry. We also investigate the RGE flow between fixed points of the Lifshitz...

  9. Testing of Large-Scale ICV Glasses with Hanford LAW Simulant

    Energy Technology Data Exchange (ETDEWEB)

    Hrma, Pavel R.; Kim, Dong-Sang; Vienna, John D.; Matyas, Josef; Smith, Donald E.; Schweiger, Michael J.; Yeager, John D.

    2005-03-01

    Preliminary glass compositions for immobilizing Hanford low-activity waste (LAW) by the in-container vitrification (ICV) process were initially fabricated at crucible and engineering scale, using both simulants and actual (radioactive) LAW. Glasses were characterized for vapor hydration test (VHT) and product consistency test (PCT) responses and for crystallinity (both quenched and slow-cooled samples). Selected glasses were tested for toxicity characteristic leaching procedure (TCLP) responses, viscosity, and electrical conductivity. This testing showed that glasses with a LAW loading of 20 mass% can be made readily and meet all product constraints by a wide margin. Glasses with over 22 mass% Na₂O can be made to meet all other product quality and process constraints. Large-scale testing was performed at the AMEC GeoMelt Division facility in Richland. Three tests were conducted using simulated LAW with increasing loadings of 12, 17, and 20 mass% Na₂O. Glass samples were taken from the test products in a manner chosen to represent the full expected range of product performance. These samples were characterized for composition, density, crystalline and non-crystalline phase assemblage, and durability using the VHT, PCT, and TCLP tests. The results, presented in this report, show that the AMEC ICV product meets all waste form requirements with a large margin. These results provide strong evidence that Hanford LAW can be successfully vitrified by the ICV technology and can meet all the constraints related to product quality. The economic feasibility of the ICV technology can be further enhanced by subsequent optimization.

  10. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    Science.gov (United States)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

    Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It makes it possible to define the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
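    To make the virtual-component idea concrete, here is a small sketch in Python; the class and method names are invented for illustration and do not correspond to the authors' framework. Several data-flow components are wrapped into one testable unit that carries its own built-in test case.

```python
# Hypothetical sketch of a "virtual component" for built-in data-flow integration
# testing. Names and APIs are invented; this is not the authors' implementation.
class Component:
    def process(self, data):
        raise NotImplementedError

class Doubler(Component):
    def process(self, data):
        return [2 * x for x in data]

class Summer(Component):
    def process(self, data):
        return sum(data)

class VirtualComponent(Component):
    """Treats a chain of components as one unit and carries its own test case."""
    def __init__(self, components, test_input, expected_output):
        self.components = components
        self.test_input = test_input
        self.expected_output = expected_output

    def process(self, data):
        for component in self.components:
            data = component.process(data)
        return data

    def built_in_test(self):
        return self.process(self.test_input) == self.expected_output

pipeline = VirtualComponent([Doubler(), Summer()], test_input=[1, 2, 3], expected_output=12)
assert pipeline.built_in_test()          # integration test of the assembled flow
print(pipeline.process([4, 5]))          # normal operation: prints 18
```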

  11. Contributions to large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    Badescu, E.; Caprini, M.

    2003-01-01

    : once all processes are started and the controllers are in the Initial state, go to the Running state; Luke warm stop: reverse of the luke warm start phase; Warm start: once all processes are alive and all controllers are in the Configured state, go to the Running state; Warm stop: reverse of the warm start phase. It was shown that the online system is capable of running on 111 PCs controlling a 3- or 4-level hierarchy of up to 111 run controllers. Furthermore, parallel partitions with a 2-level hierarchy of 11 run controllers were run successfully, demonstrating the principle of partition independence. The set of incremental configurations was run sequentially to study the system behaviour with increasing numbers of controllers and PCs. Aspects of inter-operability and correct system behaviour at large scale were verified with the partition containing 111 controllers, which represents more than a factor of 10 in size compared to its current use in the test beam. In order to start studies of the online system for the next order of magnitude, 4-level super partitions with 300 and 1000 crate controllers were exercised. Limits were found on the level of communication and state transition coordination, which will be investigated further. (authors)

  12. Hydrologic test plans for large-scale, multiple-well tests in support of site characterization at Hanford, Washington

    International Nuclear Information System (INIS)

    Rogers, P.M.; Stone, R.; Lu, A.H.

    1985-01-01

    The Basalt Waste Isolation Project is preparing plans for tests and has begun work on some tests that will provide the data necessary for the hydrogeologic characterization of a site located on a United States government reservation at Hanford, Washington. This site is being considered for the Nation's first geologic repository of high level nuclear waste. Hydrogeologic characterization of this site requires several lines of investigation which include: surface-based small-scale tests, testing performed at depth from an exploratory shaft, geochemistry investigations, regional studies, and site-specific investigations using large-scale, multiple-well hydraulic tests. The large-scale multiple-well tests are planned for several locations in and around the site. These tests are being designed to provide estimates of hydraulic parameter values of the geologic media, chemical properties of the groundwater, and hydrogeologic boundary conditions at a scale appropriate for evaluating repository performance with respect to potential radionuclide transport

  13. Research status and needs for shear tests on large-scale reinforced concrete containment elements

    International Nuclear Information System (INIS)

    Oesterle, R.G.; Russell, H.G.

    1982-01-01

    Reinforced concrete containments at nuclear power plants are designed to resist forces caused by internal pressure, gravity, and severe earthquakes. The size, shape, and possible stress states in containments produce unique problems for design and construction. A lack of experimental data on the capacity of reinforced concrete to transfer shear stresses while subjected to biaxial tension has led to cumbersome if not impractical design criteria. Research programs recently conducted at the Construction Technology Laboratories and at Cornell University indicate that design criteria for tangential, peripheral, and radial shear are conservative. This paper discusses results from recent research and presents tentative changes for shear design provisions of the current United States code for containment structures. Areas where information is still lacking to fully verify new design provisions are discussed. Needs for further experimental research on large-scale specimens to develop economical, practical, and reliable design criteria for resisting shear forces in containment are identified. (orig.)

  14. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    Directory of Open Access Journals (Sweden)

    José Gaite

    2013-05-01

    Halo models of the large scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations, and by taking into account previous studies of the self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure or to define their size in terms of small-scale baryonic physics.

  15. Statistics and Dynamics in the Large-scale Structure of the Universe

    International Nuclear Information System (INIS)

    Matsubara, Takahiko

    2006-01-01

    In cosmology, observations and theories are related to each other by statistics in most cases. Especially, statistical methods play central roles in analyzing fluctuations in the universe, which are seeds of the present structure of the universe. The confrontation of the statistics and dynamics is one of the key methods to unveil the structure and evolution of the universe. I will review some of the major statistical methods in cosmology, in connection with linear and nonlinear dynamics of the large-scale structure of the universe. The present status of analyses of the observational data such as the Sloan Digital Sky Survey, and the future prospects to constrain the nature of exotic components of the universe such as the dark energy will be presented

  16. Angular momentum-large-scale structure alignments in ΛCDM models and the SDSS

    Science.gov (United States)

    Paz, Dante J.; Stasyszyn, Federico; Padilla, Nelson D.

    2008-09-01

    We study the alignments between the angular momentum of individual objects and the large-scale structure in cosmological numerical simulations and in real data from the Sloan Digital Sky Survey, Data Release 6 (SDSS-DR6). To this end, we measure anisotropies in the two-point cross-correlation function around simulated haloes and observed galaxies, studying separately the one- and two-halo regimes. The alignment of the angular momentum of dark-matter haloes in Λ cold dark matter (ΛCDM) simulations is found to be dependent on scale and halo mass. At large distances (two-halo regime), the spins of high-mass haloes are preferentially oriented in the direction perpendicular to the distribution of matter; lower mass systems show a weaker trend that may even reverse to show an angular momentum in the plane of the matter distribution. In the one-halo regime, the angular momentum is aligned in the direction perpendicular to the matter distribution; the effect is stronger than for the two-halo term and increases for higher mass systems. On the observational side, we focus our study on galaxies in the SDSS-DR6 with elongated apparent shapes, and study alignments with respect to the major semi-axis. We study five samples of edge-on galaxies: the full SDSS-DR6 edge-on sample, bright galaxies, faint galaxies, red galaxies and blue galaxies (the latter two consisting mainly of ellipticals and spirals, respectively). Using the two-halo term of the projected correlation function, we find an excess of structure in the direction of the major semi-axis for all samples; the red sample shows the highest alignment (2.7 ± 0.8 per cent) and indicates that the angular momentum of flattened spheroidals tends to be perpendicular to the large-scale structure. These results are in qualitative agreement with the numerical simulation results, indicating that the angular momentum of galaxies could be built up as in the Tidal Torque scenario. The one-halo term only shows a significant alignment

  17. Incorporating Direct Rapid Immunohistochemical Testing into Large-Scale Wildlife Rabies Surveillance

    Directory of Open Access Journals (Sweden)

    Kevin Middel

    2017-06-01

    Following an incursion of the mid-Atlantic raccoon variant of the rabies virus into southern Ontario, Canada, in late 2015, the direct rapid immunohistochemical test for rabies (dRIT) was employed on a large scale to establish the outbreak perimeter and to diagnose specific cases to inform rabies control management actions. In a 17-month period, 5800 wildlife carcasses were tested using the dRIT, of which 307 were identified as rabid. When compared with the gold standard fluorescent antibody test (FAT), the dRIT was found to have a sensitivity of 100% and a specificity of 98.2%. Positive and negative test agreement was shown to be 98.3% and 99.1%, respectively, with an overall test agreement of 98.8%. The average cost to test a sample was $3.13 CAD for materials, and hands-on technical time to complete the test is estimated at 0.55 h. The dRIT procedure was found to be accurate, fast, inexpensive, easy to learn and perform, and an excellent tool for monitoring the progression of a wildlife rabies incursion.
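    The agreement figures quoted above follow from a standard 2x2 comparison against the gold-standard FAT. The helper below shows how sensitivity, specificity and overall agreement are computed; the counts are hypothetical placeholders, not the actual 5800-carcass data set.

```python
# Standard 2x2 diagnostic summary; counts are invented placeholders, not the Ontario data.
def diagnostic_summary(tp, fp, fn, tn):
    """Sensitivity, specificity and overall agreement of an index test vs. a gold standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    overall_agreement = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, overall_agreement

# hypothetical dRIT-vs-FAT counts
sens, spec, agree = diagnostic_summary(tp=100, fp=2, fn=0, tn=400)
print(f"sensitivity {sens:.3f}, specificity {spec:.3f}, overall agreement {agree:.3f}")
```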

  18. Investigations on efficiency of the emergency cooling by means of large-scale tests

    International Nuclear Information System (INIS)

    Hicken, E.F.

    1982-01-01

    The RSK guidelines contain the maximum permissible loads (maximum cladding tube temperature 1200 °C, maximum Zr/H₂O reaction of 1% Zr). Their observance implies that only a small number of fuel rods fail. Safety research has to produce the evidence that these limiting loads are not exceeded. The analytical investigations of the emergency cooling behaviour could so far only be verified in scaled-down test facilities. After about 100 tests in four different large-scale test facilities, the experimental investigations of the blowdown phase for large breaks are essentially finished. For the refill and reflood phases, the system behaviour can be studied in scaled-down test stands; the multidimensional conditions in the reactor pressure vessel can, however, only be simulated on the original scale. More experiments are planned as part of the 2D/3D project (CCTF, SCTF, UPTF) and as part of the PKL tests, so that more than 200 tests in seven facilities will then be available. As to the small breaks, the physical phenomena are known; the current investigations serve to increase the reliability of the predictions. After they are finished, approximately 300 tests in seven facilities will be available. (orig./HP) [de]

  19. Real-time graphic display system for ROSA-V Large Scale Test Facility

    International Nuclear Information System (INIS)

    Kondo, Masaya; Anoda, Yoshinari; Osaki, Hideki; Kukita, Yutaka; Takigawa, Yoshio.

    1993-11-01

    A real-time graphic display system was developed for the ROSA-V Large Scale Test Facility (LSTF) experiments simulating accident management measures for prevention of severe core damage in pressurized water reactors (PWRs). The system works on an IBM workstation (Power Station RS/6000 model 560) and accommodates 512 channels out of about 2500 total measurements in the LSTF. It has three major functions: (a) displaying the coolant inventory distribution in the facility primary and secondary systems; (b) displaying the measured quantities at desired locations in the facility; and (c) displaying the time histories of measured quantities. The coolant inventory distribution is derived from differential pressure measurements along vertical sections and gamma-ray densitometer measurements for horizontal legs. The color display indicates liquid subcooling calculated from pressure and temperature at individual locations. (author)
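    The coolant-inventory function described above rests on a simple hydrostatic conversion from differential pressure to collapsed liquid level. The sketch below shows that conversion for a single vertical span; the span length and saturation densities are illustrative assumptions, not LSTF instrument parameters.

```python
# Collapsed liquid level from a differential-pressure measurement over a vertical span.
# All numerical values are illustrative assumptions, not LSTF data.
G = 9.81  # gravitational acceleration, m/s^2

def collapsed_level(dp_Pa, rho_liquid, rho_vapor, span_m):
    """Collapsed liquid level [m] in a vertical span instrumented with a dP cell."""
    level = (dp_Pa / G - rho_vapor * span_m) / (rho_liquid - rho_vapor)
    return min(max(level, 0.0), span_m)   # clamp to the physical span

# e.g. a 3 m span at roughly 7 MPa saturation: rho_f ~ 740, rho_g ~ 37 kg/m3 (assumed)
print(collapsed_level(dp_Pa=1.6e4, rho_liquid=740.0, rho_vapor=37.0, span_m=3.0))
```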

  20. Microfluidic very large-scale integration for biochips: Technology, testing and fault-tolerant design

    DEFF Research Database (Denmark)

    Araci, Ismail Emre; Pop, Paul; Chakrabarty, Krishnendu

    2015-01-01

    Microfluidic biochips are replacing conventional biochemical analyzers by integrating all the necessary functions for biochemical analysis using microfluidics. Biochips are used in many application areas, such as in vitro diagnostics, drug discovery, biotech and ecology. The focus of this paper is on continuous-flow biochips, where the basic building block is a microvalve. By combining these microvalves, more complex units such as mixers, switches and multiplexers can be built, hence the name of the technology, "microfluidic Very Large-Scale Integration" (mVLSI). The paper presents the state-of-the-art in mVLSI platforms and emerging research challenges in the area of continuous-flow microfluidics, focusing on testing techniques and fault-tolerant design.

  1. Study on the structure and level of electricity prices for Northwest-European large-scale consumers

    International Nuclear Information System (INIS)

    2006-06-01

    The aim of this study is to provide an overview of the structure and development of electricity prices for large-scale consumers in Northwest Europe (the Netherlands, Germany, Belgium and France), and of the current regulations for large-scale consumers in Europe.

  2. The large scale in-situ PRACLAY heater and seal tests in URL HADES, Mol, Belgium

    Energy Technology Data Exchange (ETDEWEB)

    Xiangling Li; Guangjing Chen; Verstricht, Jan; Van Marcke, Philippe; Troullinos, Ioannis [ESV EURIDICE, Mol (Belgium)

    2013-07-01

    In Belgium, the URL HADES was constructed in the Boom Clay formation at the Mol site to investigate the feasibility of geological disposal in a clay formation. Since 1995, the URL R and D programme has focused on large-scale demonstration tests like the PRACLAY Heater and Seal tests. The main objective of the Heater Test is to demonstrate that the thermal load generated by the heat-emitting waste will not jeopardise the safety functions of the host rock. The primary objective of the Seal Test is to provide suitable hydraulic boundary conditions for the Heater Test. The Seal Test also provides an opportunity to investigate the in-situ behaviour of a bentonite-based EBS. The PRACLAY gallery was constructed in 2007 and the hydraulic seal was installed in 2010. The bentonite is hydrated both naturally and artificially. The swelling, total pressure and pore pressure of the bentonite are continuously measured and analysed by numerical simulations to get a better understanding of the hydration process. The timing of switching on the heater depends on the progress of the bentonite hydration, as sufficient seal swelling is needed for the seal to fulfil its role. A set of conditions to be met for the heater switch-on, and its schedule, will be given. (authors)

  3. Large scale steam flow test: Pressure drop data and calculated pressure loss coefficients

    International Nuclear Information System (INIS)

    Meadows, J.B.; Spears, J.R.; Feder, A.R.; Moore, B.P.; Young, C.E.

    1993-12-01

    This report presents the results of large-scale steam flow testing (3 million to 7 million lb/hr), conducted at approximate steam qualities of 25, 45, 70 and 100 percent (dry, saturated). It is concluded from the test data that reasonable estimates of piping component pressure loss coefficients for single-phase flow in complex piping geometries can be calculated using available engineering literature. This includes the effects of nearby upstream and downstream components, compressibility, and internal obstructions, such as splitters and ladder rungs, on individual piping components. Despite expected uncertainties in the data resulting from the complexity of the piping geometry and the two-phase flow, the test data support the conclusion that the predicted dry steam K-factors are accurate and provide useful insight into the effect of entrained liquid on the flow resistance. The K-factors calculated from the wet steam test data were compared to two-phase K-factors based on the Martinelli-Nelson pressure drop correlations. This comparison supports the concept of a two-phase multiplier for estimating the resistance of piping with liquid entrained into the flow. The test data in general appear to be reasonably consistent with the shape of a curve based on the Martinelli-Nelson correlation over the tested range of steam quality
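    The two-phase multiplier concept referred to above can be illustrated with a minimal calculation. The sketch below scales a dry-steam K-factor pressure drop by a homogeneous-flow multiplier; note that this is a generic illustration, not the Martinelli-Nelson correlation actually used in the report, and all numerical inputs are assumed.

```python
# Illustration of the two-phase multiplier idea: a dry-steam K-factor pressure drop is
# scaled by a homogeneous-flow multiplier. NOT the Martinelli-Nelson correlation from
# the report; all inputs below are assumptions.
def single_phase_dp(K, mass_flux, rho):
    """Pressure drop [Pa] of a component with loss coefficient K and mass flux G [kg/m2 s]."""
    return K * mass_flux**2 / (2.0 * rho)

def homogeneous_two_phase_multiplier(quality, rho_liquid, rho_vapor):
    """phi^2 relative to all-vapor flow, homogeneous-flow model (illustrative assumption)."""
    rho_h = 1.0 / (quality / rho_vapor + (1.0 - quality) / rho_liquid)
    return rho_vapor / rho_h

K = 0.45                       # hypothetical K-factor for a piping component
G = 2000.0                     # hypothetical mass flux, kg/(m^2 s)
rho_g, rho_f = 36.5, 740.0     # approximate saturated steam/water densities near 7 MPa
dp_dry = single_phase_dp(K, G, rho_g)
for x in (1.0, 0.70, 0.45, 0.25):   # steam qualities roughly spanning the tested range
    print(x, dp_dry * homogeneous_two_phase_multiplier(x, rho_f, rho_g))
```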

  4. Effects of baryons on the statistical properties of large scale structure of the Universe

    International Nuclear Information System (INIS)

    Guillet, T.

    2010-01-01

    Observations of weak gravitational lensing will provide strong constraints on the cosmic expansion history and the growth rate of large scale structure, yielding clues to the properties and nature of dark energy. Their interpretation is affected by baryonic physics, which is expected to modify the total matter distribution at small scales. My work has focused on determining and modeling the impact of baryons on the statistics of the large scale matter distribution in the Universe. Using numerical simulations, I have extracted the effect of baryons on the power spectrum, variance and skewness of the total density field as predicted by these simulations. I have shown that a model based on the halo model construction, featuring a concentrated central component to account for cool condensed baryons, is able to reproduce accurately, down to very small scales, the measured amplifications of both the variance and skewness of the density field. Because of well-known issues with baryons in current cosmological simulations, I have extended the central component model to rely on as many observation-based ingredients as possible. As an application, I have studied the effect of baryons on the predictions of the upcoming Euclid weak lensing survey. During the course of this work, I have also worked on developing and extending the RAMSES code, in particular by developing a parallel self-gravity solver, which offers significant performance gains for astrophysical setups such as isolated galaxy or cluster simulations. (author) [fr]

  5. Non-gut baryogenesis and large scale structure of the universe

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    1995-07-01

    We discuss a mechanism for generating baryon density perturbations and study the evolution of the baryon charge density distribution in the framework of the low-temperature baryogenesis scenario. This mechanism may be important for the formation of the large scale structure of the Universe and, in particular, may be essential for understanding the existence of a characteristic scale of 130 h⁻¹ Mpc in the distribution of the visible matter. The detailed analysis showed that both the observed very large scale of the visible matter distribution in the Universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, according to our model, at present the visible part of the Universe may consist of baryonic and antibaryonic shells, sufficiently separated so that annihilation radiation is not observed. This is an interesting possibility insofar as the observational data on antiparticles in cosmic rays do not rule out the possibility of antimatter superclusters in the Universe. (author). 16 refs, 3 figs

  6. Ward identities and consistency relations for the large scale structure with multiple species

    International Nuclear Information System (INIS)

    Peloso, Marco; Pietroni, Massimo

    2014-01-01

    We present fully nonlinear consistency relations for the squeezed bispectrum of Large Scale Structure. These relations hold when the matter component of the Universe is composed of one or more species, and generalize those obtained in [1,2] in the single species case. The multi-species relations apply to the standard dark matter + baryons scenario, as well as to the case in which some of the fields are auxiliary quantities describing a particular population, such as dark matter halos or a specific galaxy class. If a large scale velocity bias exists between the different populations new terms appear in the consistency relations with respect to the single species case. As an illustration, we discuss two physical cases in which such a velocity bias can exist: (1) a new long range scalar force in the dark matter sector (resulting in a violation of the equivalence principle in the dark matter-baryon system), and (2) the distribution of dark matter halos relative to that of the underlying dark matter field

  7. The existence of very large-scale structures in the universe

    Energy Technology Data Exchange (ETDEWEB)

    Goicoechea, L J; Martin-Mirones, J M [Universidad de Cantabria Santander, (ES)

    1989-09-01

    Assuming that the dipole moment observed in the cosmic background radiation (microwaves and X-rays) can be interpreted as a consequence of the motion of the observer toward a non-local and very large-scale structure in our universe, we study the perturbation of the m-z relation by this inhomogeneity, the dynamical contribution of sources to the dipole anisotropy in the X-ray background, and the imprint that several structures with such characteristics would have had on the microwave background at decoupling. We conclude that in this model the observed anisotropy in the microwave background on intermediate angular scales (∼10°) may be in conflict with the existence of superstructures.

  8. On a digital wireless impact-monitoring network for large-scale composite structures

    International Nuclear Information System (INIS)

    Yuan, Shenfang; Mei, Hanfei; Qiu, Lei; Ren, Yuanqiang

    2014-01-01

    Impact, which may occur during manufacture, service or maintenance, is one of the major concerns to be monitored throughout the lifetime of aircraft composite structures. Aiming at monitoring impacts online while minimizing the weight added to the aircraft to meet the strict limitations of aerospace engineering, this paper puts forward a new digital wireless network based on miniaturized wireless digital impact-monitoring nodes developed for large-scale composite structures. In addition to investigations on the design methods of the network architecture, time synchronization and implementation method, a conflict resolution method based on the feature parameters of digital sequences is first presented to address impact localization conflicts when several nodes are arranged close together. To verify the feasibility and stability of the wireless network, experiments are performed on a complex aircraft composite wing box and an unmanned aerial vehicle (UAV) composite wing. Experimental results show the successful design of the presented network. (paper)

  9. Planck 2013 results. XVII. Gravitational lensing by large-scale structure

    CERN Document Server

    Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Basak, S.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Dechelette, T.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hansen, F.K.; Hanson, D.; Harrison, D.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Ho, S.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Jaffe, T.R.; Jaffe, A.H.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Laureijs, R.J.; Lavabre, A.; Lawrence, C.R.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Lewis, A.; Liguori, M.; Lilje, P.B.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Marshall, D.J.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschenes, M.A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Pullen, A.R.; Rachen, J.P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; White, M.; White, S.D.M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-01-01

    On the arcminute angular scales probed by Planck, the CMB anisotropies are gently perturbed by gravitational lensing. Here we present a detailed study of this effect, detecting lensing independently in the 100, 143, and 217GHz frequency bands with an overall significance of greater than 25sigma. We use the temperature-gradient correlations induced by lensing to reconstruct a (noisy) map of the CMB lensing potential, which provides an integrated measure of the mass distribution back to the CMB last-scattering surface. Our lensing potential map is significantly correlated with other tracers of mass, a fact which we demonstrate using several representative tracers of large-scale structure. We estimate the power spectrum of the lensing potential, finding generally good agreement with expectations from the best-fitting LCDM model for the Planck temperature power spectrum, showing that this measurement at z=1100 correctly predicts the properties of the lower-redshift, later-time structures which source the lensing ...

  10. The cosmic large-scale structure, dark matter and the origin of galaxies

    CERN Document Server

    Frenk, Carlos S

    1998-01-01

    In this series of lectures, I will review the main events and processes which are thought to have led to the build-up of structure in the Universe. First, I will provide an overview of some basic ideas such as inflation, Big Bang nucleosynthesis, the microwave background radiation and gravitational instability. I will then discuss the evidence for dark matter in the universe and current ideas on the nature and amount of this dark matter, including their consequences for the values of the fundamental cosmological parameters. Next, I will review the processes that give rise to the cosmic large-scale structure, starting with a discussion of the main fluctuation damping mechanisms at early times and finishing with a description of the non-linear phases of evolution. I will discuss how these calculations compare with observations and present the current status of competing cosmological models. Finally, I will summarize the most recent and very exciting developments in observational and theoretical studies of gala...

  11. A numerical formulation and algorithm for limit and shakedown analysis of large-scale elastoplastic structures

    Science.gov (United States)

    Peng, Heng; Liu, Yinghua; Chen, Haofeng

    2018-05-01

    In this paper, a novel direct method called the stress compensation method (SCM) is proposed for limit and shakedown analysis of large-scale elastoplastic structures. Without needing to solve a specific mathematical programming problem, the SCM is a two-level iterative procedure based on a sequence of linear elastic finite element solutions where the global stiffness matrix is decomposed only once. In the inner loop, the statically admissible residual stress field for shakedown analysis is constructed. In the outer loop, a series of decreasing load multipliers is updated to approach the shakedown limit multiplier by using an efficient and robust iteration control technique, where the static shakedown theorem is adopted. Three numerical examples with up to about 140,000 finite element nodes confirm the applicability and efficiency of this method for two-dimensional and three-dimensional elastoplastic structures, with detailed discussions on the convergence and the accuracy of the proposed algorithm.
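
    The efficiency claim in this record rests on the global stiffness matrix being factorized once and then reused across the whole sequence of linear elastic solutions. The sketch below illustrates only that factor-reuse pattern with a sparse LU decomposition in Python; the toy stiffness matrix, the load multipliers and the inner update rule are placeholders, not the authors' stress compensation method.

```python
# Sketch of reusing a single sparse LU factorization across many linear elastic
# solves, the ingredient that makes the "decomposed only once" strategy cheap.
# The stiffness matrix, load vectors and inner update are toy placeholders,
# not the stress compensation method itself.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")  # SPD "stiffness"

lu = spla.splu(K)                       # factorize the global matrix once

f = np.ones(n)                          # reference load vector
for lam in np.linspace(1.0, 0.5, 6):    # outer loop: decreasing load multipliers
    correction = np.zeros(n)
    for _ in range(3):                  # inner loop: repeated solves reuse the factors
        u = lu.solve(lam * f + correction)
        correction = 0.1 * u            # placeholder update, not the real SCM rule
print("last displacement norm:", float(np.linalg.norm(u)))
```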

  12. Model abstraction addressing long-term simulations of chemical degradation of large-scale concrete structures

    International Nuclear Information System (INIS)

    Jacques, D.; Perko, J.; Seetharam, S.; Mallants, D.

    2012-01-01

    This paper presents a methodology to assess the spatial-temporal evolution of chemical degradation fronts in real-size concrete structures typical of a near-surface radioactive waste disposal facility. The methodology consists of the abstraction of a so-called full (complicated) model, accounting for the multi-component, multi-scale nature of concrete, to an abstracted (simplified) model which simulates chemical concrete degradation based on a single component in the aqueous and solid phase. The abstracted model is verified against chemical degradation fronts simulated with the full model under both diffusive and advective transport conditions. Implementation in the multi-physics simulation tool COMSOL allows simulation of the spatial-temporal evolution of chemical degradation fronts in large-scale concrete structures. (authors)
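
    As a rough illustration of what a single-component abstracted degradation model computes, the sketch below advances a one-dimensional degradation front in which an aggressive species diffuses into concrete and is consumed by a finite solid buffer. It is a generic shrinking-front caricature, not the authors' abstracted model or its COMSOL implementation, and every parameter value is arbitrary.

```python
# Illustrative 1D single-component degradation front: an aggressive species
# diffuses into the concrete and is consumed by a finite solid buffer.
# Explicit finite differences; every parameter is an arbitrary placeholder.
import numpy as np

nx, L = 200, 0.5                  # grid points, wall thickness [m]
dx = L / nx
D = 1e-11                         # effective diffusivity [m^2/s]
dt = 0.4 * dx ** 2 / D            # stable explicit time step
c = np.zeros(nx); c[0] = 1.0      # normalized boundary concentration
s = np.full(nx, 5.0)              # normalized solid buffer capacity

t, t_end = 0.0, 50 * 365.25 * 24 * 3600.0   # simulate 50 years
while t < t_end:
    lap = np.zeros(nx)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
    c[1:-1] += dt * D * lap[1:-1]
    react = np.minimum(c, s)      # instantaneous consumption by the buffer
    c -= react
    s -= react
    c[0] = 1.0                    # fixed boundary condition
    t += dt

front = dx * np.argmax(s < 2.5)   # depth at which half the buffer is consumed
print(f"degradation front after 50 years: {front:.3f} m")
```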

  13. Large scale gas injection test (Lasgit) performed at the Aespoe Hard Rock Laboratory. Summary report 2008

    International Nuclear Information System (INIS)

    Cuss, R.J.; Harrington, J.F.; Noy, D.J.

    2010-02-01

    This report describes the set-up, operation and observations from the first 1,385 days (3.8 years) of the large scale gas injection test (Lasgit) experiment conducted at the Aespoe Hard Rock Laboratory. During this time the bentonite buffer has been artificially hydrated and has given new insight into the evolution of the buffer. After 2 years (849 days) of artificial hydration, a canister filter was selected for a series of hydraulic and gas tests, a period that lasted 268 days. The results from the gas test showed that the full-scale bentonite buffer behaved in a similar way to previous laboratory experiments. This confirms the up-scaling of laboratory observations, with the addition of considerable information on the stress responses throughout the deposition hole. During the gas testing stage, the buffer continued to be artificially hydrated. Hydraulic results, from controlled and uncontrolled events, show that the buffer continues to mature and has yet to reach full maturation. Lasgit has yielded high quality data relating to the hydration of the bentonite and the evolution in hydrogeological properties adjacent to the deposition hole. The initial hydraulic and gas injection tests confirm the correct working of all control and data acquisition systems. Lasgit has been in successful operation for in excess of 1,385 days.

  14. Large scale gas injection test (Lasgit) performed at the Aespoe Hard Rock Laboratory. Summary report 2008

    Energy Technology Data Exchange (ETDEWEB)

    Cuss, R.J.; Harrington, J.F.; Noy, D.J. (British Geological Survey (United Kingdom))

    2010-02-15

    This report describes the set-up, operation and observations from the first 1,385 days (3.8 years) of the large scale gas injection test (Lasgit) experiment conducted at the Aespoe Hard Rock Laboratory. During this time the bentonite buffer has been artificially hydrated and has given new insight into the evolution of the buffer. After 2 years (849 days) of artificial hydration, a canister filter was selected for a series of hydraulic and gas tests, a period that lasted 268 days. The results from the gas test showed that the full-scale bentonite buffer behaved in a similar way to previous laboratory experiments. This confirms the up-scaling of laboratory observations, with the addition of considerable information on the stress responses throughout the deposition hole. During the gas testing stage, the buffer continued to be artificially hydrated. Hydraulic results, from controlled and uncontrolled events, show that the buffer continues to mature and has yet to reach full maturation. Lasgit has yielded high quality data relating to the hydration of the bentonite and the evolution in hydrogeological properties adjacent to the deposition hole. The initial hydraulic and gas injection tests confirm the correct working of all control and data acquisition systems. Lasgit has been in successful operation for in excess of 1,385 days.

  15. Large scale testing of nitinol shape memory alloy devices for retrofitting of bridges

    International Nuclear Information System (INIS)

    Johnson, Rita; Emmanuel Maragakis, M; Saiid Saiidi, M; Padgett, Jamie E; DesRoches, Reginald

    2008-01-01

    A large scale testing program was conducted to determine the effects of shape memory alloy (SMA) restrainer cables on the seismic performance of in-span hinges of a representative multiple-frame concrete box girder bridge subjected to earthquake excitations. Another objective of the study was to compare the performance of SMA restrainers to that of traditional steel restrainers as restraining devices for reducing hinge displacement and the likelihood of collapse during earthquakes. The results of the tests show that SMA restrainers performed very well as restraining devices. The forces in the SMA and steel restrainers were comparable. However, the SMA restrainer cables had minimal residual strain after repeated loading and exhibited the ability to undergo many cycles with little strength and stiffness degradation. In addition, the hysteretic damping that was observed in the larger ground accelerations demonstrated the ability of the materials to dissipate energy. An analytical study was conducted to assess the anticipated seismic response of the test setup and evaluate the accuracy of the analytical model. The results of the analytical simulation illustrate that the analytical model was able to match the responses from the experimental tests, including peak stresses, strains, forces, and hinge openings

  16. The impact of large scale ionospheric structure on radio occultation retrievals

    Directory of Open Access Journals (Sweden)

    A. J. Mannucci

    2011-12-01

    We study the impact of large-scale ionospheric structure on the accuracy of radio occultation (RO) retrievals. We use a climatological model of the ionosphere as well as an ionospheric data assimilation model to compare quiet and geomagnetically disturbed conditions. The presence of ionospheric electron density gradients during disturbed conditions increases the physical separation of the two GPS frequencies as the GPS signal traverses the ionosphere and atmosphere. We analyze this effect in detail using ray-tracing and a full geophysical retrieval system. During quiet conditions, our results are similar to previously published studies. The impact of a major ionospheric storm is analyzed using data from the 30 October 2003 "Halloween" superstorm period. At 40 km altitude, the refractivity bias under disturbed conditions is approximately three times larger than quiet time. These results suggest the need for ionospheric monitoring as part of an RO-based climate observation strategy. We find that even during quiet conditions, the magnitude of retrieval bias depends critically on assumed ionospheric electron density structure, which may explain variations in previously published bias estimates that use a variety of assumptions regarding large scale ionospheric structure. We quantify the impact of spacecraft orbit altitude on the magnitude of bending angle and retrieval error. Satellites in higher altitude orbits (700+ km) tend to have lower residual biases due to the tendency of the residual bending to cancel between the top and bottomside ionosphere. Another factor affecting accuracy is the commonly-used assumption that refractive index is unity at the receiver. We conclude with remarks on the implications of this study for long-term climate monitoring using RO.

  17. An algebraic sub-structuring method for large-scale eigenvalue calculation

    International Nuclear Information System (INIS)

    Yang, C.; Gao, W.; Bai, Z.; Li, X.; Lee, L.; Husbands, P.; Ng, E.

    2004-01-01

    We examine sub-structuring methods for solving large-scale generalized eigenvalue problems from a purely algebraic point of view. We use the term 'algebraic sub-structuring' to refer to the process of applying matrix reordering and partitioning algorithms to divide a large sparse matrix into smaller submatrices from which a subset of spectral components are extracted and combined to provide approximate solutions to the original problem. We are interested in the question of which spectral components one should extract from each sub-structure in order to produce an approximate solution to the original problem with a desired level of accuracy. An error estimate for the approximation to the smallest eigenpair is developed. The estimate leads to a simple heuristic for choosing spectral components (modes) from each sub-structure. The effectiveness of such a heuristic is demonstrated with numerical examples. We show that algebraic sub-structuring can be effectively used to solve a generalized eigenvalue problem arising from the simulation of an accelerator structure. One interesting characteristic of this application is that the stiffness matrix produced by a hierarchical vector finite element scheme contains a null space of large dimension. We present an efficient scheme to deflate this null space in the algebraic sub-structuring process.
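
    The core idea, extracting a few spectral components from each sub-structure and recombining them, can be caricatured with a block Rayleigh-Ritz projection. The sketch below does this on a toy sparse matrix, adding two crude interface vectors to couple the blocks; it omits the generalized (mass-matrix) form, the proper interface reduction and the null-space deflation discussed in the abstract, so it should be read as a sketch of the idea rather than the paper's algorithm.

```python
# Toy illustration of sub-structure-based spectral approximation: take a few
# eigenvectors of each diagonal block of K, plus two crude interface vectors,
# as a Ritz basis and solve the projected eigenproblem.  Not the algebraic
# sub-structuring (AMLS-type) algorithm of the paper: no mass matrix, no proper
# interface reduction, no null-space deflation.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 400
K = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csc")        # 1D Laplacian "stiffness"

half = n // 2
blocks = [K[:half, :half], K[half:, half:]]           # two sub-structures
basis = []
for i, B in enumerate(blocks):
    _, v = eigsh(B, k=8, sigma=0, which="LM")         # 8 lowest modes per block
    Z = np.zeros((n, v.shape[1]))
    Z[i * half:(i + 1) * half, :] = v
    basis.append(Z)

interface = np.zeros((n, 2))                          # crude interface vectors
interface[half - 1, 0] = 1.0
interface[half, 1] = 1.0
Q, _ = np.linalg.qr(np.hstack(basis + [interface]))   # orthonormal Ritz basis

Kr = Q.T @ (K @ Q)                                    # projected (reduced) problem
ritz = np.sort(np.linalg.eigvalsh(Kr))[:4]
exact = np.sort(eigsh(K, k=4, sigma=0, which="LM")[0])
# the gap between the two rows illustrates why the choice of retained modes matters
print("Ritz approximations:", np.round(ritz, 6))
print("exact lowest       :", np.round(exact, 6))
```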

  18. The topology of large-scale structure. VI - Slices of the universe

    Science.gov (United States)

    Park, Changbom; Gott, J. R., III; Melott, Adrian L.; Karachentsev, I. D.

    1992-01-01

    Results of an investigation of the topology of large-scale structure in two observed slices of the universe are presented. Both slices pass through the Coma cluster and their depths are 100 and 230/h Mpc. The present topology study shows that the largest void in the CfA slice is divided into two smaller voids by a statistically significant line of galaxies. The topology of toy models like the white noise and bubble models is shown to be inconsistent with that of the observed slices. A large N-body simulation was made of the biased cold dark matter model and the slices are simulated by matching them in selection functions and boundary conditions. The genus curves for these simulated slices are spongelike and have a small shift in the direction of a meatball topology like those of the observed slices.

  19. Large-scale structure after COBE: Peculiar velocities and correlations of cold dark matter halos

    Science.gov (United States)

    Zurek, Wojciech H.; Quinn, Peter J.; Salmon, John K.; Warren, Michael S.

    1994-01-01

    Large N-body simulations on parallel supercomputers allow one to simultaneously investigate large-scale structure and the formation of galactic halos with unprecedented resolution. Our study shows that the masses as well as the spatial distribution of halos on scales of tens of megaparsecs in a cold dark matter (CDM) universe with the spectrum normalized to the anisotropies detected by Cosmic Background Explorer (COBE) are compatible with the observations. We also show that the average value of the relative pairwise velocity dispersion σ_v (used as a principal argument against COBE-normalized CDM models) is significantly lower for halos than for individual particles. When the observational methods of extracting σ_v are applied to the redshift catalogs obtained from the numerical experiments, estimates differ significantly between different observation-sized samples and overlap observational estimates obtained following the same procedure.

  20. Time-Sliced Perturbation Theory for Large Scale Structure I: General Formalism

    CERN Document Server

    Blas, Diego; Ivanov, Mikhail M.; Sibiryakov, Sergey

    2016-01-01

    We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein--de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This pave...

  1. Measuring the topology of large-scale structure in the universe

    Science.gov (United States)

    Gott, J. Richard, III

    1988-11-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by walls of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.
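
    The genus statistic behind these results is built from smoothed density fields and their excursion sets. A rough two-dimensional stand-in (not the contour-based algorithm of the paper) is to smooth a field and, at each threshold, count isolated high-density regions minus isolated low-density regions, as sketched below with scipy; the field, smoothing length and thresholds are arbitrary.

```python
# Rough 2D "genus curve" stand-in: for a smoothed random field, count isolated
# regions above each threshold minus isolated regions below it.  This mimics
# the spirit of the topology (genus) statistic; it is not the algorithm of the
# cited papers, and the field, smoothing and thresholds are arbitrary.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
field = rng.normal(size=(256, 256))
field = ndimage.gaussian_filter(field, sigma=8)   # smoothing length in pixels
field = (field - field.mean()) / field.std()      # thresholds in units of sigma

for nu in np.linspace(-2, 2, 9):
    n_high = ndimage.label(field > nu)[1]         # isolated overdense regions
    n_low = ndimage.label(field < nu)[1]          # isolated underdense regions
    print(f"nu = {nu:+.1f}   genus-like statistic = {n_high - n_low:+d}")
```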

  2. The topology of large-scale structure. VI - Slices of the universe

    Science.gov (United States)

    Park, Changbom; Gott, J. R., III; Melott, Adrian L.; Karachentsev, I. D.

    1992-03-01

    Results of an investigation of the topology of large-scale structure in two observed slices of the universe are presented. Both slices pass through the Coma cluster and their depths are 100 and 230/h Mpc. The present topology study shows that the largest void in the CfA slice is divided into two smaller voids by a statistically significant line of galaxies. The topology of toy models like the white noise and bubble models is shown to be inconsistent with that of the observed slices. A large N-body simulation was made of the biased cold dark matter model and the slices are simulated by matching them in selection functions and boundary conditions. The genus curves for these simulated slices are spongelike and have a small shift in the direction of a meatball topology like those of the observed slices.

  3. Measuring the topology of large-scale structure in the universe

    International Nuclear Information System (INIS)

    Gott, J.R. III

    1988-01-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by walls of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data. 45 references

  4. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields in two types of physically motivated non-Gaussian models are examined for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  5. Renormalization-group flow of the effective action of cosmological large-scale structures

    CERN Document Server

    Floerchinger, Stefan

    2017-01-01

    Following an approach of Matarrese and Pietroni, we derive the functional renormalization group (RG) flow of the effective action of cosmological large-scale structures. Perturbative solutions of this RG flow equation are shown to be consistent with standard cosmological perturbation theory. Non-perturbative approximate solutions can be obtained by truncating the a priori infinite set of possible effective actions to a finite subspace. Using for the truncated effective action a form dictated by dissipative fluid dynamics, we derive RG flow equations for the scale dependence of the effective viscosity and sound velocity of non-interacting dark matter, and we solve them numerically. Physically, the effective viscosity and sound velocity account for the interactions of long-wavelength fluctuations with the spectrum of smaller-scale perturbations. We find that the RG flow exhibits an attractor behaviour in the IR that significantly reduces the dependence of the effective viscosity and sound velocity on the input ...

  6. Cosmological large-scale structures beyond linear theory in modified gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bernardeau, Francis; Brax, Philippe, E-mail: francis.bernardeau@cea.fr, E-mail: philippe.brax@cea.fr [CEA, Institut de Physique Théorique, 91191 Gif-sur-Yvette Cédex (France)

    2011-06-01

    We consider the effect of modified gravity on the growth of large-scale structures at second order in perturbation theory. We show that modified gravity models changing the linear growth rate of fluctuations are also bound to change, although mildly, the mode coupling amplitude in the density and reduced velocity fields. We present explicit formulae which describe this effect. We then focus on models of modified gravity involving a scalar field coupled to matter, in particular chameleons and dilatons, where it is shown that there exists a transition scale around which the existence of an extra scalar degree of freedom induces significant changes in the coupling properties of the cosmic fields. We obtain the amplitude of this effect for realistic dilaton models at the tree-order level for the bispectrum, finding them to be comparable in amplitude to those obtained in the DGP and f(R) models.

  7. Large Scale Chromosome Folding Is Stable against Local Changes in Chromatin Structure.

    Directory of Open Access Journals (Sweden)

    Ana-Maria Florescu

    2016-06-01

    Characterizing the link between small-scale chromatin structure and large-scale chromosome folding during interphase is a prerequisite for understanding transcription. Yet, this link remains poorly investigated. Here, we introduce a simple biophysical model where interphase chromosomes are described in terms of the folding of chromatin sequences composed of alternating blocks of fibers with different thicknesses and flexibilities, and we use it to study the influence of sequence disorder on chromosome behaviors in space and time. By employing extensive computer simulations, we thus demonstrate that chromosomes undergo noticeable conformational changes only on length-scales smaller than 10^5 basepairs and time-scales shorter than a few seconds, and we suggest there might exist effective upper bounds to the detection of chromosome reorganization in eukaryotes. We prove the relevance of our framework by modeling recent experimental FISH data on murine chromosomes.

  8. Bounds on isocurvature perturbations from cosmic microwave background and large scale structure data.

    Science.gov (United States)

    Crotty, Patrick; García-Bellido, Juan; Lesgourgues, Julien; Riazuelo, Alain

    2003-10-24

    We obtain very stringent bounds on the possible cold dark matter, baryon, and neutrino isocurvature contributions to the primordial fluctuations in the Universe, using recent cosmic microwave background and large scale structure data. Neglecting the possible effects of spatial curvature, tensor perturbations, and reionization, we perform a Bayesian likelihood analysis with nine free parameters, and find that the amplitude of the isocurvature component cannot be larger than about 31% for the cold dark matter mode, 91% for the baryon mode, 76% for the neutrino density mode, and 60% for the neutrino velocity mode, at 2sigma, for uncorrelated models. For correlated adiabatic and isocurvature components, the fraction could be slightly larger. However, the cross-correlation coefficient is strongly constrained, and maximally correlated/anticorrelated models are disfavored. This puts strong bounds on the curvaton model.

  9. Large scale structure from the Higgs fields of the supersymmetric standard model

    International Nuclear Information System (INIS)

    Bastero-Gil, M.; Di Clemente, V.; King, S.F.

    2003-01-01

    We propose an alternative implementation of the curvaton mechanism for generating the curvature perturbations which does not rely on a late decaying scalar decoupled from inflation dynamics. In our mechanism the supersymmetric Higgs scalars are coupled to the inflaton in a hybrid inflation model, and this allows the conversion of the isocurvature perturbations of the Higgs fields to the observed curvature perturbations responsible for large scale structure to take place during reheating. We discuss an explicit model which realizes this mechanism in which the μ term in the Higgs superpotential is generated after inflation by the vacuum expectation value of a singlet field. The main prediction of the model is that the spectral index should deviate significantly from unity, |n-1| ∼ 0.1. We also expect relic isocurvature perturbations in neutralinos and baryons, but no significant departures from Gaussianity and no observable effects of gravity waves in the CMB spectrum.

  10. Sampling based uncertainty analysis of 10% hot leg break LOCA in large scale test facility

    International Nuclear Information System (INIS)

    Sengupta, Samiran; Kraina, V.; Dubey, S. K.; Rao, R. S.; Gupta, S. K.

    2010-01-01

    Sampling-based uncertainty analysis was carried out to quantify the uncertainty in predictions of the best estimate code RELAP5/MOD3.2 for a thermal hydraulic test (10% hot leg break LOCA) performed in the Large Scale Test Facility (LSTF) as a part of an IAEA coordinated research project. The nodalisation of the test facility was qualified at both the steady-state and transient levels by systematically applying the procedures of the uncertainty methodology based on accuracy extrapolation (UMAE); uncertainty analysis was carried out using the Latin hypercube sampling (LHS) method to evaluate the uncertainty for ten input parameters. Sixteen output parameters were selected for uncertainty evaluation, and uncertainty bands between the 5th and 95th percentiles of the output parameters were evaluated. It was observed that the uncertainty band for the primary pressure during the two-phase blowdown is larger than that of the remaining period. Similarly, a larger uncertainty band is observed relating to accumulator injection flow during the reflood phase. Importance analysis was also carried out and standard rank regression coefficients were computed to quantify the effect of each individual input parameter on the output parameters. It was observed that the break discharge coefficient is the most important uncertain parameter relating to the prediction of all the primary side parameters and that the steam generator (SG) relief pressure setting is the most important parameter in predicting the SG secondary pressure.
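
    The workflow described here (Latin hypercube sampling of uncertain inputs, percentile bands on the outputs, and standardized rank regression coefficients for importance ranking) can be sketched generically as follows. A cheap analytic function stands in for the RELAP5/MOD3.2 runs, and the parameter names, ranges and sample size are invented for illustration.

```python
# Generic sampling-based uncertainty and importance analysis: Latin hypercube
# sample of the inputs, 5th/95th percentile band of an output, and standardized
# rank regression coefficients (SRRC).  The "model" is a cheap analytic stand-in
# for the thermal-hydraulic code; names and ranges are invented.
import numpy as np
from scipy.stats import qmc, rankdata

names = [f"p{i}" for i in range(10)]                 # 10 uncertain inputs
lo, hi = np.full(10, 0.8), np.full(10, 1.2)          # +/-20% ranges (illustrative)

sampler = qmc.LatinHypercube(d=10, seed=1)
X = qmc.scale(sampler.random(n=200), lo, hi)         # 200 LHS code runs

def model(x):                                        # stand-in for one code run
    return 7.0 * x[:, 0] ** 2 + 3.0 * x[:, 3] + 0.5 * x[:, 7] + 0.1 * x.sum(axis=1)

y = model(X)
print("5th/95th percentile band:", np.round(np.percentile(y, [5, 95]), 3))

# SRRC: least squares on rank-transformed, standardized inputs and output
Xr = rankdata(X, axis=0).astype(float)
yr = rankdata(y).astype(float)
Xr = (Xr - Xr.mean(0)) / Xr.std(0)
yr = (yr - yr.mean()) / yr.std()
srrc, *_ = np.linalg.lstsq(Xr, yr, rcond=None)
for name, coef in sorted(zip(names, srrc), key=lambda t: -abs(t[1])):
    print(f"{name}: SRRC = {coef:+.2f}")
```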

  11. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    Science.gov (United States)

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high-performance computation technology in the last decade, the simulation of large-scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we describe an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self-generation of connectivity of large-scale networks. We show and discuss the results of simulations on simple two-population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework.
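
    The homeostatic rule described in this record (synaptic elements are created or removed so that each neuron's electrical activity approaches a target level) can be caricatured outside of NEST as below. This is a deliberately crude toy, not the NEST implementation or its API; the linear rate model, growth constants and update probabilities are all invented.

```python
# Toy homeostatic structural plasticity: each neuron creates or deletes synapses
# to drive its activity toward a target rate.  A crude caricature of the idea
# only; not the NEST model or its API, and every constant is invented.
import numpy as np

rng = np.random.default_rng(42)
N, target, steps = 100, 5.0, 2000
W = np.zeros((N, N))                        # synapse counts (post x pre)
ext = rng.uniform(2.0, 8.0, size=N)         # external drive (arbitrary units)

def rates(W):
    return ext + 0.1 * W.sum(axis=1)        # trivial linear "activity" model

for _ in range(steps):
    r = rates(W)
    grow = 0.05 * np.clip(target - r, 0.0, None)    # below target: add elements
    shrink = 0.05 * np.clip(r - target, 0.0, None)  # above target: delete synapses
    for i in np.where(shrink > rng.random(N))[0]:
        pres = np.nonzero(W[i])[0]
        if pres.size:
            W[i, rng.choice(pres)] -= 1
    for i in np.where(grow > rng.random(N))[0]:
        W[i, rng.integers(N)] += 1          # pair with a random presynaptic partner

print(f"mean rate {rates(W).mean():.2f} (target {target}), "
      f"total synapses {int(W.sum())}")
```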

  12. Imprints of the large-scale structure on AGN formation and evolution

    Science.gov (United States)

    Porqueres, Natàlia; Jasche, Jens; Enßlin, Torsten A.; Lavaux, Guilhem

    2018-04-01

    Black hole masses are found to correlate with several global properties of their host galaxies, suggesting that black holes and galaxies have an intertwined evolution and that active galactic nuclei (AGN) have a significant impact on galaxy evolution. Since the large-scale environment can also affect AGN, this work studies how their formation and properties depend on the environment. We have used a reconstructed three-dimensional high-resolution density field obtained from a Bayesian large-scale structure reconstruction method applied to the 2M++ galaxy sample. A web-type classification relying on the shear tensor is used to identify different structures on the cosmic web, defining voids, sheets, filaments, and clusters. We confirm that the environmental density affects the AGN formation and their properties. We found that the AGN abundance is equivalent to the galaxy abundance, indicating that active and inactive galaxies reside in similar dark matter halos. However, occurrence rates are different for each spectral type and accretion rate. These differences are consistent with the AGN evolutionary sequence suggested by previous authors, Seyferts and Transition objects transforming into low-ionization nuclear emission line regions (LINERs), the weaker counterpart of Seyferts. We conclude that AGN properties depend on the environmental density more than on the web-type. More powerful starbursts and younger stellar populations are found in high densities, where interactions and mergers are more likely. AGN hosts show smaller masses in clusters for Seyferts and Transition objects, which might be due to gas stripping. In voids, the AGN population is dominated by the most massive galaxy hosts.
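
    The web-type classification mentioned here typically counts how many eigenvalues of the shear (tidal) tensor exceed a threshold: 0 for a void, 1 for a sheet, 2 for a filament and 3 for a cluster. The sketch below applies that counting rule to the tidal tensor of a toy random field computed in Fourier space; the field, smoothing scale and threshold are placeholders, not the 2M++ reconstruction used in the paper.

```python
# T-web style classification: eigenvalues of the tidal tensor of a smoothed
# density field; the number of eigenvalues above lambda_th defines
# void (0), sheet (1), filament (2), cluster (3).  Toy field, illustrative threshold.
import numpy as np

n = 64
rng = np.random.default_rng(11)
delta = rng.normal(size=(n, n, n))                 # toy density contrast field
delta_k = np.fft.fftn(delta)

k = np.fft.fftfreq(n) * 2 * np.pi
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx ** 2 + ky ** 2 + kz ** 2
k2[0, 0, 0] = 1.0                                  # avoid division by zero at k=0

smooth = np.exp(-0.5 * k2 * 2.0 ** 2)              # Gaussian smoothing, R = 2 cells
phi_k = -delta_k * smooth / k2                     # Poisson equation in Fourier space
phi_k[0, 0, 0] = 0.0

kvec = (kx, ky, kz)
T = np.empty((n, n, n, 3, 3))
for a in range(3):
    for b in range(3):
        T[..., a, b] = np.fft.ifftn(-kvec[a] * kvec[b] * phi_k).real  # d2phi/dx_a dx_b

eigvals = np.linalg.eigvalsh(T)                    # sorted eigenvalues per cell
lambda_th = 0.1                                    # illustrative threshold
web_type = (eigvals > lambda_th).sum(axis=-1)      # 0 void, 1 sheet, 2 filament, 3 cluster
for t, name in enumerate(["void", "sheet", "filament", "cluster"]):
    print(f"{name:9s}: {(web_type == t).mean():.2%} of cells")
```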

  13. Segmentation and fragmentation of melt jets due to generation of large-scale structures. Observation in low subcooling conditions

    International Nuclear Information System (INIS)

    Sugiyama, Ken-ichiro; Yamada, Tsuyoshi

    1999-01-01

    In order to clarify a mechanism of melt-jet breakup and fragmentation entirely different from the mechanism of stripping, a series of experiments were carried out by using molten tin jets of 100 grams with initial temperatures from 250degC to 900degC. Molten tin jets with a small kinematic viscosity and a large thermal diffusivity were used to observe breakup and fragmentation of melt jets enhanced thermally and hydrodynamically. We observed jet columns with second-stage large-scale structures generated by the coalescence of large-scale structures recognized in the field of fluid mechanics. At a greater depth, the segmentation of jet columns between second-stage large-scale structures and the fragmentation of the segmented jet columns were observed. It is reasonable to consider that the segmentation and the fragmentation of jet columns are caused by the boiling of water hydrodynamically entrained within second-stage large-scale structures. (author)

  14. A BAYESIAN ESTIMATE OF THE CMB–LARGE-SCALE STRUCTURE CROSS-CORRELATION

    Energy Technology Data Exchange (ETDEWEB)

    Moura-Santos, E. [Instituto de Física, Universidade de São Paulo, Rua do Matão trav. R 187, 05508-090, São Paulo—SP (Brazil); Carvalho, F. C. [Departamento de Física, Universidade do Estado do Rio Grande do Norte, 59610-210, Mossoró-RN (Brazil); Penna-Lima, M. [APC, AstroParticule et Cosmologie, Université Paris Diderot, CNRS/IN2P3, CEA/Irfu, Observatoire de Paris, Sorbonne Paris Cité, 10, rue Alice Domon et Léonie Duquet, F-75205 Paris Cedex 13 (France); Novaes, C. P.; Wuensche, C. A., E-mail: emoura@if.usp.br, E-mail: fabiocabral@uern.br, E-mail: pennal@apc.in2p3.fr, E-mail: cawuenschel@das.inpe.br, E-mail: camilanovaes@on.br [Observatório Nacional, Rua General José Cristino 77, São Cristóvão, 20921-400, Rio de Janeiro, RJ (Brazil)

    2016-08-01

    Evidence for late-time acceleration of the universe is provided by multiple probes, such as Type Ia supernovae, the cosmic microwave background (CMB), and large-scale structure (LSS). In this work, we focus on the integrated Sachs–Wolfe (ISW) effect, i.e., secondary CMB fluctuations generated by evolving gravitational potentials due to the transition between, e.g., the matter and dark energy (DE) dominated phases. Therefore, assuming a flat universe, DE properties can be inferred from ISW detections. We present a Bayesian approach to compute the CMB–LSS cross-correlation signal. The method is based on the estimate of the likelihood for measuring a combined data set consisting of a CMB temperature map and a galaxy contrast map, provided that we have some information on the statistical properties of the fluctuations affecting these maps. The likelihood is estimated by a sampling algorithm, thereby avoiding the computationally demanding techniques of direct evaluation in either pixel or harmonic space. As local tracers of the matter distribution at large scales, we used the Two Micron All Sky Survey galaxy catalog and, for the CMB temperature fluctuations, the ninth-year data release of the Wilkinson Microwave Anisotropy Probe (WMAP9). The results show a dominance of cosmic variance over the weak recovered signal, due mainly to the shallowness of the catalog used, with systematics associated with the sampling algorithm playing a secondary role as sources of uncertainty. When combined with other complementary probes, the method presented in this paper is expected to be a useful tool for late-time acceleration studies in cosmology.
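
    The quantity at the heart of this record is the cross-correlation between a CMB temperature map and a galaxy density-contrast map. Independently of the Bayesian machinery of the paper, a quick-look cross power spectrum can be computed with healpy, as sketched below on simulated maps; the input spectra, resolution and correlation structure are placeholders, and real data would additionally require masks, beams and the full likelihood treatment described in the abstract.

```python
# Quick-look CMB x galaxy cross power spectrum on simulated HEALPix maps.
# This is only the raw pseudo-C_l estimate; it is not the Bayesian sampling
# estimator of the paper and it ignores masks, beams and noise modelling.
import numpy as np
import healpy as hp

nside, lmax = 128, 256
ell = np.arange(lmax + 1)

# placeholder spectra: a shared "ISW-like" component plus independent parts
cl_common = 1e-3 / (ell + 10.0) ** 2
cl_t_only = 1e-4 / (ell + 10.0) ** 2
cl_g_only = 5e-4 / (ell + 10.0) ** 2

alm_c = hp.synalm(cl_common, lmax=lmax)              # shared (correlated) part
alm_t = alm_c + hp.synalm(cl_t_only, lmax=lmax)      # CMB temperature alm
alm_g = alm_c + hp.synalm(cl_g_only, lmax=lmax)      # galaxy contrast alm
t_map = hp.alm2map(alm_t, nside)
g_map = hp.alm2map(alm_g, nside)

cl_cross = hp.anafast(t_map, g_map, lmax=lmax)       # raw cross-spectrum
band = slice(10, 60)
print("band-averaged C_l^{Tg}: input %.3e, recovered %.3e"
      % (cl_common[band].mean(), cl_cross[band].mean()))
```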

  15. A large-scale soil-structure interaction experiment: Design and construction

    International Nuclear Information System (INIS)

    Tang, H.T.; Tang, Y.K.; Stepp, J.C.; Wall, I.B.; Lin, E.; Cheng, S.C.; Lee, S.K.

    1989-01-01

    This paper describes the design and construction phase of the Large-Scale Soil-Structure Interaction Experiment project jointly sponsored by EPRI and Taipower. The project has two objectives: 1. to obtain an earthquake database which can be used to substantiate soil-structure interaction (SSI) models and analysis methods; and 2. to quantify nuclear power plant reactor containment and internal components seismic margin based on earthquake experience data. These objectives were accomplished by recording and analyzing data from two instrumented, scaled down, reinforced concrete containment structures during seismic events. The two model structures are sited in a high seismic region in Taiwan (SMART-1). A strong-motion seismic array network is located at the site. The containment models (1/4- and 1/12-scale) were constructed and instrumented specially for this experiment. Construction was completed and data recording began in September 1985. By November 1986, 18 strong motion earthquakes ranging from Richter magnitude 4.5 to 7.0 were recorded. (orig./HP)

  16. A large-scale soil-structure interaction experiment: Part I design and construction

    International Nuclear Information System (INIS)

    Tang, H.T.; Tang, Y.K.; Wall, I.B.; Lin, E.

    1987-01-01

    In the simulated earthquake experiments (SIMQUAKE) sponsored by EPRI, the detonation of vertical arrays of explosives propagated wave motions through the ground to the model structures. Although such a simulation can provide information about dynamic soil-structure interaction (SSI) characteristics in a strong motion environment, it lacks the seismic wave scattering characteristics needed for studying seismic input to the soil-structure system and the effect of different kinds of wave composition on the soil-structure response. To supplement the inadequacy of the simulated earthquake SSI experiment, the Electric Power Research Institute (EPRI) and the Taiwan Power Company (Taipower) jointly sponsored a large scale SSI experiment in the field. The objectives of the experiment are: (1) to obtain an actual strong-motion, earthquake-induced database in a soft-soil environment which will substantiate predictive and design SSI models; and (2) to assess the dynamic response and margins of nuclear power plant reactor containment internal components under actual earthquake-induced excitation. These objectives are accomplished by recording and analyzing data from two instrumented, scaled-down (1/4- and 1/12-scale) reinforced concrete containments sited in a high seismic region in Taiwan where a strong-motion seismic array network is located.

  17. Kinematic morphology of large-scale structure: evolution from potential to rotational flow

    International Nuclear Information System (INIS)

    Wang, Xin; Szalay, Alex; Aragón-Calvo, Miguel A.; Neyrinck, Mark C.; Eyink, Gregory L.

    2014-01-01

    As an alternative way to describe the cosmological velocity field, we discuss the evolution of rotational invariants constructed from the velocity gradient tensor. Compared with the traditional divergence-vorticity decomposition, these invariants, defined as coefficients of the characteristic equation of the velocity gradient tensor, enable a complete classification of all possible flow patterns in the dark-matter comoving frame, including both potential and vortical flows. We show that this tool, first introduced in turbulence two decades ago, is very useful for understanding the evolution of the cosmic web structure, and in classifying its morphology. Before shell crossing, different categories of potential flow are highly associated with the cosmic web structure because of the coherent evolution of density and velocity. This correspondence is even preserved at some level when vorticity is generated after shell crossing. The evolution from the potential to vortical flow can be traced continuously by these invariants. With the help of this tool, we show that the vorticity is generated in a particular way that is highly correlated with the large-scale structure. This includes a distinct spatial distribution and different types of alignment between the cosmic web and vorticity direction for various vortical flows. Incorporating shell crossing into closed dynamical systems is highly non-trivial, but we propose a possible statistical explanation for some of the phenomena relating to the internal structure of the three-dimensional invariant space.
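
    The rotational invariants discussed here are simply the coefficients of the characteristic polynomial of the velocity gradient tensor, and its symmetric/antisymmetric split separates potential from vortical flow. A minimal sketch for an arbitrary 3x3 gradient is given below; sign and normalization conventions vary between papers and are chosen here purely for illustration.

```python
# Invariants of the velocity gradient tensor A_ij = dv_i/dx_j, i.e. the
# coefficients of det(lambda I - A) = lambda^3 - I1 lambda^2 + I2 lambda - I3,
# plus the symmetric/antisymmetric split into strain and rotation rates.
# Conventions (signs, normalizations) are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3))                 # stand-in velocity gradient tensor

I1 = np.trace(A)                                        # divergence
I2 = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))
I3 = np.linalg.det(A)

S = 0.5 * (A + A.T)                         # strain rate (potential part)
W = 0.5 * (A - A.T)                         # rotation rate (vortical part)

print(f"I1 = {I1:+.3f}, I2 = {I2:+.3f}, I3 = {I3:+.3f}")
print(f"|S|^2 = {np.sum(S * S):.3f}, |W|^2 = {np.sum(W * W):.3f} (zero for potential flow)")
```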

  18. Kinematic morphology of large-scale structure: evolution from potential to rotational flow

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xin; Szalay, Alex; Aragón-Calvo, Miguel A.; Neyrinck, Mark C.; Eyink, Gregory L. [Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States)

    2014-09-20

    As an alternative way to describe the cosmological velocity field, we discuss the evolution of rotational invariants constructed from the velocity gradient tensor. Compared with the traditional divergence-vorticity decomposition, these invariants, defined as coefficients of the characteristic equation of the velocity gradient tensor, enable a complete classification of all possible flow patterns in the dark-matter comoving frame, including both potential and vortical flows. We show that this tool, first introduced in turbulence two decades ago, is very useful for understanding the evolution of the cosmic web structure, and in classifying its morphology. Before shell crossing, different categories of potential flow are highly associated with the cosmic web structure because of the coherent evolution of density and velocity. This correspondence is even preserved at some level when vorticity is generated after shell crossing. The evolution from the potential to vortical flow can be traced continuously by these invariants. With the help of this tool, we show that the vorticity is generated in a particular way that is highly correlated with the large-scale structure. This includes a distinct spatial distribution and different types of alignment between the cosmic web and vorticity direction for various vortical flows. Incorporating shell crossing into closed dynamical systems is highly non-trivial, but we propose a possible statistical explanation for some of the phenomena relating to the internal structure of the three-dimensional invariant space.

  19. SULTAN test facility for large-scale vessel coolability in natural convection at low pressure

    International Nuclear Information System (INIS)

    Rouge, S.

    1997-01-01

    The SULTAN facility (France/CEA/CENG) was designed to study large-scale structure coolability by water in boiling natural convection. The objectives are to measure the main characteristics of two-dimensional, two-phase flow, in order to evaluate the recirculation mass flow in large systems, and the limits of the critical heat flux (CHF) for a wide range of thermo-hydraulic (pressure, 0.1-0.5 MPa; inlet temperature, 50-150 °C; mass flow velocity, 5-4400 kg s⁻¹ m⁻²; flux, 100-1000 kW m⁻²) and geometric (gap, 3-15 cm; inclination, 0-90°) parameters. This paper makes available the experimental data obtained during the first two campaigns (90°, 3 cm; 10°, 15 cm): pressure drop DP = f(G), CHF limits, local profiles of temperature and void fraction in the gap, and visualizations. Other campaigns should confirm these first results, indicating a favourable possibility of the coolability of large surfaces under natural convection. (orig.)

  20. The large-scale vented combustion test facility at AECL-WL: description and preliminary test results

    International Nuclear Information System (INIS)

    Loesel Sitar, J.; Koroll, G.W.; Dewit, W.A.; Bowles, E.M.; Harding, J.; Sabanski, C.L.; Kumar, R.K.

    1997-01-01

    Implementation of hydrogen mitigation systems in nuclear reactor containments requires testing the effectiveness of the mitigation system, reliability and availability of the hardware, potential consequences of its use and the technical basis for hardware placement, on a meaningful scale. Similarly, the development and validation of containment codes used in nuclear reactor safety analysis require detailed combustion data from medium- and large-scale facilities. A Large-Scale Combustion Test Facility measuring 10 m x 4 m x 3 m (volume, 120 m³) has been constructed and commissioned at Whiteshell Laboratories to perform a wide variety of combustion experiments. The facility is designed to be versatile so that many geometrical configurations can be achieved. The facility incorporates extensive capabilities for instrumentation and high speed data acquisition, on-line gas sampling and analysis. Other features of the facility include operation at elevated temperatures up to 150 °C, easy access to the interior, and remote operation. Initial thermodynamic conditions in the facility can be controlled to within 0.1 vol% of constituent gases. The first series of experiments examined vented combustion in the full 120 m³ configuration with vent areas in the range of 0.56 to 2.24 m². The experiments were performed at ∼27 °C and near-atmospheric pressures, with hydrogen concentrations in the range of 8 to 12% by volume. This paper describes the Large-Scale Vented Combustion Test Facility and preliminary results from the first series of experiments. (author)

  1. Identifiability in N-mixture models: a large-scale screening test with bird data.

    Science.gov (United States)

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
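
    A binomial N-mixture model treats the count y at site i and visit t as Binomial(N_i, p), with the latent abundance N_i drawn from, e.g., a Poisson(lambda), and the likelihood marginalizes N_i up to a finite upper bound. The sketch below fits that basic Poisson variant to simulated counts with scipy; it is a generic illustration, not the author's screening pipeline or the R/unmarked implementation, and the sample sizes and truncation bound are arbitrary.

```python
# Basic Poisson binomial N-mixture model: y[i, t] ~ Binomial(N_i, p) with
# N_i ~ Poisson(lam); the likelihood marginalizes the latent N_i up to N_MAX.
# Generic illustration only (no covariates, no ZIP or negative-binomial mixtures).
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n_sites, n_visits, lam_true, p_true = 150, 3, 4.0, 0.4
N = rng.poisson(lam_true, n_sites)
y = rng.binomial(N[:, None], p_true, size=(n_sites, n_visits))

N_MAX = 50
Ns = np.arange(N_MAX + 1)

def nll(theta):
    lam = np.exp(theta[0])                          # log link
    p = 1.0 / (1.0 + np.exp(-theta[1]))             # logit link
    prior = poisson.pmf(Ns, lam)                    # P(N = k)
    # P(y_i1 .. y_iT | N = k, p) for every site and every candidate k
    lik = np.prod(binom.pmf(y[:, None, :], Ns[None, :, None], p), axis=2)
    return -np.sum(np.log(lik @ prior + 1e-300))    # marginalize over N

fit = minimize(nll, x0=[np.log(2.0), 0.0], method="Nelder-Mead")
lam_hat, p_hat = np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1]))
print(f"true (lambda, p) = ({lam_true}, {p_true});  MLE = ({lam_hat:.2f}, {p_hat:.2f})")
```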

  2. Large-Scale Pumping Test Recommendations for the 200-ZP-1 Operable Unit

    Energy Technology Data Exchange (ETDEWEB)

    Spane, Frank A.

    2010-09-08

    CH2M Hill Plateau Remediation Company (CHPRC) is currently assessing aquifer characterization needs to optimize pump-and-treat remedial strategies (e.g., extraction well pumping rates, pumping schedule/design) in the 200-ZP-1 operable unit (OU), and in particular for the immediate area of the 241 TX-TY Tank Farm. Specifically, CHPRC is focusing on hydrologic characterization opportunities that may be available for newly constructed and planned ZP-1 extraction wells. These new extraction wells will be used to further refine the 3-dimensional subsurface contaminant distribution within this area and will be used in concert with other existing pump-and-treat wells to remediate the existing carbon tetrachloride contaminant plume. Currently, 14 extraction wells are actively used in the Interim Record of Decision ZP-1 pump-and-treat system for the purpose of remediating the existing carbon tetrachloride contamination in groundwater within this general area. As many as 20 new extraction wells and 17 injection wells may be installed to support final pump-and-treat operations within the OU area. It should be noted that although the report specifically refers to the 200-ZP-1 OU, the large-scale test recommendations are also applicable to the adjacent 200-UP-1 OU area. This is because of the similar hydrogeologic conditions exhibited within these two adjoining OU locations.

  3. A Polar Rover for Large-Scale Scientific Surveys: Design, Implementation and Field Test Results

    Directory of Open Access Journals (Sweden)

    Yuqing He

    2015-10-01

    Exploration of polar regions is of great importance to scientific research. Unfortunately, due to the harsh environment, most of the regions on the Antarctic continent are still unreachable for humankind. Therefore, in 2011, the Chinese National Antarctic Research Expedition (CHINARE) launched a project to design a rover to conduct large-scale scientific surveys in the Antarctic. The main challenges for the rover are twofold: one is the mobility, i.e., how to make a rover that could survive the harsh environment and safely move on the uneven, icy and snowy terrain; the other is the autonomy, in that the robot should be able to move at a relatively high speed with little or no human intervention so that it can explore a large region in a limited time interval under the communication constraints. In this paper, the corresponding techniques, especially the polar rover's design and autonomous navigation algorithms, are introduced in detail. Subsequently, an experimental report of the field tests in the Antarctic is given to show some preliminary evaluation of the rover. Finally, experiences and existing challenging problems are summarized.

  4. Test-particle simulations of SEP propagation in IMF with large-scale fluctuations

    Science.gov (United States)

    Kelly, J.; Dalla, S.; Laitinen, T.

    2012-11-01

    The results of full-orbit test-particle simulations of SEPs propagating through an IMF which exhibits large-scale fluctuations are presented. A variety of propagation conditions are simulated - scatter-free, and scattering with mean free path, λ, of 0.3 and 2.0 AU - and the cross-field transport of SEPs is investigated. When calculating cross-field displacements the Parker spiral geometry is accounted for and the role of magnetic field expansion is taken into account. It is found that transport across the magnetic field is enhanced in the λ = 0.3 AU and λ = 2 AU cases, compared to the scatter-free case, with the λ = 2 AU case in particular containing outlying particles that had strayed a large distance across the IMF. Outliers are categorized by means of Chauvenet's criterion and it is found that typically between 1 and 2% of the population falls within this category. The ratio of latitudinal to longitudinal diffusion coefficient perpendicular to the magnetic field is typically 0.2, suggesting that transport in latitude is less efficient.
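
    Chauvenet's criterion, used above to flag outlying particles, rejects a point when the expected number of points deviating at least as far from the sample mean, under a normal assumption, falls below one half. A minimal generic implementation (not tied to the authors' simulation output) is:

```python
# Chauvenet's criterion: flag x_k as an outlier if
# N * P(|X - mean| >= |x_k - mean|) < 0.5 under a normal model fitted to the sample.
# Generic helper with arbitrary demo data.
import numpy as np
from scipy.stats import norm

def chauvenet_outliers(x):
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    p_tail = 2.0 * norm.sf(np.abs(x - mu) / sigma)   # two-sided tail probability
    return x.size * p_tail < 0.5                     # boolean mask of outliers

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 500), [6.5, -7.2]])  # two injected outliers
mask = chauvenet_outliers(data)
print(f"flagged {mask.sum()} of {data.size} points ({100 * mask.mean():.1f}%)")
```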

  5. A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures

    Science.gov (United States)

    Kaveh, A.; Ilchi Ghazaan, M.

    2018-02-01

    In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.
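
    To give a rough feel for how a vibrating-particles-system style search behaves, the sketch below moves each candidate toward a damped, randomized combination of the best solution found so far, a better-ranked particle and a worse-ranked particle. It is a heavily simplified stand-in with invented weights and damping law, run on an unconstrained test function; it includes neither the multi-DVC cascading nor the upper-bound strategy of the paper, and should not be read as the authors' MDVC-UVPS.

```python
# Heavily simplified VPS-flavoured population search on a toy objective.
# Each particle is pulled toward the historically best solution (HB), a
# better-ranked particle (GP) and a worse-ranked particle (BP), with a damping
# factor that changes over the iterations.  Weights, damping law and objective
# are illustrative stand-ins, not the paper's MDVC-UVPS.
import numpy as np

rng = np.random.default_rng(1)
dim, n_pop, iters = 20, 30, 300
f = lambda x: np.sum(x ** 2, axis=-1)           # toy objective (sphere function)

X = rng.uniform(-5.0, 5.0, size=(n_pop, dim))
best_x, best_f = X[0].copy(), np.inf
w1, w2, w3 = 0.3, 0.3, 0.4                      # illustrative weights (sum to 1)

for it in range(1, iters + 1):
    fx = f(X)
    order = np.argsort(fx)
    if fx[order[0]] < best_f:
        best_f, best_x = fx[order[0]], X[order[0]].copy()
    D = (it / iters) ** -0.1                    # illustrative damping factor
    for i in range(n_pop):
        gp = X[order[rng.integers(0, n_pop // 2)]]          # better-ranked particle
        bp = X[order[rng.integers(n_pop // 2, n_pop)]]      # worse-ranked particle
        A = w1 * (best_x - X[i]) + w2 * (gp - X[i]) + w3 * (bp - X[i])
        X[i] = (w1 * (D * A * rng.random(dim) + best_x)
                + w2 * (D * A * rng.random(dim) + gp)
                + w3 * (D * A * rng.random(dim) + bp))

print(f"best objective after {iters} iterations: {best_f:.3e}")
```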

  6. Evaluating neighborhood structures for modeling intercity diffusion of large-scale dengue epidemics.

    Science.gov (United States)

    Wen, Tzai-Hung; Hsu, Ching-Shun; Hu, Ming-Che

    2018-05-03

    Dengue fever is a vector-borne infectious disease that is transmitted by contact between vector mosquitoes and susceptible hosts. The literature has addressed the issue on quantifying the effect of individual mobility on dengue transmission. However, there are methodological concerns in the spatial regression model configuration for examining the effect of intercity-scale human mobility on dengue diffusion. The purposes of the study are to investigate the influence of neighborhood structures on intercity epidemic progression from pre-epidemic to epidemic periods and to compare definitions of different neighborhood structures for interpreting the spread of dengue epidemics. We proposed a framework for assessing the effect of model configurations on dengue incidence in 2014 and 2015, which were the most severe outbreaks in 70 years in Taiwan. Compared with the conventional model configuration in spatial regression analysis, our proposed model used a radiation model, which reflects population flow between townships, as a spatial weight to capture the structure of human mobility. The results of our model demonstrate better model fitting performance, indicating that the structure of human mobility has better explanatory power in dengue diffusion than the geometric structure of administration boundaries and geographic distance between centroids of cities. We also identified spatial-temporal hierarchy of dengue diffusion: dengue incidence would be influenced by its immediate neighboring townships during pre-epidemic and epidemic periods, and also with more distant neighbors (based on mobility) in pre-epidemic periods. Our findings suggest that the structure of population mobility could more reasonably capture urban-to-urban interactions, which implies that the hub cities could be a "bridge" for large-scale transmission and make townships that immediately connect to hub cities more vulnerable to dengue epidemics.
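
    The radiation model used here as a spatial weight predicts average fluxes from populations alone: T_ij = O_i * m_i * n_j / ((m_i + s_ij)(m_i + n_j + s_ij)), where s_ij is the total population within the circle of radius r_ij centred on i, excluding the source and destination. A compact generic implementation is sketched below with toy coordinates and populations; taking the outflow O_i proportional to population is an extra assumption, and the study itself builds weights from township-level data.

```python
# Radiation-model flux matrix, usable as a mobility-based spatial weight.
# Toy coordinates and populations; the out-going trip count O_i is taken
# proportional to population, which is an assumption for this sketch.
import numpy as np

rng = np.random.default_rng(5)
n = 30
xy = rng.uniform(0, 100, size=(n, 2))          # toy township coordinates (km)
pop = rng.integers(5_000, 500_000, size=n).astype(float)

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)

T = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        mask = d[i] < d[i, j]                  # population inside the circle of radius d_ij
        s = pop[mask].sum() - pop[i]           # exclude the source; j lies on the circle
        m, nj = pop[i], pop[j]
        T[i, j] = pop[i] * m * nj / ((m + s) * (m + nj + s))  # O_i assumed ~ pop[i]

W = T / T.sum(axis=1, keepdims=True)           # row-standardized mobility weight matrix
print("largest weights from township 0:", np.round(np.sort(W[0])[::-1][:5], 3))
```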

  7. Challenges to self-acceleration in modified gravity from gravitational waves and large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Lombriser, Lucas, E-mail: llo@roe.ac.uk; Lima, Nelson A.

    2017-02-10

    With the advent of gravitational-wave astronomy marked by the aLIGO GW150914 and GW151226 observations, a measurement of the cosmological speed of gravity will likely soon be realised. We show that a confirmation of equality to the speed of light as indicated by indirect Galactic observations will have important consequences for a very large class of alternative explanations of the late-time accelerated expansion of our Universe. It will break the dark degeneracy of self-accelerated Horndeski scalar–tensor theories in the large-scale structure that currently limits a rigorous discrimination between acceleration from modified gravity and from a cosmological constant or dark energy. Signatures of a self-acceleration must then manifest in the linear, unscreened cosmological structure. We describe the minimal modification required for self-acceleration with standard gravitational-wave speed and show that its maximum likelihood yields a 3σ poorer fit to cosmological observations compared to a cosmological constant. Hence, equality between the speeds challenges the concept of cosmic acceleration from a genuine scalar–tensor modification of gravity.

  8. The future of primordial features with large-scale structure surveys

    International Nuclear Information System (INIS)

    Chen, Xingang; Namjoo, Mohammad Hossein; Dvorkin, Cora; Huang, Zhiqi; Verde, Licia

    2016-01-01

    Primordial features are one of the most important extensions of the Standard Model of cosmology, providing a wealth of information on the primordial Universe, ranging from discrimination between inflation and alternative scenarios, new particle detection, to fine structures in the inflationary potential. We study the prospects of future large-scale structure (LSS) surveys on the detection and constraints of these features. We classify primordial feature models into several classes, and for each class we present a simple template of power spectrum that encodes the essential physics. We study how well the most ambitious LSS surveys proposed to date, including both spectroscopic and photometric surveys, will be able to improve the constraints with respect to the current Planck data. We find that these LSS surveys will significantly improve the experimental sensitivity on features signals that are oscillatory in scales, due to the 3D information. For a broad range of models, these surveys will be able to reduce the errors of the amplitudes of the features by a factor of 5 or more, including several interesting candidates identified in the recent Planck data. Therefore, LSS surveys offer an impressive opportunity for primordial feature discovery in the next decade or two. We also compare the advantages of both types of surveys.

  9. Measuring α in the early universe: CMB temperature, large-scale structure, and Fisher matrix analysis

    International Nuclear Information System (INIS)

    Martins, C. J. A. P.; Melchiorri, A.; Trotta, R.; Bean, R.; Rocha, G.; Avelino, P. P.; Viana, P. T. P.

    2002-01-01

    We extend our recent work on the effects of a time-varying fine-structure constant α in the cosmic microwave background by providing a thorough analysis of the degeneracies between α and the other cosmological parameters, and discussing ways to break these with existing and forthcoming data. In particular, we present the state-of-the-art cosmic microwave background constraints on α through a combined analysis of the BOOMERanG, MAXIMA and DASI data sets. We also present a novel discussion of the constraints on α coming from large-scale structure observations, focusing in particular on the power spectrum from the 2dF survey. Our results are consistent with no variation in α from the epoch of recombination to the present day, and restrict any such variation to be less than about 4%. We show that the forthcoming Microwave Anisotropy Probe and Planck experiments will be able to break most of the currently existing degeneracies between α and other parameters, and measure α to better than one-percent accuracy.
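
    For context, the Fisher-matrix forecasts referred to in the title are typically built from the parameter derivatives of the angular power spectrum; a minimal temperature-only sketch (a generic textbook form, not the paper's full likelihood) is

```latex
F_{ij} \;=\; \sum_{\ell} \frac{(2\ell+1)\,f_{\rm sky}}{2}\;
\frac{\partial C_\ell}{\partial \theta_i}\,
\frac{1}{\bigl(C_\ell + N_\ell\bigr)^{2}}\,
\frac{\partial C_\ell}{\partial \theta_j},
\qquad
\sigma(\theta_i) \;\ge\; \sqrt{\bigl(F^{-1}\bigr)_{ii}},
```

    with α included among the parameters θ; degeneracies appear as large off-diagonal correlation coefficients of the inverse Fisher matrix.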

  10. The topology of large-scale structure. III. Analysis of observations

    International Nuclear Information System (INIS)

    Gott, J.R. III; Weinberg, D.H.; Miller, J.; Thuan, T.X.; Schneider, S.E.

    1989-01-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a meatball topology. 66 refs
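
    The genus curve referred to here is conventionally compared against the analytic expectation for a Gaussian random field; the standard reference form (quoted for orientation, not taken from this record) is

```latex
g(\nu) \;=\; \frac{1}{(2\pi)^{2}}\left(\frac{\langle k^{2}\rangle}{3}\right)^{3/2}
\bigl(1-\nu^{2}\bigr)\,e^{-\nu^{2}/2},
```

    where ν is the density threshold in units of the standard deviation of the smoothed field; shifts of the measured curve relative to this symmetric form quantify departures toward a "meatball" or "bubble" topology.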

  11. The topology of large-scale structure. III - Analysis of observations

    Science.gov (United States)

    Gott, J. Richard, III; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.; Weinberg, David H.; Gammie, Charles; Polk, Kevin; Vogeley, Michael; Jeffrey, Scott; Bhavsar, Suketu P.; Melott, Adrian L.; Giovanelli, Riccardo; Hayes, Martha P.; Tully, R. Brent; Hamilton, Andrew J. S.

    1989-05-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.

  12. The topology of large-scale structure. III - Analysis of observations. [in universe

    Science.gov (United States)

    Gott, J. Richard, III; Weinberg, David H.; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.

    1989-01-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.

  13. Challenges to self-acceleration in modified gravity from gravitational waves and large-scale structure

    Science.gov (United States)

    Lombriser, Lucas; Lima, Nelson A.

    2017-02-01

    With the advent of gravitational-wave astronomy marked by the aLIGO GW150914 and GW151226 observations, a measurement of the cosmological speed of gravity will likely soon be realised. We show that a confirmation of equality to the speed of light as indicated by indirect Galactic observations will have important consequences for a very large class of alternative explanations of the late-time accelerated expansion of our Universe. It will break the dark degeneracy of self-accelerated Horndeski scalar-tensor theories in the large-scale structure that currently limits a rigorous discrimination between acceleration from modified gravity and from a cosmological constant or dark energy. Signatures of a self-acceleration must then manifest in the linear, unscreened cosmological structure. We describe the minimal modification required for self-acceleration with standard gravitational-wave speed and show that its maximum likelihood yields a 3σ poorer fit to cosmological observations compared to a cosmological constant. Hence, equality between the speeds challenges the concept of cosmic acceleration from a genuine scalar-tensor modification of gravity.

  14. Large-scale structuring of a rotating plasma due to plasma macroinstabilities

    International Nuclear Information System (INIS)

    Kikuchi, Toshinori; Ikehata, Takashi; Sato, Naoyuki; Watahiki, Takeshi; Tanabe, Toshio; Mase, Hiroshi

    1995-01-01

    The formation of coherent structures during plasma macroinstabilities has been of interest from the viewpoint of nonlinear plasma physics. In the present paper, we investigate in detail the mechanism and specific features of large-scale structuring of a rotating plasma. In the case of a weak magnetic field, the plasma ejected from a plasma gun has a high beta value (β > 1), so that it expands rapidly across the magnetic field, excluding magnetic flux from its interior. The boundary between the expanding plasma and the magnetic field then becomes unstable against the Rayleigh-Taylor instability. This instability grows faster at shorter wavelengths, and the mode appears as flutes. These features of the instability are confirmed by the observation of radial plasma jets with azimuthal mode numbers m=20-40 early in the plasma expansion. In the case of a strong magnetic field, on the other hand, the plasma expands little and rotates at twice the ion sound speed. In particular, we observe spiral jets of m=2 instead of short-wavelength radial jets. This mode appears only when a glass target is installed or a dense neutral gas is introduced around the plasma to exert a frictional force on it. From these results, and with reference to the theory of plasma instabilities, the centrifugal instability caused by a combination of velocity shear and centrifugal force is concluded to be responsible for the formation of the spiral jets. (author)

  15. A large scale analysis of information-theoretic network complexity measures using chemical structures.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    Full Text Available This paper aims to investigate information-theoretic network complexity measures which have already been intensely used in mathematical and medicinal chemistry, including drug design. Numerous such measures have been developed so far, but many of them lack a meaningful interpretation, e.g., regarding which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based) complexity measure called the topological information content of a graph and some others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, a high uniqueness is an important and desirable property when designing novel topological descriptors having the potential to be applied to large chemical databases.
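
    As a rough illustration of a partition-based measure of the kind compared here (a sketch only; the paper's actual descriptors differ in detail), the entropy of a vertex partition can be computed as below, using degree classes as a simple stand-in for the automorphism-orbit partition that defines the classical topological information content.

```python
import math
from collections import Counter
import networkx as nx

def partition_entropy(graph, key=lambda g, v: g.degree(v)):
    """Shannon entropy (bits) of the vertex partition induced by `key`."""
    classes = Counter(key(graph, v) for v in graph.nodes)
    n = graph.number_of_nodes()
    return -sum((c / n) * math.log2(c / n) for c in classes.values())

# Hypothetical molecular skeleton: a six-membered ring with one substituent.
G = nx.cycle_graph(6)
G.add_edge(0, 6)
print(round(partition_entropy(G), 3))
```

    Measures built this way are "partition-based"; the partition-independent measures discussed in the paper are constructed differently.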

  16. The future of primordial features with large-scale structure surveys

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xingang; Namjoo, Mohammad Hossein [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Dvorkin, Cora [Department of Physics, Harvard University, Cambridge, MA 02138 (United States); Huang, Zhiqi [School of Physics and Astronomy, Sun Yat-Sen University, 135 Xingang Xi Road, Guangzhou, 510275 (China); Verde, Licia, E-mail: xingang.chen@cfa.harvard.edu, E-mail: dvorkin@physics.harvard.edu, E-mail: huangzhq25@sysu.edu.cn, E-mail: mohammad.namjoo@cfa.harvard.edu, E-mail: liciaverde@icc.ub.edu [ICREA and ICC-UB, University of Barcelona (IEEC-UB), Marti i Franques, 1, Barcelona 08028 (Spain)

    2016-11-01

    Primordial features are one of the most important extensions of the Standard Model of cosmology, providing a wealth of information on the primordial Universe, ranging from discrimination between inflation and alternative scenarios, to new particle detection, to fine structures in the inflationary potential. We study the prospects of future large-scale structure (LSS) surveys for the detection and constraint of these features. We classify primordial feature models into several classes, and for each class we present a simple power spectrum template that encodes the essential physics. We study how well the most ambitious LSS surveys proposed to date, including both spectroscopic and photometric surveys, will be able to improve the constraints with respect to the current Planck data. We find that these LSS surveys will significantly improve the experimental sensitivity to feature signals that are oscillatory in scale, owing to the 3D information. For a broad range of models, these surveys will be able to reduce the errors on the feature amplitudes by a factor of 5 or more, including several interesting candidates identified in the recent Planck data. LSS surveys therefore offer an impressive opportunity for primordial feature discovery in the next decade or two. We also compare the advantages of the two types of surveys.

  17. Large scale structures in a turbulent boundary layer and their imprint on wall shear stress

    Science.gov (United States)

    Pabon, Rommel; Barnard, Casey; Ukeiley, Lawrence; Sheplak, Mark

    2015-11-01

    Experiments were performed on a turbulent boundary layer developing on a flat plate model under zero pressure gradient flow. A MEMS differential capacitive shear stress sensor with a 1 mm × 1 mm floating element was used to capture the fluctuating wall shear stress simultaneously with streamwise velocity measurements from a hot-wire anemometer traversed in the wall normal direction. Near the wall, the peak in the cross correlation corresponds to an organized motion inclined 45° from the wall. In the outer region, the peak diminishes in value, but is still significant at a distance greater than half the boundary layer thickness, and corresponds to a structure inclined 14° from the wall. High coherence between the two signals was found for the low-frequency content, reinforcing the belief that large scale structures have a vital impact on wall shear stress. Thus, estimation of the wall shear stress from the low-frequency velocity signal will be performed, and is expected to be statistically significant in the outer boundary layer. Additionally, conditionally averaged mean velocity profiles will be presented to assess the effects of high and low shear stress. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138.
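
    A minimal sketch of how such a velocity/wall-shear-stress cross-correlation, and the structure inclination angle implied by its peak lag, can be estimated. The signals, sampling rate, probe offset and convection velocity below are synthetic assumptions, not the experimental values.

```python
import numpy as np

fs = 2000.0                                   # sampling rate [Hz] (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)

u = np.convolve(rng.standard_normal(t.size), np.ones(20) / 20, mode="same")  # "large-scale" velocity
lag_true = 0.005                              # synthetic delay of the wall signal [s]
tau_w = np.roll(u, int(lag_true * fs)) + 0.1 * rng.standard_normal(t.size)   # synthetic wall shear stress

def xcorr(a, b):
    """Normalised cross-correlation of a and b, with the corresponding lag axis [s]."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    c = np.correlate(a, b, mode="full") / a.size
    lags = np.arange(-a.size + 1, a.size) / fs
    return lags, c

lags, c = xcorr(tau_w, u)
tau_peak = lags[np.argmax(c)]                 # lag of the correlation peak (~lag_true here)
dy, U_c = 0.025, 5.0                          # wall-normal offset [m] and convection speed [m/s] (assumed)
theta = np.degrees(np.arctan2(dy, U_c * tau_peak))
print(f"peak lag = {tau_peak * 1e3:.1f} ms, inclination ≈ {theta:.0f} deg")
```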

  18. Challenges to self-acceleration in modified gravity from gravitational waves and large-scale structure

    Directory of Open Access Journals (Sweden)

    Lucas Lombriser

    2017-02-01

    Full Text Available With the advent of gravitational-wave astronomy marked by the aLIGO GW150914 and GW151226 observations, a measurement of the cosmological speed of gravity will likely soon be realised. We show that a confirmation of equality to the speed of light as indicated by indirect Galactic observations will have important consequences for a very large class of alternative explanations of the late-time accelerated expansion of our Universe. It will break the dark degeneracy of self-accelerated Horndeski scalar–tensor theories in the large-scale structure that currently limits a rigorous discrimination between acceleration from modified gravity and from a cosmological constant or dark energy. Signatures of a self-acceleration must then manifest in the linear, unscreened cosmological structure. We describe the minimal modification required for self-acceleration with standard gravitational-wave speed and show that its maximum likelihood yields a 3σ poorer fit to cosmological observations compared to a cosmological constant. Hence, equality between the speeds challenges the concept of cosmic acceleration from a genuine scalar–tensor modification of gravity.

  19. On the Renormalization of the Effective Field Theory of Large Scale Structures

    OpenAIRE

    Pajer, Enrico; Zaldarriaga, Matias

    2013-01-01

    Standard perturbation theory (SPT) for large-scale matter inhomogeneities is unsatisfactory for at least three reasons: there is no clear expansion parameter since the density contrast is not small on all scales; it does not fully account for deviations at large scales from a perfect pressureless fluid induced by short-scale non-linearities; for generic initial conditions, loop corrections are UV-divergent, making predictions cutoff dependent and hence unphysical. The Effective Field Theory o...

  20. FutureGen 2.0 Oxy-combustion Large Scale Test – Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Kenison, LaVesta [URS, Pittsburgh, PA (United States); Flanigan, Thomas [URS, Pittsburgh, PA (United States); Hagerty, Gregg [URS, Pittsburgh, PA (United States); Gorrie, James [Air Liquide, Kennesaw, GA (United States); Leclerc, Mathieu [Air Liquide, Kennesaw, GA (United States); Lockwood, Frederick [Air Liquide, Kennesaw, GA (United States); Falla, Lyle [Babcock & Wilcox and Burns McDonnell, Kansas City, MO (United States); Macinnis, Jim [Babcock & Wilcox and Burns McDonnell, Kansas City, MO (United States); Fedak, Mathew [Babcock & Wilcox and Burns McDonnell, Kansas City, MO (United States); Yakle, Jeff [Babcock & Wilcox and Burns McDonnell, Kansas City, MO (United States); Williford, Mark [Futuregen Industrial Alliance, Inc., Morgan County, IL (United States); Wood, Paul [Futuregen Industrial Alliance, Inc., Morgan County, IL (United States)

    2016-04-01

    The primary objectives of the FutureGen 2.0 CO2 Oxy-Combustion Large Scale Test Project were to site, permit, design, construct, and commission an oxy-combustion boiler, gas quality control system, air separation unit, and CO2 compression and purification unit, together with the necessary supporting and interconnection utilities. The project was to demonstrate at commercial scale (168 MWe gross) the capability to cleanly produce electricity through coal combustion at a retrofitted, existing coal-fired power plant, thereby resulting in near-zero emissions of all commonly regulated air pollutants, as well as 90% CO2 capture in steady-state operations. The project was to be fully integrated in terms of project management, capacity, capabilities, technical scope, cost, and schedule with the companion FutureGen 2.0 CO2 Pipeline and Storage Project, a separate but complementary project whose objective was to safely transport, permanently store and monitor the CO2 captured by the Oxy-combustion Power Plant Project. The FutureGen 2.0 Oxy-Combustion Large Scale Test Project successfully achieved all technical objectives, inclusive of the front-end engineering and design and the advanced design required to accurately estimate and contract for the construction, commissioning, and start-up of a commercial-scale "ready to build" power plant using oxy-combustion technology, including full integration with the companion CO2 Pipeline and Storage project. Ultimately the project did not proceed to construction due to insufficient time to complete the necessary EPC contract negotiations and commercial financing prior to the expiration of federal co-funding, which triggered a DOE decision to close out its participation in the project. Through the work that was completed, valuable technical, commercial, and programmatic lessons were learned. This project has significantly advanced the development of near-zero emission technology and will

  1. Large Scale Leach Test Facility: Development of equipment and methods, and comparison to MCC-1 leach tests

    International Nuclear Information System (INIS)

    Pellarin, D.J.; Bickford, D.F.

    1985-01-01

    This report describes the test equipment and methods, and documents the results of the first large-scale MCC-1 experiments in the Large Scale Leach Test Facility (LSLTF). Two experiments were performed using 1-ft-long samples sectioned from the middle of canister MS-11. The leachant used in the experiments was ultrapure deionized water - an aggressive and well-characterized leachant providing high sensitivity for liquid sample analyses. All of the original test plan objectives were successfully met. Equipment and procedures were developed for large-sample-size leach testing. The statistical reliability of the method was determined, and ''bench mark'' data were developed to relate small-scale leach testing to full-size waste forms. The facility is unique, and provides sampling reliability and flexibility not possible in smaller laboratory-scale tests. Future use of this facility should simplify and accelerate the development of leaching models and repository-specific data. The factor of less than 3 in leachability, corresponding to a 200,000:1 increase in sample volume, enhances the credibility of the small-scale test data that preceded this work, and supports the ability of the DWPF waste form to meet repository criteria.
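
    For reference, MCC-1-type results are usually reported as a normalized elemental mass loss, so that specimens of very different size can be compared on a per-surface-area basis. A generic form (consistent with, but not quoted from, this report) is

```latex
NL_i \;=\; \frac{c_i\,V}{f_i\,SA}\qquad \bigl[\mathrm{g\,m^{-2}}\bigr],
```

    where c_i is the concentration of element i in the leachate, V the leachate volume, f_i the mass fraction of element i in the waste form, and SA the geometric surface area of the specimen.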

  2. The structure and large-scale organization of extreme cold waves over the conterminous United States

    Science.gov (United States)

    Xie, Zuowei; Black, Robert X.; Deng, Yi

    2017-12-01

    Extreme cold waves (ECWs) occurring over the conterminous United States (US) are studied through a systematic identification and documentation of their local synoptic structures, associated large-scale meteorological patterns (LMPs), and forcing mechanisms external to the US. Focusing on the boreal cool season (November-March) for 1950‒2005, a hierarchical cluster analysis identifies three ECW patterns, respectively characterized by cold surface air temperature anomalies over the upper midwest (UM), northwestern (NW), and southeastern (SE) US. Locally, ECWs are synoptically organized by anomalous high pressure and northerly flow. At larger scales, the UM LMP features a zonal dipole in the mid-tropospheric height field over North America, while the NW and SE LMPs each include a zonal wave train extending from the North Pacific across North America into the North Atlantic. The Community Climate System Model version 4 (CCSM4) in general simulates the three ECW patterns quite well and successfully reproduces the observed enhancements in the frequency of their associated LMPs. La Niña and the cool phase of the Pacific Decadal Oscillation (PDO) favor the occurrence of NW ECWs, while the warm PDO phase, low Arctic sea ice extent and high Eurasian snow cover extent (SCE) are associated with elevated SE-ECW frequency. Additionally, high Eurasian SCE is linked to increases in the occurrence likelihood of UM ECWs.
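
    A minimal sketch of the kind of hierarchical cluster analysis used to separate cold-wave patterns, applied here to purely hypothetical gridded temperature-anomaly "event" vectors; the study's actual preprocessing, distance metric and linkage choices are not specified in this record.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)
# Hypothetical data: 300 cold-wave events, each a flattened 20 x 25 grid of
# surface air temperature anomalies over the conterminous US.
events = rng.standard_normal((300, 500))

Z = linkage(events, method="ward")                 # Ward linkage on Euclidean distances
labels = fcluster(Z, t=3, criterion="maxclust")    # cut the dendrogram into 3 clusters
for k in range(1, 4):
    composite = events[labels == k].mean(axis=0)   # composite anomaly pattern of the cluster
    print(f"cluster {k}: {np.sum(labels == k)} events, mean anomaly {composite.mean():+.3f}")
```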

  3. Three-point phase correlations: A new measure of non-linear large-scale structure

    CERN Document Server

    Wolstenhulme, Richard; Obreschkow, Danail

    2015-01-01

    We derive an analytical expression for a novel large-scale structure observable: the line correlation function. The line correlation function, which is constructed from the three-point correlation function of the phase of the density field, is a robust statistical measure allowing the extraction of information in the non-linear and non-Gaussian regime. We show that, in perturbation theory, the line correlation is sensitive to the coupling kernel F_2, which governs the non-linear gravitational evolution of the density field. We compare our analytical expression with results from numerical simulations and find a very good agreement for separations r>20 Mpc/h. Fitting formulae for the power spectrum and the non-linear coupling kernel at small scales allow us to extend our prediction into the strongly non-linear regime. We discuss the advantages of the line correlation relative to standard statistical measures like the bispectrum. Unlike the latter, the line correlation is independent of the linear bias. Furtherm...

  4. Projection Effects of Large-scale Structures on Weak-lensing Peak Abundances

    Science.gov (United States)

    Yuan, Shuo; Liu, Xiangkun; Pan, Chuzhong; Wang, Qiao; Fan, Zuhui

    2018-04-01

    High peaks in weak lensing (WL) maps originate dominantly from the lensing effects of single massive halos. Their abundance is therefore closely related to the halo mass function and thus a powerful cosmological probe. However, besides individual massive halos, large-scale structures (LSS) along lines of sight also contribute to the peak signals. In this paper, with ray-tracing simulations, we investigate the LSS projection effects. We show that for current surveys with a large shape noise, the stochastic LSS effects are subdominant. For future WL surveys with source galaxies having a median redshift z_med ∼ 1 or higher, however, they are significant. For the cosmological constraints derived from observed WL high-peak counts, severe biases can occur if the LSS effects are not taken into account properly. We extend the model of Fan et al. by incorporating the LSS projection effects into the theoretical considerations. By comparing with simulation results, we demonstrate the good performance of the improved model and its applicability in cosmological studies.

  5. The linearly scaling 3D fragment method for large scale electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)

    2009-07-01

    The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  6. Screening and large-scale expression of membrane proteins in mammalian cells for structural studies.

    Science.gov (United States)

    Goehring, April; Lee, Chia-Hsueh; Wang, Kevin H; Michel, Jennifer Carlisle; Claxton, Derek P; Baconguis, Isabelle; Althoff, Thorsten; Fischer, Suzanne; Garcia, K Christopher; Gouaux, Eric

    2014-11-01

    Structural, biochemical and biophysical studies of eukaryotic membrane proteins are often hampered by difficulties in overexpression of the candidate molecule. Baculovirus transduction of mammalian cells (BacMam), although a powerful method to heterologously express membrane proteins, can be cumbersome for screening and expression of multiple constructs. We therefore developed plasmid Eric Gouaux (pEG) BacMam, a vector optimized for use in screening assays, as well as for efficient production of baculovirus and robust expression of the target protein. In this protocol, we show how to use small-scale transient transfection and fluorescence-detection size-exclusion chromatography (FSEC) experiments using a GFP-His8-tagged candidate protein to screen for monodispersity and expression level. Once promising candidates are identified, we describe how to generate baculovirus, transduce HEK293S GnTI(-) (N-acetylglucosaminyltransferase I-negative) cells in suspension culture and overexpress the candidate protein. We have used these methods to prepare pure samples of chicken acid-sensing ion channel 1a (cASIC1) and Caenorhabditis elegans glutamate-gated chloride channel (GluCl) for X-ray crystallography, demonstrating how to rapidly and efficiently screen hundreds of constructs and accomplish large-scale expression in 4-6 weeks.

  7. Primordial Magnetic Field Effects on the CMB and Large-Scale Structure

    Directory of Open Access Journals (Sweden)

    Dai G. Yamazaki

    2010-01-01

    Full Text Available Magnetic fields are everywhere in nature, and they play an important role in every astronomical environment which involves the formation of plasma and currents. It is natural therefore to suppose that magnetic fields could be present in the turbulent high-temperature environment of the big bang. Such a primordial magnetic field (PMF) would be expected to manifest itself in the cosmic microwave background (CMB) temperature and polarization anisotropies, and also in the formation of large-scale structure. In this paper, we summarize the theoretical framework which we have developed to calculate the PMF power spectrum to high precision. Using this formulation, we summarize calculations of the effects of a PMF which take accurate quantitative account of the time evolution of the cutoff scale. We review the constructed numerical program, which is without approximation, and an improvement over the approach used in a number of previous works for studying the effect of the PMF on the cosmological perturbations. We demonstrate how the PMF is an important cosmological physical process on small scales. We also summarize the current constraints on the PMF amplitude B_λ and the power spectral index n_B which have been deduced from the available CMB observational data by using our computational framework.

  8. The Large Scale Structure of the Galactic Magnetic Field and High Energy Cosmic Ray Anisotropy

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez-Muniz, Jaime [Department de Fisica de PartIculas, University de Santiago de Compostela, 15782 Santiago, SPAIN (Spain); Stanev, Todor [Bartol Research Institute, Department of Physics and Astronomy, University of Delaware, Newark, Delaware 19716 (United States)

    2006-10-15

    Measurements of the magnetic field in our Galaxy are complex and usually difficult to interpret. A spiral regular field in the disk is favored by observations, however the number of field reversals is still under debate. Measurements of the parity of the field across the Galactic plane are also very difficult due to the presence of the disk field itself. In this work we demonstrate that cosmic ray protons in the energy range 10^18 to 10^19 eV, if accelerated near the center of the Galaxy, are sensitive to the large scale structure of the Galactic Magnetic Field (GMF). In particular if the field is of even parity, and the spiral field is bi-symmetric (BSS), ultra high energy protons will predominantly come from the Southern Galactic hemisphere, and predominantly from the Northern Galactic hemisphere if the field is of even parity and axi-symmetric (ASS). There is no sensitivity to the BSS or ASS configurations if the field is of odd parity.

  9. Large scale structures in the kinetic gravity braiding model that can be unbraided

    International Nuclear Information System (INIS)

    Kimura, Rampei; Yamamoto, Kazuhiro

    2011-01-01

    We study cosmological consequences of a kinetic gravity braiding model, which is proposed as an alternative to the dark energy model. The kinetic braiding model we study is characterized by a parameter n, which corresponds to the original galileon cosmological model for n = 1. We find that the background expansion of the universe of the kinetic braiding model is the same as the Dvali-Turner's model, which reduces to that of the standard cold dark matter model with a cosmological constant (ΛCDM model) for n equal to infinity. We also find that the evolution of the linear cosmological perturbation in the kinetic braiding model reduces to that of the ΛCDM model for n = ∞. Then, we focus our study on the growth history of the linear density perturbation as well as the spherical collapse in the nonlinear regime of the density perturbations, which might be important in order to distinguish between the kinetic braiding model and the ΛCDM model when n is finite. The theoretical prediction for the large scale structure is confronted with the multipole power spectrum of the luminous red galaxy sample of the Sloan Digital Sky survey. We also discuss future prospects of constraining the kinetic braiding model using a future redshift survey like the WFMOS/SuMIRe PFS survey as well as the cluster redshift distribution in the South Pole Telescope survey

  10. THREE-POINT PHASE CORRELATIONS: A NEW MEASURE OF NONLINEAR LARGE-SCALE STRUCTURE

    Energy Technology Data Exchange (ETDEWEB)

    Wolstenhulme, Richard; Bonvin, Camille [Kavli Institute for Cosmology Cambridge and Institute of Astronomy, Madingley Road, Cambridge CB3 OHA (United Kingdom); Obreschkow, Danail [International Centre for Radio Astronomy Research (ICRAR), M468, University of Western Australia, 35 Stirling Hwy, Crawley, WA 6009 (Australia)

    2015-05-10

    We derive an analytical expression for a novel large-scale structure observable: the line correlation function. The line correlation function, which is constructed from the three-point correlation function of the phase of the density field, is a robust statistical measure allowing the extraction of information in the nonlinear and non-Gaussian regime. We show that, in perturbation theory, the line correlation is sensitive to the coupling kernel F_2, which governs the nonlinear gravitational evolution of the density field. We compare our analytical expression with results from numerical simulations and find a 1σ agreement for separations r ≳ 30 h^−1 Mpc. Fitting formulae for the power spectrum and the nonlinear coupling kernel at small scales allow us to extend our prediction into the strongly nonlinear regime, where we find a 1σ agreement with the simulations for r ≳ 2 h^−1 Mpc. We discuss the advantages of the line correlation relative to standard statistical measures like the bispectrum. Unlike the latter, the line correlation is independent of the bias, in the regime where the bias is local and linear. Furthermore, the variance of the line correlation is independent of the Gaussian variance on the modulus of the density field. This suggests that the line correlation can probe more precisely the nonlinear regime of gravity, with less contamination from the power spectrum variance.
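
    Schematically, and omitting the volume and whitening prefactors defined in the paper, the line correlation is built from the pure phase field of the density contrast:

```latex
\epsilon(\mathbf{k}) \;=\; \frac{\delta(\mathbf{k})}{\lvert\delta(\mathbf{k})\rvert},
\qquad
\ell(r)\;\propto\;\bigl\langle\,\epsilon(\mathbf{x})\,\epsilon(\mathbf{x}+\mathbf{r})\,\epsilon(\mathbf{x}-\mathbf{r})\,\bigr\rangle,
```

    so the modulus of the Fourier modes, and with it the linear bias and the Gaussian part of the power spectrum, drops out and only phase information enters.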

  11. Principal shapes and squeezed limits in the effective field theory of large scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Bertolini, Daniele; Solon, Mikhail P., E-mail: dbertolini@lbl.gov, E-mail: mpsolon@lbl.gov [Berkeley Center for Theoretical Physics, University of California, South Hall Road, Berkeley, CA, 94720 (United States)

    2016-11-01

    We apply an orthogonalization procedure on the effective field theory of large scale structure (EFT of LSS) shapes, relevant for the angle-averaged bispectrum and non-Gaussian covariance of the matter power spectrum at one loop. Assuming natural-sized EFT parameters, this identifies a linear combination of EFT shapes—referred to as the principal shape—that gives the dominant contribution for the whole kinematic plane, with subdominant combinations suppressed by a few orders of magnitude. For the covariance, our orthogonal transformation is in excellent agreement with a principal component analysis applied to available data. Additionally we find that, for both observables, the coefficients of the principal shapes are well approximated by the EFT coefficients appearing in the squeezed limit, and are thus measurable from power spectrum response functions. Employing data from N-body simulations for the growth-only response, we measure the single EFT coefficient describing the angle-averaged bispectrum with O(10%) precision. These methods of shape orthogonalization and measurement of coefficients from response functions are valuable tools for developing the EFT of LSS framework, and can be applied to more general observables.

  12. Dark energy and modified gravity in the Effective Field Theory of Large-Scale Structure

    Science.gov (United States)

    Cusin, Giulia; Lewandowski, Matthew; Vernizzi, Filippo

    2018-04-01

    We develop an approach to compute observables beyond the linear regime of dark matter perturbations for general dark energy and modified gravity models. We do so by combining the Effective Field Theory of Dark Energy and Effective Field Theory of Large-Scale Structure approaches. In particular, we parametrize the linear and nonlinear effects of dark energy on dark matter clustering in terms of the Lagrangian terms introduced in a companion paper [1], focusing on Horndeski theories and assuming the quasi-static approximation. The Euler equation for dark matter is sourced, via the Newtonian potential, by new nonlinear vertices due to modified gravity and, as in the pure dark matter case, by the effects of short-scale physics in the form of the divergence of an effective stress tensor. The effective fluid introduces a counterterm in the solution to the matter continuity and Euler equations, which allows a controlled expansion of clustering statistics on mildly nonlinear scales. We use this setup to compute the one-loop dark-matter power spectrum.

  13. Performing a Large-Scale Modal Test on the B2 Stand Crane at NASA's Stennis Space Center

    Science.gov (United States)

    Stasiunas, Eric C.; Parks, Russel A.; Sontag, Brendan D.

    2018-01-01

    A modal test of NASA's Space Launch System (SLS) Core Stage is scheduled to occur at the Stennis Space Center B2 test stand. A derrick crane with a 150-ft long boom, located at the top of the stand, will be used to suspend the Core Stage in order to achieve defined boundary conditions. During this suspended modal test, it is expected that dynamic coupling will occur between the crane and the Core Stage. Therefore, a separate modal test was performed on the B2 crane itself, in order to evaluate the varying dynamic characteristics and correlate math models of the crane. Performing a modal test on such a massive structure was challenging and required creative test setup and procedures, including implementing both AC and DC accelerometers, and performing both classical hammer and operational modal analysis. This paper describes the logistics required to perform this large-scale test, as well as details of the test setup, the modal test methods used, and an overview and application of the results.

  14. Large Scale Model Test Investigation on Wave Run-Up in Irregular Waves at Slender Piles

    DEFF Research Database (Denmark)

    Ramirez, Jorge Robert Rodriguez; Frigaard, Peter; Andersen, Thomas Lykke

    2013-01-01

    An experimental large-scale study of wave run-up generated loads on entrance platforms for offshore wind turbines was performed. The experiments were carried out at the Grosser Wellenkanal (GWK), Forschungszentrum Küste (FZK), in Hannover, Germany. The present paper deals with the run-up heights determin...

  15. Forced vibration test on large scale model on soft rock site

    International Nuclear Information System (INIS)

    Kobayashi, Toshio; Fukuoka, Atsunobu; Izumi, Masanori; Miyamoto, Yuji; Ohtsuka, Yasuhiro; Nasuda, Toshiaki.

    1991-01-01

    Forced vibration tests were conducted in order to investigate the embedment effect on dynamic soil-structure interaction. Two model structures were constructed on actual soil about 60 m apart, after excavating the ground to 5 m depth. For both models, sinusoidal forced vibration tests were performed for different embedment conditions, namely no embedment, half embedment and full embedment. The test results show an increase in both natural frequency and damping factor due to the embedment effects, and the soil impedances calculated from the test results are discussed. (author)
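
    For context, the natural frequency and damping factor extracted from such sinusoidal sweep tests are commonly read off the measured resonance curve; one standard estimate (a generic relation, not specific to this test) is the half-power bandwidth method,

```latex
f_n \;\approx\; f_{\rm peak},
\qquad
\zeta \;\approx\; \frac{f_2 - f_1}{2\,f_n},
```

    where f_1 and f_2 are the frequencies at which the response amplitude falls to 1/√2 of its peak value; stiffening by embedment raises f_n, while additional energy radiation into the surrounding soil raises ζ, consistent with the trends reported above.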

  16. Distribution of ground rigidity and ground model for seismic response analysis in Hualian project of large scale seismic test

    International Nuclear Information System (INIS)

    Kokusho, T.; Nishi, K.; Okamoto, T.; Tanaka, Y.; Ueshima, T.; Kudo, K.; Kataoka, T.; Ikemi, M.; Kawai, T.; Sawada, Y.; Suzuki, K.; Yajima, K.; Higashi, S.

    1997-01-01

    An international joint research program called HLSST is under way. HLSST is a large-scale seismic test (LSST) to investigate soil-structure interaction (SSI) during large earthquakes in the field at Hualien, a highly seismic region in Taiwan. A 1/4-scale model building was constructed on the gravelly soil at this site, and backfill material of crushed stone was placed around the model plant after the excavation for construction. The model building and the foundation ground were extensively instrumented to monitor structure and ground response. To accurately evaluate SSI during earthquakes, geotechnical investigations and forced vibration tests were performed during the construction process, namely before/after the base excavation, after the structure construction and after backfilling. The distribution of the mechanical properties of the gravelly soil and the backfill was measured after the completion of construction by penetration tests, PS-logging, etc. This paper describes the distribution of, and changes in, the shear wave velocity (Vs) measured by the field tests. Discussion is made on the effect of overburden pressure during the construction process on Vs in the neighbouring soil and, further, on the numerical soil model for SSI analysis. (orig.)

  17. Electronic structure and aromaticity of large-scale hexagonal graphene nanoflakes

    International Nuclear Information System (INIS)

    Hu, Wei; Yang, Chao; Lin, Lin; Yang, Jinlong

    2014-01-01

    With the help of the recently developed SIESTA-pole (Spanish Initiative for Electronic Simulations with Thousands of Atoms) - PEXSI (pole expansion and selected inversion) method [L. Lin, A. García, G. Huhs, and C. Yang, J. Phys.: Condens. Matter 26, 305503 (2014)], we perform Kohn-Sham density functional theory calculations to study the stability and electronic structure of hydrogen passivated hexagonal graphene nanoflakes (GNFs) with up to 11 700 atoms. We find the electronic properties of GNFs, including their cohesive energy, edge formation energy, highest occupied molecular orbital-lowest unoccupied molecular orbital energy gap, edge states, and aromaticity, depend sensitively on the type of edges (armchair graphene nanoflakes (ACGNFs) and zigzag graphene nanoflakes (ZZGNFs)), size and the number of electrons. We observe that, due to the edge-induced strain effect in ACGNFs, large-scale ACGNFs’ edge formation energy decreases as their size increases. This trend does not hold for ZZGNFs due to the presence of many edge states in ZZGNFs. We find that the energy gaps E_g of GNFs all decay with respect to 1/L, where L is the size of the GNF, in a linear fashion. But as their size increases, ZZGNFs exhibit more localized edge states. We believe the presence of these states makes their gap decrease more rapidly. In particular, when L is larger than 6.40 nm, we find that ZZGNFs exhibit metallic characteristics. Furthermore, we find that the aromatic structures of GNFs appear to depend only on whether the system has 4N or 4N + 2 electrons, where N is an integer.

  18. Electronic structure and aromaticity of large-scale hexagonal graphene nanoflakes

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei, E-mail: whu@lbl.gov, E-mail: linlin@lbl.gov, E-mail: cyang@lbl.gov, E-mail: jlyang@ustc.edu.cn; Yang, Chao, E-mail: whu@lbl.gov, E-mail: linlin@lbl.gov, E-mail: cyang@lbl.gov, E-mail: jlyang@ustc.edu.cn [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Lin, Lin, E-mail: whu@lbl.gov, E-mail: linlin@lbl.gov, E-mail: cyang@lbl.gov, E-mail: jlyang@ustc.edu.cn [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Mathematics, University of California, Berkeley, California 94720 (United States); Yang, Jinlong, E-mail: whu@lbl.gov, E-mail: linlin@lbl.gov, E-mail: cyang@lbl.gov, E-mail: jlyang@ustc.edu.cn [Hefei National Laboratory for Physical Sciences at Microscale and Department of Chemical Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2014-12-07

    With the help of the recently developed SIESTA-pole (Spanish Initiative for Electronic Simulations with Thousands of Atoms) - PEXSI (pole expansion and selected inversion) method [L. Lin, A. García, G. Huhs, and C. Yang, J. Phys.: Condens. Matter 26, 305503 (2014)], we perform Kohn-Sham density functional theory calculations to study the stability and electronic structure of hydrogen passivated hexagonal graphene nanoflakes (GNFs) with up to 11 700 atoms. We find the electronic properties of GNFs, including their cohesive energy, edge formation energy, highest occupied molecular orbital-lowest unoccupied molecular orbital energy gap, edge states, and aromaticity, depend sensitively on the type of edges (armchair graphene nanoflakes (ACGNFs) and zigzag graphene nanoflakes (ZZGNFs)), size and the number of electrons. We observe that, due to the edge-induced strain effect in ACGNFs, large-scale ACGNFs’ edge formation energy decreases as their size increases. This trend does not hold for ZZGNFs due to the presence of many edge states in ZZGNFs. We find that the energy gaps E_g of GNFs all decay with respect to 1/L, where L is the size of the GNF, in a linear fashion. But as their size increases, ZZGNFs exhibit more localized edge states. We believe the presence of these states makes their gap decrease more rapidly. In particular, when L is larger than 6.40 nm, we find that ZZGNFs exhibit metallic characteristics. Furthermore, we find that the aromatic structures of GNFs appear to depend only on whether the system has 4N or 4N + 2 electrons, where N is an integer.

  19. Electronic structure and aromaticity of large-scale hexagonal graphene nanoflakes.

    Science.gov (United States)

    Hu, Wei; Lin, Lin; Yang, Chao; Yang, Jinlong

    2014-12-07

    With the help of the recently developed SIESTA-pole (Spanish Initiative for Electronic Simulations with Thousands of Atoms) - PEXSI (pole expansion and selected inversion) method [L. Lin, A. García, G. Huhs, and C. Yang, J. Phys.: Condens. Matter 26, 305503 (2014)], we perform Kohn-Sham density functional theory calculations to study the stability and electronic structure of hydrogen passivated hexagonal graphene nanoflakes (GNFs) with up to 11,700 atoms. We find the electronic properties of GNFs, including their cohesive energy, edge formation energy, highest occupied molecular orbital-lowest unoccupied molecular orbital energy gap, edge states, and aromaticity, depend sensitively on the type of edges (armchair graphene nanoflakes (ACGNFs) and zigzag graphene nanoflakes (ZZGNFs)), size and the number of electrons. We observe that, due to the edge-induced strain effect in ACGNFs, large-scale ACGNFs' edge formation energy decreases as their size increases. This trend does not hold for ZZGNFs due to the presence of many edge states in ZZGNFs. We find that the energy gaps E(g) of GNFs all decay with respect to 1/L, where L is the size of the GNF, in a linear fashion. But as their size increases, ZZGNFs exhibit more localized edge states. We believe the presence of these states makes their gap decrease more rapidly. In particular, when L is larger than 6.40 nm, we find that ZZGNFs exhibit metallic characteristics. Furthermore, we find that the aromatic structures of GNFs appear to depend only on whether the system has 4N or 4N + 2 electrons, where N is an integer.
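
    The reported linear decay of the gap with 1/L can be checked with a one-line fit; the data points below are purely hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical (L in nm, gap in eV) pairs illustrating an E_g ~ a/L + b trend.
L = np.array([2.0, 3.2, 4.5, 6.4, 9.0, 12.0])
E_g = np.array([1.10, 0.72, 0.53, 0.38, 0.28, 0.22])

a, b = np.polyfit(1.0 / L, E_g, deg=1)     # slope a [eV nm], intercept b [eV]
print(f"E_g ≈ {a:.2f}/L + {b:.2f}  (eV, with L in nm)")
```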

  20. Recent Developments in Language Assessment and the Case of Four Large-Scale Tests of ESOL Ability

    Science.gov (United States)

    Stoynoff, Stephen

    2009-01-01

    This review article surveys recent developments and validation activities related to four large-scale tests of L2 English ability: the iBT TOEFL, the IELTS, the FCE, and the TOEIC. In addition to describing recent changes to these tests, the paper reports on validation activities that were conducted on the measures. The results of this research…

  1. Large-Scale Academic Achievement Testing of Deaf and Hard-of-Hearing Students: Past, Present, and Future

    Science.gov (United States)

    Qi, Sen; Mitchell, Ross E.

    2012-01-01

    The first large-scale, nationwide academic achievement testing program using Stanford Achievement Test (Stanford) for deaf and hard-of-hearing children in the United States started in 1969. Over the past three decades, the Stanford has served as a benchmark in the field of deaf education for assessing student academic achievement. However, the…

  2. Large scale seismic test research at Hualien site in Taiwan. Results of site investigation and characterization of the foundation ground

    International Nuclear Information System (INIS)

    Okamoto, Toshiro; Kokusho, Takeharu; Nishi, Koichi

    1998-01-01

    An international joint research program called ''HLSST'' is under way. A Large-Scale Seismic Test (LSST) is being conducted to investigate Soil-Structure Interaction (SSI) during large earthquakes in the field at Hualien, a highly seismic region in Taiwan. A 1/4-scale model building was constructed on the excavated gravelly ground, and backfill material of crushed stones was placed around the model plant. The model building and the foundation ground were extensively instrumented to monitor structure and ground response. To accurately evaluate SSI during earthquakes, geotechnical investigations and forced vibration tests were performed during the construction process, namely before/after the base excavation, after the structure construction and after the backfilling. The main results are as follows. (1) The distribution of the mechanical properties of the gravelly soil was measured by various techniques including penetration tests and PS-logging; it was found that the shear wave velocities (Vs) change markedly, depending on the changing overburden pressures during the construction process. (2) Measurements of Vs in the surrounding soils show that Vs is smaller there than at almost the same depth in locations farther away. Discussion is made further on the numerical soil model for SSI analysis. (author)

  3. Large-Scale Structure and Dynamics of the Sub-Auroral Polarization Stream (SAPS)

    Science.gov (United States)

    Baker, J. B. H.; Nishitani, N.; Kunduri, B.; Ruohoniemi, J. M.; Sazykin, S. Y.

    2017-12-01

    The Sub-Auroral Polarization Stream (SAPS) is a narrow channel of high-speed westward ionospheric convection which appears equatorward of the duskside auroral oval during geomagnetically active periods. SAPS is generally thought to occur when the partial ring current intensifies and enhanced region-2 field-aligned currents (FACs) are forced to close across the low conductance region of the mid-latitude ionospheric trough. However, recent studies have suggested SAPS can also occur during non-storm periods, perhaps associated with substorm activity. In this study, we used measurements from mid-latitude SuperDARN radars to examine the large-scale structure and dynamics of SAPS during several geomagnetically active days. Linear correlation analysis applied across all events suggests intensifications of the partial ring current (ASYM-H index) and auroral activity (AL index) are both important driving influences for controlling the SAPS speed. Specifically, SAPS flows increase, on average, by 20-40 m/s per 10 nT of ASYM-H and 10-30 m/s per 100 nT of AL. These dependencies tend to be stronger during the storm recovery phase. There is also a strong local time dependence such that the strength of SAPS flows decrease by 70-80 m/s for each hour of local time moving from dusk to midnight. By contrast, the evidence for direct solar wind control of SAPS speed is much less consistent, with some storms showing strong correlations with the interplanetary electric field components and/or solar wind dynamic pressure, while others do not. These results are discussed in the context of recent simulation results from the Rice Convection Model (RCM).
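
    As a quick worked example of the quoted sensitivities, using mid-range values of the intervals above (the actual regression coefficients vary with storm phase and magnetic local time):

```python
def saps_speed_increase(delta_asym_h_nT, delta_al_nT,
                        k_asym=30.0 / 10.0,    # ~20-40 m/s per 10 nT of ASYM-H (midpoint)
                        k_al=20.0 / 100.0):    # ~10-30 m/s per 100 nT of AL (midpoint)
    """Rough SAPS flow-speed increase [m/s] for given changes in ASYM-H and |AL|."""
    return k_asym * delta_asym_h_nT + k_al * abs(delta_al_nT)

# e.g. ASYM-H rising by 50 nT while AL deepens by 500 nT -> roughly 250 m/s
print(saps_speed_increase(50.0, -500.0))
```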

  4. Large-scale structural alteration of brain in epileptic children with SCN1A mutation.

    Science.gov (United States)

    Lee, Yun-Jeong; Yum, Mi-Sun; Kim, Min-Jee; Shim, Woo-Hyun; Yoon, Hee Mang; Yoo, Il Han; Lee, Jiwon; Lim, Byung Chan; Kim, Ki Joong; Ko, Tae-Sung

    2017-01-01

    Mutations in the SCN1A gene, encoding the alpha 1 subunit of the voltage-gated sodium channel, are associated with several epilepsy syndromes including genetic epilepsy with febrile seizures plus (GEFS+) and severe myoclonic epilepsy of infancy (SMEI). However, in most patients with an SCN1A mutation, brain imaging has reported normal or non-specific findings, including cerebral or cerebellar atrophy. The aim of this study was to investigate differences in brain morphometry in epileptic children with SCN1A mutation compared to healthy control subjects. We obtained cortical morphology (thickness and surface area) and brain volume (global, subcortical, and regional) measurements using FreeSurfer (version 5.3.0, https://surfer.nmr.mgh.harvard.edu) and compared measurements of children with epilepsy and SCN1A gene mutation (n = 21) with those of age- and gender-matched healthy controls (n = 42). Compared to the healthy control group, children with epilepsy and SCN1A gene mutation exhibited smaller total brain, total gray matter and white matter, cerebellar white matter, and subcortical volumes, as well as smaller mean surface area and mean cortical thickness. A regional analysis revealed significantly reduced gray matter volume in the patient group in the bilateral inferior parietal, left lateral orbitofrontal, left precentral, right postcentral, right isthmus cingulate, and right middle temporal areas, with smaller surface area and white matter volume in some of these areas. However, regional cortical thickness was not significantly different between the two groups. This study showed large-scale developmental brain changes in patients with epilepsy and SCN1A gene mutation, which may be associated with the patients' core symptoms. Further longitudinal MRI studies with larger cohorts are required to confirm the effect of SCN1A gene mutation on structural brain development.

  5. On the Contribution of Large-Scale Structure to Strong Gravitational Lensing

    Science.gov (United States)

    Faure, C.; Kneib, J.-P.; Hilbert, S.; Massey, R.; Covone, G.; Finoguenov, A.; Leauthaud, A.; Taylor, J. E.; Pires, S.; Scoville, N.; Koekemoer, Anton M.

    2009-04-01

    We study the correlation between the locations of galaxy-galaxy strong-lensing candidates and tracers of large-scale structure from both weak lensing (WL) or X-ray emission. The Cosmological Evolution Survey (COSMOS) is a unique data set, combining deep, high resolution and contiguous imaging in which strong lenses have been discovered, plus unparalleled multiwavelength coverage. To help interpret the COSMOS data, we have also produced mock COSMOS strong- and WL observations, based on ray-tracing through the Millennium Simulation. In agreement with the simulations, we find that strongly lensed images with the largest angular separations are found in the densest regions of the COSMOS field. This is explained by a prevalence among the lens population in dense environments of elliptical galaxies with high total-to-stellar mass ratios, which can deflect light through larger angles. However, we also find that the overall fraction of elliptical galaxies with strong gravitational lensing is independent of the local mass density; this observation is not true of the simulations, which predict an increasing fraction of strong lenses in dense environments. The discrepancy may be a real effect, but could also be explained by various limitations of our analysis. For example, our visual search of strong lens systems could be incomplete and suffer from selection bias; the luminosity function of elliptical galaxies may differ between our real and simulated data; or the simplifying assumptions and approximations used in our lensing simulations may be inadequate. Work is therefore ongoing. Automated searches for strong lens systems will be particularly important in better constraining the selection function.

  6. Lightweight electric-powered vehicles. Which financial incentives after the large-scale field tests at Mendrisio?

    International Nuclear Information System (INIS)

    Keller, M.; Frick, R.; Hammer, S.

    1999-08-01

    How should lightweight electric-powered vehicles be promoted after the large-scale fleet test being conducted at Mendrisio (southern Switzerland) is completed in 2001, and are there reasons to question the current approach? Demand for electric vehicles, particularly in the automobile category, has remained persistently low. As it turned out, any appreciable improvement of this situation is almost impossible, even with substantial financial incentives. However, the unsatisfactory sales figures have little to do with the nature of the fleet test itself or with the specific conditions at Mendrisio; the problem is rather structural. For (battery-operated) electric cars the main problem at present is the lack of an expanding market that could become self-supporting with only a few additional incentives. Various strategies have been evaluated. Two alternatives were considered in particular: a strategy that explicitly promotes electric vehicles ('EL-strategy'), and a strategy that promotes efficient road vehicles in general, which would have to meet specific energy- and environmental-efficiency criteria ('EF-strategy'). The EL-strategies make the following dilemma clear: if the aim is to raise the share of these vehicles to 5% of all cars on the road (or even 8%) in the medium term, then substantial interventions in the relevant vehicle markets would be required, either through penalties for conventional cars, a large-scale funding scheme, or interventions on the supply side. The study suggests a differentiated strategy with two components: (i) 'institutionalised' promotion with the aim of a substantial increase in the share of 'efficient' vehicles (independently of propulsion technology), and (ii) the continuation of pilot and demonstration projects to promote different types of innovative technologies. (author)

  7. Test and Analysis of a Buckling-Critical Large-Scale Sandwich Composite Cylinder

    Science.gov (United States)

    Schultz, Marc R.; Sleight, David W.; Gardner, Nathaniel W.; Rudd, Michelle T.; Hilburger, Mark W.; Palm, Tod E.; Oldfield, Nathan J.

    2018-01-01

    Structural stability is an important design consideration for launch-vehicle shell structures, and it is well known that the buckling response of such shell structures can be very sensitive to small geometric imperfections. As part of an effort to develop new buckling design guidelines for sandwich composite cylindrical shells, an 8-ft-diameter honeycomb-core sandwich composite cylinder was tested under pure axial compression to failure. The results from this test are compared with finite-element-analysis predictions, and the overall agreement was very good. In particular, the predicted buckling load was within 1% of the test value and the character of the response matched well. However, it was found that the agreement could be improved by including composite material nonlinearity in the analysis, and that the predicted buckling-initiation site was sensitive to the addition of small bending loads to the primary axial load in the analyses.

  8. Understanding Large-scale Structure in the SSA22 Protocluster Region Using Cosmological Simulations

    Science.gov (United States)

    Topping, Michael W.; Shapley, Alice E.; Steidel, Charles C.; Naoz, Smadar; Primack, Joel R.

    2018-01-01

    We investigate the nature and evolution of large-scale structure within the SSA22 protocluster region at z = 3.09 using cosmological simulations. A redshift histogram constructed from current spectroscopic observations of the SSA22 protocluster reveals two separate peaks at z = 3.065 (blue) and z = 3.095 (red). Based on these data, we report updated overdensity and mass calculations for the SSA22 protocluster. We find δ_b,gal = 4.8 ± 1.8 and δ_r,gal = 9.5 ± 2.0 for the blue and red peaks, respectively, and δ_t,gal = 7.6 ± 1.4 for the entire region. These overdensities correspond to masses of M_b = (0.76 ± 0.17)×10^15 h^-1 M_⊙, M_r = (2.15 ± 0.32)×10^15 h^-1 M_⊙, and M_t = (3.19 ± 0.40)×10^15 h^-1 M_⊙ for the blue, red, and total peaks, respectively. We use the Small MultiDark Planck (SMDPL) simulation to identify comparably massive z ∼ 3 protoclusters, and uncover the underlying structure and ultimate fate of the SSA22 protocluster. For this analysis, we construct mock redshift histograms for each simulated z ∼ 3 protocluster, quantitatively comparing them with the observed SSA22 data. We find that the observed double-peaked structure in the SSA22 redshift histogram corresponds not to a single coalescing cluster, but rather to the proximity of a ∼10^15 h^-1 M_⊙ protocluster and at least one >10^14 h^-1 M_⊙ cluster progenitor. Such associations in the SMDPL simulation are easily understood within the framework of hierarchical clustering of dark matter halos. We finally find that the opportunity to observe such a phenomenon is incredibly rare, with an occurrence rate of 7.4 h^3 Gpc^-3. Based on data obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration, and was made possible by the generous financial support of the W.M. Keck Foundation.
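    As a rough illustration of the count-based overdensity and mass bookkeeping summarized above, the sketch below computes a galaxy overdensity from counts in a redshift window and converts it to a mass. The counts, survey volume, galaxy bias and cosmological parameters are placeholder assumptions, not values taken from the paper, and the redshift-space correction used in such analyses is omitted for brevity.

```python
# Minimal sketch (not the authors' pipeline): galaxy overdensity from counts in a
# redshift slice, and an illustrative conversion to mass.  All numbers below are
# placeholders.
import numpy as np

def galaxy_overdensity(n_observed, n_expected):
    """delta_gal = N_obs / N_exp - 1 within a chosen redshift window."""
    return n_observed / n_expected - 1.0

def overdensity_to_mass(delta_gal, volume_mpc3_h3, bias=2.0, omega_m=0.31):
    """Illustrative M ~ rho_mean * V * (1 + delta_gal / bias).

    A full treatment also corrects for redshift-space distortions; that factor
    is omitted here.
    """
    rho_crit = 2.775e11            # comoving critical density, h^2 Msun / Mpc^3
    rho_mean = omega_m * rho_crit  # mean matter density, h^2 Msun / Mpc^3
    delta_matter = delta_gal / bias
    return rho_mean * volume_mpc3_h3 * (1.0 + delta_matter)  # h^-1 Msun if V in h^-3 Mpc^3

delta = galaxy_overdensity(n_observed=40, n_expected=8)    # hypothetical counts
mass = overdensity_to_mass(delta, volume_mpc3_h3=1.0e4)    # hypothetical volume
print(f"delta_gal = {delta:.1f}, M ~ {mass:.2e} h^-1 Msun")
```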

  9. Computed versus measured response of HDR reactor building in large scale shaking tests

    International Nuclear Information System (INIS)

    Werkle, H.; Waas, G.

    1987-01-01

    The earthquake-resistant design of NPP structures and their installations is commonly based on linear analysis methods. Nonlinear effects, which may occur during strong earthquakes, are approximately accounted for in the analysis by adjusting the structural damping values. Experimental investigations of nonlinear effects were performed with an extremely heavy shaker at the decommissioned HDR reactor building in West Germany. The tests were directed by KfK (Nuclear Research Center Karlsruhe, West Germany) and supported by several companies and institutes from West Germany, Switzerland and the USA. The objective was to investigate the dynamic response behaviour of the structure, piping and components under strong earthquake-like shaking, including nonlinear effects. This paper presents some results of safety analyses and measurements, which were performed prior to and during the test series. It was intended to shake the building up to a level where only a marginal safety against global structural failure was left

  10. IoT European Large-Scale Pilots – Integration, Experimentation and Testing

    OpenAIRE

    Guillén, Sergio Gustavo; Sala, Pilar; Fico, Giuseppe; Arredondo, Maria Teresa; Cano, Alicia; Posada, Jorge; Gutierrez, Germán; Palau, Carlos; Votis, Konstantinos; Verdouw, Cor N.; Wolfert, Sjaak; Beers, George; Sundmaeker, Harald; Chatzikostas, Grigoris; Ziegler, Sébastien

    2017-01-01

    The IoT European Large-Scale Pilots Programme includes the innovation consortia that are collaborating to foster the deployment of IoT solutions in Europe through the integration of advanced IoT technologies across the value chain, demonstration of multiple IoT applications at scale and in a usage context, and as close as possible to operational conditions. The programme projects are targeted, goal-driven initiatives that propose IoT approaches to specific real-life industrial/societal challe...

  11. Simulation test of PIUS-type reactor with large scale experimental apparatus

    International Nuclear Information System (INIS)

    Tamaki, M.; Tsuji, Y.; Ito, T.; Tasaka, K.; Kukita, Yutaka

    1995-01-01

    A large-scale experimental apparatus for simulating the PIUS-type reactor has been constructed, preserving the volumetric scaling ratio relative to the realistic reactor model. Fundamental experiments such as steady-state operation and a pump-trip simulation were performed. Experimental results were compared with those obtained with the small-scale apparatus at JAERI. We have already reported the effectiveness of feedback control of the primary-loop pump speed (PI control) for stable operation. In this paper the feedback system is modified and PID control is introduced. The new system worked well for the operation of the PIUS-type reactor even under rapid transient conditions. (author)
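    As background to the PI/PID feedback loop described above, here is a generic discrete PID controller sketch; the gains, setpoint and the first-order stand-in for the primary-loop response are invented for illustration and are not the parameters used in the experiment.

```python
# Generic discrete PID loop of the kind described above (PI control of the
# primary-loop pump speed, extended with a derivative term).  All values are
# illustrative placeholders.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
flow = 0.0                        # hypothetical primary-loop flow (arbitrary units)
for _ in range(200):              # crude first-order response to the pump-speed command
    command = pid.update(setpoint=1.0, measurement=flow)
    flow += 0.1 * (command - flow) * pid.dt
print(f"final flow = {flow:.3f}")
```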

  12. Testing of a Stitched Composite Large-Scale Multi-Bay Pressure Box

    Science.gov (United States)

    Jegley, Dawn; Rouse, Marshall; Przekop, Adam; Lovejoy, Andrew

    2016-01-01

    NASA has created the Environmentally Responsible Aviation (ERA) Project to develop technologies to reduce aviation's impact on the environment. A critical aspect of this pursuit is the development of a lighter, more robust airframe to enable the introduction of unconventional aircraft configurations. NASA and The Boeing Company have worked together to develop a structural concept that is lightweight and an advancement beyond state-of-the-art composite structures. The Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) is an integrally stiffened panel design in which the elements are stitched together. The PRSEUS concept is designed to maintain residual load-carrying capability under a variety of damage scenarios. A series of building-block tests was evaluated to explore the fundamental assumptions related to the capability and advantages of PRSEUS panels. The final step in the building-block series is an 80%-scale pressure box representing a portion of the center section of a Hybrid Wing Body (HWB) transport aircraft. The testing of this article under maneuver-load and internal-pressure-load conditions is the subject of this paper. The experimental evaluation of this article, along with the other building-block tests and the accompanying analyses, has demonstrated the viability of a PRSEUS center body for the HWB vehicle. Additionally, much of the development effort is also applicable to traditional tube-and-wing aircraft, advanced aircraft configurations, and other structures where weight and through-the-thickness strength are design considerations.

  13. Reconstruction of a large-scale reconnection exhaust structure in the solar wind

    Directory of Open Access Journals (Sweden)

    W.-L. Teh

    2009-02-01

    We recover two-dimensional (2-D) magnetic field and flow field configurations from three spacecraft encounters with a single large-scale reconnection exhaust structure in the solar wind, using a new reconstruction method (Sonnerup and Teh, 2008) based on the ideal single-fluid MHD equations in a steady-state, 2-D geometry. The reconstruction is performed in the rest frame of the X-line, where the flow into, and the plasma jetting within, the exhaust region are clearly visible. The event was first identified by Phan et al. (2006) in the ACE, Cluster, and Wind data sets; they argued that quasi-steady reconnection persisted for over 2 h at a long (390 R_E) X-line. The reconnection exhaust is sandwiched between two discontinuities, both of which contain elements of intermediate- and slow-mode behavior; these elements are co-located rather than being spatially separated. These composite discontinuities do not satisfy the coplanarity condition or the standard MHD jump conditions. For all three spacecraft, the Walén regression line slope was positive (negative) for the leading (trailing) discontinuity. Our MHD reconstruction shows that: (1) the X-line orientation was close to the bisector of the overall magnetic shear angle and exhibited a slow rotating motion toward the Sun-Earth line; (2) the X-line moved earthward, dawnward, and southward; (3) the reconnection electric field was small (~0.02 mV/m on average) and gradually decreased from the first crossing (ACE) to the last (Wind). The magnetic field and flow field configurations recovered from ACE and Cluster are similar while those recovered from Wind also include a magnetic island and an associated vortex. Reconnection persisted for at least 2.4 h involving inflow into the exhaust region from its two sides. Time-dependence in the reconnection electric fields seen by ACE and Wind indicates local temporal variations in the field configuration. In addition to the reconstruction results, we

  14. Reconstruction of a large-scale reconnection exhaust structure in the solar wind

    International Nuclear Information System (INIS)

    Teh, W.L.; Sonnerup, B.U.Oe.; Hu, Q.; Farrugia, C.J.

    2009-01-01

    We recover two-dimensional (2-D) magnetic field and flow field configurations from three spacecraft encounters with a single large-scale reconnection exhaust structure in the solar wind, using a new reconstruction method (Sonnerup and Teh, 2008) based on the ideal single-fluid MHD equations in a steady-state, 2-D geometry. The reconstruction is performed in the rest frame of the X-line, where the flow into, and the plasma jetting within, the exhaust region are clearly visible. The event was first identified by Phan et al. (2006) in the ACE, Cluster, and Wind data sets; they argued that quasi-steady reconnection persisted for over 2 h at a long (390 R_E) X-line. The reconnection exhaust is sandwiched between two discontinuities, both of which contain elements of intermediate- and slow-mode behavior; these elements are co-located rather than being spatially separated. These composite discontinuities do not satisfy the coplanarity condition or the standard MHD jump conditions. For all three spacecraft, the Walén regression line slope was positive (negative) for the leading (trailing) discontinuity. Our MHD reconstruction shows that: (1) the X-line orientation was close to the bisector of the overall magnetic shear angle and exhibited a slow rotating motion toward the Sun-Earth line; (2) the X-line moved earthward, dawnward, and southward; (3) the reconnection electric field was small (~0.02 mV/m on average) and gradually decreased from the first crossing (ACE) to the last (Wind). The magnetic field and flow field configurations recovered from ACE and Cluster are similar while those recovered from Wind also include a magnetic island and an associated vortex. Reconnection persisted for at least 2.4 h involving inflow into the exhaust region from its two sides. Time-dependence in the reconnection electric fields seen by ACE and Wind indicates local temporal variations in the field configuration. In addition to the reconstruction results, we provide a description and
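    For readers less familiar with the Walén test mentioned above, the regression compares the plasma velocity in the deHoffmann-Teller (HT) frame with the local Alfvén velocity; the standard textbook form is given below for reference (general background, not this paper's specific notation).

```latex
% Walén relation used in such regression tests (standard form):
\[
  \mathbf{v} - \mathbf{v}_{\mathrm{HT}} \simeq \pm\,\mathbf{V}_{A},
  \qquad
  \mathbf{V}_{A} = \frac{\mathbf{B}}{\sqrt{\mu_0 \rho}} ,
\]
% A regression of (v - v_HT) against V_A with slope near +1 (or -1) indicates an
% Alfvénic, rotational-discontinuity-like structure with flow parallel (or
% antiparallel) to the field, which is the sense of the positive/negative slopes
% quoted above for the leading/trailing discontinuities.
```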

  16. DEMNUni: the clustering of large-scale structures in the presence of massive neutrinos

    International Nuclear Information System (INIS)

    Castorina, Emanuele; Carbone, Carmelita; Bel, Julien; Sefusatti, Emiliano; Dolag, Klaus

    2015-01-01

    We analyse the clustering features of Large Scale Structures (LSS) in the presence of massive neutrinos, employing a set of large-volume, high-resolution cosmological N-body simulations, where neutrinos are treated as separate collisionless particles. The volume of 8 h^-3 Gpc^3, combined with a resolution of about 8×10^10 h^-1 M_⊙ for the cold dark matter (CDM) component, represents a significant improvement over previous N-body simulations in massive neutrino cosmologies. In this work we focus first on the analysis of nonlinear effects in the CDM and neutrino perturbations contributing to the total matter power spectrum. We show that most of the nonlinear evolution is generated exclusively by the CDM component. We therefore compare mildly nonlinear predictions from Eulerian Perturbation Theory (PT), and fully nonlinear prescriptions (HALOFIT), with the measurements obtained from the simulations. We find that accounting only for the nonlinear evolution of the CDM power spectrum makes it possible to recover the total matter power spectrum with the same accuracy as in the massless case. Indeed, we show that the most recent version of the HALOFIT formula calibrated on ΛCDM simulations can be applied directly to the linear CDM power spectrum without requiring additional fitting parameters in the massive case. As a second step, we study the abundance and clustering properties of CDM halos, confirming that, in massive neutrino cosmologies, the proper definition of the halo bias should be made with respect to the cold rather than the total matter distribution, as recently shown in the literature. Here we extend these results to redshift space, finding that, when accounting for massive neutrinos, an improper definition of the linear bias can lead to a systematic error of about 1-2% in the determination of the linear growth rate from anisotropic clustering. This result is quite important if we consider that future spectroscopic galaxy surveys, such as Euclid, are
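    The bookkeeping behind the result described above (nonlinear evolution applied to the cold component only) can be sketched as follows. The toy spectra and the placeholder "halofit" boost are illustrative assumptions, and the cross term is approximated by the geometric mean of the linear spectra.

```python
# Minimal sketch: the total matter power spectrum as a weighted sum of cold
# (CDM+baryon) and neutrino components, with the nonlinear correction (e.g.
# HALOFIT) applied to the cold component only.  `halofit` is a stand-in for
# whatever nonlinear prescription is used.
import numpy as np

def total_matter_power(k, p_cb_lin, p_nu_lin, omega_cb, omega_nu, halofit):
    f_cb = omega_cb / (omega_cb + omega_nu)   # cold (CDM+baryon) mass fraction
    f_nu = omega_nu / (omega_cb + omega_nu)   # massive-neutrino mass fraction
    p_cb_nl = halofit(k, p_cb_lin)            # nonlinear evolution of the cold component
    p_cross = np.sqrt(p_cb_lin * p_nu_lin)    # linear approximation for the cross term
    return f_cb**2 * p_cb_nl + 2.0 * f_cb * f_nu * p_cross + f_nu**2 * p_nu_lin

# Illustrative use with toy power-law spectra and a dummy "nonlinear" boost:
k = np.logspace(-3, 1, 50)
p_cb = 1e4 * k / (1.0 + (k / 0.02) ** 3) ** 0.8     # toy linear CDM+baryon spectrum
p_nu = p_cb * np.exp(-(k / 0.1) ** 2)               # toy suppressed neutrino spectrum
boost = lambda k, p: p * (1.0 + (k / 0.3) ** 1.5)   # placeholder for HALOFIT
p_mm = total_matter_power(k, p_cb, p_nu, omega_cb=0.26, omega_nu=0.005, halofit=boost)
```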

  17. Proportional and Integral Thermal Control System for Large Scale Heating Tests

    Science.gov (United States)

    Fleischer, Van Tran

    2015-01-01

    The National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) Flight Loads Laboratory is a unique national laboratory that supports thermal, mechanical, thermal/mechanical, and structural dynamics research and testing. A Proportional Integral thermal control system was designed and implemented to support thermal tests. A thermal control algorithm supporting a quartz lamp heater was developed based on the Proportional Integral control concept and a linearized heating process. The thermal control equations were derived and expressed in terms of power levels, integral gain, proportional gain, and differences between thermal setpoints and skin temperatures. Besides the derived equations, user's predefined thermal test information generated in the form of thermal maps was used to implement the thermal control system capabilities. Graphite heater closed-loop thermal control and graphite heater open-loop power level were added later to fulfill the demand for higher temperature tests. Verification and validation tests were performed to ensure that the thermal control system requirements were achieved. This thermal control system has successfully supported many milestone thermal and thermal/mechanical tests for almost a decade with temperatures ranging from 50 F to 3000 F and temperature rise rates from -10 F/s to 70 F/s for a variety of test articles having unique thermal profiles and test setups.
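    A minimal sketch of the proportional-integral relation described above, with the commanded power level computed from the difference between the thermal setpoint and the measured skin temperature; the gains, power limits, setpoint ramp and the toy heating process are illustrative assumptions only.

```python
# Sketch of a PI power-level command of the kind described above.  All gains,
# limits and profiles are placeholders, not the Flight Loads Laboratory values.
def pi_power_level(setpoint_f, skin_temp_f, state, kp=0.8, ki=0.05, dt=0.1,
                   p_min=0.0, p_max=100.0):
    """Return a commanded power level (percent) and the updated integrator state."""
    error = setpoint_f - skin_temp_f
    state = state + error * dt                 # integral of the error
    power = kp * error + ki * state
    power = max(p_min, min(p_max, power))      # clamp to the allowable power range
    return power, state

integrator = 0.0
temp = 75.0                                    # hypothetical initial skin temperature (F)
for step in range(600):
    setpoint = 75.0 + 20.0 * step * 0.1        # hypothetical ramp from a thermal map (F)
    power, integrator = pi_power_level(setpoint, temp, integrator)
    temp += 0.05 * power * 0.1                 # crude stand-in for the heating process
```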

  18. Large scale waste combustion projects. A study of financial structures and sensitivities

    International Nuclear Information System (INIS)

    Brandler, A.

    1993-01-01

    The principal objective of the study was to determine the key contractual and financial aspects of large scale energy-from-waste projects, and to provide the necessary background information on financing to appreciate the approach lenders take when they consider financing waste combustion projects. An integral part of the study has been the preparation of a detailed financial model, incorporating all major financing parameters, to assess the economic and financial viability of typical waste combustion projects. (author)

  19. Contribution of large-scale coherent structures towards the cross flow in two interconnected channels

    International Nuclear Information System (INIS)

    Mahmood, A.; Rohde, M.; Hagen, T.H.J.J. van der; Mudde, R.F.

    2009-01-01

    Single-phase cross flow through a gap region joining two vertical channels has been investigated experimentally for Reynolds numbers, based on the channel hydraulic diameter, ranging from 850 to 21000. The flow field in the gap region is investigated by 2D-PIV and the inter-channel mass transfer is quantified by the tracer injection method. Experiments carried out for variable gap heights and shapes show the existence of a street of large-scale counter-rotating vortices on either side of the channel-gap interface, resulting from the mean velocity gradient between the gap and the main channel region. The appearance of the coherent vortices is subject to a threshold associated with the difference between the maximum and the minimum average streamwise velocities in the channel and the gap region, respectively. The auto power spectral density of the cross velocity component in the gap region exhibits a slope of -3 in the inertial range, indicating the 2D nature of these vortices. The presence of the large-scale vortices enhances the mass transfer through the gap region by approximately 63% of the mass transferred by turbulent mixing alone. The inter-channel mass transfer due to cross flow is found to depend not only on the characteristics of the large-scale vortices, but also on the gap geometry. (author)

  20. Primordial Non-Gaussianity and Bispectrum Measurements in the Cosmic Microwave Background and Large-Scale Structure

    Directory of Open Access Journals (Sweden)

    Michele Liguori

    2010-01-01

    The most direct probe of non-Gaussian initial conditions has come from bispectrum measurements of temperature fluctuations in the Cosmic Microwave Background and of the matter and galaxy distribution at large scales. Such bispectrum estimators are expected to continue to provide the best constraints on the non-Gaussian parameters in future observations. We review and compare the theoretical and observational problems, current results, and future prospects for the detection of a nonvanishing primordial component in the bispectrum of the Cosmic Microwave Background and large-scale structure, and the relation to specific predictions from different inflationary models.
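    As background to the "non-Gaussian parameters" constrained by such bispectrum estimators, the most commonly quoted one is the local-type amplitude f_NL, conventionally defined as follows (standard convention, included here only for reference).

```latex
% Local-type parameterization of primordial non-Gaussianity (standard convention):
\[
  \Phi(\mathbf{x}) = \phi(\mathbf{x})
  + f_{\mathrm{NL}}^{\mathrm{local}} \left[ \phi^2(\mathbf{x}) - \langle \phi^2 \rangle \right],
\]
% which induces a potential bispectrum of the form
\[
  B_\Phi(k_1,k_2,k_3) = 2 f_{\mathrm{NL}}^{\mathrm{local}}
  \left[ P_\phi(k_1) P_\phi(k_2) + P_\phi(k_2) P_\phi(k_3) + P_\phi(k_3) P_\phi(k_1) \right].
\]
```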

  1. Evaluation of defect density by top-view large scale AFM on metamorphic structures grown by MOVPE

    Energy Technology Data Exchange (ETDEWEB)

    Gocalinska, Agnieszka, E-mail: agnieszka.gocalinska@tyndall.ie; Manganaro, Marina; Dimastrodonato, Valeria; Pelucchi, Emanuele

    2015-09-15

    Highlights: • Metamorphic buffer layers of In_xGa_(1−x)As were grown by MOVPE and characterised by AFM and TEM. • It was found that AFM provides sufficient information to estimate threading defect density in metamorphic structures, even when significant roughness is present. • When planar-view TEM is lacking, a combination of cross-sectional TEM and large-scale AFM can provide a good evaluation of the material quality. • It is fast, cheap and non-destructive, and can be very useful in the development of complicated structures requiring multiple test growths and characterisation. - Abstract: We demonstrate an atomic force microscopy based method for estimation of defect density by identification of threading dislocations on a non-flat surface resulting from metamorphic growth. The discussed technique can be applied as an everyday evaluation tool for the quality of epitaxial structures and allows for cost reduction, as it lessens the amount of transmission electron microscopy analysis required at the early stages of projects. Metamorphic structures with low surface defectivities (below 10^6) were developed successfully with the application of the technique, proving its usefulness in process optimisation.
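    The arithmetic behind such an estimate is simple; the sketch below converts a dislocation count over a large-area top-view scan into an areal defect density, with the count and scan size invented for illustration.

```python
# Back-of-the-envelope arithmetic behind the method described above: threading
# dislocations are counted on large-area AFM scans and converted to an areal
# defect density.  The count and scan size are hypothetical.
def defect_density_cm2(n_defects, scan_size_um):
    """Defects per cm^2 from a count over a square AFM scan of side `scan_size_um` (um)."""
    area_cm2 = (scan_size_um * 1e-4) ** 2      # 1 um = 1e-4 cm
    return n_defects / area_cm2

# e.g. 12 threading dislocations identified on a 50 um x 50 um top-view scan:
print(f"{defect_density_cm2(12, 50.0):.1e} cm^-2")   # ~4.8e5 cm^-2, i.e. below 1e6
```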

  2. An approach to large scale identification of non-obvious structural similarities between proteins

    Science.gov (United States)

    Cherkasov, Artem; Jones, Steven JM

    2004-01-01

    Background: A new sequence-independent bioinformatics approach allowing genome-wide search for proteins with similar three-dimensional structures has been developed. By utilizing the numerical output of the sequence threading it establishes putative non-obvious structural similarities between proteins. When applied to a testing set of proteins with known three-dimensional structures, the developed approach was able to recognize structurally similar proteins with high accuracy. Results: The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from the sequence-structure threading. It identifies the potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of the standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion: Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence. PMID:15147578

  3. An approach to large scale identification of non-obvious structural similarities between proteins

    Directory of Open Access Journals (Sweden)

    Cherkasov Artem

    2004-05-01

    Background: A new sequence-independent bioinformatics approach allowing genome-wide search for proteins with similar three-dimensional structures has been developed. By utilizing the numerical output of the sequence threading it establishes putative non-obvious structural similarities between proteins. When applied to a testing set of proteins with known three-dimensional structures, the developed approach was able to recognize structurally similar proteins with high accuracy. Results: The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from the sequence-structure threading. It identifies the potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of the standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion: Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence.
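    A minimal sketch of the scoring idea described in the two records above, assuming that each protein is represented by a vector of threading scores against a fold library and that a pair is flagged when the two vectors are strongly correlated; the score vectors, library size and threshold are placeholders.

```python
# Minimal sketch of the threading-score correlation idea.  The score vectors
# here are random placeholders standing in for real sequence-structure
# threading outputs.
import numpy as np

def threading_similarity(scores_a, scores_b):
    """Pearson correlation of two per-fold threading score vectors."""
    return float(np.corrcoef(scores_a, scores_b)[0, 1])

rng = np.random.default_rng(0)
n_folds = 500                                   # size of the fold library (placeholder)
scores_host = rng.normal(size=n_folds)          # hypothetical host-protein scores
scores_pathogen = scores_host + 0.5 * rng.normal(size=n_folds)  # correlated pathogen protein
if threading_similarity(scores_host, scores_pathogen) > 0.8:    # threshold is illustrative
    print("candidate non-obvious structural similarity")
```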

  4. Large-scale testing of women in Copenhagen has not reduced the prevalence of Chlamydia trachomatis infections

    DEFF Research Database (Denmark)

    Westh, Henrik Torkil; Kolmos, H J

    2003-01-01

    OBJECTIVE: To examine the impact of a stable, large-scale enzyme immunoassay (EIA) Chlamydia trachomatis testing situation in Copenhagen, and to estimate the impact of introducing a genomic-based assay with higher sensitivity and specificity. METHODS: Over a five-year study period, 25 305-28 505...... and negative predictive values of the Chlamydia test result, new screening strategies for both men and women in younger age groups will be necessary if chlamydial infections are to be curtailed....

  5. Technique for large-scale structural mapping at uranium deposits in non-metamorphosed sedimentary cover rocks

    International Nuclear Information System (INIS)

    Kochkin, B.T.

    1985-01-01

    The technique for large-scale structural map construction (1:1000 - 1:10000), reflecting small-amplitude fracture and fold (plicate) structures, is given for uranium deposits in non-metamorphosed sedimentary cover rocks. Structural drill-log sections, as well as a set of maps with the results of area analysis of hidden disturbances, structural analysis of isopach lines, and facies of platform mantle horizons, serve as source materials for structural map plotting. The steps of structural map construction are considered: 1) construction of the structural framework; 2) reconstruction of structure contours; 3) determination of the time of structure initiation; 4) plotting of additional geologic map content

  6. PubChemQC Project: A Large-Scale First-Principles Electronic Structure Database for Data-Driven Chemistry.

    Science.gov (United States)

    Nakata, Maho; Shimazaki, Tomomi

    2017-06-26

    Large-scale molecular databases play an essential role in the investigation of various subjects such as the development of organic materials, in silico drug design, and data-driven studies with machine learning. We have developed a large-scale quantum chemistry database based on first-principles methods. Our database currently contains the ground-state electronic structures of 3 million molecules based on density functional theory (DFT) at the B3LYP/6-31G* level, and we successively calculated 10 low-lying excited states of over 2 million molecules via time-dependent DFT with the B3LYP functional and the 6-31+G* basis set. To select the molecules calculated in our project, we referred to the PubChem Project, which served as the source of the molecular structures, represented as short strings in the InChI and SMILES formats. Accordingly, we have named our quantum chemistry database project "PubChemQC" (http://pubchemqc.riken.jp/) and placed it in the public domain. In this paper, we show the fundamental features of the PubChemQC database and discuss the techniques used to construct the data set for large-scale quantum chemistry calculations. We also present a machine learning approach to predict the electronic structure of molecules as an example to demonstrate the suitability of the large-scale quantum chemistry database.

  7. Magnetic storm generation by large-scale complex structure Sheath/ICME

    Science.gov (United States)

    Grigorenko, E. E.; Yermolaev, Y. I.; Lodkina, I. G.; Yermolaev, M. Y.; Riazantseva, M.; Borodkova, N. L.

    2017-12-01

    We study temporal profiles of interplanetary plasma and magnetic field parameters as well as magnetospheric indices. We use our catalog of large-scale solar wind phenomena for the 1976-2000 interval (see the catalog for 1976-2016 on the web site ftp://ftp.iki.rssi.ru/pub/omni/, prepared on the basis of the OMNI database (Yermolaev et al., 2009)) and the double superposed epoch analysis method (Yermolaev et al., 2010). Our analysis showed (Yermolaev et al., 2015) that the average profiles of the Dst and Dst* indices decrease in the Sheath interval (magnetic storm activity increases) and increase in the ICME interval. This profile coincides with the inverted distribution of storm numbers in both intervals (Yermolaev et al., 2017). This behavior is explained by the following reasons: (1) the IMF magnitude in the Sheath is higher than in the Ejecta and close to the value in the MC; (2) the Sheath has 1.5 times higher efficiency of storm generation than the ICME (Nikolaeva et al., 2015). Most so-called CME-induced storms are actually Sheath-induced storms, and this fact should be taken into account in space weather prediction. The work was supported in part by the Russian Science Foundation, grant 16-12-10062. References: 1. Nikolaeva, N.S., Y.I. Yermolaev and I.G. Lodkina (2015), Modeling of the corrected Dst* index temporal profile on the main phase of the magnetic storms generated by different types of solar wind, Cosmic Res., 53(2), 119-127. 2. Yermolaev, Yu.I., N.S. Nikolaeva, I.G. Lodkina and M.Yu. Yermolaev (2009), Catalog of Large-Scale Solar Wind Phenomena during 1976-2000, Cosmic Res., 47(2), 81-94. 3. Yermolaev, Y.I., N.S. Nikolaeva, I.G. Lodkina, and M.Y. Yermolaev (2010), Specific interplanetary conditions for CIR-induced, Sheath-induced, and ICME-induced geomagnetic storms obtained by double superposed epoch analysis, Ann. Geophys., 28, 2177-2186. 4. Yermolaev, Yu.I., I.G. Lodkina, N.S. Nikolaeva and M.Yu. Yermolaev (2015), Dynamics of large-scale solar wind streams obtained by the double superposed epoch

  8. Structural problems of public participation in large-scale projects with environmental impact

    International Nuclear Information System (INIS)

    Bechmann, G.

    1989-01-01

    Four items are discussed, showing that the problems involved in public participation in large-scale projects with environmental impact cannot be solved satisfactorily without suitable modification of the existing legal framework. The problematic items are: the status of the electric utilities as a quasi-public enterprise; informal preliminary negotiations; the penetration of scientific argumentation into administrative decisions; and the procedural concept. The paper discusses the fundamental issue of designing a procedure adequate to the problem and develops suggestions for a cooperative participation design. (orig./HSCH)

  9. Icing Simulation Research Supporting the Ice-Accretion Testing of Large-Scale Swept-Wing Models

    Science.gov (United States)

    Yadlin, Yoram; Monnig, Jaime T.; Malone, Adam M.; Paul, Bernard P.

    2018-01-01

    The work summarized in this report is a continuation of NASA's Large-Scale, Swept-Wing Test Articles Fabrication; Research and Test Support for NASA IRT contract (NNC10BA05 -NNC14TA36T) performed by Boeing under the NASA Research and Technology for Aerospace Propulsion Systems (RTAPS) contract. In the study conducted under RTAPS, a series of icing tests in the Icing Research Tunnel (IRT) have been conducted to characterize ice formations on large-scale swept wings representative of modern commercial transport airplanes. The outcome of that campaign was a large database of ice-accretion geometries that can be used for subsequent aerodynamic evaluation in other experimental facilities and for validation of ice-accretion prediction codes.

  10. Towards large-scale mapping of urban three-dimensional structure using Landsat imagery and global elevation datasets

    Science.gov (United States)

    Wang, P.; Huang, C.

    2017-12-01

    The three-dimensional (3D) structure of buildings and infrastructures is fundamental to understanding and modelling the impacts and challenges of urbanization in terms of energy use, carbon emissions, and earthquake vulnerabilities. However, spatially detailed maps of urban 3D structure have been scarce, particularly in fast-changing developing countries. We present here a novel methodology to map the volume of buildings and infrastructures at 30 meter resolution using a synergy of Landsat imagery and openly available global digital surface models (DSMs), including the Shuttle Radar Topography Mission (SRTM), ASTER Global Digital Elevation Map (GDEM), ALOS World 3D - 30m (AW3D30), and the recently released global DSM from the TanDEM-X mission. Our method builds on the concept of an object-based height profile to extract height metrics from the DSMs and uses a machine learning algorithm to predict height and volume from the height metrics. We have tested this algorithm across the whole of England and assessed our results using lidar measurements in 25 English cities. Our initial assessments achieved an RMSE of 1.4 m (R^2 = 0.72) for building height and an RMSE of 1208.7 m^3 (R^2 = 0.69) for building volume, demonstrating the potential for large-scale application and fully automated mapping of urban structure.
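    A minimal sketch of the object-based height-metric idea described above, assuming percentile-type metrics and a random-forest regressor; the feature set, model choice and training data are illustrative stand-ins rather than the authors' actual pipeline.

```python
# Sketch: per-object height metrics extracted from DSM-derived height samples,
# fed to a regression model that predicts building height.  Data are simulated.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def height_metrics(object_heights):
    """Summary metrics of the height values inside one 30 m object/segment."""
    h = np.asarray(object_heights, dtype=float)
    return [h.mean(), h.max(), np.median(h),
            np.percentile(h, 75), np.percentile(h, 90), h.std()]

# Hypothetical training set: noisy per-object DSM samples and lidar-derived heights.
rng = np.random.default_rng(1)
true_heights = rng.uniform(3.0, 30.0, size=200)
X = np.array([height_metrics(h + rng.normal(0, 2.0, size=50)) for h in true_heights])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, true_heights)
predicted = model.predict(X[:5])
```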

  11. Comparison of prestellar core elongations and large-scale molecular cloud structures in the Lupus I region

    Energy Technology Data Exchange (ETDEWEB)

    Poidevin, Frédérick [UCL, KLB, Department of Physics and Astronomy, Gower Place, London WC1E 6BT (United Kingdom); Ade, Peter A. R.; Hargrave, Peter C.; Nutter, David [School of Physics and Astronomy, Cardiff University, Queens Buildings, The Parade, Cardiff CF24 3AA (United Kingdom); Angile, Francesco E.; Devlin, Mark J.; Klein, Jeffrey [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Benton, Steven J.; Netterfield, Calvin B. [Department of Physics, University of Toronto, 60 St. George Street, Toronto, ON M5S 1A7 (Canada); Chapin, Edward L. [XMM SOC, ESAC, Apartado 78, E-28691 Villanueva de la Canãda, Madrid (Spain); Fissel, Laura M.; Gandilo, Natalie N. [Department of Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4 (Canada); Fukui, Yasuo [Department of Physics, Nagoya University, Chikusa-ku, Nagoya, Aichi 464-8601 (Japan); Gundersen, Joshua O. [Department of Physics, University of Miami, 1320 Campo Sano Drive, Coral Gables, FL 33146 (United States); Korotkov, Andrei L. [Department of Physics, Brown University, 182 Hope Street, Providence, RI 02912 (United States); Matthews, Tristan G.; Novak, Giles [Department of Physics and Astronomy, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208 (United States); Moncelsi, Lorenzo; Mroczkowski, Tony K. [California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Olmi, Luca, E-mail: fpoidevin@iac.es [Physics Department, University of Puerto Rico, Rio Piedras Campus, Box 23343, UPR station, San Juan, PR 00931 (United States); and others

    2014-08-10

    Turbulence and magnetic fields are expected to be important for regulating molecular cloud formation and evolution. However, their effects on sub-parsec to 100 parsec scales, leading to the formation of starless cores, are not well understood. We investigate the prestellar core structure morphologies obtained from analysis of the Herschel-SPIRE 350 μm maps of the Lupus I cloud. This distribution is first compared on a statistical basis to the large-scale shape of the main filament. We find the distribution of the elongation position angle of the cores to be consistent with a random distribution, which means no specific orientation of the morphology of the cores is observed with respect to the mean orientation of the large-scale filament in Lupus I, nor relative to a large-scale bent filament model. This distribution is also compared to the mean orientation of the large-scale magnetic fields probed at 350 μm with the Balloon-borne Large Aperture Telescope for Polarimetry during its 2010 campaign. Here again we do not find any correlation between the core morphology distribution and the average orientation of the magnetic fields on parsec scales. Our main conclusion is that the local filament dynamics—including secondary filaments that often run orthogonally to the primary filament—and possibly small-scale variations in the local magnetic field direction, could be the dominant factors for explaining the final orientation of each core.

  12. Large-scale trends in the evolution of gene structures within 11 animal genomes.

    Directory of Open Access Journals (Sweden)

    Mark Yandell

    2006-03-01

    We have used the annotations of six animal genomes (Homo sapiens, Mus musculus, Ciona intestinalis, Drosophila melanogaster, Anopheles gambiae, and Caenorhabditis elegans) together with the sequences of five unannotated Drosophila genomes to survey changes in protein sequence and gene structure over a variety of timescales--from the less than 5 million years since the divergence of D. simulans and D. melanogaster to the more than 500 million years that have elapsed since the Cambrian explosion. To do so, we have developed a new open-source software library called CGL (for "Comparative Genomics Library"). Our results demonstrate that change in intron-exon structure is gradual, clock-like, and largely independent of coding-sequence evolution. This means that genome annotations can be used in new ways to inform, corroborate, and test conclusions drawn from comparative genomics analyses that are based upon protein and nucleotide sequence similarities.

  13. Designing a large scale combined pumping and tracer test in a fracture zone at Palmottu, Finland

    International Nuclear Information System (INIS)

    Gustafsson, E.; Nordqvist, R.; Korkealaakso, J.; Galarza, G.

    1997-01-01

    The Palmottu Natural Analogue Project in Finland continued as an EC-supported international analogue project in 1996, in order to study radionuclide migration in a natural uranium-rich environment. The site is located in an area of crystalline bedrock, characterized by granites and metamorphic rocks. The uranium deposit extends from the surface to a depth of more than 300 m, and has a thickness of up to 15 m. An overall aim of the project is to increase knowledge of factors affecting mobilization and retardation of uranium in crystalline bedrock. One of the important tasks within the project is to characterize the major flow paths for the groundwater, i.e. important hydraulic features, around the orebody. An experiment is planned in one such feature, a sub-horizontal fracture zone which cross-cuts the uranium mineralization. The objectives of the planned combined pumping and tracer test are to verify and further update the present hydro-structural model around the central part of the mineralization, to increase the current understanding of the hydraulic and solute transport properties of the sub-horizontal fracture zone, and to verify and further characterize its hydraulic boundaries. (author)

  14. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

    Science.gov (United States)

    Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification. Most state-of-the-art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes a simple-to-implement approach based on evolutionary algorithms and the Kernel-Adatron for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
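    For concreteness, below is a minimal Kernel-Adatron sketch of the kind used as the base learner in the approach described above; the evolutionary tuning of kernel and margin parameters is not shown, and the data, kernel and hyperparameters are toy assumptions.

```python
# Minimal Kernel-Adatron sketch (bias term and evolutionary hyperparameter
# tuning omitted).  Data are toy values standing in for protein feature vectors.
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_adatron(X, y, eta=0.1, epochs=200, C=10.0, gamma=0.5):
    """y must be in {-1, +1}.  Returns the dual coefficients alpha."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(len(y))
    for _ in range(epochs):
        margins = (alpha * y) @ K              # z_i = sum_j alpha_j y_j K(x_j, x_i)
        alpha += eta * (1.0 - y * margins)     # Adatron update toward unit margins
        alpha = np.clip(alpha, 0.0, C)         # keep alphas in the feasible box
    return alpha

def predict(X_train, y, alpha, X_new, gamma=0.5):
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ (alpha * y))

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(+1, 0.5, (20, 2))])
y = np.array([-1] * 20 + [+1] * 20)
alpha = kernel_adatron(X, y)
print((predict(X, y, alpha, X) == y).mean())   # training accuracy of the sketch
```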

  15. Large-scale, multi-compartment tests in PANDA for LWR-containment analysis and code validation

    International Nuclear Information System (INIS)

    Paladino, Domenico; Auban, Olivier; Zboray, Robert

    2006-01-01

    The large-scale thermal-hydraulic PANDA facility has been used in recent years for investigating passive decay heat removal systems and related containment phenomena relevant for next-generation and current light water reactors. As part of the 5th EURATOM Framework Programme project TEMPEST, a series of tests was performed in PANDA to experimentally investigate the distribution of hydrogen inside the containment and its effect on the performance of the Passive Containment Cooling System (PCCS) designed for the Economic Simplified Boiling Water Reactor (ESBWR). In a postulated severe accident, a large amount of hydrogen could be released in the Reactor Pressure Vessel (RPV) as a consequence of the cladding Metal-Water (M-W) reaction and discharged together with steam to the Drywell (DW) compartment. In the PANDA tests, hydrogen was simulated by using helium. This paper illustrates the results of a TEMPEST test performed in PANDA and designated Test T1.2. In Test T1.2, the gas stratification (steam-helium) patterns forming in the large-scale multi-compartment PANDA DW were identified, along with the effect of non-condensable gas (helium) on the overall behaviour of the PCCS. Gas mixing and stratification in a large-scale multi-compartment system are currently being further investigated in PANDA in the frame of the OECD project SETH. The testing philosophy in this new PANDA programme is to produce data for code validation in relation to specific phenomena, such as gas stratification in the containment, gas transport between containment compartments, and wall condensation. These types of phenomena are driven by buoyant high-momentum injections (jets) and/or low-momentum injections (plumes), depending on the transient scenario. In this context, the new SETH tests in PANDA are particularly valuable for producing an experimental database for code assessment. This paper also presents an overview of the PANDA SETH tests and the major improvements in instrumentation carried out in the PANDA

  16. Study of Large-Scale Wave Structure and Development of Equatorial Plasma Bubbles Using the C/NOFS Satellite

    Science.gov (United States)

    2012-10-31

    [Fragmentary record; the full abstract is not available. The recovered fragments indicate that results from this effort were published in scientific journals (papers listed in the report in chronological order, e.g. Kelley, M.C., F.S. Rodrigues, J.J. Makela, R. Tsunoda, P.A. Roddy, D.E. Hunton, ...), that the source region must be located on the dip equator (illustrated in the report by a sequence of satellite OLR maps taken over Peru on 19-20 ...), and that related work on large-scale wave structure and equatorial spread F was presented at the International Symposium on Equatorial Aeronomy, Paracas, Peru, March 2012.]

  17. Development of electric road vehicles in France. Political measures, large-scale tests, and strategy of PSA Peugeot Citroen

    International Nuclear Information System (INIS)

    Beau, J.C.

    1993-01-01

    France offers particularly favourable conditions for the further development and market introduction of electric vehicles: electricity production with almost no exhaust emissions, a concentrated population structure with densely populated historical towns, innovative electrochemical and electrotechnical industries, and, not least, the automotive industry itself. The article is structured as follows: A) Political measures and large-scale experiments in France; B) Strategy of PSA Peugeot Citroen; C) Activities by Peugeot in Germany. (orig.)

  18. Galaxy evolution and large-scale structure in the far-infrared. I. IRAS pointed observations

    International Nuclear Information System (INIS)

    Lonsdale, C.J.; Hacking, P.B.

    1989-01-01

    Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape to those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution. 81 refs

  19. Flat tree-level inflationary potentials in the light of cosmic microwave background and large scale structure data

    CERN Document Server

    Ballesteros, G; Espinosa, J R; de Austri, R Ruiz; Trotta, R

    2008-01-01

    We use cosmic microwave background and large-scale structure data to test a broad and physically well-motivated class of inflationary models: those with flat tree-level potentials (typical in supersymmetry). The non-trivial features of the potential arise from radiative corrections which give a simple logarithmic dependence on the inflaton field, making the models very predictive. We also consider a modified scenario with new physics beyond a certain high-energy cut-off showing up as non-renormalizable operators (NRO) in the inflaton field. We find that both kinds of models fit CMB and LSS data remarkably well, with very few free parameters. Moreover, a large fraction of these models naturally predicts a reasonable number of e-folds. A robust feature of these scenarios is the smallness of tensor perturbations (r < 10^-3). The NRO case can give a sizeable running of the spectral index while achieving a sufficient number of e-folds. We use Bayesian model comparison tools to assess the relative performance of the...
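    A schematic form of the potentials described above is given below for orientation; the precise coefficients and operators are model dependent, so the expression is an illustrative assumption rather than a formula quoted from the paper.

```latex
% Schematic form of the potentials discussed above: a tree-level flat potential
% lifted by a one-loop logarithm, optionally corrected by a non-renormalizable
% operator suppressed by a cut-off scale Lambda (coefficients are illustrative).
\[
  V(\phi) \simeq V_0 \left[ 1 + a \ln\!\frac{\phi}{Q} \right]
  \;+\; c\,\frac{\phi^{n}}{\Lambda^{\,n-4}}, \qquad n > 4 ,
\]
% with the radiative coefficient a loop-suppressed, which is what makes these
% models predictive and keeps the tensor-to-scalar ratio small (r < 10^-3).
```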

  20. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    Directory of Open Access Journals (Sweden)

    E. H. Sutanudjaja

    2011-09-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare, mainly due to a lack of hydro-geological data, which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin, which contains groundwater head data used to verify the model output. We start by building a distributed land surface model (30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e. the simulations of both models are performed separately). The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reproduce the observed groundwater head time series reasonably well. However, we note that there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. Moreover, the current sensitivity analysis ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show promise for large-scale groundwater modeling practices, including for data-poor environments and at the global scale.
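    The offline coupling procedure described above can be pictured with the following schematic loop; the two model functions are simple stand-ins (not actual land-surface-model or MODFLOW calls), intended only to show the one-way, sequential exchange of recharge, surface-water levels and heads.

```python
# Conceptual sketch of an offline (sequential, one-way per step) coupling loop.
# Both model functions are placeholders on a toy 1-D grid.
import numpy as np

def land_surface_model(forcing, heads):
    """Placeholder: returns (recharge, surface_water_level) fields."""
    recharge = 0.001 * forcing - 0.0001 * np.maximum(heads, 0.0)
    river_stage = 0.5 * forcing
    return recharge, river_stage

def groundwater_model(recharge, river_stage, heads, dt=1.0, storage=0.1):
    """Placeholder for a MODFLOW run: relax heads toward recharge/stage forcing."""
    return heads + dt * (recharge / storage + 0.05 * (river_stage - heads))

heads = np.zeros(100)                     # initial groundwater heads
forcing = np.full(100, 2.0)               # hypothetical meteorological forcing
for step in range(365):                   # heads only feed back at the next step
    recharge, stage = land_surface_model(forcing, heads)
    heads = groundwater_model(recharge, stage, heads)
```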

  1. Large-scale academic achievement testing of deaf and hard-of-hearing students: past, present, and future.

    Science.gov (United States)

    Qi, Sen; Mitchell, Ross E

    2012-01-01

    The first large-scale, nationwide academic achievement testing program using the Stanford Achievement Test (Stanford) for deaf and hard-of-hearing children in the United States started in 1969. Over the past three decades, the Stanford has served as a benchmark in the field of deaf education for assessing student academic achievement. However, the validity and reliability of using the Stanford for this special student population still require extensive scrutiny. Recent shifts in the educational policy environment, which require that schools enable all children to achieve proficiency through accountability testing, warrant a close examination of the adequacy and relevance of the current large-scale testing of deaf and hard-of-hearing students. This study has three objectives: (a) it will summarize the historical data over the last three decades to indicate trends in academic achievement for this special population, (b) it will analyze the current federal laws and regulations related to educational testing and special education, thereby identifying gaps between policy and practice in the field, especially the limitations of current testing programs in assessing what deaf and hard-of-hearing students know, and (c) it will offer some insights and suggestions for future testing programs for deaf and hard-of-hearing students.

  2. Large-Scale Testing and High-Fidelity Simulation Capabilities at Sandia National Laboratories to Support Space Power and Propulsion

    International Nuclear Information System (INIS)

    Dobranich, Dean; Blanchat, Thomas K.

    2008-01-01

    Sandia National Laboratories, as a Department of Energy National Nuclear Security Administration laboratory, has major responsibility for ensuring the safety and security of nuclear weapons. As such, with an experienced research staff, Sandia maintains a spectrum of modeling and simulation capabilities integrated with experimental and large-scale test capabilities. This expertise and these capabilities offer considerable resources for addressing issues of interest to the space power and propulsion communities. This paper presents Sandia's capability to perform thermal qualification (analysis, test, modeling and simulation) using a representative weapon system as an example, demonstrating the potential to support NASA's Lunar Reactor System

  3. Hierarchical, decentralized control system for large-scale smart-structures

    International Nuclear Information System (INIS)

    Algermissen, Stephan; Fröhlich, Tim; Monner, Hans Peter

    2014-01-01

    Active control of sound and vibration has gained much attention in all kinds of industries in the past decade. Future prospects for maximizing airline passenger comfort are especially promising. The objectives of recent research projects in this area are the reduction of noise transmission through thin walled structures such as fuselages, linings or interior elements. Besides different external noise sources, such as the turbulent boundary layer, rotor or jet noise, the actuator and sensor placement as well as different control concepts are addressed. Mostly, the work is focused on a single panel or section of the fuselage, neglecting the fact that for effective noise reduction the entire fuselage has to be taken into account. Nevertheless, extending the scope of an active system from a single panel to the entire fuselage increases the effort for control hardware dramatically. This paper presents a control concept for large structures using distributed control nodes. Each node has the capability to execute a vibration or noise controller for a specific part or section of the fuselage. For maintenance, controller tuning or performance measurement, all nodes are connected to a host computer via Universal Serial Bus (USB). This topology allows a partitioning and distributing of tasks. The nodes execute the low-level control functions. High-level tasks like maintenance, system identification and control synthesis are operated by the host using streamed data from the nodes. By choosing low-price nodes, a very cost effective way of implementing an active system for large structures is realized. Besides the system identification and controller synthesis on the host computer, a detailed view on the hardware and software concept for the nodes is given. Finally, the results of an experimental test of a system running a robust vibration controller at an active panel demonstrator are shown. (paper)

  4. Multiple Skills Underlie Arithmetic Performance: A Large-Scale Structural Equation Modeling Analysis

    Directory of Open Access Journals (Sweden)

    Sarit Ashkenazi

    2017-12-01

    Current theoretical approaches point to the importance of several cognitive skills not specific to mathematics for the etiology of mathematics disorders (MD). In the current study, we examined the role of many of these skills, specifically rapid automatized naming, attention, reading, and visual perception, on mathematics performance among a large group of college students (N = 1,322) with a wide range of arithmetic proficiency. Using factor analysis, we discovered that our data clustered into four latent variables: (1) mathematics, (2) perception speed, (3) attention, and (4) reading. In subsequent structural equation modeling, we found that the latent variable perception speed had a strong and meaningful effect on mathematics performance. Moreover, sustained attention, independent of the effect of the latent variable perception speed, had a meaningful, direct effect on arithmetic fact retrieval and procedural knowledge. The latent variable reading had a modest effect on mathematics performance. Specifically, reading comprehension, independent of the effect of the latent variable reading, had a meaningful direct effect on mathematics, and particularly on number line knowledge. Attention, tested by the attention network test, had no effect on mathematics, reading or perception speed. These results indicate that multiple factors can affect mathematics performance, supporting a heterogeneous approach to mathematics. These results have meaningful implications for the diagnosis and intervention of pure and comorbid learning disorders.
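    A minimal sketch of the first analysis step described above (exploratory factor analysis recovering a small number of latent variables from a score matrix); the variables and data are simulated, and the follow-up structural equation model is only indicated in a comment.

```python
# Sketch of the factor-analysis step.  Data are simulated; the choice of four
# factors follows the description above, but the measures are invented.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n_students, n_measures = 1322, 12               # e.g. arithmetic, RAN, attention, reading scores
latent = rng.normal(size=(n_students, 4))       # four underlying abilities (toy data)
loadings = rng.normal(size=(4, n_measures))
scores = latent @ loadings + rng.normal(scale=0.5, size=(n_students, n_measures))

fa = FactorAnalysis(n_components=4, random_state=0).fit(scores)
print(fa.components_.shape)                     # (4, n_measures): estimated factor loadings
# A subsequent structural equation model would then regress the mathematics factor
# on the other latent variables (e.g. with a package such as semopy or lavaan).
```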

  5. Micrometer scale guidance of mesenchymal stem cells to form structurally oriented large-scale tissue engineered cartilage.

    Science.gov (United States)

    Chou, Chih-Ling; Rivera, Alexander L; Williams, Valencia; Welter, Jean F; Mansour, Joseph M; Drazba, Judith A; Sakai, Takao; Baskaran, Harihara

    2017-09-15

    Current clinical methods to treat articular cartilage lesions provide temporary relief of the symptoms but fail to permanently restore the damaged tissue. Tissue engineering, using mesenchymal stem cells (MSCs) combined with scaffolds and bioactive factors, is viewed as a promising method for repairing cartilage injuries. However, current tissue engineered constructs display inferior mechanical properties compared to native articular cartilage, which could be attributed to the lack of structural organization of the extracellular matrix (ECM) of these engineered constructs in comparison to the highly oriented structure of articular cartilage ECM. We previously showed that we can guide MSCs undergoing chondrogenesis to align using microscale guidance channels on the surface of a two-dimensional (2-D) collagen scaffold, which resulted in the deposition of aligned ECM within the channels and enhanced mechanical properties of the constructs. In this study, we developed a technique to roll 2-D collagen scaffolds containing MSCs within guidance channels in order to produce large-scale, three-dimensional (3-D) tissue engineered cartilage constructs with enhanced mechanical properties compared to current constructs. After rolling the MSC-scaffold constructs into a 3-D cylindrical structure, the constructs were cultured for 21 days under chondrogenic culture conditions. The microstructure architecture and mechanical properties of the constructs were evaluated using imaging and compressive testing. Histology and immunohistochemistry of the constructs showed extensive glycosaminoglycan (GAG) and collagen type II deposition. Second harmonic generation imaging and Picrosirius red staining indicated alignment of neo-collagen fibers within the guidance channels of the constructs. Mechanical testing indicated that constructs containing the guidance channels displayed enhanced compressive properties compared to control constructs without these channels. In conclusion, using a novel

  6. Development and Execution of a Large-scale DDT Tube Test for IHE Material Qualification

    Energy Technology Data Exchange (ETDEWEB)

    Parker, Gary Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Broilo, Robert M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lopez-Pulliam, Ian Daniel [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vaughan, Larry Dean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-24

    Insensitive High Explosive (IHE) Materials are defined in Chapter IX of the DOE Explosive Safety Standard (DOE-STD-1212-2012) as being materials that are mass-detonable explosives that are so insensitive that the probability of accidental initiation or transition from burning to detonation is negligible [1]. There are currently a number of tests included in the standard that are required to qualify a material as IHE; however, none of the tests directly evaluate the transition from burning to detonation (aka deflagration-to-detonation transition, DDT). Currently, there is a DOE complex-wide effort to revisit the IHE definition in DOE-STD-1212-2012 and change the qualification requirements. The proposal lays out a new approach, requiring fewer, but more appropriate, tests for IHE Material qualification. One of these new tests is the Deflagration-to-Detonation Test. According to the redefinition proposal, the purpose of the new deflagration-to-detonation test is “to demonstrate that an IHE material will not undergo deflagration-to-detonation under stockpile relevant conditions of scale, confinement, and material condition. Inherent in this test design is the assumption that ignition does occur, with onset of deflagration. The test design will incorporate large margins and replicates to account for the stochastic nature of DDT events.” In short, the philosophy behind this approach is that if a material fails to undergo DDT in a significant over-test, then it is extremely unlikely to do so in realistic conditions. This effort will be valuable for the B61 LEP to satisfy its need to qualify the new production lots of PBX 9502. The work described in this report is intended as a preliminary investigation to support the proposed design of an overly conservative, easily fielded DDT test for the updated IHE Material Qualification standard. Specifically, we evaluated the aspects of confinement, geometry, material morphology and temperature. We also developed and tested a

  7. PISA - An Example of the Use and Misuse of Large-Scale Comparative Tests

    DEFF Research Database (Denmark)

    Dolin, Jens

    2007-01-01

    The article will analyse PISA - particularly the part dealing with science - as an example of a major comparative evaluation. PISA will first be described and then analysed on the basis of test theory, which will address some detailed technical aspects of the test as well as the broader issue...

  8. An engineering methodology for implementing and testing VLSI (Very Large Scale Integrated) circuits

    Science.gov (United States)

    Corliss, Walter F., II

    1989-03-01

    The engineering methodology for producing a fully tested VLSI chip from a design layout is presented. A 16-bit correlator, NPS CORN88, that was previously designed was used as a vehicle to demonstrate this methodology. The study of the design and simulation tools, MAGIC and MOSSIM II, was the focus of the design and validation process. The design was then implemented and the chip was fabricated by MOSIS. This fabricated chip was then used to develop a testing methodology for using the digital test facilities at NPS. NPS CORN88 was the first full-custom VLSI chip designed at NPS to be tested with the NPS digital analysis system, the Tektronix DAS 9100 series tester. The capabilities and limitations of these test facilities are examined. NPS CORN88 test results are included to demonstrate the capabilities of the digital test system. A translator, MOS2DAS, was developed to convert the MOSSIM II simulation program to the input files required by the DAS 9100 device verification software, 91DVS. Finally, a tutorial for using the digital test facilities, including the DAS 9100 and associated support equipment, is included as an appendix.

  9. Digital Image Correlation Techniques Applied to Large Scale Rocket Engine Testing

    Science.gov (United States)

    Gradl, Paul R.

    2016-01-01

    Rocket engine hot-fire ground testing is necessary to understand component performance, reliability and engine system interactions during development. The J-2X upper stage engine completed a series of developmental hot-fire tests that derived performance of the engine and components, validated analytical models and provided the necessary data to identify where design changes, process improvements and technology development were needed. The J-2X development engines were heavily instrumented to provide the data necessary to support these activities, which enabled the team to investigate any anomalies experienced during the test program. This paper describes the development of an optical digital image correlation technique to augment the data provided by traditional strain gauges, which are prone to debonding at elevated temperatures and limited to localized measurements. The feasibility of this optical measurement system was demonstrated during full-scale hot-fire testing of J-2X, during which a digital image correlation system, incorporating a pair of high-speed cameras to measure three-dimensional, real-time displacements and strains, was installed and operated under the extreme environments present on the test stand. The camera and facility setup, pre-test calibrations, hot-fire test data collection, and post-test analysis and results are presented in this paper.

  10. Using Raters from India to Score a Large-Scale Speaking Test

    Science.gov (United States)

    Xi, Xiaoming; Mollaun, Pam

    2011-01-01

    We investigated the scoring of the Speaking section of the Test of English as a Foreign Language[TM] Internet-based (TOEFL iBT[R]) test by speakers of English and one or more Indian languages. We explored the extent to which raters from India, after being trained and certified, were able to score the TOEFL examinees with mixed first languages…

  11. DYNAMIC TENSILE TESTING WITH A LARGE SCALE 33 MJ ROTATING DISK IMPACT MACHINE

    OpenAIRE

    Kussmaul, K.; Zimmermann, C.; Issler, W.

    1985-01-01

    A recently completed testing machine for dynamic tensile tests is described. The machine consists essentially of a pendulum which holds the specimen and a large steel disk with a double striking nose fixed to its circumference. Disk diameter measures 2000 mm, while its mass is 6400 kg. The specimens to be tested are tensile specimens with a diameter of up to 20 mm and 300 mm length or CT 15 specimens at various temperatures. Loading velocity ranges from 1 to 150 m/s. The process of specimen-n...

  12. Large-scale in situ heater tests for hydrothermal characterization at Yucca Mountain

    International Nuclear Information System (INIS)

    Buscheck, T.A.; Wilder, D.G.; Nitao, J.J.

    1993-01-01

    To safely and permanently store high-level nuclear waste, the potential Yucca Mountain repository site must mitigate the release and transport of radionuclides for tens of thousands of years. In the failure scenario of greatest concern, water would contact a waste package, accelerate its failure rate, and eventually transport radionuclides to the water table. Our analyses indicate that the ambient hydrological system will be dominated by repository-heat-driven hydrothermal flow for tens of thousands of years. In situ heater tests are required to provide an understanding of coupled geomechanical-hydrothermal-geochemical behavior in the engineered and natural barriers under repository thermal loading conditions. In situ heater tests have been included in the Site Characterization Plan in response to regulatory requirements for site characterization and to support the validation of process models required to assess the total systems performance at the site. The success of the License Application (LA) hinges largely on how effectively we validate the process models that provide the basis for performance assessment. Because of limited time, some of the in situ tests will have to be accelerated relative to actual thermal loading conditions. We examine the trade-offs between the limited test duration and generating hydrothermal conditions applicable to repository performance during the entire thermal loading cycle, including heating (boiling and dry-out) and cooldown (re-wetting). For in situ heater tests, a duration of 6-7 yr (including 4 yr of full-power heating) is required. The parallel use of highly accelerated, shorter-duration tests may provide timely information for the LA, provided that the applicability of the test results can be validated against ongoing nominal-rate heater tests.

  13. Large-scale in situ heater tests for hydrothermal characterization at Yucca Mountain

    International Nuclear Information System (INIS)

    Buscheck, T.A.; Wilder, D.G.; Nitao, J.J.

    1993-01-01

    To safely and permanently store high-level nuclear waste, the potential Yucca Mountain repository site must mitigate the release and transport of radionuclides for tens of thousands of years. In the failure scenario of greatest concern, water would contact a waste package, accelerate its failure rate, and eventually transport radionuclides to the water table. Our analyses indicate that the ambient hydrological system will be dominated by repository-heat-driven hydrothermal flow for tens of thousands of years. In situ heater tests are required to provide an understanding of coupled geomechanical-hydrothermal-geochemical behavior in the engineered and natural barriers under repository thermal loading conditions. In situ heater tests have been included in the Site Characterization Plan in response to regulatory requirements for site characterization and to support the validation of process models required to assess the total systems performance at the site. Because of limited time, some of the in situ tests will have to be accelerated relative to actual thermal loading conditions. We examine the trade-offs between the limited test duration and generating hydrothermal conditions applicable to repository performance during the entire thermal loading cycle, including heating (boiling and dry-out) and cooldown (re-wetting). For in situ heater tests to be applicable to actual repository conditions, a minimum heater test duration of 6-7 yr (including 4 yr of full-power heating) is required

  14. Analysis and experimental validation of through-thickness cracked large-scale biaxial fracture tests

    International Nuclear Information System (INIS)

    Wiesner, C.S.; Goldthorpe, M.R.; Andrews, R.M.; Garwood, S.J.

    1999-01-01

    Since 1984 TWI has been involved in an extensive series of tests investigating the effects of biaxial loading on the fracture behaviour of A533B steel. Testing conditions have ranged from the lower to upper shelf regions of the transition curve and covered a range of biaxiality ratios. In an attempt to elucidate the trends underlying the experimental results, finite element-based mechanistic models were used to analyse the effects of biaxial loading. For ductile fracture, a modified Gurson model was used and important effects on tearing behaviour were found for through-thickness cracked wide plates, as observed in upper shelf tests. For cleavage fracture, both simple T-stress methods and the Anderson-Dodds and Beremin models were used. Whilst the effect of biaxiality on surface-cracked plates was small, a marked effect of biaxial loading was found for the through-thickness crack. To further validate the numerical predictions for cleavage fracture, TWI have performed an additional series of lower shelf through-thickness cracked biaxial wide plate fracture tests. These tests were performed using various biaxiality loading conditions, varying from simple uniaxial loading, through equibiaxial loading, to a biaxiality ratio equivalent to a circumferential crack in a pressure vessel. These tests confirmed the predictions that there is a significant effect of biaxial loading on cleavage fracture of through-thickness cracked plate. (orig.)

  15. A fast multilocus test with adaptive SNP selection for large-scale genetic-association studies

    KAUST Repository

    Zhang, Han

    2013-09-11

    As increasing evidence suggests that multiple correlated genetic variants could jointly influence the outcome, a multilocus test that aggregates association evidence across multiple genetic markers in a considered gene or a genomic region may be more powerful than a single-marker test for detecting susceptibility loci. We propose a multilocus test, AdaJoint, which adopts a variable selection procedure to identify a subset of genetic markers that jointly show the strongest association signal, and defines the test statistic based on the selected genetic markers. The P-value from the AdaJoint test is evaluated by a computationally efficient algorithm that effectively adjusts for multiple comparisons, and is hundreds of times faster than the standard permutation method. Simulation studies demonstrate that AdaJoint has the most robust performance among several commonly used multilocus tests. We perform multilocus analysis of over 26,000 genes/regions on two genome-wide association studies of pancreatic cancer. Compared with its competitors, AdaJoint identifies a much stronger association between the gene CLPTM1L and pancreatic cancer risk (P = 6.0 × 10^-8), with the signal optimally captured by two correlated single-nucleotide polymorphisms (SNPs). Finally, we show AdaJoint to be a powerful tool for mapping cis-regulating methylation quantitative trait loci on normal breast tissues, and find many CpG sites whose methylation levels are jointly regulated by multiple SNPs nearby.
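
    The general idea described above (adaptively choosing the subset of markers with the strongest joint signal and then calibrating the resulting statistic) can be illustrated with a plain permutation-based sketch. This is not the AdaJoint implementation or its fast multiple-comparison adjustment; the statistic, subset sizes and synthetic data below are assumptions for illustration.

    import numpy as np

    def adaptive_multilocus_test(G, y, max_k=5, n_perm=1000, seed=0):
        """G: (n_samples, n_snps) genotype matrix; y: phenotype vector."""
        rng = np.random.default_rng(seed)

        def best_stat(y_vec):
            # Marginal association of each SNP with the (possibly permuted) trait.
            r = np.corrcoef(G.T, y_vec)[-1, :-1]
            order = np.argsort(np.abs(r))[::-1]
            best = -np.inf
            for k in range(1, max_k + 1):
                X = G[:, order[:k]] - G[:, order[:k]].mean(axis=0)
                beta, *_ = np.linalg.lstsq(X, y_vec - y_vec.mean(), rcond=None)
                fitted = X @ beta
                best = max(best, fitted @ fitted)   # explained sum of squares
            return best

        observed = best_stat(y)
        perm = np.array([best_stat(rng.permutation(y)) for _ in range(n_perm)])
        return (1 + np.sum(perm >= observed)) / (1 + n_perm)

    # Example with synthetic genotypes (placeholder data, signal at SNP 5).
    rng = np.random.default_rng(1)
    G = rng.integers(0, 3, size=(800, 40)).astype(float)
    y = 0.3 * G[:, 5] + rng.normal(size=800)
    print("gene-level P-value:", adaptive_multilocus_test(G, y, n_perm=500))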

  16. Scramjet test flow reconstruction for a large-scale expansion tube, Part 2: axisymmetric CFD analysis

    Science.gov (United States)

    Gildfind, D. E.; Jacobs, P. A.; Morgan, R. G.; Chan, W. Y. K.; Gollan, R. J.

    2017-11-01

    This paper presents the second part of a study aiming to accurately characterise a Mach 10 scramjet test flow generated using a large free-piston-driven expansion tube. Part 1 described the experimental set-up, the quasi-one-dimensional simulation of the full facility, and the hybrid analysis technique used to compute the nozzle exit test flow properties. The second stage of the hybrid analysis applies the computed 1-D shock tube flow history as an inflow to a high-fidelity two-dimensional-axisymmetric analysis of the acceleration tube. The acceleration tube exit flow history is then applied as an inflow to a further refined axisymmetric nozzle model, providing the final nozzle exit test flow properties and thereby completing the analysis. This paper presents the results of the axisymmetric analyses. These simulations are shown to closely reproduce experimentally measured shock speeds and acceleration tube static pressure histories, as well as nozzle centreline static and impact pressure histories. The hybrid scheme less successfully predicts the diameter of the core test flow; however, this property is readily measured through experimental pitot surveys. In combination, the full test flow history can be accurately determined.

  17. Determination of soil liquefaction characteristics by large-scale laboratory tests

    International Nuclear Information System (INIS)

    1975-05-01

    The testing program described in this report was carried out to study the liquefaction behavior of a clean, uniform, medium sand. Horizontal beds of this sand, 42 inches by 90 inches by 4 inches were prepared by pluviation with a special sand spreader, saturated, and tested in a shaking table system designed for this program, which applied a horizontal cyclic shear stress to the specimens. Specimen size was selected to reduce boundary effects as much as possible. Values of pore pressures and shear strains developed during the tests are presented for sand specimens at relative densities of 54, 68, 82, and 90 percent, and the results interpreted to determine the values of the stress ratio causing liquefaction at the various relative densities

  18. Liquid Methane Testing With a Large-Scale Spray Bar Thermodynamic Vent System

    Science.gov (United States)

    Hastings, L. J.; Bolshinskiy, L. G.; Hedayat, A.; Flachbart, R. H.; Sisco, J. D.; Schnell, A. R.

    2014-01-01

    NASA's Marshall Space Flight Center conducted liquid methane testing in November 2006 using the multipurpose hydrogen test bed outfitted with a spray bar thermodynamic vent system (TVS). The basic objective was to identify any unusual or unique thermodynamic characteristics associated with densified methane that should be considered in the design of space-based TVSs. Thirteen days of testing were performed with total tank heat loads ranging from 720 to 420 W at a fill level of approximately 90%. It was noted that as the fluid passed through the Joule-Thomson expansion, thermodynamic conditions consistent with the pervasive presence of metastability were indicated. This Technical Publication describes conditions that correspond with metastability and its detrimental effects on TVS performance. The observed conditions were primarily functions of methane densification and helium pressurization; therefore, assurance must be provided that metastable conditions have been circumvented in future applications of thermodynamic venting to in-space methane storage.

  19. Large-scale association study for structural soundness and leg locomotion traits in the pig

    Directory of Open Access Journals (Sweden)

    Serenius Timo

    2009-01-01

    Full Text Available Abstract Background: Identification and culling of replacement gilts with poor skeletal conformation and feet and leg (FL) unsoundness is an approach used to reduce sow culling and mortality rates in breeding stock. Few candidate genes related to soundness traits have been identified in the pig. Methods: In this study, 2066 commercial females were scored for 17 traits describing body conformation and FL structure, and were used for association analyses. Genotyping of 121 SNPs derived from 95 genes was implemented using Sequenom's MassARRAY system. Results: Based on the association results from single-trait and principal-component mixed linear model analyses and false discovery rate testing, APOE, BMP8, CALCR, COL1A2, COL9A1, DKFZ, FBN1, VDBP, ALOX5, OPG, OXTR and WNT16 were very highly significantly associated, and APOE, CALCR, COL1A2, GNRHR, IHH, MTHFR and WNT16 were highly significantly associated, with one or more of the conformation and FL traits analysed. A haplotype block containing CALCR and COL1A2 on SSC9 was detected, and the haplotype -ACGACC- was highly significantly associated. Conclusion: The present findings provide a comprehensive list of candidate genes for further use in fine mapping and biological functional analyses.

  20. Neutrino Physics from the Cosmic Microwave Background and Large Scale Structure

    International Nuclear Information System (INIS)

    Abazajian, K. N.; Bischoff, C.; Bock, J.; Carvalho, C. S.; Chiang, H. C.; Dawson, K. S.; Halverson, N. W.; Hubmayr, J.; Knox, L.; Kuo, C.-L.; Linder, E.; Lubin, P.; Smith, K. M.; Spergel, D.; Stompor, R.; Vieregg, A. G.; Wang, G.; Wu, W.; Yoon, K. W.; Zahn, O.

    2014-01-01

    This is a report on the status and prospects of the quantification of neutrino properties through the cosmological neutrino background for the Cosmic Frontier of the Division of Particles and Fields Community Summer Study long-term planning exercise. Experiments planned and underway are prepared to study the cosmological neutrino background in detail via its influence on distance-redshift relations and the growth of structure. The program for the next decade described in this document, including upcoming spectroscopic galaxy surveys eBOSS and DESI and a new Stage-IV CMB polarization experiment CMB-S4, will achieve σ(Σmν) = 16 meV and σ(N_eff) = 0.020. Such a mass measurement will produce a high significance detection of non-zero Σmν, whose lower bound derived from atmospheric and solar neutrino oscillation data is about 58 meV. If neutrinos have a minimal normal mass hierarchy, this measurement will definitively rule out the inverted neutrino mass hierarchy, shedding light on one of the most puzzling aspects of the Standard Model of particle physics - the origin of mass. This precise a measurement of N_eff will allow for high sensitivity to any light and dark degrees of freedom produced in the big bang and a precision test of the standard cosmological model prediction that N_eff = 3.046.

  1. Acquisition and preparation of specimens of rock for large-scale testing

    International Nuclear Information System (INIS)

    Watkins, D.J.

    1981-01-01

    The techniques used for acquisition and preparation of large specimens of rock for laboratory testing depend upon the location of the specimen, the type of rock and the equipment available at the sampling site. Examples are presented to illustrate sampling and preparation techniques used for two large cylindrical samples of granitic material, one pervasively fractured and one containing a single fracture

  2. Verification of the analytical fracture assessments methods by a large scale pressure vessel test

    Energy Technology Data Exchange (ETDEWEB)

    Keinanen, H; Oberg, T; Rintamaa, R; Wallin, K

    1988-12-31

    This document deals with the use of fracture mechanics for the assessment of reactor pressure vessels. Tests have been carried out to verify the analytical fracture assessment methods. The analysis focuses on flaw dimensions and the scatter band of material characteristics. Results are provided and compared with experimental ones. (TEC).

  3. Analysis of recorded earthquake response data at the Hualien large-scale seismic test site

    International Nuclear Information System (INIS)

    Hyun, C.H.; Tang, H.T.; Dermitzakis, S.; Esfandiari, S.

    1997-01-01

    A soil-structure interaction (SSI) experiment is being conducted in a seismically active region in Hualien, Taiwan. To obtain earthquake data for quantifying SSI effects and providing a basis to benchmark analysis methods, a 1/4-scale cylindrical concrete containment model, similar in shape to that of a nuclear power plant containment, was constructed in the field, where both the containment model and its surrounding soil, surface and sub-surface, are extensively instrumented to record earthquake data. Between September 1993 and May 1995, eight earthquakes with Richter magnitudes ranging from 4.2 to 6.2 were recorded. The authors focus on studying and analyzing the recorded data to provide information on the response characteristics of the Hualien soil-structure system, the SSI effects and the ground motion characteristics. An effort was also made to directly determine the site soil physical properties based on correlation analysis of the recorded data. No modeling simulations were attempted to analytically predict the SSI response of the soil and the structure. These will be the scope of a subsequent study.

  4. A new method of presentation the large-scale magnetic field structure on the Sun and solar corona

    Science.gov (United States)

    Ponyavin, D. I.

    1995-01-01

    The large-scale photospheric magnetic field measured at Stanford has been analyzed in terms of surface harmonics. Changes of the photospheric field that occur within a whole solar rotation period can be resolved by this analysis. For this purpose we used daily magnetograms of the line-of-sight magnetic field component observed from Earth over the solar disc, and estimated the period during which day-to-day full-disc magnetograms must be collected. An original algorithm was applied to resolve time variations of the spherical harmonics that reflect the time evolution of the large-scale magnetic field within a solar rotation period. This method of presenting the magnetic field can be useful when direct magnetograph observations are lacking, for example because of bad weather conditions. We used the calculated surface harmonics to reconstruct the large-scale magnetic field structure on the source surface near the Sun - the origin of the heliospheric current sheet and solar wind streams. The results have been compared with in situ spacecraft observations and geomagnetic activity. We show that the proposed technique can trace short-time variations of the heliospheric current sheet and short-lived solar wind streams. We also compared our results with those obtained traditionally from the potential-field approximation and extrapolation using synoptic charts as initial boundary conditions.
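
    A surface-harmonic decomposition of the kind described can be sketched as a least-squares fit of spherical harmonics to a map of the line-of-sight field. The grid, maximum degree and synthetic data below are placeholders, and the sketch omits the original algorithm's treatment of time variations within a rotation.

    import numpy as np
    from scipy.special import sph_harm

    lmax = 9
    nlat, nlon = 30, 72
    theta = np.linspace(0.05, np.pi - 0.05, nlat)        # colatitude
    phi = np.linspace(0.0, 2.0 * np.pi, nlon, endpoint=False)
    PHI, THETA = np.meshgrid(phi, theta)

    # Placeholder map of the line-of-sight field (replace with daily magnetograms).
    B_los = np.cos(THETA) + 0.1 * np.random.default_rng(1).normal(size=THETA.shape)

    # Design matrix of real surface harmonics up to degree lmax.
    cols, index = [], []
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(abs(m), l, PHI, THETA)          # scipy: (m, l, azimuth, colatitude)
            cols.append((Y.real if m >= 0 else Y.imag).ravel())
            index.append((l, m))
    A = np.column_stack(cols)

    coeffs, *_ = np.linalg.lstsq(A, B_los.ravel(), rcond=None)
    g = dict(zip(index, coeffs))
    print("dipole (l = 1) coefficients:",
          {k: round(float(v), 3) for k, v in g.items() if k[0] == 1})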

  5. Simulation of buoyancy induced gas mixing tests performed in a large scale containment facility using GOTHIC code

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Z.; Chin, Y.S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    This paper compares containment thermal-hydraulics simulations performed using GOTHIC against a past test set of large scale buoyancy induced helium-air-steam mixing experiments that had been performed at the AECL's Chalk River Laboratories. A number of typical post-accident containment phenomena, including thermal/gas stratification, natural convection, cool air entrainment, steam condensation on concrete walls and active local air cooler, were covered. The results provide useful insights into hydrogen gas mixing behaviour following a loss-of-coolant accident and demonstrate GOTHIC's capability in simulating these phenomena. (author)

  6. Simulation of buoyancy induced gas mixing tests performed in a large scale containment facility using GOTHIC code

    International Nuclear Information System (INIS)

    Liang, Z.; Chin, Y.S.

    2014-01-01

    This paper compares containment thermal-hydraulics simulations performed using GOTHIC against a past test set of large scale buoyancy induced helium-air-steam mixing experiments that had been performed at the AECL's Chalk River Laboratories. A number of typical post-accident containment phenomena, including thermal/gas stratification, natural convection, cool air entrainment, steam condensation on concrete walls and active local air cooler, were covered. The results provide useful insights into hydrogen gas mixing behaviour following a loss-of-coolant accident and demonstrate GOTHIC's capability in simulating these phenomena. (author)

  7. The development of a capability for aerodynamic testing of large-scale wing sections in a simulated natural rain environment

    Science.gov (United States)

    Bezos, Gaudy M.; Cambell, Bryan A.; Melson, W. Edward

    1989-01-01

    A research technique to obtain large-scale aerodynamic data in a simulated natural rain environment has been developed. A 10-ft chord NACA 64-210 wing section equipped with leading-edge and trailing-edge high-lift devices was tested as part of a program to determine the effect of highly concentrated, short-duration rainfall on airplane performance. Preliminary dry aerodynamic data are presented for the high-lift configuration at a velocity of 100 knots and an angle of attack of 18 deg. Also, data are presented on rainfield uniformity and rainfall concentration intensity levels obtained during the calibration of the rain simulation system.

  8. Proceedings of the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at Pollard Auditorium, Oak Ridge, Tennessee

    Energy Technology Data Exchange (ETDEWEB)

    Pugh, C.E.; Bass, B.R.; Keeney, J.A. [comps.] [Oak Ridge National Lab., TN (United States)

    1993-10-01

    This report contains 40 papers that were presented at the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at the Pollard Auditorium, Oak Ridge, Tennessee, during the week of October 26-29, 1992. The papers are printed in the order of their presentation in each session and describe recent large-scale fracture (brittle and/or ductile) experiments, analyses of these experiments, and comparisons between predictions and experimental results. The goal of the meeting was to allow international experts to examine the fracture behavior of various materials and structures under conditions relevant to nuclear reactor components and operating environments. The emphasis was on the ability of various fracture models and analysis methods to predict the wide range of experimental data now available. The individual papers have been cataloged separately.

  9. Large scale sodium interactions. Part 2. Preliminary test results for limestone concrete

    International Nuclear Information System (INIS)

    Smaardyk, J.E.; Sutherland, H.J.; King, D.L.; Dahlgren, D.A.

    1977-01-01

    Any sodium cooled reactor system must consider the interaction of hot sodium with cell liners, and given either a failed liner or a hypothetical core disruptive accident, the interaction of hot sodium with concrete. The data base available for safety assessments involving these interactions is limited, especially for the concrete and failed liner interactions. To better understand what happens when hot sodium comes in contact with concrete, a series of tests is being carried out to investigate sodium-concrete reactions under conditions which are similar to actual reactor accident conditions. Tests cover the cases of sodium spills on bare concrete and on cells with defective steel liners. Specific objectives have been to obtain a complete description of the sodium/concrete interaction including heat balance, gas evolution and flow, movement and heat generation of the reaction zone, reaction product formation, and the layering or movement of the products

  10. Paranormal psychic believers and skeptics: a large-scale test of the cognitive differences hypothesis.

    Science.gov (United States)

    Gray, Stephen J; Gallo, David A

    2016-02-01

    Belief in paranormal psychic phenomena is widespread in the United States, with over a third of the population believing in extrasensory perception (ESP). Why do some people believe, while others are skeptical? According to the cognitive differences hypothesis, individual differences in the way people process information about the world can contribute to the creation of psychic beliefs, such as differences in memory accuracy (e.g., selectively remembering a fortune teller's correct predictions) or analytical thinking (e.g., relying on intuition rather than scrutinizing evidence). While this hypothesis is prevalent in the literature, few have attempted to empirically test it. Here, we provided the most comprehensive test of the cognitive differences hypothesis to date. In 3 studies, we used online screening to recruit groups of strong believers and strong skeptics, matched on key demographics (age, sex, and years of education). These groups were then tested in laboratory and online settings using multiple cognitive tasks and other measures. Our cognitive testing showed that there were no consistent group differences on tasks of episodic memory distortion, autobiographical memory distortion, or working memory capacity, but skeptics consistently outperformed believers on several tasks tapping analytical or logical thinking as well as vocabulary. These findings demonstrate cognitive similarities and differences between these groups and suggest that differences in analytical thinking and conceptual knowledge might contribute to the development of psychic beliefs. We also found that psychic belief was associated with greater life satisfaction, demonstrating benefits associated with psychic beliefs and highlighting the role of both cognitive and noncognitive factors in understanding these individual differences.

  11. Large-Scale Liquid Hydrogen Testing of Variable Density Multilayer Insulation with a Foam Substrate

    Science.gov (United States)

    Martin, J. J.; Hastings, L.

    2001-01-01

    The multipurpose hydrogen test bed (MHTB), with an 18-cu m liquid hydrogen tank, was used to evaluate a combined foam/multilayer insulation (MLI) concept. The foam element (Isofoam SS-1171) insulates during ground hold/ascent flight, and allowed a dry nitrogen purge as opposed to the more complex/heavy helium purge subsystem normally required. The 45-layer MLI was designed for an on-orbit storage period of 45 days. Unique MLI features include a variable layer density, larger but fewer double-aluminized Mylar perforations for ascent-to-orbit venting, and a commercially established roll-wrap installation process that reduced assembly man-hours and resulted in a robust, virtually seamless MLI. Insulation performance was measured during three test series. The spray-on foam insulation (SOFI) successfully prevented purge gas liquefaction within the MLI and resulted in the expected ground hold heat leak of 63 W/sq m. The orbit hold tests resulted in heat leaks of 0.085 and 0.22 W/sq m with warm boundary temperatures of 164 and 305 K, respectively. Compared to the best previously measured performance with a traditional MLI system, a 41-percent heat leak reduction with 25 fewer MLI layers was achieved. The MHTB MLI heat leak is half that calculated for a constant layer density MLI.

  12. Two-dimensional simulation of the gravitational system dynamics and formation of the large-scale structure of the universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.; Kotok, E.V.; Novikov, I.D.; Polyudov, A.N.; Shandarin, S.F.; Sigov, Y.S.

    1980-01-01

    The results of a numerical experiment are given that describe the non-linear stages of the development of perturbations in gravitating matter density in the expanding Universe. This process simulates the formation of the large-scale structure of the Universe from an initially almost homogeneous medium. In the one- and two-dimensional cases of this numerical experiment, the evolution of a system of 4096 point masses interacting only gravitationally was studied with periodic boundary conditions (simulating infinite space). The initial conditions were chosen according to the theory of the evolution of small perturbations in the expanding Universe. The results of the numerical experiments are systematically compared with the approximate analytic theory. The calculations show that in the case of collisionless particles, as well as in the gas-dynamic case, a cellular structure appears at the non-linear stage for adiabatic perturbations. The greater part of the matter lies in thin layers that separate vast regions of low density. In a Robertson-Walker universe the cellular structure exists for a finite time and then fragments into a few compact objects. In the open Universe the cellular structure also exists if the amplitude of the initial perturbations is large enough, but its subsequent disruption is more difficult because of the more rapid expansion of the Universe: the large-scale structure is frozen. (author)

  13. Large-scale numerical simulations of star formation put to the test

    DEFF Research Database (Denmark)

    Frimann, Søren; Jørgensen, Jes Kristian; Haugbølle, Troels

    2016-01-01

    (SEDs), calculated from large-scale numerical simulations, to observational studies, thereby aiding in both the interpretation of the observations and in testing the fidelity of the simulations. Methods: The adaptive mesh refinement code, RAMSES, is used to simulate the evolution of a 5 pc × 5 pc × 5 pc ... to calculate evolutionary tracers Tbol and Lsmm/Lbol. It is shown that, while the observed distributions of the tracers are well matched by the simulation, they generally do a poor job of tracking the protostellar ages. Disks form early in the simulation, with 40% of the Class 0 protostars being encircled by one ...

  14. Development, installation and testing of a large-scale tidal current turbine

    Energy Technology Data Exchange (ETDEWEB)

    Thake, J.

    2005-10-15

    This report summarises the findings of the Seaflow project to investigate the feasibility of building and operating a commercial scale marine current horizontal axis tidal turbine and to evaluate the long-term economics of producing electricity using tidal turbines. Details are given of competitive tidal stream technologies and their commercial status, the selection of the site on the North Devon coast of the UK, and the evaluation of the turbine design, manufacture, testing, installation, commissioning, and maintenance of the turbine. The organisations working on the Seaflow project and cost estimations are discussed.

  15. Modelling and operation strategies of DLR's large scale thermocline test facility (TESIS)

    Science.gov (United States)

    Odenthal, Christian; Breidenbach, Nils; Bauer, Thomas

    2017-06-01

    In this work an overview of the TESIS:store thermocline test facility and its current construction status is given. Based on this, the TESIS:store facility, which uses a sensible solid filler material, is modelled with a fully transient model implemented in MATLAB®. Results on the impact of filler size and of operation strategies are presented. While low porosity and small particle diameters for the filler material are beneficial, the operation strategy is a key element with potential for optimization. It is shown that plant operators have to weigh utilization against exergetic efficiency. Different durations of the charging and discharging periods offer further potential for optimization.
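
    The kind of transient packed-bed model referred to above can be sketched with a one-dimensional two-phase (Schumann-type) energy balance: an advected fluid temperature coupled to a stationary solid temperature through a volumetric heat-transfer coefficient. The sketch below is not the TESIS MATLAB model; all material properties, dimensions and operating values are illustrative placeholders.

    import numpy as np

    def simulate_charge(hours=4.0, nz=200, L=10.0):
        dz = L / nz
        eps = 0.4                      # bed porosity
        rho_f, cp_f = 1800.0, 1500.0   # fluid density / heat capacity (molten salt)
        rho_s, cp_s = 2600.0, 900.0    # filler density / heat capacity
        u = 0.001                      # superficial fluid velocity, m/s
        h_v = 500.0                    # volumetric heat transfer coefficient, W/(m3 K)
        T_hot, T_cold = 560.0, 290.0   # inlet / initial temperature, deg C

        Tf = np.full(nz, T_cold)       # fluid temperature profile
        Ts = np.full(nz, T_cold)       # solid (filler) temperature profile
        dt = 0.5 * dz / (u / eps)      # CFL-limited explicit time step
        for _ in range(int(hours * 3600 / dt)):
            Tf_up = np.concatenate(([T_hot], Tf[:-1]))        # upwind neighbour
            adv = -(u / eps) * (Tf - Tf_up) / dz              # advection term
            exch_f = h_v * (Ts - Tf) / (eps * rho_f * cp_f)   # fluid-solid exchange
            exch_s = h_v * (Tf - Ts) / ((1.0 - eps) * rho_s * cp_s)
            Tf = Tf + dt * (adv + exch_f)
            Ts = Ts + dt * exch_s
        return Tf, Ts

    Tf, Ts = simulate_charge()
    print("outlet fluid temperature after charging (deg C):", round(float(Tf[-1]), 1))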

  16. Electrodynamic levitated train. Erlangen large-scale test plant is being converted to long stator technology

    Energy Technology Data Exchange (ETDEWEB)

    Muckelberg, E

    1976-10-01

    The development work for a future high-power fast train has been marked for years by the competition between two magnetic levitation systems, i.e., the electrodynamic levitation system (EDS) with superconducting magnets and the electromagnetic levitation system (EMS). The present study particularly deals with the EDS system. The vehicle is driven by a linear motor. In the EDS system, the levitation height is between 10 cm and 30 cm without any complicated control. The disadvantage of this system, however, is that a starting and landing device is needed, as a certain starting speed is required before the levitation process fully begins. The first levitation tests were possible on a circular course at the beginning of May 1976. A second test stand is currently being put into operation. The first results are reported. Finally, possible development trends are indicated. It seems possible that the final 'high-power fast train' project will be a combination of the EMS and EDS systems.

  17. Large-scale demonstration test plan for digface data acquisition system

    International Nuclear Information System (INIS)

    Roybal, L.G.; Svoboda, J.M.

    1994-11-01

    Digface characterization promotes the use of online site characterization and monitoring during waste retrieval efforts, a need that arises from safety and efficiency considerations during the cleanup of a complex waste site. Information concerning conditions at the active digface can be used by operators as a basis for adjusting retrieval activities to reduce safety risks and to promote an efficient transition between retrieval and downstream operations. Most importantly, workers are given advance warning of upcoming dangerous conditions. In addition, detailed knowledge of digface conditions provides a basis for selecting tools and methods that avoid contamination spread and work stoppages. In FY-94, work began in support of a large-scale demonstration coordinating the various facets of a prototype digface remediation operation, including characterization, contaminant suppression, and cold waste retrieval. This test plan describes the activities that will be performed during the winter of FY-95 that are necessary to assess the performance of the data acquisition and display system in its initial integration with hardware developed in the Cooperative Telerobotic Retrieval (CTR) program. The six specific objectives of the test are determining system electrical noise, establishing a dynamic background signature of the gantry crane and associated equipment, determining the resolution of the overall system by scanning over known objects, reporting the general functionality of the overall data acquisition system, evaluating the laser topographic functionality, and monitoring the temperature control features of the electronic package

  18. Large scale access tests and online interfaces to ATLAS conditions databases

    International Nuclear Information System (INIS)

    Amorim, A; Lopes, L; Pereira, P; Simoes, J; Soloviev, I; Burckhart, D; Schmitt, J V D; Caprini, M; Kolos, S

    2008-01-01

    The access of the ATLAS Trigger and Data Acquisition (TDAQ) system to the ATLAS Conditions Databases sets strong reliability and performance requirements on the database storage and access infrastructures. Several applications were developed to support the integration of Conditions database access with the online services in TDAQ, including the interface to the Information Services (IS) and to the TDAQ Configuration Databases. The information storage requirements were the motivation for the ONline ASynchronous Interface to COOL (ONASIC) from the Information Service (IS) to LCG/COOL databases. ONASIC avoids possible backpressure from Online Database servers by managing a local cache. In parallel, OKS2COOL was developed to store Configuration Databases into an Offline Database with a history record. The DBStressor application was developed to test and stress the access to the Conditions database using the LCG/COOL interface while operating in an integrated way as a TDAQ application. The performance scaling of simultaneous Conditions database read accesses was studied in the context of the ATLAS High Level Trigger large computing farms. A large set of tests was performed involving up to 1000 computing nodes that simultaneously accessed the LCG central database server infrastructure at CERN

  19. On Hierarchical Extensions of Large-Scale 4-regular Grid Network Structures

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Patel, A.; Knudsen, Thomas Phillip

    2004-01-01

    dependencies between the number of nodes and the distances in the structures. The perfect square mesh is introduced for hierarchies, and it is shown that applying ordered hierarchies in this way results in logarithmic dependencies between the number of nodes and the distances, resulting in better scaling...... structures. For example, in a mesh of 391876 nodes the average distance is reduced from 417.33 to 17.32 by adding hierarchical lines. This is gained by increasing the number of lines by 4.20% compared to the non-hierarchical structure. A similar hierarchical extension of the torus structure also results...
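
    The effect described above (a large drop in average distance for a modest number of added lines) can be illustrated on a small example. The construction below simply overlays a coarser grid of shortcut edges on a plain mesh; it is a simplified stand-in for the ordered hierarchies and perfect square mesh of the paper, and the sizes are kept small so the all-pairs computation stays cheap.

    import networkx as nx

    def mesh(n):
        return nx.grid_2d_graph(n, n)                 # plain grid (mesh) structure

    def hierarchical_mesh(n, step):
        G = mesh(n)
        # Overlay "hierarchical lines": express edges every `step` nodes.
        for i in range(0, n, step):
            for j in range(0, n - step, step):
                G.add_edge((i, j), (i, j + step))     # horizontal shortcut
                G.add_edge((j, i), (j + step, i))     # vertical shortcut
        return G

    n, step = 30, 5
    plain, hier = mesh(n), hierarchical_mesh(n, step)
    print("average distance, plain mesh    :",
          round(nx.average_shortest_path_length(plain), 2))
    print("average distance, with shortcuts:",
          round(nx.average_shortest_path_length(hier), 2))
    extra = hier.number_of_edges() - plain.number_of_edges()
    print("extra edges: %d (%.2f%% more)" % (extra, 100.0 * extra / plain.number_of_edges()))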

  20. Fast large-scale clustering of protein structures using Gauss integrals

    DEFF Research Database (Denmark)

    Harder, Tim; Borg, Mikael; Boomsma, Wouter

    2011-01-01

    trajectories. Results: We present Pleiades, a novel approach to clustering protein structures with a rigorous mathematical underpinning. The method approximates clustering based on the root mean square deviation by first mapping structures to Gauss integral vectors – which were introduced by Røgen and co-workers – and subsequently performing K-means clustering. Conclusions: Compared to current methods, Pleiades dramatically improves on the time needed to perform clustering, and can cluster a significantly larger number of structures, while providing state-of-the-art results. The number of low energy structures generated ...
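
    Assuming the Gauss integral descriptors have already been computed for each structure, the final clustering step described above reduces to ordinary K-means on fixed-length vectors, as in the sketch below. The placeholder array, vector length and cluster count are assumptions; computing the Gauss integral vectors themselves is outside the scope of this sketch.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    n_structures, n_features = 10000, 31      # placeholder: one vector per decoy/snapshot
    rng = np.random.default_rng(0)
    gauss_vectors = rng.normal(size=(n_structures, n_features))   # stand-in data

    X = StandardScaler().fit_transform(gauss_vectors)
    km = KMeans(n_clusters=100, n_init=3, random_state=0).fit(X)
    labels, centers = km.labels_, km.cluster_centers_

    # Representative structure of each cluster: the member closest to its centroid.
    reps = {c: int(np.flatnonzero(labels == c)[
                np.argmin(np.linalg.norm(X[labels == c] - centers[c], axis=1))])
            for c in range(km.n_clusters)}
    print("ten largest cluster sizes:", sorted(np.bincount(labels))[-10:])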

  1. Microfluidic very large scale integration (VLSI) modeling, simulation, testing, compilation and physical synthesis

    CERN Document Server

    Pop, Paul; Madsen, Jan

    2016-01-01

    This book presents the state-of-the-art techniques for the modeling, simulation, testing, compilation and physical synthesis of mVLSI biochips. The authors describe a top-down modeling and synthesis methodology for mVLSI biochips, inspired by microelectronics VLSI methodologies. They introduce a modeling framework for the components and the biochip architecture, and a high-level microfluidic protocol language. Coverage includes a topology graph-based model for the biochip architecture, and a sequencing graph to model the biochemical application, showing how the application model can be obtained from the protocol language. The techniques described facilitate programmability and automation, enabling developers in the emerging, large biochip market. · Presents the current models used for the research on compilation and synthesis techniques of mVLSI biochips in a tutorial fashion; · Includes a set of "benchmarks", which are presented in great detail and include the source code of several of the techniques p...

  2. A nonparametric empirical Bayes framework for large-scale multiple testing.

    Science.gov (United States)

    Martin, Ryan; Tokdar, Surya T

    2012-07-01

    We propose a flexible and identifiable version of the 2-groups model, motivated by hierarchical Bayes considerations, that features an empirical null and a semiparametric mixture model for the nonnull cases. We use a computationally efficient predictive recursion (PR) marginal likelihood procedure to estimate the model parameters, even the nonparametric mixing distribution. This leads to a nonparametric empirical Bayes testing procedure, which we call PRtest, based on thresholding the estimated local false discovery rates. Simulations and real data examples demonstrate that, compared to existing approaches, PRtest's careful handling of the nonnull density can give a much better fit in the tails of the mixture distribution which, in turn, can lead to more realistic conclusions.
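
    The final testing step described above, thresholding estimated local false discovery rates under a two-groups model, can be sketched as follows. For simplicity the sketch uses a theoretical N(0,1) null, a fixed null proportion and a kernel estimate of the mixture density; PRtest instead estimates an empirical null and the non-null component nonparametrically by predictive recursion.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    z = np.concatenate([rng.normal(0.0, 1.0, 2000),    # null test statistics
                        rng.normal(3.0, 1.0, 300)])    # non-null (placeholder data)

    pi0 = 0.85                                  # assumed proportion of true nulls
    f = stats.gaussian_kde(z)                   # estimate of the mixture density
    f0 = stats.norm(0.0, 1.0).pdf               # theoretical null density

    lfdr = np.clip(pi0 * f0(z) / f(z), 0.0, 1.0)   # local false discovery rates
    discoveries = np.flatnonzero(lfdr < 0.2)
    print(f"{discoveries.size} of {z.size} cases flagged at lfdr < 0.2")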

  3. Fabrication and testing of gas filled targets for large scale plasma experiments on Nova

    International Nuclear Information System (INIS)

    Stone, G.F.; Spragge, M.; Wallace, R.J.; Rivers, C.J.

    1995-01-01

    An experimental campaign on the Nova laser was started in July 1993 to study one set of target conditions for the point design of the National Ignition Facility (NIF). The targets were specified to investigate the current NIF target conditions--a plasma of ∼3 keV electron temperature and an electron density of ∼1.0E+21 cm-3. A gas cell target design was chosen to confine a gas of ∼0.01 cm3 in volume at ∼1 atmosphere. This paper will describe the major steps and processes necessary in the fabrication, testing and delivery of these targets for shots on the Nova Laser at LLNL

  4. On Hierarchical Extensions of Large-Scale 4-regular Grid Network Structures

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Patel, A.; Knudsen, Thomas Phillip

    It is studied how the introduction of ordered hierarchies in 4-regular grid network structures decreases distances remarkably, while at the same time allowing for simple topological routing schemes. Both meshes and tori are considered; in both cases non-hierarchical structures have power law depen...

  5. GRASP92: a package for large-scale relativistic atomic structure calculations

    Science.gov (United States)

    Parpia, F. A.; Froese Fischer, C.; Grant, I. P.

    2006-12-01

    Program summaryTitle of program: GRASP92 Catalogue identifier: ADCU_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADCU_v1_1 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: no Programming language used: Fortran Computer: IBM POWERstation 320H Operating system: IBM AIX 3.2.5+ RAM: 64M words No. of lines in distributed program, including test data, etc.: 65 224 No of bytes in distributed program, including test data, etc.: 409 198 Distribution format: tar.gz Catalogue identifier of previous version: ADCU_v1_0 Journal reference of previous version: Comput. Phys. Comm. 94 (1996) 249 Does the new version supersede the previous version?: Yes Nature of problem: Prediction of atomic spectra—atomic energy levels, oscillator strengths, and radiative decay rates—using a 'fully relativistic' approach. Solution method: Atomic orbitals are assumed to be four-component spinor eigenstates of the angular momentum operator, j=l+s, and the parity operator Π=βπ. Configuration state functions (CSFs) are linear combinations of Slater determinants of atomic orbitals, and are simultaneous eigenfunctions of the atomic electronic angular momentum operator, J, and the atomic parity operator, P. Lists of CSFs are either explicitly prescribed by the user or generated from a set of reference CSFs, a set of subshells, and rules for deriving other CSFs from these. Approximate atomic state functions (ASFs) are linear combinations of CSFs. A variational functional may be constructed by combining expressions for the energies of one or more ASFs. Average level (AL) functionals are weighted sums of energies of all possible ASFs that may be constructed from a set of CSFs; the number of ASFs is then the same as the number, n, of CSFs. Optimal level (OL) functionals are weighted sums of energies of some subset of ASFs; the GRASP92 package is optimized for this latter class of functionals. The composition of an ASF in terms

  6. Performance on large-scale science tests: Item attributes that may impact achievement scores

    Science.gov (United States)

    Gordon, Janet Victoria

    , characteristics of test items themselves and/or opportunities to learn. Suggestions for future research are made.

  7. Dynamic Arrest in Charged Colloidal Systems Exhibiting Large-Scale Structural Heterogeneities

    International Nuclear Information System (INIS)

    Haro-Perez, C.; Callejas-Fernandez, J.; Hidalgo-Alvarez, R.; Rojas-Ochoa, L. F.; Castaneda-Priego, R.; Quesada-Perez, M.; Trappe, V.

    2009-01-01

    Suspensions of charged liposomes are found to exhibit typical features of strongly repulsive fluid systems at short length scales, while exhibiting structural heterogeneities at larger length scales that are characteristic of attractive systems. We model the static structure factor of these systems using effective pair interaction potentials composed of a long-range attraction and a shorter range repulsion. Our modeling of the static structure yields conditions for dynamically arrested states at larger volume fractions, which we find to agree with the experimentally observed dynamics

  8. New Insights about Enzyme Evolution from Large Scale Studies of Sequence and Structure Relationships*

    Science.gov (United States)

    Brown, Shoshana D.; Babbitt, Patricia C.

    2014-01-01

    Understanding how enzymes have evolved offers clues about their structure-function relationships and mechanisms. Here, we describe evolution of functionally diverse enzyme superfamilies, each representing a large set of sequences that evolved from a common ancestor and that retain conserved features of their structures and active sites. Using several examples, we describe the different structural strategies nature has used to evolve new reaction and substrate specificities in each unique superfamily. The results provide insight about enzyme evolution that is not easily obtained from studies of one or only a few enzymes. PMID:25210038

  9. New insights about enzyme evolution from large scale studies of sequence and structure relationships.

    Science.gov (United States)

    Brown, Shoshana D; Babbitt, Patricia C

    2014-10-31

    Understanding how enzymes have evolved offers clues about their structure-function relationships and mechanisms. Here, we describe evolution of functionally diverse enzyme superfamilies, each representing a large set of sequences that evolved from a common ancestor and that retain conserved features of their structures and active sites. Using several examples, we describe the different structural strategies nature has used to evolve new reaction and substrate specificities in each unique superfamily. The results provide insight about enzyme evolution that is not easily obtained from studies of one or only a few enzymes. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.

  10. Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods

    KAUST Repository

    Wang, Cheng

    2018-01-01

    Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chromosome

  11. New insights about enzyme evolution from large scale studies of sequence and structure relationships

    OpenAIRE

    Babbitt, Patricia; Brown, SD; Babbitt, PC

    2014-01-01

    © 2014 by The American Society for Biochemistry and Molecular Biology, Inc. Understanding how enzymes have evolved offers clues about their structure-function relationships and mechanisms. Here, we describe evolution of functionally diverse enzyme superfami

  12. Deep learning-based subdivision approach for large scale macromolecules structure recovery from electron cryo tomograms.

    Science.gov (United States)

    Xu, Min; Chai, Xiaoqi; Muthakana, Hariank; Liang, Xiaodan; Yang, Ge; Zeev-Ben-Mordehai, Tzviya; Xing, Eric P

    2017-07-15

    Cellular Electron CryoTomography (CECT) enables 3D visualization of cellular organization at near-native state and in sub-molecular resolution, making it a powerful tool for analyzing structures of macromolecular complexes and their spatial organizations inside single cells. However, the high degree of structural complexity together with practical imaging limitations makes the systematic de novo discovery of structures within cells challenging. It would likely require averaging and classifying millions of subtomograms potentially containing hundreds of highly heterogeneous structural classes. Although it is no longer difficult to acquire CECT data containing such amounts of subtomograms due to advances in data acquisition automation, existing computational approaches have very limited scalability or discrimination ability, making them incapable of processing such amounts of data. To complement existing approaches, in this article we propose a new approach for subdividing subtomograms into smaller but relatively homogeneous subsets. The structures in these subsets can then be separately recovered using existing computation-intensive methods. Our approach is based on supervised structural feature extraction using deep learning, in combination with unsupervised clustering and reference-free classification. Our experiments show that, compared with existing unsupervised rotation-invariant feature and pose-normalization based approaches, our new approach achieves significant improvements in both discrimination ability and scalability. More importantly, our new approach is able to discover new structural classes and recover structures that do not exist in training data. Source code freely available at http://www.cs.cmu.edu/∼mxu1/software . mxu1@cs.cmu.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
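
    The overall pipeline described above, supervised feature extraction followed by unsupervised grouping into smaller, more homogeneous subsets, can be sketched as below. The network shape, feature dimension, subset count and random volumes are placeholders; this is not the authors' architecture, training procedure or released code.

    import torch
    import torch.nn as nn
    from sklearn.cluster import KMeans

    class FeatureNet(nn.Module):
        """Tiny 3-D CNN used only as a feature extractor (weights assumed trained)."""
        def __init__(self, n_features=128):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                nn.Linear(32, n_features))

        def forward(self, x):
            return self.body(x)

    net = FeatureNet().eval()                          # load trained weights in practice
    subtomograms = torch.randn(256, 1, 32, 32, 32)     # placeholder 32^3 volumes
    with torch.no_grad():
        features = net(subtomograms).numpy()

    # Subdivide the set into smaller, relatively homogeneous subsets; each subset
    # can then be passed to existing (computationally intensive) recovery methods.
    subset = KMeans(n_clusters=8, n_init=5, random_state=0).fit_predict(features)
    print("subset sizes:", [int((subset == k).sum()) for k in range(8)])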

  13. Socio-Cognitive Phenotypes Differentially Modulate Large-Scale Structural Covariance Networks.

    Science.gov (United States)

    Valk, Sofie L; Bernhardt, Boris C; Böckler, Anne; Trautwein, Fynn-Mathis; Kanske, Philipp; Singer, Tania

    2017-02-01

    Functional neuroimaging studies have suggested the existence of 2 largely distinct social cognition networks, one for theory of mind (taking others' cognitive perspective) and another for empathy (sharing others' affective states). To address whether these networks can also be dissociated at the level of brain structure, we combined behavioral phenotyping across multiple socio-cognitive tasks with 3-Tesla MRI cortical thickness and structural covariance analysis in 270 healthy adults, recruited across 2 sites. Regional thickness mapping only provided partial support for divergent substrates, highlighting that individual differences in empathy relate to left insular-opercular thickness while no correlation between thickness and mentalizing scores was found. Conversely, structural covariance analysis showed clearly divergent network modulations by socio-cognitive and -affective phenotypes. Specifically, individual differences in theory of mind related to structural integration between temporo-parietal and dorsomedial prefrontal regions while empathy modulated the strength of dorsal anterior insula networks. Findings were robust across both recruitment sites, suggesting generalizability. At the level of structural network embedding, our study provides a double dissociation between empathy and mentalizing. Moreover, our findings suggest that structural substrates of higher-order social cognition are reflected in interregional networks rather than in the local anatomical make-up of specific regions per se. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU

    Science.gov (United States)

    Lara, A.; Niembro, T.

    2017-12-01

    We introduce the seesaw space, an orthonormal space formed by the local and the global fluctuations of any of the four basic solar parameters: velocity, density, magnetic field and temperature, at any heliospheric distance. The fluctuations compare the standard deviation of a three-hour moving average against the running average of the parameter over a month (considered as the local fluctuations) and over a year (the global fluctuations). We created this new vector space to identify the arrival of transients at any spacecraft without the need of an observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms and fitted piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other is due to the ambient solar wind. The norm values at which the piecewise functions change vary in terms of the solar cycle. We compared the seesaw norms of each of the basic parameters for the arrival of coronal mass ejections, co-rotating interaction regions and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We also present general comparisons of the norms during the two maxima and the minimum of the solar cycle, and the differences of the norms due to large-scale structures in each period.
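
    A minimal Python/pandas sketch of the kind of construction described above — a "local" and a "global" fluctuation coordinate built from moving averages, combined into a norm — is given below. The synthetic series, the window lengths, and the exact definition of the fluctuations are one possible reading of the abstract, not the authors' implementation.

        import numpy as np
        import pandas as pd

        # Synthetic 1-minute solar wind speed series standing in for WIND data.
        rng = np.random.default_rng(1)
        t = pd.date_range("2005-01-01", periods=60 * 24 * 400, freq="min")  # ~400 days
        v = pd.Series(400 + 50 * np.sin(np.arange(t.size) / 5e4)
                      + rng.normal(0, 20, t.size), index=t)

        smooth = v.rolling("3h").mean()        # 3-hour moving average
        monthly = v.rolling("30D").mean()      # ~monthly running average
        yearly = v.rolling("365D").mean()      # ~yearly running average

        # Fluctuation coordinates: scatter of the smoothed signal about each reference.
        local_fluct = (smooth - monthly).rolling("3h").std()
        global_fluct = (smooth - yearly).rolling("3h").std()

        # Seesaw norm in the (local, global) plane; large values flag transients.
        seesaw_norm = np.sqrt(local_fluct ** 2 + global_fluct ** 2)
        print(seesaw_norm.dropna().describe())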

  15. Large-scale nuclear structure calculations for spin-dependent WIMP scattering with chiral effective field theory currents

    OpenAIRE

    Klos, P.; Menéndez, J.; Gazit, D.; Schwenk, A.

    2013-01-01

    We perform state-of-the-art large-scale shell-model calculations of the structure factors for elastic spin-dependent WIMP scattering off 129,131Xe, 127I, 73Ge, 19F, 23Na, 27Al, and 29Si. This comprehensive survey covers the non-zero-spin nuclei relevant to direct dark matter detection. We include a pedagogical presentation of the formalism necessary to describe elastic and inelastic WIMP-nucleus scattering. The valence spaces and nuclear interactions employed have been previously used in nucl...

  16. Developing eThread Pipeline Using SAGA-Pilot Abstraction for Large-Scale Structural Bioinformatics

    Directory of Open Access Journals (Sweden)

    Anjani Ragothaman

    2014-01-01

    Full Text Available While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because of predicted structural information that could uncover the underlying function. However, threading tools are generally compute-intensive and the number of protein sequences from even small genomes such as prokaryotes is large, typically many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage its utility, we have developed a pipeline for eThread—a meta-threading protein structure modeling tool—that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present runtime analysis to characterize the computational complexity of eThread and the EC2 infrastructure. Based on these results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly for small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure.
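
    The pilot-based, task-level parallelism described above can be illustrated with a generic Python sketch: each sequence becomes an independent task handed to a pool of workers, which absorbs the large per-task runtime variation. This is not SAGA-Pilot or the eThread pipeline itself; thread_one below is a simulated placeholder for a real threading job.

        import concurrent.futures as cf
        import random
        import time

        def thread_one(seq_id):
            # Stand-in for one compute-intensive threading job (the real pipeline
            # would invoke eThread here); runtimes vary widely between tasks.
            t0 = time.time()
            time.sleep(random.uniform(0.01, 0.05))
            return seq_id, time.time() - t0

        def run_pipeline(seq_ids, max_workers=8):
            # A pool of workers plays the role of a pilot job's slots: tasks are
            # pulled as workers free up, keeping the resources busy.
            with cf.ProcessPoolExecutor(max_workers=max_workers) as pool:
                return list(pool.map(thread_one, seq_ids))

        if __name__ == "__main__":
            results = run_pipeline([f"protein_{i:04d}" for i in range(100)])
            print(len(results), "tasks completed")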

  17. Fiber Optic Rosette Strain Gauge Development and Application on a Large-Scale Composite Structure

    Science.gov (United States)

    Moore, Jason P.; Przekop, Adam; Juarez, Peter D.; Roth, Mark C.

    2015-01-01

    A detailed description of the construction, application, and measurement of 196 FO rosette strain gauges that measured multi-axis strain across the outside upper surface of the forward bulkhead component of a multibay composite fuselage test article is presented. A background of the FO strain gauge and the FO measurement system as utilized in this application is given and results for the higher load cases of the testing sequence are shown.

  18. Applying 4-regular grid structures in large-scale access networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Knudsen, Thomas P.; Patel, Ahmed

    2006-01-01

    4-Regular grid structures have been used in multiprocessor systems for decades due to a number of nice properties with regard to routing, protection, and restoration, together with a straightforward planar layout. These qualities are to an increasing extent demanded also in large-scale access... networks, but concerning protection and restoration these demands have been met only to a limited extent by the commonly used ring and tree structures. To deal with the fact that classical 4-regular grid structures are not directly applicable in such networks, this paper proposes a number of extensions... concerning restoration, protection, scalability, embeddability, flexibility, and cost. The extensions are presented as a tool case, which can be used for implementing semi-automatic and in the longer term fully automatic network planning tools...
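
    For readers unfamiliar with the structure in question, the snippet below builds a small 4-regular grid (a 2D lattice with wrap-around links) with networkx and checks the connectivity properties that make such topologies attractive for protection and restoration. It is an illustration of the basic structure only, not of the paper's proposed extensions.

        import networkx as nx

        # A 4-regular grid: a 2D lattice with periodic (torus) links,
        # so every node has exactly four neighbours.
        G = nx.grid_2d_graph(8, 8, periodic=True)
        assert all(d == 4 for _, d in G.degree())

        # Connectivity 4 means any three simultaneous node failures still leave the
        # network connected, and node-disjoint paths are available between node pairs —
        # the basis for protection and restoration schemes.
        print("node connectivity:", nx.node_connectivity(G))
        print("edge connectivity:", nx.edge_connectivity(G))
        print("disjoint paths (0,0)-(4,4):", nx.node_connectivity(G, (0, 0), (4, 4)))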

  19. Design of a Generic and Flexible Data Structure for Efficient Formulation of Large Scale Network Problems

    DEFF Research Database (Denmark)

    Quaglia, Alberto; Sarup, Bent; Sin, Gürkan

    2013-01-01

    The formulation of Enterprise-Wide Optimization (EWO) problems as mixed integer nonlinear programming requires collecting, consolidating and systematizing large amounts of data, coming from different sources and specific to different disciplines. In this manuscript, a generic and flexible data structure for efficient formulation of enterprise-wide optimization problems is presented. Through the integration of the described data structure in our synthesis and design framework, the problem formulation workflow is automated in a software tool, reducing the time and resources needed to formulate large problems, while ensuring at the same time data consistency and quality at the application stage.

  20. Large scale identification and categorization of protein sequences using structured logistic regression

    DEFF Research Database (Denmark)

    Pedersen, Bjørn Panella; Ifrim, Georgiana; Liboriussen, Poul

    2014-01-01

    Background: Structured Logistic Regression (SLR) is a newly developed machine learning tool first proposed in the context of text categorization. Current availability of extensive protein sequence databases calls for an automated method to reliably classify sequences, and SLR seems well... problem. Results: Using SLR, we have built classifiers to identify and automatically categorize P-type ATPases into one of 11 pre-defined classes. The SLR classifiers are compared to a Hidden Markov Model approach and shown to be highly accurate and scalable. Representing the bulk of currently known... for further biochemical characterization and structural analysis.
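
    As a loose analogy to the classification task described above (not the SLR method itself, which adds structure over the feature space), the scikit-learn sketch below classifies toy protein sequences from character 3-gram counts with plain logistic regression; the sequences and class labels are invented for illustration.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Toy sequences standing in for members of two P-type ATPase classes.
        sequences = ["MGDKKEDDLLKA", "MGDKREDELLRA", "MSTEEVLKHHHH", "MSTEDVLKHHHK"]
        labels = ["class_IIA", "class_IIA", "class_IB", "class_IB"]

        # Character 3-grams act as sequence features; logistic regression classifies.
        model = make_pipeline(
            CountVectorizer(analyzer="char", ngram_range=(3, 3), lowercase=False),
            LogisticRegression(max_iter=1000),
        )
        model.fit(sequences, labels)
        print(model.predict(["MGDKKEDELLKA"]))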

  1. An improved method to characterise the modulation of small-scale turbulence by large-scale structures

    Science.gov (United States)

    Agostini, Lionel; Leschziner, Michael; Gaitonde, Datta

    2015-11-01

    A key aspect of turbulent boundary layer dynamics is "modulation," which refers to the degree to which the intensity of coherent large-scale structures (LS) causes an amplification or attenuation of the intensity of the small-scale structures (SS) through large-scale linkage. In order to identify the variation of the amplitude of the SS motion, the envelope of the fluctuations needs to be determined. Mathis et al. (2009) proposed to define the latter by low-pass filtering the modulus of the analytic signal built from the Hilbert transform of the SS signal. The validity of this definition, as a basis for quantifying the modulated SS signal, is re-examined on the basis of DNS data for a channel flow. The analysis shows that the modulus of the analytic signal is very sensitive to the skewness of its PDF, which depends, in turn, on the sign of the LS fluctuation and thus on whether these fluctuations are associated with sweeps or ejections. The conclusion is that generating an envelope by means of a low-pass filtering step leads to a significant loss of information associated with the effects of the local skewness of the PDF of the SS on the modulation process. An improved Hilbert-transform-based method is proposed to characterize the modulation of SS turbulence by LS structures.
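
    The envelope construction that the abstract re-examines — the modulus of the analytic signal of the small-scale component, optionally low-pass filtered — can be reproduced in a few lines with SciPy. The synthetic signals, cut-off frequency and filter order below are illustrative choices, not those of the cited studies.

        import numpy as np
        from scipy.signal import hilbert, butter, filtfilt

        fs = 1000.0
        t = np.arange(0, 10, 1 / fs)

        # Synthetic decomposition: a large-scale (LS) fluctuation and a small-scale (SS)
        # signal whose amplitude is modulated by the LS signal.
        ls = np.sin(2 * np.pi * 0.5 * t)
        ss = (1 + 0.5 * ls) * np.sin(2 * np.pi * 50 * t)

        # Envelope of SS: modulus of the analytic signal (Hilbert transform)...
        envelope = np.abs(hilbert(ss))

        # ...followed by the low-pass filtering step whose information loss the
        # abstract discusses (2 Hz cut-off, 4th-order Butterworth, zero-phase).
        b, a = butter(4, 2.0 / (fs / 2))
        envelope_lp = filtfilt(b, a, envelope)

        # Degree of modulation: correlation between the LS signal and the SS envelope.
        print(np.corrcoef(ls, envelope_lp)[0, 1])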

  2. High spatial resolution measurements of large-scale three-dimensional structures in a turbulent boundary layer

    Science.gov (United States)

    Atkinson, Callum; Buchmann, Nicolas; Kuehn, Matthias; Soria, Julio

    2011-11-01

    Large-scale three-dimensional (3D) structures in a turbulent boundary layer at Reθ = 2000 are examined via the streamwise extrapolation of time-resolved stereo particle image velocimetry (SPIV) measurements in a wall-normal spanwise plane using Taylor's hypothesis. Two overlapping SPIV systems are used to provide a field of view similar to that of direct numerical simulations (DNS), on the order of 50δ × 1.5δ × 3.0δ in the streamwise, wall-normal and spanwise directions, respectively, with an interrogation window size of 40+ × 20+ × 60+ wall units. Velocity power spectra are compared with DNS to examine the effective resolution of these measurements and two-point correlations are performed to investigate the integral length scales associated with coherent velocity and vorticity fluctuations. Individual coherent structures are detected to provide statistics on the 3D size, spacing, and angular orientation of large-scale structures, as well as their contribution to the total turbulent kinetic energy and Reynolds shear stress. The support of the ARC through Discovery (and LIEF) grants is gratefully acknowledged.

  3. Screening and large-scale expression of membrane proteins in mammalian cells for structural studies

    OpenAIRE

    Goehring, April; Lee, Chia-Hsueh; Wang, Kevin H.; Michel, Jennifer Carlisle; Claxton, Derek P.; Baconguis, Isabelle; Althoff, Thorsten; Fischer, Suzanne; Garcia, K. Christopher; Gouaux, Eric

    2014-01-01

    Structural, biochemical and biophysical studies of eukaryotic membrane proteins are often hampered by difficulties in over-expression of the candidate molecule. Baculovirus transduction of mammalian cells (BacMam), although a powerful method to heterologously express membrane proteins, can be cumbersome for screening and expression of multiple constructs. We therefore developed plasmid Eric Gouaux (pEG) BacMam, a vector optimized for use in screening assays, as well as for efficient productio...

  4. Mummy Lake: An unroofed ceremonial structure within a large-scale ritual landscape

    Science.gov (United States)

    Benson, Larry V.; Griffin, Eleanor R.; Stein, J.R.; Friedman, R. A.; Andrae, S. W.

    2014-01-01

    The structure at Mesa Verde National Park known historically as Mummy Lake and more recently as Far View Reservoir is not part of a water collection, impoundment, or redistribution system. We offer an alternative explanation for the function of Mummy Lake. We suggest that it is an unroofed ceremonial structure, and that it serves as an essential component of a Chacoan ritual landscape. A wide constructed avenue articulates Mummy Lake with Far View House and Pipe Shrine House. The avenue continues southward for approximately 6 km where it apparently divides connecting with Spruce Tree House and Sun Temple/Cliff Palace. The avenue has previously been interpreted as an irrigation ditch fed by water impounded at Mummy Lake; however, it conforms in every respect to alignments described as Chacoan roads. Tree-ring dates indicate that the construction of Spruce Tree House and Cliff Palace began about A.D. 1225, roughly coincident with the abandonment of the Far View community. This pattern of periodically relocating the focus of an Anasazi community by retiring existing ritual structures and linking them to newly constructed facilities by means of broad avenues was first documented by Fowler and Stein (1992) in Manuelito Canyon, New Mexico. Periods of intense drought appear to have contributed to the relocation of prehistoric Native Americans from the Far View group to Cliff Palace/Spruce Tree House in the mid-13th century and eventually to the abandonment of all Anasazi communities in southwestern Colorado in the late-13th century.

  5. Macro optical projection tomography for large scale 3D imaging of plant structures and gene activity.

    Science.gov (United States)

    Lee, Karen J I; Calder, Grant M; Hindle, Christopher R; Newman, Jacob L; Robinson, Simon N; Avondo, Jerome J H Y; Coen, Enrico S

    2017-01-01

    Optical projection tomography (OPT) is a well-established method for visualising gene activity in plants and animals. However, a limitation of conventional OPT is that the specimen upper size limit precludes its application to larger structures. To address this problem we constructed a macro version called Macro OPT (M-OPT). We apply M-OPT to 3D live imaging of gene activity in growing whole plants and to visualise structural morphology in large optically cleared plant and insect specimens up to 60 mm tall and 45 mm deep. We also show how M-OPT can be used to image gene expression domains in 3D within fixed tissue and to visualise gene activity in 3D in clones of growing young whole Arabidopsis plants. A further application of M-OPT is to visualise plant-insect interactions. Thus M-OPT provides an effective 3D imaging platform that allows the study of gene activity, internal plant structures and plant-insect interactions at a macroscopic scale. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  6. Towards very large scale laboratory simulation of structure-foundation-soil interaction (SFSI) problems

    OpenAIRE

    Taylor, Colin A.; Crewe, Adam J.; Mylonakis, George

    2016-01-01

    We are at the maturity convergence point of a set of actuation, control, instrumentation and data analysis technologies that make it feasible to construct laboratory experimental rigs that will allow us to address key controlling uncertainties in SFSI assessment and design, which can only be addressed by testing at, or near to, prototype scale. This paper will explore the process of innovation that must be established in order to integrate these enabling technologies and thereby create novel...

  7. Searching for filaments and large-scale structure around DAFT/FADA clusters

    Science.gov (United States)

    Durret, F.; Márquez, I.; Acebrón, A.; Adami, C.; Cabrera-Lavers, A.; Capelato, H.; Martinet, N.; Sarron, F.; Ulmer, M. P.

    2016-04-01

    Context. Clusters of galaxies are located at the intersection of cosmic filaments and are still accreting galaxies and groups along these preferential directions. However, because of their relatively low contrast on the sky, filaments are difficult to detect (unless a large amount of spectroscopic data are available), and unambiguous detections have been limited until now to relatively low redshifts (zDAFT/FADA survey for which we had deep wide field photometric data. For each cluster, based on a colour-magnitude diagram, we selected galaxies that were likely to belong to the red sequence, and hence to be at the cluster redshift, and built density maps. By computing the background for each of these maps and drawing 3σ contours, we estimated the elongations of the structures detected in this way. Whenever possible, we identified the other structures detected on the density maps with clusters listed in NED. Results: We find clear elongations in twelve clusters out of thirty, with sizes that can reach up to 7.6 Mpc. Eleven other clusters have neighbouring structures, but the zones linking them are not detected in the density maps at a 3σ level. Three clusters show no extended structure and no neighbours, and four clusters are of too low contrast to be clearly visible on our density maps. Conclusions: The simple method we have applied appears to work well to show the existence of filaments and/or extensions around a number of clusters in the redshift range 0.4
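
    The density-map step described above — grid the red-sequence galaxy positions, smooth, estimate the background, and keep pixels above 3σ — can be sketched with NumPy/SciPy as follows; the synthetic positions, grid size and smoothing length are arbitrary illustrative choices.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Synthetic sky positions (in arbitrary units) of red-sequence-selected galaxies:
        # a uniform background plus an elongated overdensity (cluster + filament).
        rng = np.random.default_rng(2)
        background = rng.uniform(0, 1, size=(2000, 2))
        clump = rng.normal([0.5, 0.5], [0.12, 0.03], size=(400, 2))
        positions = np.vstack([background, clump])

        # Galaxy density map on a regular grid, smoothed to suppress shot noise.
        H, xedges, yedges = np.histogram2d(positions[:, 0], positions[:, 1],
                                           bins=100, range=[[0, 1], [0, 1]])
        density = gaussian_filter(H, sigma=2)

        # Background level and scatter from the map itself; structures are the pixels
        # above the 3-sigma threshold, whose elongation can then be measured.
        mean, sigma = density.mean(), density.std()
        structure = density > mean + 3 * sigma
        print("pixels above 3 sigma:", int(structure.sum()))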

  8. Outdoor thermal monitoring of large scale structures by infrared thermography integrated in an ICT based architecture

    Science.gov (United States)

    Dumoulin, Jean; Crinière, Antoine; Averty, Rodolphe

    2015-04-01

    An infrared system has been developed to monitor transport infrastructures in a standalone configuration. Results obtained on bridges open to traffic allow the inner structure of the decks to be retrieved. To complete this study, experiments were carried out over several months to monitor two reinforced concrete beams, each 16 m long and weighing 21 t. Detection of a damaged area on one of the two beams was achieved by a Pulse Phase Thermography approach. Finally, conclusions on the robustness of the system are proposed and perspectives are presented.

  9. LBB assessment on ferrite piping structure of large-scale FBR

    OpenAIRE

    兪 淵植

    2002-01-01

    These days, interest in LBB (Leak Before Break) design is rising from the viewpoint of cost reduction and structural integrity for the commercialization of FBR plants. LBB design enables plants to be shut down safely before unstable fracture occurs, by detecting leak rates even if a crack initiates and penetrates the wall thickness. It is necessary to assess crack growth and penetration behavior considering in-service conditions under operating temperature, leak re...

  10. Large-Scale, Exhaustive Lattice-Based Structural Auditing of SNOMED CT

    Science.gov (United States)

    Zhang, Guo-Qiang

    One criterion for the well-formedness of ontologies is that their hierarchical structure form a lattice. Formal Concept Analysis (FCA) has been used as a technique for assessing the quality of ontologies, but is not scalable to large ontologies such as SNOMED CT. We developed a methodology called Lattice-based Structural Auditing (LaSA), for auditing biomedical ontologies, implemented through automated SPARQL queries, in order to exhaustively identify all non-lattice pairs in SNOMED CT. The percentage of non-lattice pairs ranges from 0 to 1.66 among the 19 SNOMED CT hierarchies. Preliminary manual inspection of a limited portion of the 518K non-lattice pairs, among over 34 million candidate pairs, revealed inconsistent use of precoordination in SNOMED CT, but also a number of false positives. Our results are consistent with those based on FCA, with the advantage that the LaSA computational pipeline is scalable and applicable to ontological systems consisting mostly of taxonomic links. This work is based on collaboration with Olivier Bodenreider from the National Library of Medicine, Bethesda, USA.
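
    The lattice criterion behind LaSA can be illustrated on a toy is-a hierarchy: a pair of concepts is a non-lattice pair when it has more than one maximal (most specific) common ancestor, i.e. no unique least upper bound. The real pipeline runs SPARQL queries over SNOMED CT; the networkx version below is only a small-scale illustration of the same test.

        import networkx as nx
        from itertools import combinations

        # Toy is-a hierarchy, edges pointing from child to parent. Concepts X and Y
        # share two most-specific ancestors (A and B), so (X, Y) is a non-lattice pair.
        G = nx.DiGraph([("X", "A"), ("X", "B"), ("Y", "A"), ("Y", "B"),
                        ("A", "Root"), ("B", "Root")])

        def maximal_common_ancestors(g, u, v):
            common = (nx.descendants(g, u) | {u}) & (nx.descendants(g, v) | {v})
            # Drop any common ancestor that lies above another common ancestor.
            return {a for a in common
                    if not any(b != a and a in nx.descendants(g, b) for b in common)}

        non_lattice = [(u, v) for u, v in combinations(G.nodes, 2)
                       if len(maximal_common_ancestors(G, u, v)) > 1]
        print(non_lattice)   # [('X', 'Y')]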

  11. Large-scale variation in lithospheric structure along and across the Kenya rift

    Science.gov (United States)

    Prodehl, C.; Mechie, J.; Kaminski, W.; Fuchs, K.; Grosse, C.; Hoffmann, H.; Stangl, R.; Stellrecht, R.; Khan, M.A.; Maguire, Peter K.H.; Kirk, W.; Keller, Gordon R.; Githui, A.; Baker, M.; Mooney, W.; Criley, E.; Luetgert, J.; Jacob, B.; Thybo, H.; Demartin, M.; Scarascia, S.; Hirn, A.; Bowman, J.R.; Nyambok, I.; Gaciri, S.; Patel, J.; Dindi, E.; Griffiths, D.H.; King, R.F.; Mussett, A.E.; Braile, L.W.; Thompson, G.; Olsen, K.; Harder, S.; Vees, R.; Gajewski, D.; Schulte, A.; Obel, J.; Mwango, F.; Mukinya, J.; Riaroh, D.

    1991-01-01

    The Kenya rift is one of the classic examples of a continental rift zone: models for its evolution range from extension of the lithosphere by pure shear [1], through extension by simple shear [2], to diapiric upwelling of an asthenolith [3]. Following a pilot study in 1985 [4], the present work involved the shooting of three seismic refraction and wide-angle reflection profiles along the axis, across the margins, and on the northeastern flank of the rift (Fig. 1). These lines were intended to reconcile the different crustal thickness estimates for the northern and southern parts of the rift [4-6] and to reveal the structure across the rift, including that beneath the flanks. The data, presented here, reveal significant lateral variations in structure both along and across the rift. The crust thins along the rift axis from 35 km in the south to 20 km in the north; there are abrupt changes in Moho depth and uppermost-mantle seismic velocity across the rift margins, and crustal thickening across the boundary between the Archaean craton and Pan-African orogenic belt immediately west of the rift. These results suggest that thickened crust may have controlled the rift's location, that there is a decrease in extension from north to south, and that the upper mantle immediately beneath the rift may contain reservoirs of magma generated at greater depth.

  12. Brans-Dicke Theory with Λ>0: Black Holes and Large Scale Structures.

    Science.gov (United States)

    Bhattacharya, Sourav; Dialektopoulos, Konstantinos F; Romano, Antonio Enea; Tomaras, Theodore N

    2015-10-30

    A step-by-step approach is followed to study cosmic structures in the context of Brans-Dicke theory with positive cosmological constant Λ and parameter ω. First, it is shown that regular stationary black-hole solutions not only have constant Brans-Dicke field ϕ, but can exist only for ω=∞, which forces the theory to coincide with general relativity. Generalizations of the theory in order to evade this black-hole no-hair theorem are presented. It is also shown that in the absence of a stationary cosmological event horizon in the asymptotic region, a stationary black-hole horizon can support a nontrivial Brans-Dicke hair. Even more importantly, it is shown next that the presence of a stationary cosmological event horizon rules out any regular stationary solution, appropriate for the description of a star. Thus, to describe a star one has to assume that there is no such stationary horizon in the faraway asymptotic region. Under this implicit assumption generic spherical cosmic structures are studied perturbatively and it is shown that only for ω>0 or ω≲-5 are their predicted maximum sizes consistent with observations. We also point out how many of the conclusions of this work differ qualitatively from the Λ=0 spacetimes.

  13. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    Science.gov (United States)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew

    2017-11-01

    We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy-galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.
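
    Since Halotools is a documented Python package, its basic workflow can be shown directly; the snippet below condenses the quickstart pattern from the package documentation (prebuilt 'zheng07' HOD model, the bundled FakeSim test catalog, and the tpcf clustering function). Exact names and defaults may differ between Halotools versions, so treat this as a sketch rather than a verified recipe.

        import numpy as np
        from halotools.empirical_models import PrebuiltHodModelFactory
        from halotools.sim_manager import FakeSim
        from halotools.mock_observables import tpcf

        model = PrebuiltHodModelFactory("zheng07", threshold=-21)  # prebuilt HOD model
        halocat = FakeSim()                                        # small bundled test catalog
        model.populate_mock(halocat)                               # paint galaxies onto halos

        galaxies = model.mock.galaxy_table
        positions = np.vstack([galaxies["x"], galaxies["y"], galaxies["z"]]).T

        # Mock observation: the real-space two-point correlation function.
        rbins = np.logspace(-1, 1.25, 15)
        xi = tpcf(positions, rbins, period=halocat.Lbox)
        print(xi)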

  14. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    Energy Technology Data Exchange (ETDEWEB)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew

    2017-10-18

    We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy–galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.

  15. Analysis of flexible fabric structures for large-scale subsea compressed air energy storage

    International Nuclear Information System (INIS)

    Pimm, A; Garvey, S

    2009-01-01

    The idea of storing compressed air in submerged flexible fabric structures anchored to the seabed is being investigated for its potential to be a clean, economically-attractive means of energy storage which could integrate well with offshore renewable energy conversion. In this paper a simple axisymmetric model of an inextensional pressurised bag is presented, along with its implementation in a constrained multidimensional optimization used to minimise the cost of the bag materials per unit of stored energy. Base pressure difference and circumferential stress are included in the optimization, and the effect of hanging ballast masses from the inside of the bag is also considered. Results are given for a zero pressure natural shape bag, a zero pressure bag with circumferential stress and hanging masses, and a nonzero pressure bag with circumferential stress and hanging masses.

  16. The maximum sizes of large scale structures in alternative theories of gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Sourav [IUCAA, Pune University Campus, Post Bag 4, Ganeshkhind, Pune, 411 007 India (India); Dialektopoulos, Konstantinos F. [Dipartimento di Fisica, Università di Napoli ' Federico II' , Complesso Universitario di Monte S. Angelo, Edificio G, Via Cinthia, Napoli, I-80126 Italy (Italy); Romano, Antonio Enea [Instituto de Física, Universidad de Antioquia, Calle 70 No. 52–21, Medellín (Colombia); Skordis, Constantinos [Department of Physics, University of Cyprus, 1 Panepistimiou Street, Nicosia, 2109 Cyprus (Cyprus); Tomaras, Theodore N., E-mail: sbhatta@iitrpr.ac.in, E-mail: kdialekt@gmail.com, E-mail: aer@phys.ntu.edu.tw, E-mail: skordis@ucy.ac.cy, E-mail: tomaras@physics.uoc.gr [Institute of Theoretical and Computational Physics and Department of Physics, University of Crete, 70013 Heraklion (Greece)

    2017-07-01

    The maximum size of a cosmic structure is given by the maximum turnaround radius—the scale where the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulae for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulae agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the ΛCDM value, by a factor 1 + 1/(3ω), where ω ≫ 1 is the Brans-Dicke parameter, implying consistency of the theory with current data.
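
    The balance the abstract refers to can be written out explicitly: equating the Newtonian attraction of a mass M with the repulsion of a cosmological constant Λ gives the standard ΛCDM maximum turnaround radius, and the Brans-Dicke rescaling quoted in the abstract then reads (the ω factor is taken from the abstract, not re-derived here):

        \frac{GM}{R^2} = \frac{\Lambda c^2}{3}\, R
        \quad\Longrightarrow\quad
        R_{\rm TA,max}^{\Lambda \rm CDM} = \left(\frac{3GM}{\Lambda c^2}\right)^{1/3},
        \qquad
        R_{\rm TA,max}^{\rm BD} \simeq \left(1 + \frac{1}{3\omega}\right) R_{\rm TA,max}^{\Lambda \rm CDM}.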

  17. The topology of large-scale structure. V - Two-dimensional topology of sky maps

    Science.gov (United States)

    Gott, J. R., III; Mao, Shude; Park, Changbom; Lahav, Ofer

    1992-01-01

    A 2D algorithm is applied to observed sky maps and numerical simulations. It is found that when topology is studied on smoothing scales larger than the correlation length, the topology is approximately in agreement with the random phase formula for the 2D genus-threshold density relation, G2(ν) ∝ ν exp(−ν²/2). Some samples show small 'meatball shifts' similar to those seen in corresponding 3D observational samples and similar to those produced by biasing in cold dark matter simulations. The observational results are thus consistent with the standard model in which the structure in the universe today has grown from small fluctuations caused by random quantum noise in the early universe.

  18. Stochastic inflation lattice simulations: Ultra-large scale structure of the universe

    International Nuclear Information System (INIS)

    Salopek, D.S.

    1990-11-01

    Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients α⁻¹∇ small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a "toy model" with a scalar field interacting with an exponential potential where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits. 21 refs., 3 figs

  19. Three-dimensional simulation of large-scale structure in the universe

    Energy Technology Data Exchange (ETDEWEB)

    Centrella, J.; Melott, A.L.

    1983-09-15

    High and low density cloud-in-cell models were used to simulate the nonlinear growth of adiabatic perturbations in collisionless matter to demonstrate the development of a cellular structure in the universe. Account was taken of a short wavelength cutoff in collisionless matter, with a focus on resolving filaments and low density pancakes. The calculations were performed with a Friedmann-Robertson-Walker model, and the gravitational potential of dark matter was obtained through solution of the Poisson equation. The simulation began with z between 100 and 1000, and initial particle velocities were set at zero. Spherically symmetric voids were observed to form, then collide and interact. Sufficient particles were employed to avoid depletion during nonlinear collapse. No galaxies formed during the epoch studied, which has implications for the significance of dark, baryonic matter in the present universe.

  20. A Structural Evaluation of a Large-Scale Quasi-Experimental Microfinance Initiative.

    Science.gov (United States)

    Kaboski, Joseph P; Townsend, Robert M

    2011-09-01

    This paper uses a structural model to understand, predict, and evaluate the impact of an exogenous microcredit intervention program, the Thai Million Baht Village Fund program. We model household decisions in the face of borrowing constraints, income uncertainty, and high-yield indivisible investment opportunities. After estimation of parameters using pre-program data, we evaluate the model's ability to predict and interpret the impact of the village fund intervention. Simulations from the model mirror the data in yielding a greater increase in consumption than credit, which is interpreted as evidence of credit constraints. A cost-benefit analysis using the model indicates that some households value the program much more than its per household cost, but overall the program costs 20 percent more than the sum of these benefits.

  1. An evolutionary theory of large-scale human warfare: Group-structured cultural selection.

    Science.gov (United States)

    Zefferman, Matthew R; Mathew, Sarah

    2015-01-01

    When humans wage war, it is not unusual for battlefields to be strewn with dead warriors. These warriors typically were men in their reproductive prime who, had they not died in battle, might have gone on to father more children. Typically, they are also genetically unrelated to one another. We know of no other animal species in which reproductively capable, genetically unrelated individuals risk their lives in this manner. Because the immense private costs borne by individual warriors create benefits that are shared widely by others in their group, warfare is a stark evolutionary puzzle that is difficult to explain. Although several scholars have posited models of the evolution of human warfare, these models do not adequately explain how humans solve the problem of collective action in warfare at the evolutionarily novel scale of hundreds of genetically unrelated individuals. We propose that group-structured cultural selection explains this phenomenon. © 2015 Wiley Periodicals, Inc.

  2. Impact of ultralight axion self-interactions on the large scale structure of the Universe

    Science.gov (United States)

    Desjacques, Vincent; Kehagias, Alex; Riotto, Antonio

    2018-01-01

    Ultralight axions have sparked attention because their tiny mass m ∼ 10⁻²² eV, which leads to a kiloparsec-scale de Broglie wavelength comparable to the size of a dwarf galaxy, could alleviate the so-called small-scale crisis of massive cold dark matter (CDM) candidates. However, recent analyses of the Lyman-α forest power spectrum set a tight lower bound on their mass of m ≳ 10⁻²¹ eV, which makes them much less relevant from an astrophysical point of view. An important caveat to these numerical studies is that they do not take into account self-interactions among ultralight axions. Furthermore, for axions which acquired a mass through nonperturbative effects, this self-interaction is attractive and, therefore, could counteract the quantum "pressure" induced by the strong delocalization of the particles. In this work, we show that even a tiny attractive interaction among ultralight axions can have a significant impact on the stability of cosmic structures at low redshift. After a brief review of known results about solitons in the absence of gravity, we discuss the stability of filamentary and pancakelike solutions when quantum pressure, attractive interactions and gravity are present. The analysis based on 1 degree of freedom, namely the breathing mode, reveals that pancakes are stable, while filaments are unstable if the mass per unit length is larger than a critical value. However, we show that pancakes are unstable against transverse perturbations. We expect this to be true for halos and filaments as well. Instabilities driven by the breathing mode will not be seen in the low column density Lyman-α forest unless the axion decay constant is extremely small, f ≲ 10¹³ GeV. Notwithstanding, axion solitonic cores could leave a detectable signature in the Lyman-α forest if the normalization of the unknown axion core–filament mass relation is ∼100 times larger than it is for spherical halos. We hope our work motivates future numerical studies of the impact of axion
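
    The kiloparsec-scale de Broglie wavelength quoted above follows from a one-line estimate; the virial velocity v ≈ 100 km/s used here is an assumed typical value for a dwarf-galaxy halo, not a number taken from the abstract:

        \lambda_{\rm dB} = \frac{2\pi\hbar}{m v}
        = \frac{2\pi\,(\hbar c)}{(m c^2)(v/c)}
        \approx \frac{2\pi \times 1.97\times 10^{-7}\ \mathrm{eV\,m}}
                     {(10^{-22}\ \mathrm{eV}) \times (3.3\times 10^{-4})}
        \approx 3.7\times 10^{19}\ \mathrm{m} \approx 1.2\ \mathrm{kpc}.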

  3. Large-scale prediction of drug-target interactions using protein sequences and drug topological structures

    Energy Technology Data Exchange (ETDEWEB)

    Cao Dongsheng [Research Center of Modernization of Traditional Chinese Medicines, Central South University, Changsha 410083 (China); Liu Shao [Xiangya Hospital, Central South University, Changsha 410008 (China); Xu Qingsong [School of Mathematical Sciences and Computing Technology, Central South University, Changsha 410083 (China); Lu Hongmei; Huang Jianhua [Research Center of Modernization of Traditional Chinese Medicines, Central South University, Changsha 410083 (China); Hu Qiannan [Key Laboratory of Combinatorial Biosynthesis and Drug Discovery (Wuhan University), Ministry of Education, and Wuhan University School of Pharmaceutical Sciences, Wuhan 430071 (China); Liang Yizeng, E-mail: yizeng_liang@263.net [Research Center of Modernization of Traditional Chinese Medicines, Central South University, Changsha 410083 (China)

    2012-11-08

    Highlights: ► Drug-target interactions are predicted using an extended SAR methodology. ► A drug-target interaction is regarded as an event triggered by many factors. ► Molecular fingerprint and CTD descriptors are used to represent drugs and proteins. ► Our approach shows compatibility between the new scheme and current SAR methodology. - Abstract: The identification of interactions between drugs and target proteins plays a key role in the process of genomic drug discovery. It is both consuming and costly to determine drug-target interactions by experiments alone. Therefore, there is an urgent need to develop new in silico prediction approaches capable of identifying these potential drug-target interactions in a timely manner. In this article, we aim at extending current structure-activity relationship (SAR) methodology to fulfill such requirements. In some sense, a drug-target interaction can be regarded as an event or property triggered by many influence factors from drugs and target proteins. Thus, each interaction pair can be represented theoretically by using these factors which are based on the structural and physicochemical properties simultaneously from drugs and proteins. To realize this, drug molecules are encoded with MACCS substructure fingerings representing existence of certain functional groups or fragments; and proteins are encoded with some biochemical and physicochemical properties. Four classes of drug-target interaction networks in humans involving enzymes, ion channels, G-protein-coupled receptors (GPCRs) and nuclear receptors, are independently used for establishing predictive models with support vector machines (SVMs). The SVM models gave prediction accuracy of 90.31%, 88.91%, 84.68% and 83.74% for four datasets, respectively. In conclusion, the results demonstrate the ability of our proposed method to predict the drug
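
    The representation scheme described above — concatenate a drug fingerprint with a protein descriptor vector and let an SVM label the pair as interacting or not — can be sketched with scikit-learn. The random stand-in features and labels below (a 166-bit MACCS-like fingerprint, a 147-dimensional CTD-like descriptor) are placeholders for values a cheminformatics toolkit would compute, so the reported accuracy is only chance level.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        n_pairs = 300

        drug_fp = rng.integers(0, 2, size=(n_pairs, 166))   # MACCS-like bit vector per drug
        prot_desc = rng.normal(size=(n_pairs, 147))         # CTD-like descriptor per protein

        # Each drug-target pair is the concatenation of the two blocks; the label says
        # whether the pair interacts (here random, hence chance-level accuracy).
        X = np.hstack([drug_fp, prot_desc])
        y = rng.integers(0, 2, size=n_pairs)

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        print(cross_val_score(clf, X, y, cv=5).mean())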

  4. Structurally controlled 'teleconnection' of large-scale mass wasting (Eastern Alps)

    Science.gov (United States)

    Ostermann, Marc; Sanders, Diethard

    2015-04-01

    In the Brenner Pass area (Eastern Alps) , closely ahead of the most northward outlier ('nose') of the Southern-Alpine continental indenter, abundant deep-seated gravitational slope deformations and a cluster of five post-glacial rockslides are present. The indenter of roughly triangular shape formed during Neogene collision of the Southern-Alpine basement with the Eastern-Alpine nappe stack. Compression by the indenter activated a N-S striking, roughly W-E extensional fault northward of the nose of the indenter (Brenner-normal fault; BNF), and lengthened the Eastern-Alpine edifice along a set of major strike-slip faults. These fault zones display high seismicity, and are the preferred locus of catastrophic rapid slope failures (rockslides, rock avalanches) and deep-seated gravitational slope deformations. The seismotectonic stress field, earthquake activity, and structural data all indicate that the South-Alpine indenter still - or again - exerts compression; in consequence, the northward adjacent Eastern Alps are subject mainly to extension and strike-slip. For the rockslides in the Brenner Pass area, and for the deep-seated gravitational slope deformations, the fault zones combined with high seismic activity predispose massive slope failures. Structural data and earthquakes mainly record ~W-E extension within an Eastern Alpine basement block (Oetztal-Stubai basement complex) in the hangingwall of the BNF. In the Northern Calcareous Alps NW of the Oetztal-Stubai basement complex, dextral faults provide defacement scars for large rockfalls and rockslides. Towards the West, these dextral faults merge into a NNW-SSE striking sinistral fault zone that, in turn, displays high seismic activity and is the locus of another rockslide cluster (Fern Pass cluster; Prager et al., 2008). By its kinematics dictated by the South-Alpine indenter, the relatively rigid Oetztal-Stubai basement block relays faulting and associated mass-wasting over a N-S distance of more than 60

  5. Teaching the Thrill of Discovery: Student Exploration of the Large-Scale Structures of the Universe

    Science.gov (United States)

    Juneau, Stephanie; Dey, Arjun; Walker, Constance E.; NOAO Data Lab

    2018-01-01

    In collaboration with the Teen Astronomy Cafes program, the NOAO Data Lab is developing online Jupyter Notebooks as a free and publicly accessible tool for students and teachers. Each interactive activity teaches students simultaneously about coding and astronomy with a focus on large datasets. Therefore, students learn state-of-the-art techniques at the cross-section between astronomy and data science. During the activity entitled “Our Vast Universe”, students use real spectroscopic data to measure the distance to galaxies before moving on to a catalog with distances to over 100,000 galaxies. Exploring this dataset gives students an appreciation of the large number of galaxies in the universe (2 trillion!), and leads them to discover how galaxies are located in large and impressive filamentary structures. During the Teen Astronomy Cafes program, the notebook is supplemented with visual material conducive to discussion, and hands-on activities involving cubes representing model universes. These steps contribute to build the students’ physical intuition and give them a better grasp of the concepts before using software and coding. At the end of the activity, students have made their own measurements, and have experienced scientific research directly. More information is available online for the Teen Astronomy Cafes (teensciencecafe.org/cafes) and the NOAO Data Lab (datalab.noao.edu).
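
    The core measurement the notebooks walk students through — read a redshift off a spectrum, then turn it into a distance for one galaxy or a whole catalog — can be sketched with Astropy. The H-alpha line, the observed wavelength and the toy catalog below are invented for illustration; the actual notebooks may use different lines, cosmologies and datasets.

        import numpy as np
        import astropy.units as u
        from astropy.cosmology import Planck15

        # Redshift from a single emission line: compare observed and rest wavelengths.
        lambda_rest = 6562.8 * u.AA        # H-alpha rest wavelength
        lambda_obs = 7200.0 * u.AA         # value a student might read off a spectrum
        z = (lambda_obs / lambda_rest).decompose().value - 1
        print(f"z = {z:.3f}")

        # Distance under a standard cosmology, for one galaxy and for a small catalog.
        print(Planck15.luminosity_distance(z))
        catalog_z = np.array([0.02, 0.05, 0.11, 0.23])
        print(Planck15.comoving_distance(catalog_z))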

  6. Concurrent Validity and Feasibility of Short Tests Currently Used to Measure Early Childhood Development in Large Scale Studies.

    Directory of Open Access Journals (Sweden)

    Marta Rubio-Codina

    Full Text Available In low- and middle-income countries (LMICs), measuring early childhood development (ECD) with standard tests in large scale surveys and evaluations of interventions is difficult and expensive. Multi-dimensional screeners and single-domain tests ('short tests') are frequently used as alternatives. However, their validity in these circumstances is unknown. We examined the feasibility, reliability, and concurrent validity of three multi-dimensional screeners (Ages and Stages Questionnaires (ASQ-3), Denver Developmental Screening Test (Denver-II), Battelle Developmental Inventory screener (BDI-2)) and two single-domain tests (MacArthur-Bates Short-Forms (SFI and SFII), WHO Motor Milestones (WHO-Motor)) in 1,311 children 6-42 months in Bogota, Colombia. The scores were compared with those on the Bayley Scales of Infant and Toddler Development (Bayley-III), taken as the 'gold standard'. The Bayley-III was given at a center by psychologists, whereas the short tests were administered in the home by interviewers, as in a survey setting. Findings indicated good internal validity of all short tests except the ASQ-3. The BDI-2 took a long time to administer and was expensive, while the single-domain tests were quickest and cheapest and the Denver-II and ASQ-3 were intermediate. Concurrent validity of the multi-dimensional tests' cognitive, language, and fine motor scales with the corresponding Bayley-III scale was low below 19 months. However, it increased with age, becoming moderate-to-high over 30 months. In contrast, gross motor scales' concurrence was high under 19 months and then decreased. Of the single-domain tests, the WHO-Motor had high validity with gross motor under 16 months, and the SFI and SFII expressive scales showed moderate correlations with language under 30 months. Overall, the Denver-II was the most feasible and valid multi-dimensional test and the ASQ-3 performed poorly under 31 months. By domain, gross motor development had the highest concurrence

  7. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    Science.gov (United States)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide; size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flows was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) that are within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and can be observed also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations of standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large-scales. These large thermal structures represent some kind of an echo of the large scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and

  8. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    International Nuclear Information System (INIS)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik

    2017-01-01

    Here, we present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution (HOD), the conditional luminosity function (CLF), abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos, or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. Here, the package has an optimized toolkit to make mock observations on a synthetic galaxy population, including galaxy clustering, galaxy-galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others, allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation.

  9. Large-scale prediction of drug–target interactions using protein sequences and drug topological structures

    International Nuclear Information System (INIS)

    Cao Dongsheng; Liu Shao; Xu Qingsong; Lu Hongmei; Huang Jianhua; Hu Qiannan; Liang Yizeng

    2012-01-01

    Highlights: ► Drug–target interactions are predicted using an extended SAR methodology. ► A drug–target interaction is regarded as an event triggered by many factors. ► Molecular fingerprint and CTD descriptors are used to represent drugs and proteins. ► Our approach shows compatibility between the new scheme and current SAR methodology. - Abstract: The identification of interactions between drugs and target proteins plays a key role in the process of genomic drug discovery. It is both consuming and costly to determine drug–target interactions by experiments alone. Therefore, there is an urgent need to develop new in silico prediction approaches capable of identifying these potential drug–target interactions in a timely manner. In this article, we aim at extending current structure–activity relationship (SAR) methodology to fulfill such requirements. In some sense, a drug–target interaction can be regarded as an event or property triggered by many influence factors from drugs and target proteins. Thus, each interaction pair can be represented theoretically by using these factors which are based on the structural and physicochemical properties simultaneously from drugs and proteins. To realize this, drug molecules are encoded with MACCS substructure fingerings representing existence of certain functional groups or fragments; and proteins are encoded with some biochemical and physicochemical properties. Four classes of drug–target interaction networks in humans involving enzymes, ion channels, G-protein-coupled receptors (GPCRs) and nuclear receptors, are independently used for establishing predictive models with support vector machines (SVMs). The SVM models gave prediction accuracy of 90.31%, 88.91%, 84.68% and 83.74% for four datasets, respectively. In conclusion, the results demonstrate the ability of our proposed method to predict the drug–target interactions, and show a general compatibility between the new scheme and current SAR

  10. Construction and testing of a large scale prototype of a silicon tungsten electromagnetic calorimeter for a future lepton collider

    International Nuclear Information System (INIS)

    Rouëné, Jérémy

    2013-01-01

    The CALICE collaboration is preparing large scale prototypes of highly granular calorimeters for detectors to be operated at a future linear electron positron collider. After several beam campaigns at DESY, CERN and FNAL, the CALICE collaboration has demonstrated the principle of highly granular electromagnetic calorimeters with a first prototype called physics prototype. The next prototype, called technological prototype, addresses the engineering challenges which come along with the realisation of highly granular calorimeters. This prototype will comprise 30 layers where each layer is composed of four 9×9 cm² silicon wafers. The front end electronics is integrated into the detector layers. The size of each pixel is 5×5 mm². This prototype enters its construction phase. We present results of the first layers of the technological prototype obtained during beam test campaigns in spring and summer 2012. According to these results the signal over noise ratio of the detector exceeds the R&D goal of 10:1.

  11. Large-scale hydraulic structure of a seismogenic fault at 10 km depth (Gole Larghe Fault Zone, Italian Southern Alps)

    Science.gov (United States)

    Bistacchi, Andrea; Di Toro, Giulio; Smith, Steve; Mittempergher, Silvia; Garofalo, Paolo

    2014-05-01

    The definition of hydraulic properties of fault zones is a major issue in structural geology, seismology, and in several applications (hydrocarbons, hydrogeology, CO2 sequestration, etc.). The permeability of fault rocks can be measured in laboratory experiments, but its upscaling to large-scale structures is not straightforward. For instance, typical permeability of fine-grained fault rock samples is in the 10⁻¹⁸-10⁻²⁰ m² range, but, according to seismological estimates, the large-scale permeability of active fault zones can be as high as 10⁻¹⁰ m². Solving this issue is difficult because in-situ measurements of large-scale permeability have been carried out just at relatively shallow depths - mainly in oil wells and exceptionally in active tectonic settings (e.g. SAFOD at 3 km), whilst deeper experiments have been performed only in the stable continental crust (e.g. KTB at 9 km). In this study, we apply discrete fracture-network (DFN) modelling techniques developed for shallow aquifers (mainly in nuclear waste storage projects like Yucca Mountain) and in the oil industry, in order to model the hydraulic structure of the Gole Larghe Fault Zone (GLFZ, Italian Southern Alps). This fault, now exposed in world-class glacier-polished outcrops, has been exhumed from ca. 8 km, where it was characterized by a well-documented seismic activity, but also by hydrous fluid flow evidenced by alteration halos and precipitation of hydrothermal minerals in veins and along cataclasites. The GLFZ does not show a classical seal structure that in other fault zones corresponds to a core zone characterized by fine-grained fault rocks. However, permeability is heterogeneous and the permeability tensor is strongly anisotropic due to fracture preferential orientation. We will show with numerical experiments that this hydraulic structure results in a channelized fluid flow (which is consistent with the observed hydrothermal alteration pattern). This results in a counterintuitive situation

  12. LYα FOREST TOMOGRAPHY FROM BACKGROUND GALAXIES: THE FIRST MEGAPARSEC-RESOLUTION LARGE-SCALE STRUCTURE MAP AT z > 2

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Khee-Gan; Hennawi, Joseph F.; Eilers, Anna-Christina [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Stark, Casey; White, Martin [Department of Astronomy, University of California at Berkeley, B-20 Hearst Field Annex 3411, Berkeley, CA 94720 (United States); Prochaska, J. Xavier [Department of Astronomy and Astrophysics, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States); Schlegel, David J. [University of California Observatories, Lick Observatory, 1156 High Street, Santa Cruz, CA 95064 (United States); Arinyo-i-Prats, Andreu [Institut de Ciències del Cosmos, Universitat de Barcelona (IEEC-UB), Martí Franquès 1, E-08028 Barcelona (Spain); Suzuki, Nao [Kavli Institute for the Physics and Mathematics of the Universe (IPMU), The University of Tokyo, Kashiwano-ha 5-1-5, Kashiwa-shi, Chiba (Japan); Croft, Rupert A. C. [Department of Physics, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States); Caputi, Karina I. [Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700-AV Groningen (Netherlands); Cassata, Paolo [Instituto de Fisica y Astronomia, Facultad de Ciencias, Universidad de Valparaiso, Av. Gran Bretana 1111, Casilla 5030, Valparaiso (Chile); Ilbert, Olivier; Le Brun, Vincent; Le Fèvre, Olivier [Aix Marseille Université, CNRS, LAM (Laboratoire d' Astrophysique de Marseille) UMR 7326, F-13388 Marseille (France); Garilli, Bianca [INAF-IASF, Via Bassini 15, I-20133, Milano (Italy); Koekemoer, Anton M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Maccagni, Dario [INAF-Osservatorio Astronomico di Bologna, Via Ranzani,1, I-40127 Bologna (Italy); Nugent, Peter, E-mail: lee@mpia.de [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); and others

    2014-11-01

    We present the first observations of foreground Lyα forest absorption from high-redshift galaxies, targeting 24 star-forming galaxies (SFGs) with z ∼ 2.3-2.8 within a 5' × 14' region of the COSMOS field. The transverse sightline separation is ∼2 h⁻¹ Mpc comoving, allowing us to create a tomographic reconstruction of the three-dimensional (3D) Lyα forest absorption field over the redshift range 2.20 ≤ z ≤ 2.45. The resulting map covers 6 h⁻¹ Mpc × 14 h⁻¹ Mpc in the transverse plane and 230 h⁻¹ Mpc along the line of sight with a spatial resolution of ≈3.5 h⁻¹ Mpc, and is the first high-fidelity map of large-scale structure on ∼Mpc scales at z > 2. Our map reveals significant structures with ≳ 10 h⁻¹ Mpc extent, including several spanning the entire transverse breadth, providing qualitative evidence for the filamentary structures predicted to exist in the high-redshift cosmic web. Simulated reconstructions with the same sightline sampling, spectral resolution, and signal-to-noise ratio recover the salient structures present in the underlying 3D absorption fields. Using data from other surveys, we identified 18 galaxies with known redshifts coeval with our map volume, enabling a direct comparison with our tomographic map. This shows that galaxies preferentially occupy high-density regions, in qualitative agreement with the same comparison applied to simulations. Our results establish the feasibility of the CLAMATO survey, which aims to obtain Lyα forest spectra for ∼1000 SFGs over ∼1 deg² of the COSMOS field, in order to map out the intergalactic medium large-scale structure at ⟨z⟩ ∼ 2.3 over a large volume (100 h⁻¹ Mpc)³.

  13. Efficient Computation of Sparse Matrix Functions for Large-Scale Electronic Structure Calculations: The CheSS Library.

    Science.gov (United States)

    Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi

    2017-10-10

    We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of matrix powers for arbitrary exponents, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
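    The Chebyshev-expansion idea at the heart of such solvers can be illustrated with a short sketch. The following Python code is a minimal dense-matrix illustration, not the CheSS library or its API; the toy Hamiltonian, spectral bounds and Fermi-function smearing are assumptions.

    ```python
    # Hedged sketch: approximating a matrix function f(H) with a truncated
    # Chebyshev expansion (dense, for clarity; CheSS works on sparse matrices).
    import numpy as np

    def chebyshev_matrix_function(H, f, eps_min, eps_max, order=50):
        """Approximate f(H) via a truncated Chebyshev series; spectrum of H in [eps_min, eps_max]."""
        n = H.shape[0]
        a, b = (eps_max - eps_min) / 2.0, (eps_max + eps_min) / 2.0
        Hs = (H - b * np.eye(n)) / a          # map spectrum onto [-1, 1]

        # Chebyshev coefficients of f on [-1, 1] via Gauss-Chebyshev quadrature
        k = np.arange(order)
        theta = np.pi * (k + 0.5) / order
        x = np.cos(theta)
        fx = f(a * x + b)
        c = 2.0 / order * np.array([np.sum(fx * np.cos(j * theta)) for j in range(order)])

        # Three-term recurrence T_{j+1} = 2 Hs T_j - T_{j-1}
        T_prev, T_curr = np.eye(n), Hs.copy()
        F = 0.5 * c[0] * T_prev + c[1] * T_curr
        for j in range(2, order):
            T_next = 2.0 * Hs @ T_curr - T_prev
            F += c[j] * T_next
            T_prev, T_curr = T_curr, T_next
        return F

    # Example: smooth approximation to a density matrix (Fermi function step at mu)
    H = np.diag(np.linspace(-1.0, 1.0, 8))                       # toy Hamiltonian
    fermi = lambda e, mu=0.0, beta=50.0: 1.0 / (1.0 + np.exp(beta * (e - mu)))
    D = chebyshev_matrix_function(H, fermi, -1.2, 1.2, order=80)
    ```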

  14. Large-scale groundwater modeling using global datasets: A test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    Large-scale groundwater models involving aquifers and basins of multiple countries are still rare due to a lack of hydrogeological data which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets.

  15. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in developed countries.

  16. Large-scale groundwater modeling using global datasets: A test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in developed countries.

  17. A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III

    International Nuclear Information System (INIS)

    Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.; Choi, Yun-Young; Kim, Juhan; Kim, Sungsoo S.; Speare, Robert; Brownstein, Joel R.; Brinkmann, J.

    2014-01-01

    We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M_☉. We study the topology at two smoothing lengths: R_G = 21 h⁻¹ Mpc and R_G = 34 h⁻¹ Mpc. The genus topology studied at the R_G = 21 h⁻¹ Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random phase initial conditions. The data thus support the standard model of inflation where random quantum fluctuations in the early universe produced Gaussian random phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.
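    For readers unfamiliar with the genus statistic, the hedged sketch below estimates the genus of an isodensity surface of a smoothed 3D field from its Euler characteristic; it is an illustrative toy (random field, smoothing scale, boundary treatment are assumptions), not the pipeline used in the paper.

    ```python
    # Hedged sketch: estimating the genus of an isodensity surface from a
    # smoothed 3D field. Surfaces cut by the box boundary make the estimate
    # approximate; this is not the paper's analysis code.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage import measure

    def genus_of_isosurface(density, nu, smoothing_sigma):
        """Genus of the nu-sigma isodensity contour of a 3D field."""
        field = gaussian_filter(density, smoothing_sigma)
        field = (field - field.mean()) / field.std()
        verts, faces, _, _ = measure.marching_cubes(field, level=nu)
        # Euler characteristic chi = V - E + F of the triangulated surface
        edges = np.concatenate([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
        edges = np.unique(np.sort(edges, axis=1), axis=0)
        chi = len(verts) - len(edges) + len(faces)
        return (2 - chi) / 2   # g = (2 - chi)/2 for a closed orientable surface

    # Toy usage on a Gaussian random field
    delta = np.random.default_rng(0).normal(size=(64, 64, 64))
    print(genus_of_isosurface(delta, nu=0.0, smoothing_sigma=4.0))
    ```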

  18. Scramjet test flow reconstruction for a large-scale expansion tube, Part 1: quasi-one-dimensional modelling

    Science.gov (United States)

    Gildfind, D. E.; Jacobs, P. A.; Morgan, R. G.; Chan, W. Y. K.; Gollan, R. J.

    2017-11-01

    Large-scale free-piston driven expansion tubes have uniquely high total pressure capabilities which make them an important resource for development of access-to-space scramjet engine technology. However, many aspects of their operation are complex, and their test flows are fundamentally unsteady and difficult to measure. While computational fluid dynamics methods provide an important tool for quantifying these flows, these calculations become very expensive with increasing facility size and therefore have to be carefully constructed to ensure sufficient accuracy is achieved within feasible computational times. This study examines modelling strategies for a Mach 10 scramjet test condition developed for The University of Queensland's X3 facility. The present paper outlines the challenges associated with test flow reconstruction, describes the experimental set-up for the X3 experiments, and then details the development of an experimentally tuned quasi-one-dimensional CFD model of the full facility. The 1-D model, which accurately captures longitudinal wave processes, is used to calculate the transient flow history in the shock tube. This becomes the inflow to a higher-fidelity 2-D axisymmetric simulation of the downstream facility, detailed in the Part 2 companion paper, leading to a validated, fully defined nozzle exit test flow.

  19. Scramjet test flow reconstruction for a large-scale expansion tube, Part 1: quasi-one-dimensional modelling

    Science.gov (United States)

    Gildfind, D. E.; Jacobs, P. A.; Morgan, R. G.; Chan, W. Y. K.; Gollan, R. J.

    2018-07-01

    Large-scale free-piston driven expansion tubes have uniquely high total pressure capabilities which make them an important resource for development of access-to-space scramjet engine technology. However, many aspects of their operation are complex, and their test flows are fundamentally unsteady and difficult to measure. While computational fluid dynamics methods provide an important tool for quantifying these flows, these calculations become very expensive with increasing facility size and therefore have to be carefully constructed to ensure sufficient accuracy is achieved within feasible computational times. This study examines modelling strategies for a Mach 10 scramjet test condition developed for The University of Queensland's X3 facility. The present paper outlines the challenges associated with test flow reconstruction, describes the experimental set-up for the X3 experiments, and then details the development of an experimentally tuned quasi-one-dimensional CFD model of the full facility. The 1-D model, which accurately captures longitudinal wave processes, is used to calculate the transient flow history in the shock tube. This becomes the inflow to a higher-fidelity 2-D axisymmetric simulation of the downstream facility, detailed in the Part 2 companion paper, leading to a validated, fully defined nozzle exit test flow.

  20. Radiography with cosmic-ray and compact accelerator muons; Exploring inner-structure of large-scale objects and landforms.

    Science.gov (United States)

    Nagamine, Kanetada

    2016-01-01

    Cosmic-ray muons (CRM) arriving from the sky at the surface of the earth are now used for radiography to explore the inner structure of large-scale objects and landforms ranging in thickness from the meter to the kilometer scale, such as volcanic mountains, blast furnaces and nuclear reactors. At the same time, using muons produced by compact accelerators (CAM), advanced radiography can be realized for objects with thicknesses in the sub-millimeter to meter range, with additional capabilities such as element identification and bio-chemical analysis. In the present report, the principles, methods and specific research examples of CRM transmission radiography are summarized, after which the principles, methods and perspectives of future CAM radiography are described.

  1. Large-scale structure of the Taurus molecular complex. II. Analysis of velocity fluctuations and turbulence. III. Methods for turbulence

    International Nuclear Information System (INIS)

    Kleiner, S.C.; Dickman, R.L.

    1985-01-01

    The velocity autocorrelation function (ACF) of observed spectral line centroid fluctuations is noted to effectively reproduce the actual ACF of turbulent gas motions within an interstellar cloud, thereby furnishing a framework for the study of the large scale velocity structure of the Taurus dark cloud complex traced by the present ¹³CO J = 1-0 observations of this region. The results obtained are discussed in the context of recent suggestions that widely observed correlations between molecular cloud widths and cloud sizes indicate the presence of a continuum of turbulent motions within the dense interstellar medium. Attention is then given to a method for the quantitative study of these turbulent motions, involving the mapping of a source in an optically thin spectral line and studying the spatial correlation properties of the resulting velocity centroid map. 61 references

  2. Large-scale micromagnetic simulation of Nd-Fe-B sintered magnets with Dy-rich shell structures

    Directory of Open Access Journals (Sweden)

    T. Oikawa

    2016-05-01

    Full Text Available Large-scale micromagnetic simulations have been performed using the energy minimization method on a model with structural features similar to those of Dy grain boundary diffusion (GBD)-processed sintered magnets. Coercivity increases as a linear function of the anisotropy field of the Dy-rich shell, which is independent of Dy composition in the core as long as the shell thickness is greater than about 15 nm. This result shows that the Dy contained in the initial sintered magnets prior to the GBD process is not essential for enhancing coercivity. Magnetization reversal patterns indicate that coercivity is strongly influenced by domain wall pinning at the grain boundary. This observation is found to be consistent with the one-dimensional pinning theory.

  3. Large-scale structural and textual similarity-based mining of knowledge graph to predict drug-drug interactions

    KAUST Repository

    Abdelaziz, Ibrahim; Fokoue, Achille; Hassanzadeh, Oktie; Zhang, Ping; Sadoghi, Mohammad

    2017-01-01

    Drug-Drug Interactions (DDIs) are a major cause of preventable Adverse Drug Reactions (ADRs), causing a significant burden on the patients’ health and the healthcare system. It is widely known that clinical studies cannot sufficiently and accurately identify DDIs for new drugs before they are made available on the market. In addition, existing public and proprietary sources of DDI information are known to be incomplete and/or inaccurate and so not reliable. As a result, there is an emerging body of research on in-silico prediction of drug-drug interactions. In this paper, we present Tiresias, a large-scale similarity-based framework that predicts DDIs through link prediction. Tiresias takes in various sources of drug-related data and knowledge as inputs, and provides DDI predictions as outputs. The process starts with semantic integration of the input data that results in a knowledge graph describing drug attributes and relationships with various related entities such as enzymes, chemical structures, and pathways. The knowledge graph is then used to compute several similarity measures between all the drugs in a scalable and distributed framework. In particular, Tiresias utilizes two classes of features in a knowledge graph: local and global features. Local features are derived from the information directly associated to each drug (i.e., one hop away) while global features are learnt by minimizing a global loss function that considers the complete structure of the knowledge graph. The resulting similarity metrics are used to build features for a large-scale logistic regression model to predict potential DDIs. We highlight the novelty of our proposed Tiresias and perform thorough evaluation of the quality of the predictions. The results show the effectiveness of Tiresias in both predicting new interactions among existing drugs as well as newly developed drugs.

  4. Large-scale structural and textual similarity-based mining of knowledge graph to predict drug-drug interactions

    KAUST Repository

    Abdelaziz, Ibrahim

    2017-06-12

    Drug-Drug Interactions (DDIs) are a major cause of preventable Adverse Drug Reactions (ADRs), causing a significant burden on the patients’ health and the healthcare system. It is widely known that clinical studies cannot sufficiently and accurately identify DDIs for new drugs before they are made available on the market. In addition, existing public and proprietary sources of DDI information are known to be incomplete and/or inaccurate and so not reliable. As a result, there is an emerging body of research on in-silico prediction of drug-drug interactions. In this paper, we present Tiresias, a large-scale similarity-based framework that predicts DDIs through link prediction. Tiresias takes in various sources of drug-related data and knowledge as inputs, and provides DDI predictions as outputs. The process starts with semantic integration of the input data that results in a knowledge graph describing drug attributes and relationships with various related entities such as enzymes, chemical structures, and pathways. The knowledge graph is then used to compute several similarity measures between all the drugs in a scalable and distributed framework. In particular, Tiresias utilizes two classes of features in a knowledge graph: local and global features. Local features are derived from the information directly associated to each drug (i.e., one hop away) while global features are learnt by minimizing a global loss function that considers the complete structure of the knowledge graph. The resulting similarity metrics are used to build features for a large-scale logistic regression model to predict potential DDIs. We highlight the novelty of our proposed Tiresias and perform thorough evaluation of the quality of the predictions. The results show the effectiveness of Tiresias in both predicting new interactions among existing drugs as well as newly developed drugs.

  5. ROSA-V large scale test facility (LSTF) system description for the third and fourth simulated fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Mitsuhiro; Nakamura, Hideo; Ohtsu, Iwao [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment] [and others

    2003-03-01

    The Large Scale Test Facility (LSTF) is a full-height and 1/48 volumetrically scaled test facility of the Japan Atomic Energy Research Institute (JAERI) for system integral experiments simulating the thermal-hydraulic responses at full-pressure conditions of a 1100 MWe-class pressurized water reactor (PWR) during small break loss-of-coolant accidents (SBLOCAs) and other transients. The LSTF can also well simulate a next-generation PWR such as the AP600 reactor. In the fifth phase of the Rig-of-Safety Assessment (ROSA-V) Program, eighty-nine experiments have been conducted at the LSTF with the third simulated fuel assembly until June 2001, and five experiments have been conducted with the newly-installed fourth simulated fuel assembly until December 2002. In the ROSA-V program, various system integral experiments have been conducted to certify the effectiveness of both accident management (AM) measures in beyond design basis accidents (BDBAs) and improved safety systems in next-generation reactors. In addition, various separate-effect tests have been conducted to verify and develop computer codes and analytical models to predict non-homogeneous and multi-dimensional phenomena such as heat transfer across the steam generator U-tubes in the presence of non-condensable gases in both current and next-generation reactors. This report presents detailed information on the LSTF system with the third and fourth simulated fuel assemblies as an aid to experiment planning and the analysis of experiment results. (author)

  6. Large scale centrifuge test of a geomembrane-lined landfill subject to waste settlement and seismic loading.

    Science.gov (United States)

    Kavazanjian, Edward; Gutierrez, Angel

    2017-10-01

    A large scale centrifuge test of a geomembrane-lined landfill subject to waste settlement and seismic loading was conducted to help validate a numerical model for performance-based design of geomembrane liner systems. The test was conducted using the 240 g-ton centrifuge at the University of California at Davis under the U.S. National Science Foundation Network for Earthquake Engineering Simulation Research (NEESR) program. A 0.05 mm thin film membrane was used to model the liner. The waste was modeled using a peat-sand mixture. The side slope membrane was underlain by lubricated low density polyethylene to maximize the difference between the interface shear strength on the top and bottom of the geomembrane and the induced tension in it. Instrumentation included thin film strain gages to monitor geomembrane strains and accelerometers to monitor seismic excitation. The model was subjected to an input design motion intended to simulate strong ground motion from the 1994 Hyogo-ken Nanbu earthquake. Results indicate that downdrag waste settlement and seismic loading together, and possibly each phenomenon individually, can induce potentially damaging tensile strains in geomembrane liners. The data collected from this test are publicly available and can be used to validate numerical models for the performance of geomembrane liner systems. Published by Elsevier Ltd.

  7. GAT: a graph-theoretical analysis toolbox for analyzing between-group differences in large-scale structural and functional brain networks.

    Science.gov (United States)

    Hosseini, S M Hadi; Hoeft, Fumiko; Kesler, Shelli R

    2012-01-01

    In recent years, graph theoretical analyses of neuroimaging data have increased our understanding of the organization of large-scale structural and functional brain networks. However, tools for pipeline application of graph theory to analyzing the topology of brain networks are still lacking. In this report, we describe the development of a graph-analysis toolbox (GAT) that facilitates analysis and comparison of structural and functional brain networks. GAT provides a graphical user interface (GUI) that facilitates construction and analysis of brain networks, comparison of regional and global topological properties between networks, analysis of network hubs and modules, and analysis of the resilience of the networks to random failure and targeted attacks. Area under a curve (AUC) and functional data analyses (FDA), in conjunction with permutation testing, are employed for testing differences in network topologies; these analyses are less sensitive to the thresholding process. We demonstrated the capabilities of GAT by investigating differences in the organization of regional gray-matter correlation networks in survivors of acute lymphoblastic leukemia (ALL) and healthy matched controls (CON). The results revealed an alteration in the small-world characteristics of the brain networks in the ALL survivors, an observation that confirms our hypothesis of widespread neurobiological injury in ALL survivors. Along with demonstrating the capabilities of GAT, this is the first report of altered large-scale structural brain networks in ALL survivors.
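    To make the kind of metrics GAT compares concrete, the hedged sketch below builds a thresholded graph from a correlation matrix with networkx and computes two small-world ingredients; the random matrix, edge density and region count are assumptions, and this is not GAT's own code.

    ```python
    # Hedged sketch: two global metrics of a correlation-based brain network,
    # computed with networkx from a thresholded correlation matrix.
    import numpy as np
    import networkx as nx

    def graph_from_correlation(corr, density=0.15):
        """Keep the strongest |corr| edges so the network has the given edge density."""
        n = corr.shape[0]
        iu = np.triu_indices(n, k=1)
        weights = np.abs(corr[iu])
        keep = np.argsort(weights)[-int(density * len(weights)):]
        G = nx.Graph()
        G.add_nodes_from(range(n))
        G.add_edges_from((int(i), int(j)) for i, j in zip(iu[0][keep], iu[1][keep]))
        return G

    corr = np.corrcoef(np.random.default_rng(2).normal(size=(90, 200)))  # 90 "regions"
    G = graph_from_correlation(corr)
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    print("clustering coefficient:", nx.average_clustering(G))
    print("characteristic path length:", nx.average_shortest_path_length(giant))
    ```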

  8. GAT: a graph-theoretical analysis toolbox for analyzing between-group differences in large-scale structural and functional brain networks.

    Directory of Open Access Journals (Sweden)

    S M Hadi Hosseini

    Full Text Available In recent years, graph theoretical analyses of neuroimaging data have increased our understanding of the organization of large-scale structural and functional brain networks. However, tools for pipeline application of graph theory to analyzing the topology of brain networks are still lacking. In this report, we describe the development of a graph-analysis toolbox (GAT) that facilitates analysis and comparison of structural and functional brain networks. GAT provides a graphical user interface (GUI) that facilitates construction and analysis of brain networks, comparison of regional and global topological properties between networks, analysis of network hubs and modules, and analysis of the resilience of the networks to random failure and targeted attacks. Area under a curve (AUC) and functional data analyses (FDA), in conjunction with permutation testing, are employed for testing differences in network topologies; these analyses are less sensitive to the thresholding process. We demonstrated the capabilities of GAT by investigating differences in the organization of regional gray-matter correlation networks in survivors of acute lymphoblastic leukemia (ALL) and healthy matched controls (CON). The results revealed an alteration in the small-world characteristics of the brain networks in the ALL survivors, an observation that confirms our hypothesis of widespread neurobiological injury in ALL survivors. Along with demonstrating the capabilities of GAT, this is the first report of altered large-scale structural brain networks in ALL survivors.

  9. Test of Gravity on Large Scales with Weak Gravitational Lensing and Clustering Measurements of SDSS Luminous Red Galaxies

    Science.gov (United States)

    Reyes, Reinabelle; Mandelbaum, R.; Seljak, U.; Gunn, J.; Lombriser, L.

    2009-01-01

    We perform a test of gravity on large scales (5-50 Mpc/h) using 70,000 luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS) DR7 with redshifts 0.16test in future galaxy surveys such as LSST, for which a very high signal-to-noise measurement will be possible.

  10. Multi-parameter decoupling and slope tracking control strategy of a large-scale high altitude environment simulation test cabin

    Directory of Open Access Journals (Sweden)

    Li Ke

    2014-12-01

    Full Text Available A large-scale high-altitude environment simulation test cabin was developed to accurately control the temperatures and pressures encountered at high altitudes. The system was developed to provide slope-tracking dynamic control of the two parameters, temperature and pressure, and to overcome the control difficulties inherent to a large-inertia lag link within a complex control system composed of a turbine refrigeration device, a vacuum device and a liquid-nitrogen cooling device. The system includes multi-parameter decoupling of the cabin itself to avoid damage to the air refrigeration turbine caused by improper operation. Based on an analysis of the dynamic characteristics and modeling of the variations in temperature, pressure and rotation speed, an intelligent controller was implemented that combines decoupling and fuzzy arithmetic with an expert PID controller to control the test parameters through a decoupling and slope-tracking control strategy. The control system employed centralized management in an open industrial Ethernet architecture with an industrial computer at its core. The simulation and field debugging and running results show that this method can solve the problems of the poor anti-interference performance typical of a conventional PID and of overshooting that can readily damage equipment. The steady-state characteristics meet the system requirements.
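    The slope-tracking idea can be illustrated with a bare-bones PID loop following a ramped setpoint. The sketch below is a toy under assumed gains and a made-up first-order plant; it is not the expert/fuzzy decoupling controller described in the paper.

    ```python
    # Hedged sketch: a minimal PID loop tracking a ramped (slope) setpoint.
    # All gains, setpoints and the plant model are invented for illustration.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    dt = 1.0
    pid = PID(kp=0.8, ki=0.05, kd=0.2, dt=dt)
    pressure, target = 101.3, 101.3                 # kPa; start at "sea level"
    for step in range(600):
        target = max(26.5, target - 0.125 * dt)     # slope-tracked setpoint (ramp down)
        u = pid.update(target, pressure)
        pressure += (u - 0.02 * (pressure - 26.5)) * dt   # toy first-order plant response
    print("final setpoint vs pressure:", target, round(pressure, 2))
    ```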

  11. Two-Level Chebyshev Filter Based Complementary Subspace Method: Pushing the Envelope of Large-Scale Electronic Structure Calculations.

    Science.gov (United States)

    Banerjee, Amartya S; Lin, Lin; Suryanarayana, Phanish; Yang, Chao; Pask, John E

    2018-06-12

    We describe a novel iterative strategy for Kohn-Sham density functional theory calculations aimed at large systems (>1,000 electrons), applicable to metals and insulators alike. In lieu of explicit diagonalization of the Kohn-Sham Hamiltonian on every self-consistent field (SCF) iteration, we employ a two-level Chebyshev polynomial filter based complementary subspace strategy to (1) compute a set of vectors that span the occupied subspace of the Hamiltonian; (2) reduce subspace diagonalization to just partially occupied states; and (3) obtain those states in an efficient, scalable manner via an inner Chebyshev filter iteration. By reducing the necessary computation to just partially occupied states and obtaining these through an inner Chebyshev iteration, our approach reduces the cost of large metallic calculations significantly, while eliminating subspace diagonalization for insulating systems altogether. We describe the implementation of the method within the framework of the discontinuous Galerkin (DG) electronic structure method and show that this results in a computational scheme that can effectively tackle bulk and nano systems containing tens of thousands of electrons, with chemical accuracy, within a few minutes or less of wall clock time per SCF iteration on large-scale computing platforms. We anticipate that our method will be instrumental in pushing the envelope of large-scale ab initio molecular dynamics. As a demonstration of this, we simulate a bulk silicon system containing 8,000 atoms at finite temperature, and obtain an average SCF step wall time of 51 s on 34,560 processors; thus allowing us to carry out 1.0 ps of ab initio molecular dynamics in approximately 28 h (of wall time).

  12. The XChemExplorer graphical workflow tool for routine or large-scale protein–ligand structure determination

    Science.gov (United States)

    Krojer, Tobias; Talon, Romain; Pearce, Nicholas; Douangamath, Alice; Brandao-Neto, Jose; Dias, Alexandre; Marsden, Brian

    2017-01-01

    XChemExplorer (XCE) is a data-management and workflow tool to support large-scale simultaneous analysis of protein–ligand complexes during structure-based ligand discovery (SBLD). The user interfaces of established crystallographic software packages such as CCP4 [Winn et al. (2011), Acta Cryst. D67, 235–242] or PHENIX [Adams et al. (2010), Acta Cryst. D66, 213–221] have entrenched the paradigm that a ‘project’ is concerned with solving one structure. This does not hold for SBLD, where many almost identical structures need to be solved and analysed quickly in one batch of work. Functionality to track progress and annotate structures is essential. XCE provides an intuitive graphical user interface which guides the user from data processing, initial map calculation, ligand identification and refinement up until data dissemination. It provides multiple entry points depending on the need of each project, enables batch processing of multiple data sets and records metadata, progress and annotations in an SQLite database. XCE is freely available and works on any Linux and Mac OS X system, and the only dependency is to have the latest version of CCP4 installed. The design and usage of this tool are described here, and its usefulness is demonstrated in the context of fragment-screening campaigns at the Diamond Light Source. It is routinely used to analyse projects comprising 1000 data sets or more, and therefore scales well to even very large ligand-design projects. PMID:28291762
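    The kind of per-dataset bookkeeping XCE performs can be sketched with Python's built-in sqlite3 module; the schema, table and column names below are illustrative assumptions, not XCE's actual database layout.

    ```python
    # Hedged sketch: per-dataset progress tracking in SQLite, the kind of
    # bookkeeping a crystallographic workflow tool needs. Schema is assumed.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE datasets (
            crystal_id   TEXT PRIMARY KEY,
            data_status  TEXT,   -- e.g. 'processed', 'failed'
            refine_stage TEXT,   -- e.g. 'initial map', 'ligand placed', 'deposited'
            annotation   TEXT
        )
    """)
    conn.execute(
        "INSERT OR REPLACE INTO datasets VALUES (?, ?, ?, ?)",
        ("xtal_0001", "processed", "ligand placed", "clear density for fragment"),
    )
    conn.commit()
    for row in conn.execute("SELECT crystal_id, refine_stage FROM datasets"):
        print(row)
    conn.close()
    ```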

  13. The XChemExplorer graphical workflow tool for routine or large-scale protein-ligand structure determination.

    Science.gov (United States)

    Krojer, Tobias; Talon, Romain; Pearce, Nicholas; Collins, Patrick; Douangamath, Alice; Brandao-Neto, Jose; Dias, Alexandre; Marsden, Brian; von Delft, Frank

    2017-03-01

    XChemExplorer (XCE) is a data-management and workflow tool to support large-scale simultaneous analysis of protein-ligand complexes during structure-based ligand discovery (SBLD). The user interfaces of established crystallographic software packages such as CCP4 [Winn et al. (2011), Acta Cryst. D67, 235-242] or PHENIX [Adams et al. (2010), Acta Cryst. D66, 213-221] have entrenched the paradigm that a `project' is concerned with solving one structure. This does not hold for SBLD, where many almost identical structures need to be solved and analysed quickly in one batch of work. Functionality to track progress and annotate structures is essential. XCE provides an intuitive graphical user interface which guides the user from data processing, initial map calculation, ligand identification and refinement up until data dissemination. It provides multiple entry points depending on the need of each project, enables batch processing of multiple data sets and records metadata, progress and annotations in an SQLite database. XCE is freely available and works on any Linux and Mac OS X system, and the only dependency is to have the latest version of CCP4 installed. The design and usage of this tool are described here, and its usefulness is demonstrated in the context of fragment-screening campaigns at the Diamond Light Source. It is routinely used to analyse projects comprising 1000 data sets or more, and therefore scales well to even very large ligand-design projects.

  14. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so large-scale hydrogen production plants will need to be installed. In this context, the development of low-cost large-scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large-scale electrolysers for future hydrogen production. The different electrolysis technologies were compared, followed by a state-of-the-art review of the electrolysis modules currently available. A review of the large-scale electrolysis plants that have been installed around the world was also carried out, and the main projects related to large-scale electrolysis were listed. The economics of large-scale electrolysers are discussed, and the influence of energy prices on the cost of hydrogen production by large-scale electrolysis was evaluated. (authors)

  15. Testing on a Large Scale Running the ATLAS Data Acquisition and High Level Trigger Software on 700 PC Nodes

    CERN Document Server

    Burckhart-Chromek, Doris; Adragna, P; Alexandrov, L; Amorim, A; Armstrong, S; Badescu, E; Baines, J T M; Barros, N; Beck, H P; Bee, C; Blair, R; Bogaerts, J A C; Bold, T; Bosman, M; Caprini, M; Caramarcu, C; Ciobotaru, M; Comune, G; Corso-Radu, A; Cranfield, R; Crone, G; Dawson, J; Della Pietra, M; Di Mattia, A; Dobinson, Robert W; Dobson, M; Dos Anjos, A; Dotti, A; Drake, G; Ellis, Nick; Ermoline, Y; Ertorer, E; Falciano, S; Ferrari, R; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Garitaonandia, H; Gaudio, G; George, S; Gesualdi-Mello, A; Gorini, B; Green, B; Haas, S; Haberichter, W N; Hadavand, H; Haeberli, C; Haller, J; Hansen, J; Hauser, R; Hillier, S J; Höcker, A; Hughes-Jones, R E; Joos, M; Kazarov, A; Kieft, G; Klous, S; Kohno, T; Kolos, S; Korcyl, K; Kordas, K; Kotov, V; Kugel, A; Landon, M; Lankford, A; Leahu, L; Leahu, M; Lehmann-Miotto, G; Le Vine, M J; Liu, W; Maeno, T; Männer, R; Mapelli, L; Martin, B; Masik, J; McLaren, R; Meessen, C; Meirosu, C; Mineev, M; Misiejuk, A; Morettini, P; Mornacchi, G; Müller, M; Garcia-Murillo, R; Nagasaka, Y; Negri, A; Padilla, C; Pasqualucci, E; Pauly, T; Perera, V; Petersen, J; Pope, B; Albuquerque-Portes, M; Pretzl, K; Prigent, D; Roda, C; Ryabov, Yu; Salvatore, D; Schiavi, C; Schlereth, J L; Scholtes, I; Sole-Segura, E; Seixas, M; Sloper, J; Soloviev, I; Spiwoks, R; Stamen, R; Stancu, S; Strong, S; Sushkov, S; Szymocha, T; Tapprogge, S; Teixeira-Dias, P; Torres, R; Touchard, F; Tremblet, L; Ünel, G; Van Wasen, J; Vandelli, W; Vaz-Gil-Lopes, L; Vermeulen, J C; von der Schmitt, H; Wengler, T; Werner, P; Wheeler, S; Wickens, F; Wiedenmann, W; Wiesmann, M; Wu, X; Yasu, Y; Yu, M; Zema, F; Zobernig, H; Computing In High Energy and Nuclear Physics

    2006-01-01

    The ATLAS Data Acquisition (DAQ) and High Level Trigger (HLT) software system will initially comprise 2000 PC nodes which take part in the control, event readout, second-level trigger and event filter operations. This high number of PCs will only be purchased before data taking in 2007. The large CERN IT LXBATCH facility provided the opportunity to run online functionality tests in July 2005 over a period of 5 weeks on a stepwise increasing farm size from 100 up to 700 dual PC nodes. The interplay of the control and monitoring software with the event readout, event building and trigger software was exercised for the first time as an integrated system on this large scale. Also new was running algorithms in the online environment for the trigger selection and in the event filter processing tasks on a larger scale. A mechanism has been developed to package the offline software together with the DAQ/HLT software and to distribute it via peer-to-peer software efficiently to this large PC cluster. T...

  16. Testing on a Large Scale running the ATLAS Data Acquisition and High Level Trigger Software on 700 PC Nodes

    CERN Document Server

    Burckhart-Chromek, Doris; Adragna, P; Albuquerque-Portes, M; Alexandrov, L; Amorim, A; Armstrong, S; Badescu, E; Baines, J T M; Barros, N; Beck, H P; Bee, C; Blair, R; Bogaerts, J A C; Bold, T; Bosman, M; Caprini, M; Caramarcu, C; Ciobotaru, M; Comune, G; Corso-Radu, A; Cranfield, R; Crone, G; Dawson, J; Della Pietra, M; Di Mattia, A; Dobinson, Robert W; Dobson, M; Dos Anjos, A; Dotti, A; Drake, G; Ellis, Nick; Ermoline, Y; Ertorer, E; Falciano, S; Ferrari, R; Ferrer, M L; Francis, D; Gadomski, S; Gameiro, S; Garcia-Murillo, R; Garitaonandia, H; Gaudio, G; George, S; Gesualdi-Mello, A; Gorini, B; Green, B; Haas, S; Haberichter, W N; Hadavand, H; Haeberli, C; Haller, J; Hansen, J; Hauser, R; Hillier, S J; Hughes-Jones, R E; Höcker, A; Joos, M; Kazarov, A; Kieft, G; Klous, S; Kohno, T; Kolos, S; Korcyl, K; Kordas, K; Kotov, V; Kugel, A; Landon, M; Lankford, A; Le Vine, M J; Leahu, L; Leahu, M; Lehmann-Miotto, G; Liu, W; Maeno, T; Mapelli, L; Martin, B; Masik, J; McLaren, R; Meessen, C; Meirosu, C; Mineev, M; Misiejuk, A; Morettini, P; Mornacchi, G; Männer, R; Müller, M; Nagasaka, Y; Negri, A; Padilla, C; Pasqualucci, E; Pauly, T; Perera, V; Petersen, J; Pope, B; Pretzl, K; Prigent, D; Roda, C; Ryabov, Yu; Salvatore, D; Schiavi, C; Schlereth, J L; Scholtes, I; Seixas, M; Sloper, J; Sole-Segura, E; Soloviev, I; Spiwoks, R; Stamen, R; Stancu, S; Strong, S; Sushkov, S; Szymocha, T; Tapprogge, S; Teixeira-Dias, P; Torres, R; Touchard, F; Tremblet, L; Van Wasen, J; Vandelli, W; Vaz-Gil-Lopes, L; Vermeulen, J C; Wengler, T; Werner, P; Wheeler, S; Wickens, F; Wiedenmann, W; Wiesmann, M; Wu, X; Yasu, Y; Yu, M; Zema, F; Zobernig, H; von der Schmitt, H; Ünel, G; Computing In High Energy and Nuclear Physics

    2006-01-01

    The ATLAS Data Acquisition (DAQ) and High Level Trigger (HLT) software system will initially comprise 2000 PC nodes which take part in the control, event readout, second-level trigger and event filter operations. This high number of PCs will only be purchased before data taking in 2007. The large CERN IT LXBATCH facility provided the opportunity to run online functionality tests in July 2005 over a period of 5 weeks on a stepwise increasing farm size from 100 up to 700 dual PC nodes. The interplay of the control and monitoring software with the event readout, event building and trigger software was exercised for the first time as an integrated system on this large scale. Also new was running algorithms in the online environment for the trigger selection and in the event filter processing tasks on a larger scale. A mechanism has been developed to package the offline software together with the DAQ/HLT software and to distribute it via peer-to-peer software efficiently to this large PC cluster. T...

  17. Absolute pitch among students at the Shanghai Conservatory of Music: a large-scale direct-test study.

    Science.gov (United States)

    Deutsch, Diana; Li, Xiaonuo; Shen, Jing

    2013-11-01

    This paper reports a large-scale direct-test study of absolute pitch (AP) in students at the Shanghai Conservatory of Music. Overall note-naming scores were very high, with high scores correlating positively with early onset of musical training. Students who had begun training at age ≤5 yr scored 83% correct not allowing for semitone errors and 90% correct allowing for semitone errors. Performance levels were higher for white key pitches than for black key pitches. This effect was greater for orchestral performers than for pianists, indicating that it cannot be attributed to early training on the piano. Rather, accuracy in identifying notes of different names (C, C#, D, etc.) correlated with their frequency of occurrence in a large sample of music taken from the Western tonal repertoire. There was also an effect of pitch range, so that performance on tones in the two-octave range beginning on Middle C was higher than on tones in the octave below Middle C. In addition, semitone errors tended to be on the sharp side. The evidence also ran counter to the hypothesis, previously advanced by others, that the note A plays a special role in pitch identification judgments.

  18. Comparison of fracture toughness values from large-scale pipe system tests and C(T) specimens

    International Nuclear Information System (INIS)

    Olson, R.; Scott, P.; Marschall, C.; Wilkowski, G.

    1993-01-01

    Within the International Piping Integrity Research Group (IPIRG) program, pipe system experiments involving dynamic loading with intentionally circumferentially cracked pipe were conducted. The pipe system was fabricated from 406-mm (16-inch) diameter Schedule 100 pipe and the experiments were conducted at 15.5 MPa (2,250 psi) and 288 °C (550 °F). The loads consisted of pressure, dead-weight, thermal expansion, inertia, and dynamic anchor motion. Significant instrumentation was used to allow the material fracture resistance to be calculated from these large-scale experiments. A comparison of the toughness values from the stainless steel base metal pipe experiment with those from standard quasi-static and dynamic C(T) specimen tests showed that the pipe toughness value was significantly lower than that obtained from the C(T) specimens. It is hypothesized that the cyclic loading from inertial stresses in this pipe system experiment caused local degradation of the material toughness. Such effects are not considered in current LBB or pipe flaw evaluation criteria. 4 refs., 14 figs., 1 tab

  19. Fluid-structure interaction simulation of floating structures interacting with complex, large-scale ocean waves and atmospheric turbulence with application to floating offshore wind turbines

    Science.gov (United States)

    Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis

    2018-02-01

    We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled with the near-field FSI solver with a one-way coupling approach by feeding into the latter waves via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.

  20. Hierarchical ZnO microspheres built by sheet-like network: Large-scale synthesis and structurally enhanced catalytic performances

    International Nuclear Information System (INIS)

    Zhu Guoxing; Liu Yuanjun; Ji Zhenyuan; Bai Song; Shen Xiaoping; Xu Zheng

    2012-01-01

    Highlights: ► Hierarchical ZnO microspheres were prepared through a facile precursor procedure in the absence of self-assembled templates, organic additives, or matrices. ► The building blocks of microspheres, sheet-like ZnO networks, are porous mesocrystal terminated with (0 1 −1 0) crystal planes. ► The hierarchical ZnO microsphere catalyst exhibits structure-induced enhancement of catalytic performance and a strong durability. - Abstract: Large-scale novel hierarchical ZnO microspheres were fabricated by a facile precursor procedure in the absence of self-assembled templates, organic additives, or matrices. A field emission scanning electron microscopy (FESEM) image reveals that the ZnO microspheres with diameter of 5–18 μm are built by sheet-like ZnO networks with average thickness of 40 nm and length of several microns. High resolution transmission electron microscopy (HRTEM) image indicates that the building blocks, sheet-like ZnO networks, are porous mesocrystal terminated with {0 1 −1 0} crystal planes. A potential application of the ZnO microspheres as a catalyst in the synthesis of 5-substituted 1H-tetrazoles was investigated. It was found that the hierarchical ZnO microsphere catalyst exhibits structure-induced enhancement of catalytic performance and a strong durability.

  1. Large-scale vortex structures and local heat release in lean turbulent swirling jet-flames under vortex breakdown conditions

    Science.gov (United States)

    Chikishev, Leonid; Lobasov, Aleksei; Sharaborin, Dmitriy; Markovich, Dmitriy; Dulin, Vladimir; Hanjalic, Kemal

    2017-11-01

    We investigate flame-flow interactions in an atmospheric turbulent high-swirl methane/air lean jet-flame at Re from 5,000 to 10,000 and equivalence ratio below 0.75 at the conditions of vortex breakdown. The focus is on the spatial correlation between the propagation of large-scale vortex structures, including the precessing vortex core, and the variations of the local heat release. The measurements are performed by planar laser-induced fluorescence of hydroxyl and formaldehyde, applied simultaneously with the stereoscopic particle image velocimetry technique. The data are processed by the proper orthogonal decomposition. The swirl rate exceeded the critical value for vortex breakdown, resulting in the formation of a precessing vortex core and secondary helical vortex filaments that dominate the unsteady flow dynamics of both the non-reacting and reacting jet flows. The flame front is located in the inner mixing layer between the recirculation zone and the annular swirling jet. A pair of helical vortex structures, surrounding the flame, stretch it and cause local flame extinction before the flame is blown away. This work is supported by Russian Science Foundation (Grant No 16-19-10566).
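    Since the data are processed by proper orthogonal decomposition, a minimal snapshot-POD sketch via the SVD is given below; the snapshot shapes and random "PIV" data are assumptions, not the authors' processing code.

    ```python
    # Hedged sketch: snapshot proper orthogonal decomposition (POD) via the SVD,
    # the kind of decomposition applied to PIV velocity fields.
    import numpy as np

    def snapshot_pod(snapshots):
        """snapshots: (n_snapshots, n_points) array of velocity fields."""
        mean_flow = snapshots.mean(axis=0)
        fluct = snapshots - mean_flow
        # Rows are snapshots; rows of Vt are the spatial POD modes
        U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
        energy = s**2 / np.sum(s**2)     # relative kinetic energy per mode
        time_coeffs = U * s              # temporal coefficients of each mode
        return mean_flow, Vt, time_coeffs, energy

    # Toy usage: 200 snapshots of a two-component field on a 64x64 grid
    fields = np.random.default_rng(1).normal(size=(200, 64 * 64 * 2))
    mean_flow, modes, a, energy = snapshot_pod(fields)
    print("energy captured by first 3 modes:", energy[:3].sum())
    ```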

  2. Use of Large-Scale Multi-Configuration EMI Measurements to Characterize Subsurface Structures of the Vadose Zone.

    Science.gov (United States)

    Huisman, J. A.; Brogi, C.; Pätzold, S.; Weihermueller, L.; von Hebel, C.; Van Der Kruk, J.; Vereecken, H.

    2017-12-01

    Subsurface structures of the vadose zone can play a key role in crop yield potential, especially during water stress periods. Geophysical techniques like electromagnetic induction (EMI) can provide information about dominant shallow subsurface features. However, previous studies with EMI have typically not reached beyond the field scale. We used high-resolution large-scale multi-configuration EMI measurements to characterize patterns of soil structural organization (layering and texture) and their impact on crop productivity at the km² scale. We collected EMI data on an agricultural area of 1 km² (102 ha) near Selhausen (NRW, Germany). The area consists of 51 agricultural fields cropped in rotation. Therefore, measurements were collected between April and December 2016, preferably within a few days after harvest. EMI data were automatically filtered, temperature corrected, and interpolated onto a common grid of 1 m resolution. Inspecting the ECa maps, we identified three main sub-areas with different subsurface heterogeneity. We also identified small-scale geomorphological structures as well as anthropogenic activities such as soil management and buried drainage networks. To identify areas with similar subsurface structures, we applied image classification techniques. We fused the ECa maps obtained with different coil distances into a multiband image and applied supervised and unsupervised classification methodologies. Both showed good results in reconstructing the observed patterns in plant productivity and the subsurface structures associated with them. However, the supervised methodology proved more efficient in classifying the whole study area. In a second step, we selected one hundred locations within the study area and obtained a soil profile description with the type, depth, and thickness of the soil horizons. Using this ground truth data it was possible to assign a typical soil profile to each of the main classes obtained from the classification. The proposed methodology was
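    A hedged sketch of the unsupervised branch of such a classification is shown below: multi-coil ECa maps are stacked as bands and clustered with k-means. The synthetic bands, grid shape and number of classes are assumptions, not the study's actual workflow.

    ```python
    # Hedged sketch: unsupervised classification of multi-coil EMI apparent
    # conductivity (ECa) maps by stacking them as bands and clustering with
    # k-means. In practice the bands would be the gridded ECa maps from the
    # different coil configurations rather than random data.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    ny, nx, n_bands = 200, 300, 3                 # assumed 1 m grid and 3 coil spacings
    stack = rng.normal(size=(ny, nx, n_bands))    # stand-in for the ECa bands
    pixels = stack.reshape(-1, n_bands)
    valid = ~np.isnan(pixels).any(axis=1)         # EMI grids usually contain masked cells

    labels = np.full(pixels.shape[0], -1)
    labels[valid] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels[valid])
    class_map = labels.reshape(ny, nx)            # map of subsurface classes
    ```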

  3. DEVELOPMENT AND ADAPTATION OF VORTEX REALIZABLE MEASUREMENT SYSTEM FOR BENCHMARK TEST WITH LARGE SCALE MODEL OF NUCLEAR REACTOR

    Directory of Open Access Journals (Sweden)

    S. M. Dmitriev

    2017-01-01

    Full Text Available The last decades of development of applied calculation methods for nuclear reactor thermal and hydraulic processes have been marked by the rapid growth of High Performance Computing (HPC), which has contributed to the active introduction of Computational Fluid Dynamics (CFD). The use of such programs to justify the technical and economic parameters, and especially the safety, of nuclear reactors requires comprehensive verification of the mathematical models and CFD programs. The aim of this work was the development and adaptation of a measuring system with the characteristics necessary for its application in a verification test (experimental) facility. Its main objective is to study the mixing of coolant flows with different physical properties (for example, the concentration of dissolved impurities) inside a large-scale reactor model. The basic method used for registering the spatial concentration field in the mixing area is spatial conductometry. In the course of the work, a measurement complex, including spatial conductometric sensors, a system of secondary converters and software, was created. Methods for calibration and normalization of measurement results were developed. Averaged concentration fields and non-stationary realizations of the measured local conductivity were obtained during the first experimental series, and spectral and statistical analyses of the realizations were carried out. The acquired data are compared with pretest CFD calculations performed in the ANSYS CFX program. A joint analysis of the obtained results made it possible to identify the main regularities of the process under study and to demonstrate the capability of the designed measuring system to provide the experimental data of the 'CFD-quality' required for verification. The adaptation of the spatial sensors allows a more extensive program of experimental tests to be conducted, on the basis of which a databank and the necessary generalizations will be created

  4. Large-scale laboratory testing of bedload-monitoring technologies: overview of the StreamLab06 Experiments

    Science.gov (United States)

    Marr, Jeffrey D.G.; Gray, John R.; Davis, Broderick E.; Ellis, Chris; Johnson, Sara; Gray, John R.; Laronne, Jonathan B.; Marr, Jeffrey D.G.

    2010-01-01

    A 3-month-long, large-scale flume experiment involving research and testing of selected conventional and surrogate bedload-monitoring technologies was conducted in the Main Channel at the St. Anthony Falls Laboratory under the auspices of the National Center for Earth-surface Dynamics. These experiments, dubbed StreamLab06, involved 25 researchers and volunteers from academia, government, and the private sector. The research channel was equipped with a sediment-recirculation system and a sediment-flux monitoring system that allowed continuous measurement of sediment flux in the flume and provided a data set by which samplers were evaluated. Selected bedload-measurement technologies were tested under a range of flow and sediment-transport conditions. The experiment was conducted in two phases. The bed material in phase I was well-sorted siliceous sand (0.6-1.8 mm median diameter). A gravel mixture (1-32 mm median diameter) composed the bed material in phase II. Four conventional bedload samplers – a standard Helley-Smith, Elwha, BLH-84, and Toutle River II (TR-2) sampler – were manually deployed as part of both experiment phases. Bedload traps were deployed in study phase II. Two surrogate bedload samplers – stationary-mounted down-looking 600 kHz and 1200 kHz acoustic Doppler current profilers – were deployed in experiment phase II. This paper presents an overview of the experiment including the specific data-collection technologies used and the ambient hydraulic, sediment-transport and environmental conditions measured as part of the experiment. All data collected as part of the StreamLab06 experiments are, or will be, available to the research community.

  5. Information contained within the large scale gas injection test (Lasgit) dataset exposed using a bespoke data analysis tool-kit

    International Nuclear Information System (INIS)

    Bennett, D.P.; Thomas, H.R.; Cuss, R.J.; Harrington, J.F.; Vardon, P.J.

    2012-01-01

    Document available in extended abstract form only. The Large Scale Gas Injection Test (Lasgit) is a field scale experiment run by the British Geological Survey (BGS) and is located approximately 420 m underground at SKB's Aespoe Hard Rock Laboratory (HRL) in Sweden. It has been designed to study the impact on safety of gas build-up within a KBS-3V concept high level radioactive waste repository. Lasgit has been in almost continuous operation for approximately seven years and is still underway. An analysis of the dataset arising from the Lasgit experiment, with particular attention to the smaller scale features and phenomena recorded, has been undertaken in parallel to the macro scale analysis performed by the BGS. Lasgit is a highly instrumented, frequently sampled and long-lived experiment leading to a substantial dataset containing in excess of 14.7 million data points. The dataset is anticipated to include a wealth of information, regarding overall processes as well as smaller scale or 'second order' features. Due to the size of the dataset, the detail of the analysis required, and the reduction in subjectivity associated with measurement compared to observation, computational analysis is essential. Moreover, due to the length of operation and complexity of experimental activity, the Lasgit dataset is not typically suited to 'out of the box' time series analysis algorithms. In particular, the features that are not suited to standard algorithms include non-uniformities due to (deliberate) changes in sample rate at various points in the experimental history and missing data due to hardware malfunction/failure causing interruption of logging cycles. To address these features a computational tool-kit capable of performing an Exploratory Data Analysis (EDA) on long-term, large-scale datasets with non-uniformities has been developed. Particular tool-kit abilities include: the parameterization of signal variation in the dataset
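    The kind of non-uniform-sampling handling such a tool-kit needs can be sketched with pandas; the synthetic series, column name, resampling interval and gap threshold below are assumptions, not the actual tool-kit implementation.

    ```python
    # Hedged sketch: regularising an irregularly sampled sensor record and
    # flagging logging gaps, a basic step in EDA of Lasgit-type datasets.
    import numpy as np
    import pandas as pd

    # Synthetic stand-in for an irregularly logged pressure channel
    rng = np.random.default_rng(0)
    times = pd.to_datetime("2010-01-01") + pd.to_timedelta(
        np.sort(rng.uniform(0, 30 * 24 * 3600, size=5000)), unit="s")
    raw = pd.Series(1000 + rng.normal(0, 5, size=5000), index=times, name="pressure_kPa")

    # Flag gaps caused by hardware/logging interruptions (e.g. > 1 hour without data)
    gaps = raw.index.to_series().diff() > pd.Timedelta("1h")

    # Resample onto a uniform 10-minute grid; keep interpolation short so genuine
    # data gaps remain visible as NaNs rather than being papered over
    uniform = raw.resample("10min").mean().interpolate(limit=3)

    print(uniform.describe())
    print("logging gaps longer than 1 h:", int(gaps.sum()))
    ```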

  6. Large Scale Laser Two-Photon Polymerization Structuring for Fabrication of Artificial Polymeric Scaffolds for Regenerative Medicine

    International Nuclear Information System (INIS)

    Malinauskas, M.; Purlys, V.; Zukauskas, A.; Rutkauskas, M.; Danilevicius, P.; Paipulas, D.; Bickauskaite, G.; Gadonas, R.; Piskarskas, A.; Bukelskis, L.; Baltriukiene, D.; Bukelskiene, V.; Sirmenis, R.; Gaidukeviciute, A.; Sirvydis, V.

    2010-01-01

    We present a femtosecond Laser Two-Photon Polymerization (LTPP) system for large scale three-dimensional structuring with applications in tissue engineering. The direct laser writing system enables fabrication of artificial polymeric scaffolds over a large area (up to cm in lateral size) with sub-micrometer resolution, which could find practical applications in biomedicine and surgery. A Yb:KGW femtosecond laser oscillator (Pharos, Light Conversion Co. Ltd.) is used as the irradiation source (75 fs, 515 nm (frequency doubled), 80 MHz). The sample is mounted on wide-range linear motor driven stages having 10 nm sample positioning resolution (XY--ALS130-100, Z--ALS130-50, Aerotech, Inc.). These stages guarantee an overall travelling range of 100 mm in the X and Y directions and 50 mm in the Z direction and support linear scanning speeds up to 300 mm/s. By moving the sample three-dimensionally, the position of the laser focus in the photopolymer is changed and one is able to write complex 3D (three-dimensional) structures. An illumination system and CMOS camera enable online process monitoring. Control of all equipment is automated via custom made computer software "3D-Poli" specially designed for LTPP applications. Structures can be imported from computer aided design STereoLithography (STL) files or programmed directly. The system can be used for rapid LTPP structuring in various photopolymers (SZ2080, AKRE19, PEG-DA-258) which are known to be suitable for bio-applications. Microstructured scaffolds can be produced on different substrates such as glass, plastic and metal. In this paper, we present microfabricated polymeric scaffolds over a large area and the growth of adult rabbit myogenic stem cells on them. The results obtained show the polymeric scaffolds to be applicable for cell growth, with potential for use as artificial pericardium in an experimental model in the future.

  7. FR-type radio sources in COSMOS: relation of radio structure to size, accretion modes and large-scale environment

    Science.gov (United States)

    Vardoulaki, Eleni; Faustino Jimenez Andrade, Eric; Delvecchio, Ivan; Karim, Alexander; Smolčić, Vernesa; Magnelli, Benjamin; Bertoldi, Frank; Schinnener, Eva; Sargent, Mark; Finoguenov, Alexis; VLA COSMOS Team

    2018-01-01

    The radio sources associated with active galactic nuclei (AGN) can exhibit a variety of radio