WorldWideScience

Sample records for lenses large-scale velocity

  1. Confirmation of general relativity on large scales from weak lensing and galaxy velocities.

    Science.gov (United States)

    Reyes, Reinabelle; Mandelbaum, Rachel; Seljak, Uros; Baldauf, Tobias; Gunn, James E; Lombriser, Lucas; Smith, Robert E

    2010-03-11

Although general relativity underlies modern cosmology, its applicability on cosmological length scales has yet to be stringently tested. Such a test has recently been proposed, using a quantity, E(G), that combines measures of large-scale gravitational lensing, galaxy clustering and structure growth rate. The combination is insensitive to 'galaxy bias' (the difference between the clustering of visible galaxies and invisible dark matter) and is thus robust to the uncertainty in this parameter. Modified theories of gravity generally predict values of E(G) different from the general relativistic prediction because, in these theories, the 'gravitational slip' (the difference between the two potentials that describe perturbations in the gravitational metric) is non-zero, which leads to changes in the growth of structure and the strength of the gravitational lensing effect. Here we report that E(G) = 0.39 ± 0.06 on length scales of tens of megaparsecs, in agreement with the general relativistic prediction of E(G) ≈ 0.4. The measured value excludes a model within the tensor-vector-scalar gravity theory, which modifies both Newtonian and Einstein gravity. However, the relatively large uncertainty still permits models within f(R) theory, which is an extension of general relativity. A fivefold decrease in uncertainty is needed to rule out these models.
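The GR benchmark quoted above follows from the background cosmology alone. A minimal sketch, assuming an illustrative Ω_m,0 = 0.25 and the standard growth-rate fit f ≈ Ω_m(z)^0.55 (not the paper's actual pipeline):

```python
# GR prediction for E_G: E_G(z) = Omega_m,0 / f(z), with the growth rate
# approximated by the standard fit f(z) ~ Omega_m(z)^0.55 (flat LCDM).
# Omega_m,0 = 0.25 is an illustrative choice, not the paper's fit.

def omega_m(z, omega_m0=0.25):
    """Matter density parameter at redshift z in flat LCDM."""
    ez2 = omega_m0 * (1 + z) ** 3 + (1 - omega_m0)  # H^2(z)/H0^2
    return omega_m0 * (1 + z) ** 3 / ez2

def e_g_gr(z, omega_m0=0.25):
    """General-relativistic prediction for the E_G statistic."""
    f = omega_m(z, omega_m0) ** 0.55  # linear growth rate
    return omega_m0 / f

print(round(e_g_gr(0.32), 2))  # ≈ 0.4 at a typical LRG-sample redshift
```

With these assumed numbers, a redshift of about 0.3 yields roughly 0.40, consistent with the quoted prediction.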

  2. Planck 2013 results. XVII. Gravitational lensing by large-scale structure

    CERN Document Server

    Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Basak, S.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Dechelette, T.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hansen, F.K.; Hanson, D.; Harrison, D.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Ho, S.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Jaffe, T.R.; Jaffe, A.H.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Laureijs, R.J.; Lavabre, A.; Lawrence, C.R.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Lewis, A.; Liguori, M.; Lilje, P.B.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Marshall, D.J.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, 
A.; Migliaccio, M.; Mitra, S.; Miville-Deschenes, M.A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Pullen, A.R.; Rachen, J.P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; White, M.; White, S.D.M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-01-01

On the arcminute angular scales probed by Planck, the CMB anisotropies are gently perturbed by gravitational lensing. Here we present a detailed study of this effect, detecting lensing independently in the 100, 143, and 217 GHz frequency bands with an overall significance of greater than 25σ. We use the temperature-gradient correlations induced by lensing to reconstruct a (noisy) map of the CMB lensing potential, which provides an integrated measure of the mass distribution back to the CMB last-scattering surface. Our lensing potential map is significantly correlated with other tracers of mass, a fact which we demonstrate using several representative tracers of large-scale structure. We estimate the power spectrum of the lensing potential, finding generally good agreement with expectations from the best-fitting LCDM model for the Planck temperature power spectrum, showing that this measurement at z = 1100 correctly predicts the properties of the lower-redshift, later-time structures which source the lensing ...

  3. Bias to CMB lensing reconstruction from temperature anisotropies due to large-scale galaxy motions

    Science.gov (United States)

    Ferraro, Simone; Hill, J. Colin

    2018-01-01

Gravitational lensing of the cosmic microwave background (CMB) is expected to be amongst the most powerful cosmological tools for ongoing and upcoming CMB experiments. In this work, we investigate a bias to CMB lensing reconstruction from temperature anisotropies due to the kinematic Sunyaev-Zel'dovich (kSZ) effect, that is, the Doppler shift of CMB photons induced by Compton scattering off moving electrons. The kSZ signal yields biases due to both its own intrinsic non-Gaussianity and its nonzero cross-correlation with the CMB lensing field (and other fields that trace the large-scale structure). This kSZ-induced bias affects both the CMB lensing autopower spectrum and its cross-correlation with low-redshift tracers. Furthermore, it cannot be removed by multifrequency foreground separation techniques because the kSZ effect preserves the blackbody spectrum of the CMB. While statistically negligible for current data sets, we show that it will be important for upcoming surveys, and failure to account for it can lead to large biases in constraints on neutrino masses or the properties of dark energy. For a stage 4 CMB experiment, the bias can be as large as ≈15% or ≈12% in cross-correlation with LSST galaxy lensing convergence or galaxy overdensity maps, respectively, when the maximum temperature multipole used in the reconstruction is ℓmax = 4000, and about half of that when ℓmax = 3000. Similarly, we find that the CMB lensing autopower spectrum can be biased by up to several percent. These biases are many times larger than the expected statistical errors. We validate our analytical predictions with cosmological simulations and present the first complete estimate of secondary-induced CMB lensing biases. The predicted bias is sensitive to the small-scale gas distribution, which is affected by pressure and feedback mechanisms, thus making removal via "bias-hardened" estimators challenging. Reducing ℓmax can significantly mitigate the bias at the cost of a decrease

  4. Planck 2013 results. XVII. Gravitational lensing by large-scale structure

    DEFF Research Database (Denmark)

    Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.

    2013-01-01

On the arcminute angular scales probed by Planck, the cosmic microwave background (CMB) anisotropies are gently perturbed by gravitational lensing. Here we present a detailed study of this effect, detecting lensing independently in the 100, 143, and 217 GHz frequency bands with an overall significa...

  5. Large-scale structure after COBE: Peculiar velocities and correlations of cold dark matter halos

    Science.gov (United States)

    Zurek, Wojciech H.; Quinn, Peter J.; Salmon, John K.; Warren, Michael S.

    1994-01-01

Large N-body simulations on parallel supercomputers allow one to simultaneously investigate large-scale structure and the formation of galactic halos with unprecedented resolution. Our study shows that the masses as well as the spatial distribution of halos on scales of tens of megaparsecs in a cold dark matter (CDM) universe, with the spectrum normalized to the anisotropies detected by the Cosmic Background Explorer (COBE), are compatible with the observations. We also show that the average value of the relative pairwise velocity dispersion σ_v - used as a principal argument against COBE-normalized CDM models - is significantly lower for halos than for individual particles. When the observational methods of extracting σ_v are applied to the redshift catalogs obtained from the numerical experiments, estimates differ significantly between different observation-sized samples and overlap observational estimates obtained following the same procedure.
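The relative pairwise velocity dispersion σ_v discussed above can be estimated from any catalog of positions and velocities; a toy sketch on synthetic (uncorrelated) velocities, not the simulation's halos:

```python
import numpy as np

# Toy estimator of the relative pairwise velocity dispersion sigma_v:
# for every pair, project the relative velocity onto the pair separation
# and take the rms. Positions and velocities are synthetic, not halo data.

def pairwise_dispersion(pos, vel):
    """sigma_v from positions (N, 3) and velocities (N, 3)."""
    comps = []
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            dr = pos[j] - pos[i]
            r = np.linalg.norm(dr)
            if r > 0:
                # line-of-separation component of the relative velocity
                comps.append(np.dot(vel[j] - vel[i], dr / r))
    return float(np.std(comps))

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 100.0, size=(200, 3))   # toy box, Mpc
vel = rng.normal(0.0, 300.0, size=(200, 3))    # toy velocities, km/s
sigma_v = pairwise_dispersion(pos, vel)
print(round(sigma_v))  # ~ sqrt(2) * 300 km/s for uncorrelated Gaussians
```

For uncorrelated Gaussian velocities the projected pair difference has dispersion √2 times the single-particle value; real halo catalogs show the suppression the abstract describes.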

  6. On an illusion of superluminal velocities produced by gravitational lenses

    International Nuclear Information System (INIS)

    Ingel, L.Kh.

    1981-01-01

It is noted that gravitational lenses, by focusing the radiation of an object, increase the angle it subtends. This in turn produces the illusion of an increase in velocities at right angles to the line of sight. Preliminary estimates indicate a rather high probability of strong distortion of the observed velocities.

  7. Shear wave velocity structure in North America from large-scale waveform inversions of surface waves

    Science.gov (United States)

    Alsina, D.; Woodward, R.L.; Snieder, R.K.

    1996-01-01

A two-step nonlinear and linear inversion is carried out to map the lateral heterogeneity beneath North America using surface wave data. The lateral resolution for most areas of the model is of the order of several hundred kilometers. The most obvious feature in the tomographic images is the rapid transition between low velocities in the tectonically active region west of the Rocky Mountains and high velocities in the stable central and eastern shield of North America. The model also reveals smaller-scale heterogeneous velocity structures. A high-velocity anomaly is imaged beneath the state of Washington that could be explained as the subducting Juan de Fuca plate beneath the Cascades. A large low-velocity structure extends along the coast from the Mendocino to the Rivera triple junction and to the continental interior across the southwestern United States and northwestern Mexico. Its shape changes notably with depth. This anomaly largely coincides with the part of the margin where lithosphere is no longer consumed, subduction having been replaced by a transform fault. Evidence for a discontinuous subduction of the Cocos plate along the Middle American Trench is found. In central Mexico a transition is visible from low velocities across the Trans-Mexican Volcanic Belt (TMVB) to high velocities beneath the Yucatan Peninsula. Two elongated low-velocity anomalies, beneath the Yellowstone Plateau and the eastern Snake River Plain volcanic system and beneath central Mexico and the TMVB, seem to be associated with magmatism and partial melting. Another low-velocity feature is seen at depths of approximately 200 km beneath Florida and the Atlantic Coastal Plain. The inversion technique used is based on a linear surface wave scattering theory, which gives tomographic images of the relative phase velocity perturbations in four period bands ranging from 40 to 150 s. In order to find a smooth reference model, a nonlinear inversion based on ray theory is first performed. After

  8. A Measurement of Large-Scale Peculiar Velocities of Clusters of Galaxies: Technical Details

    Science.gov (United States)

    Kashlinsky, A.; Atrio-Barandela, F.; Kocevski, D.; Ebeling, H.

    2009-02-01

This paper presents a detailed analysis of large-scale peculiar motions derived from a sample of ~700 X-ray clusters and cosmic microwave background (CMB) data obtained with WMAP. We use the kinematic Sunyaev-Zel'dovich (KSZ) effect, combining it into a cumulative statistic that preserves the bulk motion component while integrating the noise down. This statistic is the dipole of CMB temperature fluctuations evaluated over the pixels of the cluster catalog. To remove the cosmological CMB fluctuations, the maps are filtered with a Wiener-type filter in each of the eight WMAP channels (Q, V, W) that have a negligible foreground component. Our findings are as follows. The thermal SZ (TSZ) component of the clusters is well described by the Navarro-Frenk-White profile expected if the hot gas traces the dark matter in the cluster potential wells. Such gas has an X-ray temperature that decreases rapidly toward the cluster outskirts, which we demonstrate results in a decrease of the TSZ component as the aperture is increased to encompass the cluster outskirts. We then detect a statistically significant dipole in the CMB pixels at cluster positions. Arising exclusively at the cluster pixels, this dipole cannot originate from foreground emission or instrument noise and must be produced by CMB photons that interacted with the hot intracluster gas via the SZ effect. The dipole remains even as the monopole component, due to the TSZ effect, vanishes within the small statistical noise out to the maximal aperture at which we still detect the TSZ component. We demonstrate with simulations that mask and cross-talk effects are small for our catalog and contribute negligibly to the measurements. The measured dipole thus arises from the KSZ effect produced by coherent large-scale bulk flow motion. The cosmological implications of the measurements are discussed in the 2008 work of Kashlinsky et al.
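The cumulative statistic described here, a dipole evaluated over cluster pixels only, can be sketched as a least-squares fit of a dipole vector to temperatures at a set of sky directions; everything below (directions, injected dipole, noise level) is synthetic:

```python
import numpy as np

# Fit a dipole vector d to temperatures sampled at unit directions nhat,
# minimizing |T - nhat @ d|^2. All inputs below are synthetic.

def fit_dipole(nhat, temps):
    """Least-squares dipole over an arbitrary set of sky directions."""
    d, *_ = np.linalg.lstsq(nhat, temps, rcond=None)
    return d

rng = np.random.default_rng(1)
nhat = rng.normal(size=(500, 3))                   # mock cluster directions
nhat /= np.linalg.norm(nhat, axis=1, keepdims=True)

d_true = np.array([3.0, -1.0, 2.0])                # injected dipole (arbitrary units)
temps = nhat @ d_true + rng.normal(0.0, 0.5, 500)  # dipole + pixel noise

d_fit = fit_dipole(nhat, temps)
print(np.round(d_fit, 1))
```

Fitting over an incomplete, irregular set of directions (rather than the full sky) is what lets the statistic isolate the signal arising at cluster pixels.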

  9. Large-scale structure of the Taurus molecular complex. II. Analysis of velocity fluctuations and turbulence. III. Methods for turbulence

    International Nuclear Information System (INIS)

    Kleiner, S.C.; Dickman, R.L.

    1985-01-01

The velocity autocorrelation function (ACF) of observed spectral line centroid fluctuations is noted to effectively reproduce the actual ACF of turbulent gas motions within an interstellar cloud, thereby furnishing a framework for the study of the large-scale velocity structure of the Taurus dark cloud complex traced by the present ¹³CO J = 1-0 observations of this region. The results obtained are discussed in the context of recent suggestions that widely observed correlations between molecular cloud line widths and cloud sizes indicate the presence of a continuum of turbulent motions within the dense interstellar medium. Attention is then given to a method for the quantitative study of these turbulent motions, involving mapping a source in an optically thin spectral line and studying the spatial correlation properties of the resulting velocity centroid map.
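The quantitative step described in the closing sentence, computing the spatial correlation properties of a velocity centroid map, can be sketched as follows; the "centroid map" here is smoothed synthetic noise, not real ¹³CO data:

```python
import numpy as np

# Compute the normalized 2-D autocorrelation function (ACF) of a map via
# the Wiener-Khinchin relation (inverse FFT of the power spectrum). The
# map is synthetic smoothed noise standing in for a velocity centroid map.

def acf2d(field):
    """Normalized, periodic 2-D ACF of a map."""
    f = field - field.mean()
    power = np.abs(np.fft.fft2(f)) ** 2
    corr = np.fft.ifft2(power).real
    return corr / corr[0, 0]     # unity at zero lag

rng = np.random.default_rng(2)
raw = rng.normal(size=(64, 64))
# low-pass the noise to imprint a finite correlation length
k = np.fft.fftfreq(64)
kx, ky = np.meshgrid(k, k)
centroids = np.fft.ifft2(np.fft.fft2(raw) * np.exp(-200.0 * (kx**2 + ky**2))).real

acf = acf2d(centroids)
print(round(acf[0, 1], 2))  # lag-1 correlation, close to 1 for a smooth map
```

The lag at which the ACF falls off measures the correlation length of the velocity field, the quantity of interest in the turbulence analysis above.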

  10. Kinematic Dynamo Action in the Presence of a Large Scale Velocity

    Science.gov (United States)

    Carvalho, J. C.

    1990-11-01

The influence of a large-scale velocity field upon kinematic turbulent dynamo action is investigated. Using an expansion process, the solutions are found in the limit of small bulk motion and shear, and for large Reynolds number. The magnetic regeneration is calculated up to second order in the expansion parameter, using cyclonic convective cells for the turbulent velocity field. Key words: HYDROMAGNETICS

  11. On the assimilation of ice velocity and concentration data into large-scale sea ice models

    Directory of Open Access Journals (Sweden)

    V. Dulière

    2007-06-01

Data assimilation into sea ice models designed for climate studies started about 15 years ago. In most of the studies conducted so far, it is assumed that the improvement brought by the assimilation is straightforward. However, some studies suggest this might not be true. In order to elucidate this question, and to find an appropriate way to further assimilate sea ice concentration and velocity observations into a global sea ice-ocean model, we analyze here results from a number of twin experiments (i.e. experiments in which the assimilated data are model outputs) carried out with a simplified model of the Arctic sea ice pack. Our objective is to determine to what degree the assimilation of ice velocity and/or concentration data improves the global performance of the model and, more specifically, reduces the error in the computed ice thickness. A simple optimal interpolation scheme is used, and outputs from a control run and from perturbed experiments without and with data assimilation are thoroughly compared. Our results indicate that, under certain conditions depending on the assimilation weights and the type of model error, the assimilation of ice velocity data enhances the model performance. The assimilation of ice concentration data can also help improve the model behavior, but it has to be handled with care because of the strong connection between ice concentration and ice thickness. This study is a first step towards real data assimilation into NEMO-LIM, a global sea ice-ocean model.
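A minimal sketch of an optimal interpolation update of the kind used in such twin experiments, assuming scalar (diagonal) forecast and observation error variances; all values are illustrative:

```python
import numpy as np

# Optimal interpolation with scalar error variances: the analysis is the
# forecast nudged toward the observation with gain K = var_f/(var_f+var_o).
# The state and variances below are illustrative.

def oi_update(forecast, obs, var_f, var_o):
    """Analysis = forecast + K * (obs - forecast)."""
    gain = var_f / (var_f + var_o)
    return forecast + gain * (obs - forecast)

ice_conc_model = np.array([0.9, 0.7, 0.5])  # forecast ice concentration
ice_conc_obs = np.array([0.8, 0.8, 0.4])    # observations (e.g. satellite)
analysis = oi_update(ice_conc_model, ice_conc_obs, var_f=0.02, var_o=0.02)
print(analysis)  # equal variances -> halfway between forecast and obs
```

The choice of the two variances is exactly the "assimilation weights" dependence noted in the abstract: trusting observations too much (large gain) can degrade unobserved variables such as ice thickness.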

  12. A Mobile System for Measuring Water Surface Velocities Using Unmanned Aerial Vehicle and Large-Scale Particle Image Velocimetry

    Science.gov (United States)

    Chen, Y. L.

    2015-12-01

Measurement technologies for river flow velocity are divided into intrusive and nonintrusive methods. Intrusive methods require in-field operations; the measuring process is time consuming and can endanger both operator and instrument. Nonintrusive methods require fewer operators and reduce instrument damage, since nothing is attached directly to the flow. Nonintrusive measurements may use radar or image velocimetry to measure velocities at the water surface. Image velocimetry, such as large-scale particle image velocimetry (LSPIV), yields not only point velocities but the flow velocities over an area simultaneously, holding the promise of providing spatial information on flow fields. This study constructs a mobile system, UAV-LSPIV, that combines an unmanned aerial vehicle (UAV) with LSPIV to measure flows in the field. The mobile system consists of a six-rotor UAV helicopter, a Sony NEX-5T camera, a gimbal, an image transfer device, a ground station and a remote control device. The actuated gimbal helps keep the camera lens orthogonal to the water surface, reducing image distortion, and the image transfer device allows the captured images to be monitored instantly. The operator controls the UAV with the remote control device through the ground station and can retrieve flight data such as flying height and the GPS coordinates of the UAV. The mobile system was then applied to field experiments. The deviation between velocities measured by UAV-LSPIV in the field and by a handheld acoustic Doppler velocimeter (ADV) is under 8%. These results suggest that UAV-LSPIV can be effectively applied to surface flow studies.
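The core of LSPIV is locating the displacement of a surface tracer pattern between consecutive frames, commonly via cross-correlation; a toy sketch on synthetic frames (omitting the interrogation windows and orthorectification a real LSPIV pipeline would add):

```python
import numpy as np

# Recover the displacement of a tracer pattern between two frames from the
# peak of their circular cross-correlation (computed via FFT). Frames are
# synthetic random texture.

def piv_shift(frame_a, frame_b):
    """Integer (dy, dx) shift of frame_b relative to frame_a."""
    fa = np.fft.fft2(frame_a - frame_a.mean())
    fb = np.fft.fft2(frame_b - frame_b.mean())
    corr = np.fft.ifft2(fa.conj() * fb).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    # wrap peak indices into signed shifts
    return (int((dy + ny // 2) % ny - ny // 2),
            int((dx + nx // 2) % nx - nx // 2))

rng = np.random.default_rng(3)
a = rng.normal(size=(64, 64))
b = np.roll(a, shift=(2, 5), axis=(0, 1))  # pattern advected by 2 rows, 5 cols
print(piv_shift(a, b))  # (2, 5)
```

Dividing the displacement by the frame interval, and scaling pixels to metres via the camera geometry, converts the recovered shift into a surface velocity.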

  13. Constraining gravity at the largest scales through CMB lensing and galaxy velocities

    Science.gov (United States)

    Pullen, Anthony R.; Alam, Shadab; He, Siyu; Ho, Shirley

    2016-08-01

    We demonstrate a new method to constrain gravity on the largest cosmological scales by combining measurements of cosmic microwave background (CMB) lensing and the galaxy velocity field. EG is a statistic, constructed from a gravitational lensing tracer and a measure of velocities such as redshift-space distortions (RSD), that can discriminate between gravity models while being independent of clustering bias and σ8. While traditionally, the lensing field for EG has been probed through galaxy lensing, CMB lensing has been proposed as a more robust tracer of the lensing field for EG at higher redshifts while avoiding intrinsic alignments. We perform the largest-scale measurement of EG ever, up to 150 Mpc h-1, by cross-correlating the Planck CMB lensing map with the Sloan Digital Sky Survey III (SDSS-III) CMASS galaxy sample and combining this with our measurement of the CMASS auto-power spectrum and the RSD parameter β. We report EG(z = 0.57) = 0.243 ± 0.060 (stat) ± 0.013 (sys), a measurement in tension with the general relativity (GR) prediction at a level of 2.6σ. Note that our EG measurement deviates from GR only at scales greater than 80 Mpc h-1, scales which have not been probed by previous EG tests. Upcoming surveys, which will provide an order-of-magnitude reduction in statistical errors, can significantly constrain alternative gravity models when combined with better control of systematics.

  14. Constraining the baryon-dark matter relative velocity with the large-scale three-point correlation function of the SDSS BOSS DR12 CMASS galaxies

    Science.gov (United States)

    Slepian, Zachary; Eisenstein, Daniel J.; Blazek, Jonathan A.; Brownstein, Joel R.; Chuang, Chia-Hsun; Gil-Marín, Héctor; Ho, Shirley; Kitaura, Francisco-Shu; McEwen, Joseph E.; Percival, Will J.; Ross, Ashley J.; Rossi, Graziano; Seo, Hee-Jong; Slosar, Anže; Vargas-Magaña, Mariana

    2018-02-01

We search for a galaxy clustering bias due to a modulation of galaxy number with the baryon-dark matter relative velocity resulting from recombination-era physics. We find no detected signal and place a constraint on the relative velocity bias bv, which is relevant to baryon acoustic oscillation (BAO) method measurements of the cosmic distance scale using two-point clustering. Our limit on the relative velocity bias indicates a systematic shift of no more than 0.3 per cent rms in the distance scale inferred from the BAO feature in the BOSS two-point clustering, well below the 1 per cent statistical error of this measurement. This constraint is the most stringent currently available and has important implications for the ability of upcoming large-scale structure surveys such as the Dark Energy Spectroscopic Instrument (DESI) to self-protect against the relative velocity as a possible systematic.

  15. Apparent Dependence of Rate- and State-Dependent Friction Parameters on Loading Velocity and Cumulative Displacement Inferred from Large-Scale Biaxial Friction Experiments

    Science.gov (United States)

    Urata, Yumi; Yamashita, Futoshi; Fukuyama, Eiichi; Noda, Hiroyuki; Mizoguchi, Kazuo

    2017-06-01

We investigated the constitutive parameters of the rate- and state-dependent friction (RSF) law by conducting numerical simulations using friction data from large-scale biaxial rock friction experiments on Indian metagabbro. The sliding surface was 1.5 m long and 0.5 m wide, and was slid for 400 s under a normal stress of 1.33 MPa at a loading velocity of either 0.1 or 1.0 mm/s. During the experiments many stick-slip events were observed, with the following features. (1) The friction drop and the recurrence time of the stick-slip events increased with cumulative slip displacement in an experiment before which the gouge on the surface had been removed, but both became almost constant throughout an experiment conducted after several experiments without gouge removal. (2) The friction drop was larger and the recurrence time shorter in the experiments with faster loading velocity. We applied a one-degree-of-freedom spring-slider model with mass to estimate the RSF parameters, fitting the stick-slip intervals and slip-weakening curves measured from the spring force and the acceleration of the specimens. We developed an efficient algorithm for the numerical time integration and conducted forward modeling for the evolution parameter (b) and the state-evolution distance (L_c), keeping the direct-effect parameter (a) constant. We then identified the confident range of b and L_c values. Comparison between the experiments and our simulations suggests that both b and L_c increase as the cumulative slip displacement increases, and that b increases and L_c decreases as the loading velocity increases. Conventional RSF laws could not explain the large-scale friction data, and more complex state evolution laws are needed.
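The parameters a, b and L_c above enter the standard RSF law; a minimal sketch of its steady-state form (with the aging law for state evolution, and illustrative parameter values rather than the fitted metagabbro values):

```python
import math

# Steady-state rate-and-state friction: with the aging law
# d(theta)/dt = 1 - v*theta/L, steady sliding gives theta = L/v, so
# mu_ss(v) = mu0 + (a - b) * ln(v/v0). Velocity-weakening (b > a) is the
# condition for stick-slip. Parameter values here are illustrative.

def mu_ss(v, mu0=0.6, a=0.005, b=0.010, v0=1e-6):
    """Steady-state friction coefficient at slip rate v (m/s)."""
    return mu0 + (a - b) * math.log(v / v0)

# raising the loading velocity tenfold lowers steady-state friction
drop = mu_ss(1e-6) - mu_ss(1e-5)
print(round(drop, 4))  # (b - a) * ln(10)
```

In the full spring-slider simulations, a, b and L_c additionally control the stick-slip stress drop and recurrence time, which is what the fitting in the abstract exploits.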

  16. Rectification of Image Velocity Results (RIVeR): A simple and user-friendly toolbox for large scale water surface Particle Image Velocimetry (PIV) and Particle Tracking Velocimetry (PTV)

    Science.gov (United States)

    Patalano, Antoine; García, Carlos Marcelo; Rodríguez, Andrés

    2017-12-01

LSPIV (Large Scale Particle Image Velocimetry) and LSPTV (Large Scale Particle Tracking Velocimetry) are relatively low-cost, non-intrusive techniques for water-surface velocity analysis and flow discharge measurement in rivers and large-scale hydraulic models. This paper describes a methodology, built on state-of-the-art tools that apply classical PIV/PTV analysis, for large-scale surface-flow characterization, implemented in the first operational version of RIVeR (Rectification of Image Velocity Results). RIVeR is developed in Matlab and designed to be user-friendly. It produces large-scale water-surface characterizations such as velocity fields or individual trajectories of floating tracers. This work describes the wide range of applications of these techniques, from comparing measured surface flows in hydraulic physical models to estimating discharge for a wide range of flow events in rivers (for example, low and high flows).

  17. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so large-scale hydrogen production plants will need to be installed. In this context, the development of low-cost, large-scale electrolysers that could run on 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and centre of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large-scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, a state of the art of the electrolysis modules currently available was compiled, and a review of the large-scale electrolysis plants installed around the world was carried out. The main projects related to large-scale electrolysis were also listed, and the economics of large-scale electrolysers were discussed, including the influence of energy prices on the cost of hydrogen produced by large-scale electrolysis. (authors)

  18. Surface flux transport simulations: Effect of inflows toward active regions and random velocities on the evolution of the Sun's large-scale magnetic field

    Science.gov (United States)

    Martin-Belda, D.; Cameron, R. H.

    2016-02-01

    Aims: We aim to determine the effect of converging flows on the evolution of a bipolar magnetic region (BMR), and to investigate the role of these inflows in the generation of poloidal flux. We also discuss whether the flux dispersal due to turbulent flows can be described as a diffusion process. Methods: We developed a simple surface flux transport model based on point-like magnetic concentrations. We tracked the tilt angle, the magnetic flux and the axial dipole moment of a BMR in simulations with and without inflows and compared the results. To test the diffusion approximation, simulations of random walk dispersal of magnetic features were compared against the predictions of the diffusion treatment. Results: We confirm the validity of the diffusion approximation to describe flux dispersal on large scales. We find that the inflows enhance flux cancellation, but at the same time affect the latitudinal separation of the polarities of the bipolar region. In most cases the latitudinal separation is limited by the inflows, resulting in a reduction of the axial dipole moment of the BMR. However, when the initial tilt angle of the BMR is small, the inflows produce an increase in latitudinal separation that leads to an increase in the axial dipole moment in spite of the enhanced flux destruction. This can give rise to a tilt of the BMR even when the BMR was originally aligned parallel to the equator.
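The random-walk versus diffusion comparison described in the Methods can be sketched directly: disperse point-like features by an isotropic random walk and check the mean squared displacement against the diffusion prediction ⟨r²⟩ = 4Dt (a 2-D sketch with arbitrary step length, not the paper's flux-transport code):

```python
import numpy as np

# Disperse features by an isotropic 2-D random walk and compare the mean
# squared displacement with the diffusion prediction <r^2> = 4*D*t,
# where D = step^2 / 4 for unit time steps. Numbers are arbitrary.

rng = np.random.default_rng(4)
n_features, n_steps, step = 5000, 400, 1.0

angles = rng.uniform(0.0, 2.0 * np.pi, size=(n_steps, n_features))
x = np.cumsum(step * np.cos(angles), axis=0)   # trajectories, all features
y = np.cumsum(step * np.sin(angles), axis=0)
msd = float((x[-1] ** 2 + y[-1] ** 2).mean())  # <r^2> after n_steps

D = step ** 2 / 4.0
ratio = msd / (4.0 * D * n_steps)
print(round(ratio, 2))  # ~ 1.0 when the diffusion description holds
```

Agreement of this ratio with unity on large scales is the sense in which the abstract confirms the diffusion approximation for flux dispersal.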

  19. LARGE SCALE GLAZED

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

World famous architects today challenge the exposure of concrete in their architecture. It is my hope to be able to complement these. I try to develop new aesthetic potentials for concrete and ceramics, at large scales that have not been seen before in the ceramic area. It is expected to resul...

  20. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

The large-scale reflood test, aimed at ensuring the safety of light water reactors, was started in fiscal 1976 under the special account act for power source development promotion measures, by entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents through joint international effort, the Japan-West Germany-U.S. research cooperation program was started in April 1980, and the large-scale reflood test is now included in this program. It consists of two tests: one using a cylindrical core testing apparatus to examine the overall system effect, and one using a plate core testing apparatus to test individual effects. Each apparatus is composed of mock-ups of the pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  1. Anomalous scaling of passive scalar fields advected by the Navier-Stokes velocity ensemble: effects of strong compressibility and large-scale anisotropy.

    Science.gov (United States)

    Antonov, N V; Kostenko, M M

    2014-12-01

The field theoretic renormalization group and the operator product expansion are applied to two models of passive scalar quantities (the density and the tracer fields) advected by a random turbulent velocity field. The latter is governed by the Navier-Stokes equation for compressible fluid, subject to external random force with the covariance ∝ δ(t-t') k^(4-d-y), where d is the dimension of space and y is an arbitrary exponent. The original stochastic problems are reformulated as multiplicatively renormalizable field theoretic models; the corresponding renormalization group equations possess infrared attractive fixed points. It is shown that various correlation functions of the scalar field, its powers and gradients, demonstrate anomalous scaling behavior in the inertial-convective range already for small values of y. The corresponding anomalous exponents, identified with scaling (critical) dimensions of certain composite fields ("operators" in the quantum-field terminology), can be systematically calculated as series in y. The practical calculation is performed in the leading one-loop approximation, including exponents in anisotropic contributions. It should be emphasized that, in contrast to Gaussian ensembles with finite correlation time, the model and the perturbation theory presented here are manifestly Galilean covariant. The validity of the one-loop approximation and comparison with Gaussian models are briefly discussed.

  2. Large-scale vertical velocity, diabatic heating and drying profiles associated with seasonal and diurnal variations of convective systems observed in the GoAmazon2014/5 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Shuaiqi; Xie, Shaocheng; Zhang, Yunyan; Zhang, Minghua; Schumacher, Courtney; Upton, Hannah; Jensen, Michael P.; Johnson, Karen L.; Wang, Meng; Ahlgrimm, Maike; Feng, Zhe; Minnis, Patrick; Thieman, Mandana

    2016-01-01

    This study describes the characteristics of large-scale vertical velocity, apparent heating source (Q1) and apparent moisture sink (Q2) profiles associated with seasonal and diurnal variations of convective systems observed during the two intensive operational periods (IOPs) that were conducted from 15 February to 26 March 2014 (wet season) and from 1 September to 10 October 2014 (dry season) near Manaus, Brazil, during the Green Ocean Amazon (GoAmazon2014/5) experiment. The derived large-scale fields have large diurnal variations according to convective activity in the GoAmazon region and the morning profiles show distinct differences between the dry and wet seasons. In the wet season, propagating convective systems originating far from the GoAmazon region are often seen in the early morning, while in the dry season they are rarely observed. Afternoon convective systems due to solar heating are frequently seen in both seasons. Accordingly, in the morning, there is strong upward motion and associated heating and drying throughout the entire troposphere in the wet season, which is limited to lower levels in the dry season. In the afternoon, both seasons exhibit weak heating and strong moistening in the boundary layer related to the vertical convergence of eddy fluxes. A set of case studies of three typical types of convective systems occurring in Amazonia – i.e., locally occurring systems, coastal-occurring systems and basin-occurring systems – is also conducted to investigate the variability of the large-scale environment with different types of convective systems.

  3. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results are described from testing the material resistance to fracture (non-ductile). The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. The results are also presented of nozzles of 850 mm inner diameter in a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  4. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out...... based on measurements on the Marstal plant, Denmark, and through comparison with published and unpublished data from other plants. Evaluations on the thermal, economic and environmental performance are reported, based on experiences from the last decade. For detailed designing, a computer simulation...... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors...

  5. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  6. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ('the Task') and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  7. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  8. Large scale inhomogeneities and the cosmological principle

    International Nuclear Information System (INIS)

    Lukacs, B.; Meszaros, A.

    1984-12-01

The compatibility of cosmological principles with possible large-scale inhomogeneities of the Universe is discussed. It seems that the strongest symmetry principle still compatible with reasonable inhomogeneities is full conformal symmetry in the 3-space defined by the cosmological velocity field; but even in such a case, the standard model is isolated from the inhomogeneous ones when the whole evolution is considered. (author)

  9. Gravitational lensing and microlensing

    CERN Document Server

    Mollerach, Silvia

    2002-01-01

This book provides a comprehensive and self-contained exposition of gravitational lensing phenomena. It presents the up-to-date status of gravitational lensing and microlensing, covering the cosmological applications of the observed lensing by galaxies, clusters and the large scale structures, as well as the microlensing searches in the Local Group and its applications to unveil the nature of the galactic dark matter, the search for planetary objects and the distribution of faint stars in our galaxy. Gravitational Lensing and Microlensing is pitched at the level of the graduate student interested...

  10. Doppler term in the galaxy two-point correlation function: Wide-angle, velocity, Doppler lensing and cosmic acceleration effects

    Science.gov (United States)

    Raccanelli, Alvise; Bertacca, Daniele; Jeong, Donghui; Neyrinck, Mark C.; Szalay, Alexander S.

    2018-03-01

We study the parity-odd part (which we shall call the Doppler term) of the linear galaxy two-point correlation function that arises from wide-angle, velocity, Doppler lensing and cosmic acceleration effects. Because it is important only at low redshift and at large angular separations, the Doppler term is usually neglected in the current generation of galaxy surveys. For future wide-angle galaxy surveys, however, we show that the Doppler term must be included. The effect of these terms is dominated by the magnification due to relativistic aberration effects and the slope of the galaxy redshift distribution, and it generally mimics the effect of local-type primordial non-Gaussianity with an effective nonlinearity parameter f_NL^eff of a few; we show that this would affect forecasts on measurements of f_NL at low redshift. Our results show that a survey at low redshift with large number density over a wide area of the sky could detect the Doppler term with a signal-to-noise ratio of ~1-20, depending on survey specifications.

  11. Theory of pixel lensing towards M31 II, The velocity anisotropy and flattening of the MACHO distribution

    CERN Document Server

    Kerins, E; Evans, N W; Baillon, Paul; Carr, B J; Giraud-Héraud, Yannick; Gould, A; Hewett, P C; Kaplan, J; Paulin-Henriksson, S; Smartt, S J; Tsapras, Y; Valls-Gabaud, D

    2003-01-01

    The POINT-AGAPE collaboration is currently searching for massive compact halo objects (MACHOs) towards the Andromeda galaxy (M31). The survey aims to exploit the high inclination of the M31 disk, which causes an asymmetry in the spatial distribution of M31 MACHOs. Here, we investigate the effects of halo velocity anisotropy and flattening on the asymmetry signal using simple halo models. For a spherically symmetric and isotropic halo, we find that the underlying pixel-lensing rate in far-disk M31 MACHOs is more than 5 times the rate of near-disk events. We find that the asymmetry is increased further by about 30% if the MACHOs occupy radial orbits rather than tangential orbits, but is substantially reduced if the MACHOs lie in a flattened halo. However, even for haloes with a minor-to-major axis ratio q = 0.3, the numbers of M31 MACHOs in the far-side outnumber those in the near-side by a factor of ~2. We show that, if positional information is exploited in addition to number counts, then the number of candid...

  12. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

We discuss a possible connection between large-scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h⁻¹ Mpc in the distribution of the visible matter of the universe is provided. The possibility of generating a periodic distribution with the characteristic scale 120 h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low-temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  13. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  14. Large Scale Glazed Concrete Panels

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

... in the crinkly façade of DR-Byen (the domicile of the Danish Broadcasting Company) by architect Jean Nouvel and Zaha Hadid’s Ordrupgård’s black curved smooth concrete surfaces. Furthermore, one can point to initiatives such as “Synlig beton” (visible concrete) that can be seen on the website www.synligbeton.dk and spæncom’s aesthetic relief effects by the designer Line Kramhøft (www.spaencom.com). It is my hope that the research-development project “Lasting large scale glazed concrete formwork,” I am working on at DTU, department of Architectural Engineering will be able to complement these. It is a project where I try to develop new aesthetic potentials for the concrete, in large scales that has not been seen before in the ceramic area. It is expected to result in new types of large scale and very thin, glazed concrete façades in building. If such are introduced in an architectural context as exposed surfaces...

  15. Temporal Variation of Large Scale Flows in the Solar Interior ...

    Indian Academy of Sciences (India)

Figure 2. Zonal and meridional components of the time-dependent residual velocity at a few selected depths, as marked above each panel, are plotted as contours of constant velocity in the longitude-latitude plane. The left panels show the zonal component, ...

  16. Temporal Variation of Large Scale Flows in the Solar Interior ...

    Indian Academy of Sciences (India)

We attempt to detect short-term temporal variations in the rotation rate and other large-scale velocity fields in the outer part of the solar convection zone using the ring diagram technique applied to Michelson Doppler Imager (MDI) data. The measured velocity field shows variations by about 10 m/s on the scale of ...

  17. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro...... peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...

  18. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  19. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U. S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT & T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...

  20. Large-scale river regulation

    International Nuclear Information System (INIS)

    Petts, G.

    1994-01-01

    Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)

  1. Reviving large-scale projects

    International Nuclear Information System (INIS)

    Desiront, A.

    2003-01-01

For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890 foot long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both

  2. Large scale homing in honeybees.

    Directory of Open Access Journals (Sweden)

    Mario Pahl

Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama.

  3. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of a large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of a large-scale magnetic field. For this purpose, we perform a nonhelical magnetohydrodynamic (MHD) simulation and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at a scale of 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to energy transfers from the velocity field at the forcing scales.
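As an illustration of the kind of spectral diagnostic mentioned in the abstract (a minimal sketch, not the authors' code), magnetic energy can be binned into spherical shells in wavenumber space; growth of the small-k shells then signals large-scale field amplification. The grid size and the random field below are placeholders.

```python
import numpy as np

def magnetic_shell_spectrum(b):
    """Bin the magnetic energy of a field on a periodic N^3 grid into
    spherical shells in wavenumber space; b has shape (3, N, N, N)."""
    N = b.shape[-1]
    bh = np.fft.fftn(b, axes=(1, 2, 3)) / N**3        # normalized Fourier amplitudes
    e3d = 0.5 * np.sum(np.abs(bh)**2, axis=0)         # magnetic energy per mode
    k1d = np.fft.fftfreq(N, d=1.0 / N)                # integer wavenumbers
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    shell = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)
    spec = np.zeros(shell.max() + 1)                  # E_M(k), k = 0 .. k_max
    np.add.at(spec, shell.ravel(), e3d.ravel())       # accumulate mode energies per shell
    return spec

# Illustrative use on a random placeholder field; in the paper's setup the
# forcing would act near 1/10 of the box scale, and one would monitor the
# low-k shells over time to diagnose large-scale growth.
rng = np.random.default_rng(0)
N = 32
b = rng.standard_normal((3, N, N, N))
spec = magnetic_shell_spectrum(b)
large_scale_energy = spec[1:4].sum()                  # energy in the largest scales
```

By Parseval's theorem, the shell energies sum to the mean magnetic energy density of the field, which gives a quick consistency check on the binning.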

  4. Large-scale galaxy bias

    Science.gov (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  5. Large-Scale Information Systems

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  6. Large-scale galaxy bias

    Science.gov (United States)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.
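Schematically, and in commonly used notation (assumed here, not quoted from the review), the perturbative bias expansion described above takes the form:

```latex
\delta_g(\mathbf{x},\tau) \;=\; b_1\,\delta(\mathbf{x},\tau)
  \;+\; \tfrac{b_2}{2}\,\delta^2(\mathbf{x},\tau)
  \;+\; b_{K^2}\,\big(K_{ij}K^{ij}\big)(\mathbf{x},\tau)
  \;+\; \dots \;+\; \epsilon(\mathbf{x},\tau),
```

where δ is the matter density contrast, K_ij the tidal field, the b's are the bias parameters absorbing the small-scale physics of galaxy formation, and ε is a stochastic contribution uncorrelated with the large-scale fields.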

  7. Handbook of Large-Scale Random Networks

    CERN Document Server

    Bollobas, Bela; Miklos, Dezso

    2008-01-01

    Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as areas in physics, biology, neuroscience, sociology and technical areas

  8. Measuring Cosmic Expansion and Large Scale Structure with Destiny

    Science.gov (United States)

    Benford, Dominic J.; Lauer, Tod R.

    2007-01-01

Destiny is a simple, direct, low cost mission to determine the properties of dark energy by obtaining a cosmologically deep supernova (SN) type Ia Hubble diagram and by measuring the large-scale mass power spectrum over time. Its science instrument is a 1.65m space telescope, featuring a near-infrared survey camera/spectrometer with a large field of view. During its first two years, Destiny will detect, observe, and characterize 23000 SN Ia events over the redshift interval 0.4 < z < 1.7. Destiny will be used in its third year as a high resolution, wide-field imager to conduct a weak lensing survey covering >1000 square degrees to measure the large-scale mass power spectrum. The combination of surveys is much more powerful than either technique on its own, and will have over an order of magnitude greater sensitivity than will be provided by ongoing ground-based projects.

  9. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

In mining enterprises, a drilling and blasting method is used to prepare hard rocks for excavation. As mining operations approach settlements, the negative effect of large-scale blasts increases. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-digital converter with recording on a laptop. The registration results of surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The safety evaluation of the seismic effect was carried out according to the permissible value of vibration velocity. For cases exceeding permissible values, recommendations were developed to reduce the level of seismic impact.
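The safety evaluation described above compares the recorded peak particle velocity against a permissible limit. A minimal sketch of that comparison (the trace, sampling, and limit value below are all illustrative placeholders, not values from the study):

```python
import numpy as np

def ppv_assessment(v, v_permissible):
    """Return the peak particle velocity (PPV, m/s) of a ground-velocity
    trace and whether it stays within the permissible limit."""
    ppv = float(np.max(np.abs(v)))
    return ppv, ppv <= v_permissible

# Synthetic decaying vibration trace standing in for a recorded blast signal
t = np.linspace(0.0, 1.0, 2000)                       # 1 s record
trace = 0.004 * np.exp(-5.0 * t) * np.sin(2.0 * np.pi * 25.0 * t)
ppv, within_limit = ppv_assessment(trace, v_permissible=0.005)  # assumed limit
```

In practice the permissible velocity depends on the structure class and applicable standard; here it is a placeholder number chosen only to make the example self-contained.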

  10. Large-scale Flow and Transport of Magnetic Flux in the Solar ...

    Indian Academy of Sciences (India)

    tribpo

    Abstract. The horizontal large-scale velocity field describes the horizontal displacement of the photospheric magnetic flux in the zonal and meridional directions. The flow systems of solar plasma, constructed according to the velocity field, create large-scale cellular-like patterns with up-flow in the center and down-flow on the ...

  11. Signatures of non-universal large scales in conditional structure functions from various turbulent flows

    International Nuclear Information System (INIS)

    Blum, Daniel B; Voth, Greg A; Bewley, Gregory P; Bodenschatz, Eberhard; Gibert, Mathieu; Xu Haitao; Gylfason, Ármann; Mydlarski, Laurent; Yeung, P K

    2011-01-01

    We present a systematic comparison of conditional structure functions in nine turbulent flows. The flows studied include forced isotropic turbulence simulated on a periodic domain, passive grid wind tunnel turbulence in air and in pressurized SF₆, active grid wind tunnel turbulence (in both synchronous and random driving modes), the flow between counter-rotating discs, oscillating grid turbulence and the flow in the Lagrangian exploration module (in both constant and random driving modes). We compare longitudinal Eulerian second-order structure functions conditioned on the instantaneous large-scale velocity in each flow to assess the ways in which the large scales affect the small scales in a variety of turbulent flows. Structure functions are shown to have larger values when the large-scale velocity significantly deviates from the mean in most flows, suggesting that dependence on the large scales is typical in many turbulent flows. The effects of the large-scale velocity on the structure functions can be quite strong, with the structure function varying by up to a factor of 2 when the large-scale velocity deviates from the mean by ±2 standard deviations. In several flows, the effects of the large-scale velocity are similar at all the length scales we measured, indicating that the large-scale effects are scale independent. In a few flows, the effects of the large-scale velocity are larger on the smallest length scales. (paper)
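
The conditioning described above can be sketched numerically: estimate the large-scale velocity with a low-pass filter, then average squared increments only where that estimate deviates strongly from the mean. A toy 1-D sketch with synthetic data; the moving-average width and the deviation threshold are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def conditional_structure_function(u, r, n_large=64, threshold=1.0):
    """Second-order structure function of a 1-D velocity signal u,
    conditioned on the magnitude of the local large-scale velocity.

    The large-scale velocity is estimated with a moving average over
    n_large samples; samples where it deviates from its mean by more
    than `threshold` standard deviations form the conditioned set."""
    u = np.asarray(u, dtype=float)
    kernel = np.ones(n_large) / n_large
    u_large = np.convolve(u, kernel, mode="same")
    deviation = np.abs(u_large - u_large.mean()) / u_large.std()
    du2 = (u[r:] - u[:-r]) ** 2              # squared increments at lag r
    cond = deviation[:-r] > threshold        # conditioned on large scales
    s2_all = du2.mean()
    s2_cond = du2[cond].mean() if cond.any() else np.nan
    return s2_all, s2_cond

# synthetic signal: small-scale noise whose amplitude grows with the
# local large-scale velocity, mimicking the dependence reported above
rng = np.random.default_rng(0)
x = np.linspace(0, 8 * np.pi, 4096)
large = np.sin(x)                            # large-scale velocity
u = large + 0.3 * (1 + 0.5 * np.abs(large)) * rng.standard_normal(x.size)
s2_all, s2_cond = conditional_structure_function(u, r=4)
print(s2_all, s2_cond)   # conditioned value exceeds the unconditional one
```

Because the synthetic noise amplitude is tied to the large-scale signal, the conditioned structure function comes out larger than the unconditional one, qualitatively matching the behaviour the paper reports.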

  12. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  13. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  14. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issues...

  15. SDI Large-Scale System Technology Study

    National Research Council Canada - National Science Library

    1986-01-01

    .... This coordination is addressed by the Battle Management function. The algorithms and technologies required to support Battle Management are the subject of the SDC Large Scale Systems Technology Study...

  16. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is an observed phenomenon that galaxies located in the same region have similar properties such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos (“one-halo conformity”). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations however further witnessed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  17. Solving large scale structure in ten easy steps with COLA

    International Nuclear Information System (INIS)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-01-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed

  18. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need for addressing energy issues in relation with science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  19. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Experience in operating and developing a large scale computerized system shows that the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases

  20. Large-scale nanophotonic phased array.

    Science.gov (United States)

    Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R

    2013-01-10

    Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.
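
The beam steering and shaping demonstrated above rest on standard phased-array physics: a linear phase ramp across the elements shifts the far-field main lobe. A sketch of the array factor of a uniform linear array (the 2-D case is the product of two such factors); the element count, half-wavelength pitch and 5° steering angle are illustrative choices, not the parameters of the demonstration:

```python
import numpy as np

def array_factor(n, pitch, wavelength, theta, steer_theta=0.0):
    """Normalised far-field array factor of an n-element uniform
    linear array. `pitch` and `wavelength` share the same units;
    angles are in radians. Each element gets a linear phase ramp
    chosen to cancel the path difference at steer_theta."""
    k = 2 * np.pi / wavelength
    positions = np.arange(n) * pitch
    phases = -k * positions * np.sin(steer_theta)
    field = np.exp(1j * (k * positions[None, :] * np.sin(theta)[:, None]
                         + phases[None, :])).sum(axis=1)
    return np.abs(field) / n

# half-wavelength pitch avoids grating lobes; steer the beam to 5 deg
theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
af = array_factor(n=64, pitch=0.775, wavelength=1.55, theta=theta,
                  steer_theta=np.deg2rad(5.0))
peak = theta[np.argmax(af)]
print(np.rad2deg(peak))   # main lobe sits near the 5 degree steering angle
```

With a pitch larger than half a wavelength the same sum develops grating lobes, which is one reason dense integration at optical wavelengths is hard, as the abstract notes.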

  1. Large scale EMF in current sheets induced by tearing modes

    Science.gov (United States)

    Mizerski, Krzysztof A.

    2018-02-01

    An extension of the analysis of resistive instabilities of a sheet pinch from a famous work by Furth et al (1963 Phys. Fluids 6 459) is presented here, to study the mean electromotive force (EMF) generated by the developing instability. In a Cartesian configuration and in the presence of a current sheet, first the boundary layer technique is used to obtain global, matched asymptotic solutions for the velocity and magnetic field, and then the solutions are used to calculate the large-scale EMF in the system. It is reported that in the bulk the curl of the mean EMF is linear in j_0 · B_0, a simple pseudo-scalar quantity constructed from the large-scale quantities.

  2. Large-scale Motion of Solar Filaments

    Indian Academy of Sciences (India)

    tribpo

    Large-scale Motion of Solar Filaments. Pavel Ambrož, Astronomical Institute of the Acad. Sci. of the Czech Republic, CZ-25165. Ondrejov, The Czech Republic. e-mail: pambroz@asu.cas.cz. Alfred Schroll, Kanzelhöhe Solar Observatory of the University of Graz, A-9521 Treffen, Austria. e-mail: schroll@solobskh.ac.at.

  3. Configuration management in large scale infrastructure development

    NARCIS (Netherlands)

    Rijn, T.P.J. van; Belt, H. van de; Los, R.H.

    2000-01-01

    Large Scale Infrastructure (LSI) development projects such as the construction of roads, railways and other civil engineering (water)works are tendered differently today than a decade ago. Traditional workflow requested quotes from construction companies for construction works where the works to be

  4. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large scale structure, show how such ...

  5. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  6. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    , which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  7. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  8. Fractals and cosmological large-scale structure

    Science.gov (United States)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations, as well as other large-scale structure, can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.
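
A fractal dimension like the D ≈ 1.2 quoted above is typically estimated from the slope of the two-point correlation integral, C(r) ∝ r^D. A sketch on a synthetic point set; the two-scale slope estimator and the chosen scale range are illustrative simplifications:

```python
import numpy as np

def correlation_dimension(points, r1, r2):
    """Estimate a fractal dimension from the slope of log C(r)
    between r1 and r2, where C(r) is the fraction of point pairs
    separated by less than r (the correlation integral)."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)[np.triu_indices(len(pts), k=1)]
    c1 = np.mean(dists < r1)
    c2 = np.mean(dists < r2)
    return (np.log(c2) - np.log(c1)) / (np.log(r2) - np.log(r1))

# sanity check: uniform points on a line embedded in 2-D should give
# a dimension close to 1
rng = np.random.default_rng(1)
t = rng.random(800)
line = np.stack([t, 0.5 * t], axis=1)
print(correlation_dimension(line, r1=0.05, r2=0.2))   # close to 1
```

Real analyses fit the slope over many radii and must handle survey boundaries and selection effects, which this toy estimator ignores.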

  9. Large scale processing of dielectric electroactive polymers

    DEFF Research Database (Denmark)

    Vudayagiri, Sindhu

    Efficient processing techniques are vital to the success of any manufacturing industry. The processing techniques determine the quality of the products and thus to a large extent the performance and reliability of the products that are manufactured. The dielectric electroactive polymer (DEAP) technology is relatively new and is in the initial stages of development with no established large scale manufacturing techniques. Danfoss Polypower A/S has set up a large scale manufacturing process to make thin film DEAP transducers. The DEAP transducers developed by Danfoss Polypower consist of microstructured elastomer surfaces on which the compliant metallic electrodes are sputtered, thus enabling large strains of the non-stretchable metal electrode. Thin microstructured polydimethylsiloxane (PDMS) films are quintessential in DEAP technology due to scaling of their actuation strain with the reciprocal...

  10. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    ...the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its main focus. Here the general perception of the nature and role in society of large scale networks as a fundamental infrastructure is analysed. This analysis focuses on the effects of the technical DDN projects and on the perception of network infrastructure as expressed by key decision makers...

  11. Challenges for Large Scale Structure Theory

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    I will describe some of the outstanding questions in Cosmology where answers could be provided by observations of the Large Scale Structure of the Universe at late times.I will discuss some of the theoretical challenges which will have to be overcome to extract this information from the observations. I will describe some of the theoretical tools that might be useful to achieve this goal. 

  12. Large-scale computer-aided design

    OpenAIRE

    Adeli, Hojjat

    1997-01-01

    The author and his associates have been working on creating novel design theories and computational models with two broad objectives: automation and optimization. This paper is a summary of the author's Keynote Lecture based on the research done by the author and his associates recently. Novel neurocomputing algorithms are presented for large-scale computer-aided design and optimization. This research demonstrates how a new level is achieved in design automation through the ingenious use and...

  13. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61 × 0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylmethacrylate; (2) gas dynamics and heat transport through a large scale ventilation system consisting of a 0.61 × 0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamics and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10 μm) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably due to the fire module of the code being a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  14. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  15. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    12 pages, 2 figures. This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  16. The consistency problems of large scale structure

    International Nuclear Information System (INIS)

    Schramm, D.N.

    1986-01-01

    Studies of the early universe are reviewed, with emphasis on galaxy formation, dark matter and the generation of large scale structure. The paper was presented at the conference on ''The early universe and its evolution'', Erice, Italy, 1986. Dark matter, Big Bang nucleosynthesis, baryonic halos, flatness arguments, cosmological constant, galaxy formation, neutrinos plus strings or explosions and string models, are all discussed. (U.K.)

  17. Large-Scale Visual Data Analysis

    Science.gov (United States)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  18. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  19. Large-scale motions in the universe: a review

    International Nuclear Information System (INIS)

    Burstein, D.

    1990-01-01

    The expansion of the universe can be retarded in localised regions within the universe both by the presence of gravity and by non-gravitational motions generated in the post-recombination universe. The motions of galaxies thus generated are called 'peculiar motions', and the amplitudes, size scales and coherence of these peculiar motions are among the most direct records of the structure of the universe. As such, measurements of these properties of the present-day universe provide some of the severest tests of cosmological theories. This is a review of the current evidence for large-scale motions of galaxies out to a distance of ∼5000 km s⁻¹ (in an expanding universe, distance is proportional to radial velocity). 'Large-scale' in this context refers to motions that are correlated over size scales larger than the typical sizes of groups of galaxies, up to and including the size of the volume surveyed. To orient the reader into this relatively new field of study, a short modern history is given together with an explanation of the terminology. Careful consideration is given to the data used to measure the distances, and hence the peculiar motions, of galaxies. The evidence for large-scale motions is presented in a graphical fashion, using only the most reliable data for galaxies spanning a wide range in optical properties and over the complete range of galactic environments. The kinds of systematic errors that can affect this analysis are discussed, and the reliability of these motions is assessed. The predictions of two models of large-scale motion are compared to the observations, and special emphasis is placed on those motions in which our own Galaxy directly partakes. (author)

  20. The kinematical structure of gravitationally lensed arcs

    NARCIS (Netherlands)

    Moller, O; Noordermeer, E

    2006-01-01

    In this paper, the expected properties of the velocity fields of strongly lensed arcs behind galaxy clusters are investigated. The velocity profile along typical lensed arcs is determined by ray-tracing light rays from a model source galaxy through parametric cluster toy models consisting of

  1. Experimental Investigation of Large-Scale Bubbly Plumes

    Energy Technology Data Exchange (ETDEWEB)

    Zboray, R.; Simiano, M.; De Cachard, F

    2004-03-01

    Carefully planned and instrumented experiments under well-defined boundary conditions have been carried out on large-scale, isothermal, bubbly plumes. The data obtained is meant to validate newly developed, high-resolution numerical tools for 3D transient, two-phase flow modelling. Several measurement techniques have been utilised to collect data from the experiments: particle image velocimetry, optical probes, electromagnetic probes, and visualisation. Bubble and liquid velocity fields, void-fraction distributions, bubble size and interfacial-area-concentration distributions have all been measured in the plume region, as well as recirculation velocities in the surrounding pool. The results obtained from the different measurement techniques have been compared. In general, the two-phase flow data obtained from the different techniques are found to be consistent, and of high enough quality for validating numerical simulation tools for 3D bubbly flows. (author)

  2. Experimental Investigation of Large-Scale Bubbly Plumes

    International Nuclear Information System (INIS)

    Zboray, R.; Simiano, M.; De Cachard, F.

    2004-01-01

    Carefully planned and instrumented experiments under well-defined boundary conditions have been carried out on large-scale, isothermal, bubbly plumes. The data obtained is meant to validate newly developed, high-resolution numerical tools for 3D transient, two-phase flow modelling. Several measurement techniques have been utilised to collect data from the experiments: particle image velocimetry, optical probes, electromagnetic probes, and visualisation. Bubble and liquid velocity fields, void-fraction distributions, bubble size and interfacial-area-concentration distributions have all been measured in the plume region, as well as recirculation velocities in the surrounding pool. The results obtained from the different measurement techniques have been compared. In general, the two-phase flow data obtained from the different techniques are found to be consistent, and of high enough quality for validating numerical simulation tools for 3D bubbly flows. (author)

  3. Hypersingular integral equations, waveguiding effects in Cantorian Universe and genesis of large scale structures

    International Nuclear Information System (INIS)

    Iovane, G.; Giordano, P.

    2005-01-01

    In this work we introduce the hypersingular integral equations and analyze a realistic model of gravitational waveguides on a Cantorian space-time. A waveguiding effect is considered with respect to the large scale structure of the Universe, where structure formation appears as if it were a classically self-similar random process at all astrophysical scales. The result is that it seems we live in El Naschie's ε(∞) Cantorian space-time, where gravitational lensing and waveguiding effects can explain the appearing Universe. In particular, we consider filamentary and planar large scale structures as possible refraction channels for electromagnetic radiation coming from cosmological structures. From this vision the Universe appears like a large set of self-similar adaptive mirrors, as illustrated by three numerical simulations. Consequently, an infinite Universe is just an optical illusion produced by mirroring effects connected with the large scale structure of a finite, and not large, Universe

  4. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.

    2016-02-15

    © 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. By contrast, the correlation coefficient is nearly constant throughout the mixing layer, and close to unity, if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small-scale) and from low-pass-filtered (large-scale) velocity fields tends to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.
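
A minimal 1D sketch of the correlation analysis described above, on synthetic signals (the DNS data are not reproduced; signal shapes, mode counts and amplitudes are illustrative assumptions): small-scale fluctuations whose local intensity is modulated by the large-scale gradient yield a high correlation between the low-pass-filtered gradient and the local small-scale r.m.s.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

# Synthetic "large-scale" velocity and its gradient.
u_large = np.sin(3.0 * x)
dudx_large = 3.0 * np.cos(3.0 * x)

# Small-scale fluctuations whose local intensity follows the large-scale
# gradient (the kind of coupling the abstract reports).
envelope = 1.0 + 0.5 * dudx_large / np.abs(dudx_large).max()
u_small = 0.1 * envelope * rng.standard_normal(n)

def lowpass(signal, keep_modes):
    """Crude spectral low-pass filter: zero all Fourier modes above keep_modes."""
    spec = np.fft.rfft(signal)
    spec[keep_modes:] = 0.0
    return np.fft.irfft(spec, n=len(signal))

u_total = u_large + u_small

# Large-scale part, residual small scales, and their local r.m.s.
u_filt = lowpass(u_total, 8)
residual = u_total - u_filt
local_rms = np.sqrt(lowpass(residual ** 2, 8).clip(min=0.0))

grad_filt = np.gradient(u_filt, x)

corr = np.corrcoef(grad_filt, local_rms)[0, 1]
print(f"correlation(large-scale gradient, small-scale rms) = {corr:.2f}")
```

By construction the small-scale envelope tracks the large-scale gradient, so the correlation coefficient comes out close to unity, mimicking the behaviour reported for the gradient-based correlation.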

  5. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value that is rarely if ever one (square), with the majority of images having a proportion larger than one, but less than e.g. the golden ratio. Furthermore, more images have the inverse proportion, meaning that portrait paintings are more common than landscape paintings. The inverse is true for photographs, i.e. more landscape than portrait format.

  6. Large scale phononic metamaterials for seismic isolation

    Energy Technology Data Exchange (ETDEWEB)

    Aravantinos-Zafiris, N. [Department of Sound and Musical Instruments Technology, Ionian Islands Technological Educational Institute, Stylianou Typaldou ave., Lixouri 28200 (Greece); Sigalas, M. M. [Department of Materials Science, University of Patras, Patras 26504 (Greece)

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large-scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, making them serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite-difference time-domain method is used to calculate the band structures of the proposed metamaterials.
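
The abstract's band-structure calculations use FDTD on 3D structures; as a much smaller hedged sketch, a 1D two-layer (concrete/steel) unit cell already exhibits Bloch band gaps via the classical transfer-matrix dispersion relation. All material data and layer thicknesses below are nominal illustrative values, not the paper's geometries.

```python
import numpy as np

# 1D two-layer unit cell: concrete and steel (nominal material data).
rho1, c1, d1 = 2400.0, 3500.0, 2.0   # concrete: density [kg/m^3], speed [m/s], thickness [m]
rho2, c2, d2 = 7800.0, 5100.0, 0.5   # steel
Z1, Z2 = rho1 * c1, rho2 * c2        # acoustic impedances

def bloch_rhs(freq_hz):
    """Right-hand side of the 1D two-layer Bloch dispersion relation,
    cos(q * (d1 + d2)) = rhs(f).
    |rhs| <= 1: propagating band; |rhs| > 1: band gap."""
    w = 2.0 * np.pi * freq_hz
    k1, k2 = w / c1, w / c2
    return (np.cos(k1 * d1) * np.cos(k2 * d2)
            - 0.5 * (Z1 / Z2 + Z2 / Z1) * np.sin(k1 * d1) * np.sin(k2 * d2))

freqs = np.linspace(0.1, 2000.0, 20000)
in_gap = np.abs(bloch_rhs(freqs)) > 1.0
f_gap_lo = freqs[np.argmax(in_gap)]   # lower edge of the first band gap
print(f"first band gap opens near {f_gap_lo:.0f} Hz")
```

Lowering the gap into the seismic range requires much thicker layers (hence "large-scale"), since the gap centre scales inversely with the acoustic transit time through the unit cell.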

  7. Adaptive visualization for large-scale graph

    International Nuclear Information System (INIS)

    Nakamura, Hiroko; Shinano, Yuji; Ohzahata, Satoshi

    2010-01-01

    We propose an adaptive visualization technique for representing a large-scale hierarchical dataset within limited display space. A hierarchical dataset has nodes and links showing the parent-child relationship between the nodes. These nodes and links are described using graphics primitives. When the number of these primitives is large, it is difficult to recognize the structure of the hierarchical data because many primitives are overlapped within a limited region. To overcome this difficulty, we propose an adaptive visualization technique for hierarchical datasets. The proposed technique selects an appropriate graph style according to the nodal density in each area. (author)
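
A hedged sketch of the density-driven style selection described above: partition the display space into grid cells and pick a drawing style per cell from its node count. The cell size, thresholds and style names are hypothetical, not taken from the paper.

```python
from collections import defaultdict

def choose_styles(nodes, cell_size, dense=20, sparse=5):
    """nodes: list of (x, y) positions. Returns {cell: style}, where the
    style for each occupied grid cell is chosen from its node density."""
    counts = defaultdict(int)
    for x, y in nodes:
        counts[(int(x // cell_size), int(y // cell_size))] += 1
    styles = {}
    for cell, n in counts.items():
        if n >= dense:
            styles[cell] = "aggregated"   # one glyph for the whole cluster
        elif n >= sparse:
            styles[cell] = "simplified"   # nodes only, no labels
        else:
            styles[cell] = "full"         # nodes, links and labels
    return styles

# Example: one crowded cell and one sparse cell.
nodes = [(1.0 + 0.01 * i, 1.0) for i in range(25)] + [(9.5, 9.5)]
styles = choose_styles(nodes, cell_size=2.0)
print(styles)
```

The crowded cell collapses to an aggregate glyph while the isolated node keeps its full rendering, which is the essence of adapting the graph style to local density.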

  8. Large scale phononic metamaterials for seismic isolation

    International Nuclear Information System (INIS)

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-01-01

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, thus raising the belief that they could be serious candidates for seismic isolation structures. Different and easy to fabricate structures were examined made from construction materials such as concrete and steel. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials

  9. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...
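
The regularization idea can be sketched on a tiny synthetic deblurring problem. This is a hedged illustration, not one of the thesis test problems, and it uses direct Tikhonov regularization in place of the iterative Krylov methods studied there; the operator, noise level and regularization parameter are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-posed test problem: a Gaussian blurring operator A, a smooth true
# signal x, and noisy data b.
n = 64
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2.0 * 0.03 ** 2))
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(2.0 * np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

# The naive solve amplifies the noise; Tikhonov damps the small singular
# values:  x_lambda = argmin ||A x - b||^2 + lambda^2 ||x||^2.
x_naive = np.linalg.solve(A, b)
lam = 1e-2
x_reg = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

err_naive = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
err_reg = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
print(f"relative error, naive: {err_naive:.2e}, Tikhonov: {err_reg:.2e}")
```

Choosing the regularization parameter is the hard part in practice, which is where parameter-choice heuristics such as the L-curve come in.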

  10. Internationalization Measures in Large Scale Research Projects

    Science.gov (United States)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large-scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As global competition among universities for the recruitment of the brightest brains has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities have little experience of how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, such undertakings permanently have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to supporting universities in international recruitment. These include, e.g., institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups, and identify interfaces between project management, university administration, researchers and international partners to work together, exchange information and improve processes, in order to be able to recruit, support and keep the brightest heads for a project.

  11. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960s and early 1970s, an interest in testing and operating RF cavities at 1.8 K motivated the development and construction of four large (300 W) 1.8 K refrigeration systems. In the past decade, development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved niobium-titanium superconductors have once again created interest in large-scale 1.8 K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 W 1.8 K system which incorporates a new technology, cold compressors, to obtain the low vapor pressure needed for low-temperature cooling. CEBAF proposes to use cold compressors to obtain 5 kW at 2.0 K. Magnetic refrigerators of 10 W capacity or higher at 1.8 K are now being developed. The state of the art of large-scale refrigeration in the range under 4 K will be reviewed. 28 refs., 4 figs., 7 tabs

  12. Dipolar modulation of Large-Scale Structure

    Science.gov (United States)

    Yoon, Mijin

    For the last two decades, we have seen a drastic development of modern cosmology based on various observations such as the cosmic microwave background (CMB), type Ia supernovae, and baryonic acoustic oscillations (BAO). This observational evidence has led us to a great deal of consensus on the cosmological model, so-called LambdaCDM, and to tight constraints on the cosmological parameters that make up the model. On the other hand, this advancement in cosmology relies on the cosmological principle: the universe is isotropic and homogeneous on large scales. Testing these fundamental assumptions is crucial and will soon become possible given the planned observations ahead. Dipolar modulation is the largest angular anisotropy of the sky, quantified by its direction and amplitude. A large dipolar modulation has been measured in the CMB, originating mainly from our solar system's motion relative to the CMB rest frame. However, we have not yet acquired consistent measurements of dipolar modulations in large-scale structure (LSS), as they require large sky coverage and a large number of well-identified objects. In this thesis, we explore measurement of the dipolar modulation in number counts of LSS objects as a test of statistical isotropy. This thesis is based on two papers that were published in peer-reviewed journals. In Chapter 2 [Yoon et al., 2014], we measured a dipolar modulation in the number counts of WISE sources matched with 2MASS. In Chapter 3 [Yoon & Huterer, 2015], we investigated the requirements for detection of the kinematic dipole in future surveys.

  13. Large-scale Intelligent Transportation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and Traffic Management Centers (TMC). The TMC has probe-vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles), and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  14. Large-scale impacts of hydroelectric development

    International Nuclear Information System (INIS)

    Rosenberg, D.M.; Bodaly, R.A.; Hecky, R.E.; Rudd, J.W.M.; Berkes, F.; Kelly, C.A.

    1997-01-01

    A study was conducted in which the cumulative environmental effects of mega-hydroelectric development projects such as the James Bay development in Canada, the Sardar Sarovar development in India and the Three Gorges development in China were examined. The extent of flooding as a result of these projects and of many others around the world was presented. The study showed that several factors are responsible for methyl mercury (MeHg) bioaccumulation in reservoirs. The study also revealed that reservoirs can be a significant source of greenhouse gas emissions. Boreal forests in particular, when flooded, become a strong source of greenhouse gases to the atmosphere. This results from the fact that after flooding a boreal forest changes from being a small carbon sink to a large source of carbon to the atmosphere, due to stimulated microbial production of CO2 and CH4 by decomposition of plant tissues and peat. This increased decomposition also results in an increase of another microbial activity, namely the methylation of inorganic mercury to the much more toxic MeHg. Selected examples of the downstream effects of altered flows caused by large-scale hydroelectric developments world-wide were summarized. A similar tabulation provided examples of social impacts of relocation of people necessitated by large-scale hydroelectric development. 209 refs., 10 tabs., 3 figs

  15. Cosmology with weak lensing surveys

    International Nuclear Information System (INIS)

    Munshi, Dipak; Valageas, Patrick; Waerbeke, Ludovic van; Heavens, Alan

    2008-01-01

    Weak gravitational lensing is responsible for the shearing and magnification of the images of high-redshift sources due to the presence of intervening matter. The distortions are due to fluctuations in the gravitational potential, and are directly related to the distribution of matter and to the geometry and dynamics of the Universe. As a consequence, weak gravitational lensing offers unique possibilities for probing the Dark Matter and Dark Energy in the Universe. In this review, we summarise the theoretical and observational state of the subject, focussing on the statistical aspects of weak lensing, and consider the prospects for weak lensing surveys in the future. Weak gravitational lensing surveys are complementary to both galaxy surveys and cosmic microwave background (CMB) observations as they probe the unbiased non-linear matter power spectrum at modest redshifts. Most of the cosmological parameters are accurately estimated from CMB and large-scale galaxy surveys, so the focus of attention is shifting to understanding the nature of Dark Matter and Dark Energy. On the theoretical side, recent advances in the use of 3D information of the sources from photometric redshifts promise greater statistical power, and these are further enhanced by the use of statistics beyond two-point quantities such as the power spectrum. The use of 3D information also alleviates difficulties arising from physical effects such as the intrinsic alignment of galaxies, which can mimic weak lensing to some extent. On the observational side, in the next few years weak lensing surveys such as CFHTLS, VST-KIDS and Pan-STARRS, and the planned Dark Energy Survey, will provide the first weak lensing surveys covering very large sky areas and depth. In the long run even more ambitious programmes such as DUNE, the Supernova Anisotropy Probe (SNAP) and the Large Synoptic Survey Telescope (LSST) are planned. Weak lensing of diffuse components such as the CMB and 21 cm emission can also

  16. Radiations: large scale monitoring in Japan

    International Nuclear Information System (INIS)

    Linton, M.; Khalatbari, A.

    2011-01-01

    As the consequences of radioactive leaks on their health are a matter of concern for Japanese people, a large-scale epidemiological study has been launched by Fukushima Medical University. It concerns the two million inhabitants of the Fukushima Prefecture. On the national level, and with the support of public funds, medical care and follow-up, as well as systematic controls, are foreseen, notably to check the thyroids of 360,000 young people under 18 years old and of 20,000 pregnant women in the Fukushima Prefecture. Some measurements have already been performed on young children. Despite the sometimes rather low measurements, and because they know that some parts of the area are at least as contaminated as the area around Chernobyl was, some people are reluctant to go back home

  17. Large scale study of tooth enamel

    International Nuclear Information System (INIS)

    Bodart, F.; Deconninck, G.; Martin, M.T.

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces in a large-scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analyzed using PIXE, backscattering and nuclear reaction techniques. The results were analyzed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed, and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population. (author)

  18. Large Scale Landform Mapping Using Lidar DEM

    Directory of Open Access Journals (Sweden)

    Türkay Gökgöz

    2015-08-01

    In this study, LIDAR DEM data were used to obtain a primary landform map in accordance with a well-known methodology. This primary landform map was generalized using the Focal Statistics tool (Majority), considering the minimum area condition of cartographic generalization, in order to obtain landform maps at 1:1000 and 1:5000 scales. Both the primary and the generalized landform maps were verified visually against a hillshaded DEM and an orthophoto. As a result, these maps provide satisfactory visuals of the landforms. In order to show the effect of generalization, the area of each landform in both the primary and the generalized maps was computed. Consequently, landform maps at large scales can be obtained with the proposed methodology, including generalization, using LIDAR DEM.
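
The focal majority generalization step can be sketched with a minimal NumPy stand-in for the GIS Focal Statistics (Majority) tool; the raster, class codes and window radius below are illustrative, not the study's data.

```python
import numpy as np

def focal_majority(grid, radius=1):
    """Majority filter: each cell takes the most frequent class value in
    its (2*radius+1)^2 neighbourhood (edge cells use replicated borders).
    Ties resolve to the smallest class value via np.unique's sort order."""
    h, w = grid.shape
    padded = np.pad(grid, radius, mode="edge")
    out = np.empty_like(grid)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            values, counts = np.unique(window, return_counts=True)
            out[i, j] = values[np.argmax(counts)]
    return out

# A small landform-class raster with an isolated single-cell class (2)
# that generalization removes, in the spirit of the minimum-area condition.
landform = np.array([
    [1, 1, 1, 1],
    [1, 2, 1, 1],
    [1, 1, 3, 3],
    [1, 1, 3, 3],
])
generalized = focal_majority(landform)
print(generalized)
```

Larger landform patches survive the filter while single-cell speckle is absorbed by its neighbourhood majority, which is exactly what the minimum-area condition asks of the generalized map.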

  19. [Stress management in large-scale establishments].

    Science.gov (United States)

    Fukasawa, Kenji

    2002-07-01

    Due to a recent dramatic change in industrial structures in Japan, the role of large-scale enterprises is changing. Mass production used to be the major source of income for companies, but nowadays this has shifted to high value-added products, including software development. As a consequence of highly competitive inter-corporate development, there are various sources of job stress which induce health problems in employees, especially those concerned with development or management. Simply obeying the law or offering medical care is not enough to manage these problems. Occupational health staff need to act according to the type of problem and provide care with support from the supervisor and the Personnel Division. For the training, development and consultation system, occupational health staff must work with the Personnel Division and the Safety Division, and be approved by management supervisors.

  20. Large-scale Rectangular Ruler Automated Verification Device

    Science.gov (United States)

    Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie

    2018-03-01

    This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive car, and an automatic data acquisition system. The mechanical design of the device covers the optical axis, the drive, the fixture and the wheels. The control system comprises hardware and software: the hardware is mainly a single-chip microcontroller system, and the software handles the photoelectric autocollimator and the automatic data acquisition process. The device can acquire perpendicularity measurement data automatically. The reliability of the device is verified by experimental comparison. The results meet the requirements of the right-angle test procedure.

  1. Engineering management of large scale systems

    Science.gov (United States)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high-technology and engineering problem solving has given rise to an emerging concept: reasoning principles for integrating traditional engineering problem solving with systems theory, management science, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity through a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large-scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  2. Modelling large-scale hydrogen infrastructure development

    International Nuclear Information System (INIS)

    De Groot, A.; Smit, R.; Weeda, M.

    2005-08-01

    In modelling a possible H2 infrastructure development the following questions are answered in this presentation: How could the future demand for H2 develop in the Netherlands?; and In which year and where would it be economically viable to construct a H2 infrastructure in the Netherlands? Conclusions are that: A model for describing a possible future H2 infrastructure is successfully developed; The model is strongly regional and time dependent; Decrease of fuel cell cost appears to be a sensitive parameter for development of H2 demand; Cost-margin between large-scale and small-scale H2 production is a main driver for development of a H2 infrastructure; A H2 infrastructure seems economically viable in the Netherlands starting from the year 2022

  3. Large-scale digitizer system, analog converters

    International Nuclear Information System (INIS)

    Althaus, R.F.; Lee, K.L.; Kirsten, F.A.; Wagner, L.J.

    1976-10-01

    Analog to digital converter circuits that are based on the sharing of common resources, including those which are critical to the linearity and stability of the individual channels, are described. Simplicity of circuit composition is valued over other more costly approaches. These are intended to be applied in a large-scale processing and digitizing system for use with high-energy physics detectors such as drift-chambers or phototube-scintillator arrays. Signal distribution techniques are of paramount importance in maintaining adequate signal-to-noise ratio. Noise in both amplitude and time-jitter senses is held sufficiently low so that conversions with 10-bit charge resolution and 12-bit time resolution are achieved

  4. Introducing Large-Scale Innovation in Schools

    Science.gov (United States)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  5. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.

  6. Modulation of energetic coherent motions by large-scale topography

    Science.gov (United States)

    Lai, Wing; Hamed, Ali M.; Troolin, Dan; Chamorro, Leonardo P.

    2016-11-01

    The distinctive characteristics and dynamics of the large-scale coherent motions induced over 2D and 3D large-scale wavy walls were explored experimentally with time-resolved volumetric PIV, and selected wall-normal high-resolution stereo PIV, in a refractive-index-matching channel. The 2D wall consists of a sinusoidal wave in the streamwise direction with amplitude-to-wavelength ratio a/λx = 0.05, while the 3D wall has an additional wave in the spanwise direction with a/λy = 0.1. The flow was characterized at Re ≈ 8000, based on the bulk velocity and the channel half height. The walls are such that the amplitude-to-boundary-layer-thickness ratio is a/δ99 ≈ 0.1, which resembles geophysical-like topography. Insight on the dynamics of the coherent motions, Reynolds stress and spatial interaction of sweep and ejection events will be discussed in terms of the wall topography modulation.

  7. Systematic renormalization of the effective theory of Large Scale Structure

    International Nuclear Information System (INIS)

    Abolhasani, Ali Akbar; Mirbabayi, Mehrdad; Pajer, Enrico

    2016-01-01

    A perturbative description of Large Scale Structure is a cornerstone of our understanding of the observed distribution of matter in the universe. Renormalization is an essential and defining step to make this description physical and predictive. Here we introduce a systematic renormalization procedure, which neatly associates counterterms to the UV-sensitive diagrams order by order, as it is commonly done in quantum field theory. As a concrete example, we renormalize the one-loop power spectrum and bispectrum of both density and velocity. In addition, we present a series of results that are valid to all orders in perturbation theory. First, we show that while systematic renormalization requires temporally non-local counterterms, in practice one can use an equivalent basis made of local operators. We give an explicit prescription to generate all counterterms allowed by the symmetries. Second, we present a formal proof of the well-known general argument that the contribution of short distance perturbations to large scale density contrast δ and momentum density π(k) scale as k^2 and k, respectively. Third, we demonstrate that the common practice of introducing counterterms only in the Euler equation when one is interested in correlators of δ is indeed valid to all orders.
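
The leading counterterm implied by that k^2 scaling is usually written, in the EFT of Large Scale Structure literature, in the following form; the coefficient c_s^2 is a free parameter fixed by matching to simulations or data, not a value from the paper.

```latex
% One-loop renormalized density power spectrum with the leading counterterm.
% The k^2 P_{11}(k) form follows from the k^2 scaling of short-distance
% contributions to the density contrast.
P_{\delta\delta}^{\text{1-loop}}(k)
  = P_{11}(k) + P_{22}(k) + P_{13}(k) - 2\, c_s^2\, k^2\, P_{11}(k)
```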

  8. Statistical Analysis of Large-Scale Structure of Universe

    Science.gov (United States)

    Tugay, A. V.

    While galaxy cluster catalogs were compiled many decades ago, other structural elements of the cosmic web have been detected with confidence only in the most recent works. For example, extragalactic filaments have been described in recent years by velocity fields and the SDSS galaxy distribution. The large-scale structure of the Universe could also be mapped in the future using ATHENA observations in X-rays and SKA in the radio band. Until detailed observations become available for most of the volume of the Universe, some integral statistical parameters can be used for its description. Methods such as the galaxy correlation function, the power spectrum, statistical moments and peak statistics are commonly used for this purpose. The parameters of the power spectrum and other statistics are important for constraining models of dark matter, dark energy, inflation and brane cosmology. In the present work we describe the growth of large-scale density fluctuations in the one- and three-dimensional cases with Fourier harmonics of hydrodynamical parameters. As a result, we obtain a power-law relation for the matter power spectrum.
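
A hedged 1D illustration of describing density fluctuations by Fourier harmonics and recovering a power-law power spectrum; the box size, grid and spectral index are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Generate a 1D Gaussian random density contrast delta(x) with a known
# power-law spectrum |delta_k|^2 ~ k^n, then re-estimate the spectrum
# from the realized field.
n_cells, box = 4096, 1.0
k = np.fft.rfftfreq(n_cells, d=box / n_cells) * 2.0 * np.pi
n_index = 1.0
amplitude = np.zeros_like(k)
amplitude[1:] = k[1:] ** (n_index / 2.0)   # so |delta_k|^2 scales as k^n

phases = rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size)
delta_k = amplitude * phases
delta_x = np.fft.irfft(delta_k, n=n_cells)

# Estimate P(k) ~ |delta_k|^2 from the field and fit the log-log slope
# over an intermediate k-range; it should recover the input index.
p_est = np.abs(np.fft.rfft(delta_x)) ** 2
mask = (k > k[10]) & (k < k[1000])
slope = np.polyfit(np.log(k[mask]), np.log(p_est[mask]), 1)[0]
print(f"recovered spectral index: {slope:.2f} (input {n_index})")
```

In practice one bins the harmonics in shells of |k| before fitting, but even the raw mode-by-mode estimate above recovers the slope to within a few percent.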

  9. Large scale organized motion in isothermal swirling flow through an axisymmetric dump combustor

    International Nuclear Information System (INIS)

    Daddis, E.D.; Lieber, B.B.; Nejad, A.S.; Ahmed, S.A.

    1990-01-01

    This paper reports on velocity measurements obtained in a model axisymmetric dump combustor, which included a coaxial swirler, by means of a two-component laser Doppler velocimeter (LDV) at a Reynolds number of 125,000. The frequency spectrum of the velocity fluctuations is obtained via the fast Fourier transform (FFT). The velocity field downstream of the dump plane is characterized, in addition to background turbulence, by large-scale organized structures which are manifested as sharp spikes in the spectrum at relatively low frequencies. The decomposition of velocity disturbances into background turbulence and large-scale structures can then be achieved through spectral methods, which include matched filters and spectral factorization. These methods are demonstrated here for the axial velocity obtained one step height downstream of the dump plane. Subsequent analysis of the various velocity disturbances shows that large-scale structures account for about 25% of the apparent normal stresses at this particular location. Naturally, large-scale structures evolve spatially and their contribution to the apparent stress tensor may vary depending on the location in the flow field.
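
A crude stand-in for the spectral decomposition described above (the matched-filter and spectral-factorization machinery is not reproduced): split a synthetic velocity record into a coherent spike and background turbulence by zeroing the spectrum outside the spike band, then compute the coherent share of the apparent normal stress. Frequencies, amplitudes and the band width are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic hot-wire-like record: broadband "turbulence" plus a coherent
# low-frequency oscillation representing the organized structures.
fs, n = 10_000.0, 2 ** 16
t = np.arange(n) / fs
coherent = 0.6 * np.sin(2.0 * np.pi * 55.0 * t)   # organized motion at 55 Hz
u = coherent + rng.standard_normal(n)

spec = np.fft.rfft(u)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# Identify the sharp spectral spike and split the signal.
spike = np.abs(freqs - 55.0) < 2.0
u_coherent = np.fft.irfft(np.where(spike, spec, 0.0), n=n)
u_background = u - u_coherent

share = u_coherent.var() / u.var()
print(f"coherent share of the apparent normal stress: {share:.1%}")
print(f"background share: {u_background.var() / u.var():.1%}")
```

With these made-up amplitudes the coherent part carries a share of the variance of the same order as the roughly 25% the measurements attribute to the large-scale structures.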

  10. Large-scale transport across narrow gaps in rod bundles

    Energy Technology Data Exchange (ETDEWEB)

    Guellouz, M.S.; Tavoularis, S. [Univ. of Ottawa (Canada)

    1995-09-01

    Flow visualization and hot-wire anemometry were used to investigate the velocity field in a rectangular channel containing a single cylindrical rod, which could be traversed on the centreplane to form gaps of different widths with the plane wall. The presence of large-scale, quasi-periodic structures in the vicinity of the gap has been demonstrated through flow visualization, spectral analysis and space-time correlation measurements. These structures are seen to exist even for relatively large gaps, at least up to W/D = 1.350 (W is the sum of the rod diameter, D, and the gap width). The above measurements appear to be compatible with the field of a street of three-dimensional, counter-rotating vortices, whose detailed structure, however, remains to be determined. The convection speed and the streamwise spacing of these vortices have been determined as functions of the gap size.
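    The convection-speed estimate from space-time correlations can be sketched as follows (probe separation, sampling rate and the synthetic signal are assumed; the lag that maximizes the cross-correlation gives the convection time):

```python
import numpy as np

fs = 5_000.0                 # sampling rate, Hz (assumed)
dx = 0.02                    # streamwise probe separation, m (assumed)
u_c_true = 4.0               # convection speed used to build the data, m/s

rng = np.random.default_rng(3)
kernel = np.ones(20) / 20    # smoothing mimics a large-scale structure
sig1 = np.convolve(rng.standard_normal(20_000), kernel, mode="same")
delay = int(round(dx / u_c_true * fs))
sig2 = np.roll(sig1, delay)  # downstream probe sees the structure later

# Space-time correlation: scan candidate lags, pick the peak.
lags = np.arange(-100, 101)
corr = [np.dot(sig1, np.roll(sig2, -lag)) for lag in lags]
tau = lags[int(np.argmax(corr))] / fs
u_c = dx / tau               # estimated convection speed, m/s
```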

  11. A Large Scale PIV Investigation of a Flap Edge Vortex

    Science.gov (United States)

    Walker, Stephen M.; Alkislar, M. B.; Lourenco, L.; Krothapalli, A.

    1996-11-01

    A recent experiment at NASA/Ames Research Center demonstrated the application of large-scale 'on-line' Particle Image Velocimetry (PIV) in a 7 ft x 10 ft wind tunnel. Data were collected for freestream velocities ranging from approximately 40 m/sec to 100 m/sec. The flow field of interest for this investigation was a vortex generated by a flap edge. The model was an unswept wing with a span of 5 ft and a chord (c) of 2.5 ft, fitted with a half-span Fowler flap. The flap had a chord of 9 inches. Cross-plane flow field velocity measurements were made at 0.6c (18 inches) downstream of the trailing edge of the flap. The baseline model was also tested with a three-quarter-span slat and a flap edge fence. The fence is designed to reduce noise from high-lift devices. The area of the flow encompassed within this investigation was 40 cm by 40 cm. A high-resolution CCD camera (2048 pixels x 2048 pixels) was used to capture the double-exposure images. The light source used in this experiment was a Spectra Physics PIV-400 Nd:YAG double-pulsed laser, and the particle seeding was generated by a Roscoe 4500 fog machine. The velocity data obtained from the experiment were used to determine both the vorticity and the circulation.
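    The vorticity and circulation computation from planar PIV data can be sketched with finite differences (a minimal sketch on a synthetic solid-body-rotation field; the grid size and rotation rate are assumed):

```python
import numpy as np

def vorticity_and_circulation(u, v, dx, dy):
    """Out-of-plane vorticity w = dv/dx - du/dy on a regular PIV grid, and
    circulation Gamma = sum(w) * dA over the window (Stokes' theorem)."""
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    omega = dvdx - dudy
    gamma = omega.sum() * dx * dy
    return omega, gamma

# Synthetic solid-body rotation (w = 2*Omega everywhere) on a 40 cm window:
n, L, Omega = 64, 0.4, 5.0
x = np.linspace(-L / 2, L / 2, n)
X, Y = np.meshgrid(x, x)
u, v = -Omega * Y, Omega * X
omega, gamma = vorticity_and_circulation(u, v, x[1] - x[0], x[1] - x[0])
```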

  12. Cold Flows and Large Scale Tides

    NARCIS (Netherlands)

    Weygaert, R. van de; Hoffman, Y.

    1998-01-01

    Abstract: Several studies have indicated that the local cosmic velocity field is rather cold, in particular in the regions outside the massive, virialized clusters of galaxies. If our local cosmic environment is taken to be a representative volume of the Universe, the repercussion of this finding is

  13. Large-scale Motion of Solar Filaments

    Indian Academy of Sciences (India)

    tribpo

    The 'seeing' dependent contrast of the Hα pictures is the source of uncertainties during the measurements on ... Results of measurements and conclusions. Heliographic position of the filaments is measured on the full disc Hα pictures taken ... consecutive magnetic synoptic charts. Two arrays of corresponding velocities are ...

  14. Large-scale stochasticity in Hamiltonian systems

    International Nuclear Information System (INIS)

    Escande, D.F.

    1982-01-01

    Large scale stochasticity (L.S.S.) in Hamiltonian systems is defined on the paradigm Hamiltonian H(v,x,t) = v²/2 − M cos x − P cos k(x−t), which describes the motion of one particle in two electrostatic waves. A renormalization transformation T_r is described which acts as a microscope that focusses on a given KAM (Kolmogorov-Arnold-Moser) torus in phase space. Though approximate, T_r yields the threshold of L.S.S. in H with an error of 5-10%. The universal behaviour of KAM tori is predicted: for instance the scale invariance of KAM tori and the critical exponent of the Lyapunov exponent of cantori. The Fourier expansion of KAM tori is computed and several conjectures by L. Kadanoff and S. Shenker are proved. Chirikov's standard mapping for stochastic layers is derived in a simpler way and the width of the layers is computed. A simpler renormalization scheme for these layers is defined. A Mathieu equation for describing the stability of a discrete family of cycles is derived. When combined with T_r, it allows one to prove the link between KAM tori and nearby cycles, conjectured by J. Greene, and, in particular, to compute the mean residue of a torus. The fractal diagrams defined by G. Schmidt are computed. A sketch of a methodology for computing the L.S.S. threshold in any two-degree-of-freedom Hamiltonian system is given. (Auth.)
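    Chirikov's standard mapping, mentioned above for stochastic layers, is simple to iterate directly; below is a minimal sketch (the threshold value K_c ≈ 0.9716 is quoted from the standard-map literature, not computed here):

```python
import numpy as np

def standard_map(p, x, K, n_steps):
    """Iterate Chirikov's standard map: p' = p + K sin(x), x' = x + p' (mod 2*pi)."""
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        p = p + K * np.sin(x)
        x = (x + p) % (2 * np.pi)
        traj[i] = p, x
    return traj

# Below the large-scale stochasticity threshold (K_c ~ 0.9716) KAM tori
# confine the momentum; above it, trajectories diffuse without bound.
confined = standard_map(0.0, 1.0, K=0.5, n_steps=5000)
diffusing = standard_map(0.0, 1.0, K=5.0, n_steps=5000)
```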

  15. Mirror dark matter and large scale structure

    International Nuclear Information System (INIS)

    Ignatiev, A.Yu.; Volkas, R.R.

    2003-01-01

    Mirror matter is a dark matter candidate. In this paper, we reexamine the linear regime of density perturbation growth in a universe containing mirror dark matter. Taking adiabatic scale-invariant perturbations as the input, we confirm that the resulting processed power spectrum is richer than for the more familiar cases of cold, warm and hot dark matter. The new features include a maximum at a certain scale λ_max and collisional damping below a smaller characteristic scale λ'_S, with oscillatory perturbations between the two. These scales are functions of the fundamental parameters of the theory. In particular, they decrease with decreasing x, the ratio of the mirror plasma temperature to that of the ordinary plasma. For x ∼ 0.2, the scale λ_max becomes galactic. Mirror dark matter therefore leads to bottom-up large-scale structure formation, similar to conventional cold dark matter, for x ≲ 0.2. Indeed, the smaller the value of x, the closer mirror dark matter resembles standard cold dark matter during the linear regime. The differences pertain to scales smaller than λ'_S in the linear regime, and generally to the nonlinear regime, because mirror dark matter is chemically complex and to some extent dissipative. Lyman-α forest data and the early reionization epoch established by WMAP may hold the key to distinguishing mirror dark matter from WIMP-style cold dark matter.

  16. Large scale molecular simulations of nanotoxicity.

    Science.gov (United States)

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, in how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large scale molecular simulations. The carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence showing that nanotoxicity can have implications for the de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting the enzyme MMP-9, and graphene is illustrated to disrupt bacteria cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential by using the cytotoxicity of nanoparticles against cancer or bacteria cells. © 2014 Wiley Periodicals, Inc.

  17. Food appropriation through large scale land acquisitions

    International Nuclear Information System (INIS)

    Cristina Rulli, Maria; D’Odorico, Paolo

    2014-01-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300–550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190–370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations. (letter)

  18. Large-scale tides in general relativity

    Energy Technology Data Exchange (ETDEWEB)

    Ip, Hiu Yan; Schmidt, Fabian, E-mail: iphys@mpa-garching.mpg.de, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the 'separate universe' paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  19. Large scale digital atlases in neuroscience

    Science.gov (United States)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  20. Food appropriation through large scale land acquisitions

    Science.gov (United States)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.

  1. Gravitational lensing

    CERN Document Server

    Dodelson, Scott

    2017-01-01

    Gravitational lensing is a consequence of general relativity, where the gravitational force due to a massive object bends the paths of light originating from distant objects lying behind it. Using very little general relativity and no higher level mathematics, this text presents the basics of gravitational lensing, focusing on the equations needed to understand the phenomena. It then applies them to a diverse set of topics, including multiply imaged objects, time delays, extrasolar planets, microlensing, cluster masses, galaxy shape measurements, cosmic shear, and lensing of the cosmic microwave background. This approach allows undergraduate students and others to get quickly up to speed on the basics and the important issues. The text will be especially relevant as large surveys such as LSST and Euclid begin to dominate the astronomical landscape. Designed for a one semester course, it is accessible to anyone with two years of undergraduate physics background.

  2. On soft limits of large-scale structure correlation functions

    International Nuclear Information System (INIS)

    Sagunski, Laura

    2016-08-01

    Large-scale structure surveys have the potential to become the leading probe for precision cosmology in the next decade. To extract valuable information on the cosmological evolution of the Universe from the observational data, it is of major importance to derive accurate theoretical predictions for the statistical large-scale structure observables, such as the power spectrum and the bispectrum of (dark) matter density perturbations. Hence, one of the greatest challenges of modern cosmology is to theoretically understand the non-linear dynamics of large-scale structure formation in the Universe from first principles. While analytic approaches to describe the large-scale structure formation are usually based on the framework of non-relativistic cosmological perturbation theory, we pursue another road in this thesis and develop methods to derive generic, non-perturbative statements about large-scale structure correlation functions. We study unequal- and equal-time correlation functions of density and velocity perturbations in the limit where one of their wavenumbers becomes small, that is, in the soft limit. In the soft limit, it is possible to link (N+1)-point and N-point correlation functions to non-perturbative 'consistency conditions'. These provide in turn a powerful tool to test fundamental aspects of the underlying theory at hand. In this work, we first rederive the (resummed) consistency conditions at unequal times by using the so-called eikonal approximation. The main appeal of the unequal-time consistency conditions is that they are solely based on symmetry arguments and thus are universal. Proceeding from this, we direct our attention to consistency conditions at equal times, which, on the other hand, depend on the interplay between soft and hard modes. We explore the existence and validity of equal-time consistency conditions within and beyond perturbation theory. For this purpose, we investigate the predictions for the soft limit of the
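    The consistency conditions discussed above are commonly written schematically as follows (the form below is quoted from the general literature on squeezed-limit relations, not extracted from this thesis): at unequal times, the soft limit of an (N+1)-point correlator is fixed by the N-point correlator,

```latex
\lim_{q\to 0}
\langle \delta_{\mathbf q}(\eta)\,\delta_{\mathbf k_1}(\eta_1)\cdots\delta_{\mathbf k_N}(\eta_N)\rangle'
= -\,P_\delta(q,\eta)\sum_{a=1}^{N}\frac{D(\eta_a)}{D(\eta)}\,
\frac{\mathbf k_a\cdot\mathbf q}{q^2}\,
\langle \delta_{\mathbf k_1}(\eta_1)\cdots\delta_{\mathbf k_N}(\eta_N)\rangle'
+\mathcal O(q^0)
```

    where primes denote correlators stripped of their momentum-conserving delta function and D is the linear growth factor. At equal times (all η_a = η) the enhanced 1/q terms cancel, which is why the equal-time conditions depend on the interplay between soft and hard modes rather than on symmetry alone.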

  3. Large scale structure statistics: Finite volume effects

    Science.gov (United States)

    Colombi, S.; Bouchet, F. R.; Schaeffer, R.

    1994-01-01

    We study finite volume effects on the count probability distribution function P_N(l) and the averaged Q-body correlations ξ̄_Q (2 ≤ Q ≤ 5). These statistics are computed for cubic cells of size l. We use as an example the case of the matter distribution of a cold dark matter (CDM) universe involving approximately 3 × 10⁵ particles. The main effect of the finiteness of the sampled volume is to induce an abrupt cut-off on the function P_N(l) at large N. This clear signature makes an analysis of the consequences easy, and one can envisage a correction procedure. As a matter of fact, we demonstrate how an unfair sample can strongly affect the estimates of the functions ξ̄_Q for Q ≥ 3 (and decrease the measured zero of the two-body correlation function). We propose a method to correct for this artifact, or at least to evaluate the corresponding errors. We show that the correlations are systematically underestimated by direct measurements. We find that, once corrected, the statistical properties of the CDM universe appear compatible with the scaling relation S_Q ≡ ξ̄_Q/ξ̄_2^(Q−1) = constant with respect to scale, in the non-linear regime; this was not the case with direct measurements. However, we note a deviation from scaling at scales close to the correlation length. It is probably due to the transition between the highly non-linear regime and the weakly correlated regime, where the functions S_Q also seem to present a plateau. We apply the same procedure to simulations with hot dark matter (HDM) and white noise initial conditions, with similar results. Our method thus provides the first accurate measurement of the normalized skewness, S_3, and the normalized kurtosis, S_4, for three typical models of large scale structure formation in an expanding universe.
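    Count-in-cells statistics of this kind can be prototyped in a few lines (a toy sketch on a uniform Poisson sample, for which the raw moment ratio <δ³>/<δ²>² is ≈ 1 when shot noise is left in; the cell counts and names are illustrative):

```python
import numpy as np

def count_in_cells(positions, boxsize, ncells):
    """Histogram particle counts N in cubic cells of size l = boxsize/ncells,
    i.e. the raw ingredient of the count probability distribution P_N(l)."""
    idx = np.floor(positions / (boxsize / ncells)).astype(int) % ncells
    flat = np.ravel_multi_index((idx[:, 0], idx[:, 1], idx[:, 2]), (ncells,) * 3)
    return np.bincount(flat, minlength=ncells**3)

rng = np.random.default_rng(2)
pos = rng.random((327_680, 3))            # uniform "particles" in a unit box
counts = count_in_cells(pos, 1.0, 32)     # ~10 particles per cell on average
delta = counts / counts.mean() - 1.0
s3 = (delta**3).mean() / (delta**2).mean() ** 2   # ~1 for pure Poisson noise
```

    A measurement on simulation data would subtract the Poisson shot-noise contributions from the moments before forming S_3.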

  4. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  5. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large scale dimensional metrology. Enables practitioners to study distributed large scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  6. Developing Large-Scale Bayesian Networks by Composition

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale...

  7. Acoustic lenses

    International Nuclear Information System (INIS)

    Kittmer, C.A.

    1983-03-01

    Acoustic lenses focus ultrasound to produce pencil-like beams with reduced near fields. When fitted to conventional (flat-faced) transducers, such lenses greatly improve the ability to detect and size defects. This paper describes a program developed to design acoustic lenses for use in immersion or contact inspection, using normal or angle beam mode with flat or curved targets. Lens surfaces are circular in geometry to facilitate machining. For normal beam inspection of flat plate, spherical or cylindrical lenses are used. For angle beam or curved surface inspections, a compound lens is required to correct for the extra induced aberration. Such a lens is aspherical with one radius of curvature in the plane of incidence, and a different radius of curvature in the plane perpendicular to the incident plane. The resultant beam profile (i.e., location of the acoustic focus, beam diameter, 6 dB working range) depends on the degree of focusing and the transducer used. The operating frequency and bandwidth can be affected by the instrumentation used. Theoretical and measured beam profiles are in good agreement. Various applications, from zone focusing used for defect sizing in thick plate, to line focusing for pipe weld inspection, are discussed
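    The degree of focusing described above is often estimated with the thin-lens relation used in immersion ultrasonics (a textbook sketch, not the design program described in this paper; the sound speeds and the plano-concave geometry are assumed):

```python
def lens_radius(focal_length, c_medium, c_lens):
    """Radius of curvature of a plano-concave acoustic lens for a target focal
    length in the coupling medium, from the thin-lens relation
    f = R / (1 - c_medium / c_lens), with acoustic index n = c_medium / c_lens."""
    return focal_length * (1.0 - c_medium / c_lens)

# Perspex lens in water (assumed speeds: water ~1480 m/s, perspex ~2730 m/s):
R = lens_radius(focal_length=0.050, c_medium=1480.0, c_lens=2730.0)  # ~0.023 m
```

    Because sound travels faster in the lens material than in water, n < 1 and a focusing lens is concave, with R shorter than the desired focal length.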

  8. Large scale dynamics of protoplanetary discs

    Science.gov (United States)

    Béthune, William

    2017-08-01

    Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other hand, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be solved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the gas and the magnetic field is mediated by electric charges, each of which is outnumbered by several billion neutral molecules. The imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences on planetary formation. As a second step, I study the launching of disk winds via a global model of a stratified disk embedded in a warm atmosphere.
This model is the first to compute non-ideal effects from

  9. Large-scale fuel cycle centres

    International Nuclear Information System (INIS)

    Smiley, S.H.; Black, K.M.

    1977-01-01

    The US Nuclear Regulatory Commission (NRC) has considered the nuclear energy centre concept for fuel cycle plants in the Nuclear Energy Centre Site Survey 1975 (NECSS-75) Rep. No. NUREG-0001, an important study mandated by the US Congress in the Energy Reorganization Act of 1974 which created the NRC. For this study, the NRC defined fuel cycle centres as consisting of fuel reprocessing and mixed-oxide fuel fabrication plants, and optional high-level waste and transuranic waste management facilities. A range of fuel cycle centre sizes corresponded to the fuel throughput of power plants with a total capacity of 50,000-300,000MW(e). The types of fuel cycle facilities located at the fuel cycle centre permit the assessment of the role of fuel cycle centres in enhancing the safeguard of strategic special nuclear materials - plutonium and mixed oxides. Siting fuel cycle centres presents a smaller problem than siting reactors. A single reprocessing plant of the scale projected for use in the USA (1500-2000t/a) can reprocess fuel from reactors producing 50,000-65,000MW(e). Only two or three fuel cycle centres of the upper limit size considered in the NECSS-75 would be required in the USA by the year 2000. The NECSS-75 fuel cycle centre evaluation showed that large-scale fuel cycle centres present no real technical siting difficulties from a radiological effluent and safety standpoint. Some construction economies may be achievable with fuel cycle centres, which offer opportunities to improve waste-management systems. Combined centres consisting of reactors and fuel reprocessing and mixed-oxide fuel fabrication plants were also studied in the NECSS. Such centres can eliminate shipment not only of Pu but also mixed-oxide fuel. Increased fuel cycle costs result from implementation of combined centres unless the fuel reprocessing plants are commercial-sized. Development of Pu-burning reactors could reduce any economic penalties of combined centres. 
The need for effective fissile

  10. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  11. Large-scale assembly of colloidal particles

    Science.gov (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear-align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes the invention of large-area, low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals.
Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  12. Thermal power generation projects ``Large Scale Solar Heating``; EU-Thermie-Projekte ``Large Scale Solar Heating``

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the ``Large-Scale Solar Heating`` programme for a Europe-wide development of the subject technology. The demonstration programme developed from it was judged favourably by the experts but was not immediately (1996) accepted for financial subsidies. In November 1997 the EU Commission provided 1.5 million ECU, which allowed the realisation of an updated project proposal. By mid-1997 a smaller project had already been approved, which had been requested under the lead of Chalmers Industriteknik (CIT) in Sweden and is mainly carried out for the transfer of technology. (orig.)

  13. Weak lensing and dark energy

    International Nuclear Information System (INIS)

    Huterer, Dragan

    2002-01-01

    We study the power of upcoming weak lensing surveys to probe dark energy. Dark energy modifies the distance-redshift relation as well as the matter power spectrum, both of which affect the weak lensing convergence power spectrum. Some dark-energy models predict additional clustering on very large scales, but this probably cannot be detected by weak lensing alone due to cosmic variance. With reasonable prior information on other cosmological parameters, we find that a survey covering 1000 sq deg down to a limiting magnitude of R=27 can impose constraints comparable to those expected from upcoming type Ia supernova and number-count surveys. This result, however, is contingent on the control of both observational and theoretical systematics. Concentrating on the latter, we find that the nonlinear power spectrum of matter perturbations and the redshift distribution of source galaxies both need to be determined accurately in order for weak lensing to achieve its full potential. Finally, we discuss the sensitivity of the three-point statistics to dark energy

  14. Large Scale Cleaning Telescope Mirrors with Electron Beams Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Cleaning Lenses and Mirrored Surfaces with Electrons tasks include: Development of Fractal Wand Geometries; Vacuum Chamber testing for Fractal Wand Prototypes;...

  15. Effects of baryons on the statistical properties of large scale structure of the Universe

    International Nuclear Information System (INIS)

    Guillet, T.

    2010-01-01

    Observations of weak gravitational lensing will provide strong constraints on the cosmic expansion history and the growth rate of large scale structure, yielding clues to the properties and nature of dark energy. Their interpretation is affected by baryonic physics, which is expected to modify the total matter distribution at small scales. My work has focused on determining and modeling the impact of baryons on the statistics of the large-scale matter distribution in the Universe. Using numerical simulations, I have extracted the effect of baryons on the power spectrum, variance and skewness of the total density field as predicted by these simulations. I have shown that a model based on the halo model construction, featuring a concentrated central component to account for cool condensed baryons, is able to reproduce accurately, and down to very small scales, the measured amplifications of both the variance and skewness of the density field. Because of well-known issues with baryons in current cosmological simulations, I have extended the central component model to rely on as many observation-based ingredients as possible. As an application, I have studied the effect of baryons on the predictions of the upcoming Euclid weak lensing survey. During the course of this work, I have also worked on developing and extending the RAMSES code, in particular by developing a parallel self-gravity solver, which offers significant performance gains, in particular for the simulation of astrophysical setups such as isolated galaxy or cluster simulations. (author)

  16. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi-site data by using within-year recaptures to provide a covariate of between-year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004) illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands, where a re-introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and “trap-happiness” effects. As the data are based on resightings, such trap-happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate

  17. The Scales of Gravitational Lensing

    Directory of Open Access Journals (Sweden)

    Francesco De Paolis

    2016-03-01

    Full Text Available After exactly a century since the formulation of the general theory of relativity, the phenomenon of gravitational lensing is still an extremely powerful method of investigation in astrophysics and cosmology. Indeed, it is adopted to study the distribution of the stellar component in the Milky Way, to study dark matter and dark energy on very large scales, and even to discover exoplanets. Moreover, thanks to technological developments, it will allow the measurement of the physical parameters (mass, angular momentum and electric charge of supermassive black holes in the center of our own and nearby galaxies.

  18. The large-scale isolated disturbances dynamics in the main peak of electronic concentration of ionosphere

    Science.gov (United States)

    Kalinin, U. K.; Romanchuk, A. A.; Sergeenko, N. P.; Shubin, V. N.

    2003-07-01

    The vertical sounding data from chains of ionosphere stations are used to obtain relative variations of the electron concentration in the ionospheric F2 region. Specific isolated traveling large-scale irregularities are distinguished in the diurnal succession of the foF2 relative-variation records. The temporal shifts of the irregularities along the station chains determine their motion velocity (of the order of the speed of sound) and spatial scale (of the order of 3000-5000 km, with trajectory lengths up to 10 000 km). The motion trajectories of large-scale isolated irregularities which had preceded the earthquakes are reconstructed.

  19. Networking in a Large-Scale Distributed Agile Project

    OpenAIRE

    Moe, Nils Brede; Šmite, Darja; Šāblis, Aivars; Börjesson, Anne-Lie; Andréasson, Pia

    2014-01-01

    Context: In large-scale distributed software projects the expertise may be scattered across multiple locations. Goal: We describe and discuss a large-scale distributed agile project at Ericsson, a multinational telecommunications company headquartered in Sweden. The project is distributed across four development locations (one in Sweden, one in Korea and two in China) and employs 17 teams. In such a large scale environment the challenge is to have as few dependences between teams as possible,...

  20. EFT of large scale structures in redshift space

    Science.gov (United States)

    Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; Zhao, Cheng; Chuang, Chia-Hsun

    2018-03-01

    We further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach (or, equivalently, the precision for a given k) depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ~ 0.13 h Mpc^-1 or k ~ 0.18 h Mpc^-1, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.

  1. How large-scale subsidence affects stratocumulus transitions

    Directory of Open Access Journals (Sweden)

    J. J. van der Dussen

    2016-01-01

    Full Text Available Some climate modeling results suggest that the Hadley circulation might weaken in a future climate, causing a subsequent reduction in the large-scale subsidence velocity in the subtropics. In this study we analyze the cloud liquid water path (LWP budget from large-eddy simulation (LES results of three idealized stratocumulus transition cases, each with a different subsidence rate. As shown in previous studies, a reduced subsidence leads to a deeper stratocumulus-topped boundary layer, an enhanced cloud-top entrainment rate and a delay in the transition of stratocumulus clouds into shallow cumulus clouds during their equatorward advection by the prevailing trade winds. The effect of a reduction of the subsidence rate can be summarized as follows. The initial deepening of the stratocumulus layer is partly counteracted by an enhanced absorption of solar radiation. After some hours the deepening of the boundary layer is accelerated by an enhancement of the entrainment rate. Because this is accompanied by a change in the cloud-base turbulent fluxes of moisture and heat, the net change in the LWP due to changes in the turbulent flux profiles is negligibly small.

  2. Inflation and large scale structure formation after COBE

    International Nuclear Information System (INIS)

    Schaefer, R.K.; Shafi, Q.

    1992-06-01

    The simplest realizations of the new inflationary scenario typically give rise to primordial density fluctuations which deviate logarithmically from the scale-free Harrison-Zeldovich spectrum. We consider a number of such examples and, in each case, we normalize the amplitude of the fluctuations with the recent COBE measurement of the microwave background anisotropy. The predictions for the bulk velocities as well as anisotropies on smaller (1-2 degree) angular scales are compared with the Harrison-Zeldovich case. Deviations from the latter range from a few to about 15 percent. We also estimate the redshift beyond which quasars would not be expected to be seen. The inflationary quasar cutoff redshifts can vary by as much as 25% from the Harrison-Zeldovich case. We find that the inflationary scenario provides a good starting point for a theory of large scale structure in the universe provided the dark matter is a combination of cold plus (10-30%) hot components. (author). 27 refs, 1 fig., 1 tab

  3. Bayesian Inversion for Large Scale Antarctic Ice Sheet Flow

    KAUST Repository

    Ghattas, Omar

    2015-01-07

    The flow of ice from the interior of polar ice sheets is the primary contributor to projected sea level rise. One of the main difficulties faced in modeling ice sheet flow is the uncertain spatially-varying Robin boundary condition that describes the resistance to sliding at the base of the ice. Satellite observations of the surface ice flow velocity, along with a model of ice as a creeping incompressible shear-thinning fluid, can be used to infer this uncertain basal boundary condition. We cast this ill-posed inverse problem in the framework of Bayesian inference, which allows us to infer not only the basal sliding parameters, but also the associated uncertainty. To overcome the prohibitive nature of Bayesian methods for large-scale inverse problems, we exploit the fact that, despite the large size of observational data, they typically provide only sparse information on model parameters. We show results for Bayesian inversion of the basal sliding parameter field for the full Antarctic continent, and demonstrate that the work required to solve the inverse problem, measured in number of forward (and adjoint) ice sheet model solves, is independent of the parameter and data dimensions
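The abstract's central observation, that sparse data inform only a few directions in a high-dimensional parameter space, can be illustrated with a toy linear Gaussian inverse problem. This is a minimal sketch under assumed dimensions and noise levels, not the ice sheet model or its solver.

```python
import numpy as np

# Toy linear Gaussian inverse problem: sparse observations y = A x + noise.
# The posterior mean and covariance give both the estimate and its uncertainty,
# mirroring (in miniature) the Bayesian treatment of the basal sliding field.
rng = np.random.default_rng(1)
n_param, n_obs, sigma = 50, 10, 0.1          # assumed toy dimensions, not from the paper

A = rng.normal(size=(n_obs, n_param))        # underdetermined forward map
x_true = rng.normal(size=n_param)
y = A @ x_true + sigma * rng.normal(size=n_obs)

prior_cov = np.eye(n_param)                  # unit Gaussian prior on the parameters
post_cov = np.linalg.inv(A.T @ A / sigma**2 + np.linalg.inv(prior_cov))
post_mean = post_cov @ (A.T @ y / sigma**2)

# With only 10 observations of 50 parameters, the data constrain ~10 directions;
# the posterior variance shrinks only along those directions.
print(post_mean.shape, float(np.trace(post_cov)) < n_param)
```

In the large-scale setting described above, the same structure is exploited without ever forming these dense matrices: low-rank approximations of the data misfit Hessian keep the cost independent of the parameter and data dimensions.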

  4. Storm induced large scale TIDs observed in GPS derived TEC

    Directory of Open Access Journals (Sweden)

    C. Borries

    2009-04-01

    Full Text Available This work is a first statistical analysis of large-scale traveling ionospheric disturbances (LSTID in Europe using total electron content (TEC data derived from GNSS measurements. The GNSS receiver network in Europe is dense enough to map the ionospheric perturbation TEC with high horizontal resolution. The derived perturbation TEC maps are analysed to study the effect of space weather events on the ionosphere over Europe. Equatorward propagating storm-induced wave packets have been identified during several geomagnetic storms. Characteristic parameters such as velocity, wavelength and direction were estimated from the perturbation TEC maps. With a mean wavelength of 2000 km, a mean period of 59 min and a mean phase speed of 684 m s−1, the perturbations are classified as LSTID. The comparison with LSTID observed over Japan shows an equal wavelength but a considerably faster phase speed. This might be attributed to differences in the distance to the auroral region or in the inclination/declination of the geomagnetic field lines. The observed correlation between the LSTID amplitudes and the Auroral Electrojet (AE indicates that most of the wave-like perturbations are excited by Joule heating. Particle precipitation effects could not be separated.
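A quick consistency check on the quoted numbers: a 2000 km mean wavelength over a 59 min mean period gives a ratio of about 565 m/s, somewhat below the quoted 684 m/s mean phase speed. No contradiction is implied, since the mean of per-event speeds need not equal the ratio of the mean wavelength to the mean period. The per-event values below are hypothetical, for illustration only.

```python
# Hypothetical per-event (wavelength [m], period [s]) pairs -- illustrative only,
# not values from the paper.
events = [(1800e3, 50 * 60), (2000e3, 59 * 60), (2200e3, 68 * 60)]

speeds = [lam / T for lam, T in events]                 # per-event phase speeds
mean_speed = sum(speeds) / len(speeds)                  # mean of ratios
mean_lam = sum(lam for lam, _ in events) / len(events)  # mean wavelength
mean_T = sum(T for _, T in events) / len(events)        # mean period

# Ratio of means vs. mean of ratios: close, but not identical.
print(round(mean_lam / mean_T), round(mean_speed))
```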

  6. Large scale chromatographic separations using continuous displacement chromatography (CDC)

    International Nuclear Information System (INIS)

    Taniguchi, V.T.; Doty, A.W.; Byers, C.H.

    1988-01-01

    A process for large scale chromatographic separations using a continuous chromatography technique is described. The process combines the advantages of large scale batch fixed column displacement chromatography with conventional analytical or elution continuous annular chromatography (CAC) to enable large scale displacement chromatography to be performed on a continuous basis (CDC). Such large scale, continuous displacement chromatography separations have not been reported in the literature. The process is demonstrated with the ion exchange separation of a binary lanthanide (Nd/Pr) mixture. The process is, however, applicable to any displacement chromatography separation that can be performed using conventional batch, fixed column chromatography

  7. Large-scale turbulence structures in shallow separating flows

    NARCIS (Netherlands)

    Talstra, H.

    2011-01-01

    The Ph.D. thesis “Large-scale turbulence structures in shallow separating flows” by Harmen Talstra is the result of a Ph.D. research project on large-scale shallow-flow turbulence, which has been performed in the Environmental Fluid Mechanics Laboratory at Delft University of Technology. The

  8. Advances in Modelling of Large Scale Coastal Evolution

    NARCIS (Netherlands)

    Stive, M.J.F.; De Vriend, H.J.

    1995-01-01

    The attention for climate change impact on the world's coastlines has established large scale coastal evolution as a topic of wide interest. Some more recent advances in this field, focusing on the potential of mathematical models for the prediction of large scale coastal evolution, are discussed.

  9. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    were the main reasons for the failure of large-scale jatropha plantations in Ethiopia. The findings in Chapter 3 show that when the use of family labour is combined with easy access to credit and technology, outgrowers on average achieve higher productivity than that obtained on large-scale plantations...

  10. The Large Scale Magnetic Field and Sunspot Cycles

    Indian Academy of Sciences (India)

    tribpo

    We report on the correlation between the large scale magnetic field and sunspot cycles during the last 80 years that was found by Makarov et al. (1999) and Makarov & Tlatov (2000) in Hα spherical harmonics of the large scale magnetic field for 1915-1999. The sum of intensities of the low modes l = 1 and 3, A(t), was used ...

  11. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Administrator

    structure and chemical purity of 99.1% by inductively coupled plasma optical emission spectroscopy on a large scale. Keywords: sol-gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method. 1. Introduction. Zirconia has attracted the attention of many scientists because of its tremendous thermal, mechanical ...

  12. ACTIVE DIMENSIONAL CONTROL OF LARGE-SCALED STEEL STRUCTURES

    Directory of Open Access Journals (Sweden)

    Radosław Rutkowski

    2013-09-01

    Full Text Available The article discusses the issues of dimensional control in the construction process of large-scaled steel structures. The main focus is on the analysis of manufacturing tolerances. The article presents a procedure for applying tolerance analysis in the design and manufacturing of large-scaled steel structures. The proposed solution could significantly improve the manufacturing process.

  13. Large Scale Cleaning Telescope Mirrors with Electron Beams Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Cleaning Lenses and Mirrored Surfaces with Electrons tasks include: Development of Fractal Wand Geometries; Vacuum Chamber testing of Fractal Wand...

  14. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  15. A Novel Architecture of Large-scale Communication in IOT

    Science.gov (United States)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have provided a detailed view of the large-scale communications architecture in the IOT. In fact, the non-uniform technology between IPv6 and access points has led to a lack of broad principles for large-scale communications architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communications in the IOT.

  16. Contribution of large-scale coherent structures towards the cross flow in two interconnected channels

    International Nuclear Information System (INIS)

    Mahmood, A.; Rohde, M.; Hagen, T.H.J.J. van der; Mudde, R.F.

    2009-01-01

    Single-phase cross flow through a gap region joining two vertical channels has been investigated experimentally for Reynolds numbers, based on the channels' hydraulic diameter, ranging from 850 to 21000. The flow field in the gap region is investigated by 2D-PIV and the inter-channel mass transfer is quantified by the tracer injection method. Experiments carried out for variable gap heights and shapes show the existence of a street of large-scale counter-rotating vortices on either side of the channel-gap interface, resulting from the mean velocity gradient between the gap and the main channel region. The appearance of the coherent vortices is subject to a threshold associated with the difference between the maximum and the minimum average streamwise velocities in the channel and the gap region, respectively. The auto power spectral density of the cross velocity component in the gap region exhibits a slope of -3 in the inertial range, indicating the 2D nature of these vortices. The presence of the large-scale vortices enhances the mass transfer through the gap region by approximately 63% over the mass transferred by turbulent mixing alone. The inter-channel mass transfer due to cross flow is found to depend not only on the characteristics of the large-scale vortices, but also on the gap geometry. (author)
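The -3 inertial-range slope quoted above is the classic signature of quasi-two-dimensional turbulence; in practice such a slope is obtained by a straight-line fit of log-PSD against log-frequency. The sketch below performs that fit on a synthetic power-law spectrum, not on the experimental PIV data.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.logspace(0, 2, 200)                               # frequency axis (arbitrary units)
psd = f**-3.0 * np.exp(rng.normal(0.0, 0.05, f.size))    # synthetic -3 power law + scatter

# Least-squares slope in log-log space; recovers the exponent of the power law.
slope, intercept = np.polyfit(np.log(f), np.log(psd), 1)
print(round(slope, 2))  # close to -3
```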

  17. Robust scene stitching in large scale mobile mapping

    OpenAIRE

    Schouwenaars, Filip; Timofte, Radu; Van Gool, Luc

    2013-01-01

    Schouwenaars F., Timofte R., Van Gool L., ''Robust scene stitching in large scale mobile mapping'', 24th British machine vision conference - BMVC 2013, 11 pp., September 9-13, 2013, Bristol, United Kingdom.

  18. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  19. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  20. Large-Scale 3D Printing: The Way Forward

    Science.gov (United States)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, where numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  1. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  2. Benefits of transactive memory systems in large-scale development

    OpenAIRE

    Aivars, Sablis

    2016-01-01

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise...

  3. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses on ...... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems....

  4. Achieving Agility and Stability in Large-Scale Software Development

    Science.gov (United States)

    2013-01-16

    Slide excerpt from the SEI virtual event "Architecting in a Complex World" (#SEIVirtualEvent, © 2013 Carnegie Mellon University), session "Achieving Agility and Stability in Large-Scale...". Survey question: Which software development process are you currently using? 1. Agile software development (e.g., using Scrum, XP practices, test-driven...). The presenter is a staff member in the Research, Technology, and System Solutions Program at the SEI, currently engaged in activities focusing on large-scale agile

  5. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost in particular limits its use. As computer models have grown in size (e.g., in number of degrees of freedom), the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  6. Emergence of coherent structures and large-scale flows in motile suspensions.

    Science.gov (United States)

    Saintillan, David; Shelley, Michael J

    2012-03-07

    The emergence of coherent structures, large-scale flows and correlated dynamics in suspensions of motile particles such as swimming micro-organisms or artificial microswimmers is studied using direct particle simulations. A detailed model is proposed for a slender rod-like particle that propels itself in a viscous fluid by exerting a prescribed tangential stress on its surface, and a method is devised for the efficient calculation of hydrodynamic interactions in large-scale suspensions of such particles using slender-body theory and a smooth particle-mesh Ewald algorithm. Simulations are performed with periodic boundary conditions for various system sizes and suspension volume fractions, and demonstrate a transition to large-scale correlated motions in suspensions of rear-actuated swimmers, or Pushers, above a critical volume fraction or system size. This transition, which is not observed in suspensions of head-actuated swimmers, or Pullers, is seen most clearly in particle velocity and passive tracer statistics. These observations are consistent with predictions from our previous mean-field kinetic theory, one of which states that instabilities will arise in uniform isotropic suspensions of Pushers when the product of the linear system size with the suspension volume fraction exceeds a given threshold. We also find that the collective dynamics of Pushers result in giant number fluctuations, local alignment of swimmers and strongly mixing flows. Suspensions of Pullers, which evince no large-scale dynamics, nonetheless display interesting deviations from the random isotropic state.
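The mean-field instability criterion quoted above (onset when the product of the linear system size and the suspension volume fraction exceeds a threshold) reduces to a one-line check. The function name and the threshold value below are placeholders for illustration, not quantities from the paper.

```python
def pushers_unstable(system_size, volume_fraction, threshold=1.0):
    """Mean-field criterion sketch: a uniform isotropic suspension of rear-actuated
    swimmers (Pushers) goes unstable once system_size * volume_fraction exceeds a
    threshold. The threshold value here is an arbitrary placeholder."""
    return system_size * volume_fraction > threshold

# Increasing either the box size or the volume fraction can cross the threshold,
# which is why the transition appears above a critical volume fraction OR system size.
print(pushers_unstable(50.0, 0.05), pushers_unstable(10.0, 0.05))
```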

  7. Pulsar lensing geometry

    Science.gov (United States)

    Liu, Siqi; Pen, Ue-Li; Macquart, J.-P.; Brisken, Walter; Deller, Adam

    2016-05-01

    We test the inclined sheet pulsar scintillation model (Pen & Levin) against archival very long baseline interferometry (VLBI) data on PSR B0834+06 and show that its scintillation properties can be precisely reproduced by a model in which refraction occurs on two distinct lens planes. These data strongly favour a model in which grazing-incidence refraction, rather than diffraction off turbulent structures, is the primary source of pulsar scattering. This model can reproduce the parameters of the observed diffractive scintillation with an accuracy at the percent level. Comparison with new VLBI proper-motion results yields a direct measure of the transverse velocity of the ionized interstellar medium (ISM) screen. The results are consistent with ISM velocities local to the PSR B0834+06 sight-line (through the Galaxy). The simple 1-D structure of the lenses opens up the possibility of using interstellar lenses as precision probes for pulsar lens mapping, for precision transverse motions in the ISM, and for new opportunities for removing scattering to improve pulsar timing. We describe the parameters and observables of this double-screen system. While relative screen distances can in principle be accurately determined, a global conformal distance degeneracy exists that allows a rescaling of the absolute distance scale. For PSR B0834+06, we present VLBI astrometry results that provide (for the first time) a direct measurement of the distance of the pulsar. For most of the recycled millisecond pulsars that are the targets of precision timing observations, independent distance measurements are not available. The degeneracy presented in the lens modelling could be broken if the pulsar resides in a binary system.

  8. Inflation Physics from the Cosmic Microwave Background and Large Scale Structure

    Science.gov (United States)

    Abazajian, K.N.; Arnold,K.; Austermann, J.; Benson, B.A.; Bischoff, C.; Bock, J.; Bond, J.R.; Borrill, J.; Buder, I.; Burke, D.L.; hide

    2013-01-01

    Fluctuations in the intensity and polarization of the cosmic microwave background (CMB) and the large-scale distribution of matter in the universe each contain clues about the nature of the earliest moments of time. The next generation of CMB and large-scale structure (LSS) experiments are poised to test the leading paradigm for these earliest moments, the theory of cosmic inflation, and to detect the imprints of the inflationary epoch, thereby dramatically increasing our understanding of fundamental physics and the early universe. A future CMB experiment with sufficient angular resolution and frequency coverage that surveys at least 1% of the sky to a depth of 1 uK-arcmin can deliver a constraint on the tensor-to-scalar ratio that will either result in a 5-sigma measurement of the energy scale of inflation or rule out all large-field inflation models, even in the presence of foregrounds and the gravitational lensing B-mode signal. LSS experiments, particularly spectroscopic surveys such as the Dark Energy Spectroscopic Instrument, will complement the CMB effort by improving current constraints on the running of the spectral index by up to a factor of four, improving constraints on curvature by a factor of ten, and providing non-Gaussianity constraints that are competitive with the current CMB bounds.

  9. The large-scale environment from cosmological simulations - I. The baryonic cosmic web

    Science.gov (United States)

    Cui, Weiguang; Knebe, Alexander; Yepes, Gustavo; Yang, Xiaohu; Borgani, Stefano; Kang, Xi; Power, Chris; Staveley-Smith, Lister

    2018-01-01

    Using a series of cosmological simulations that includes one dark-matter-only (DM-only) run, one gas cooling-star formation-supernova feedback (CSF) run and one that additionally includes feedback from active galactic nuclei (AGNs), we classify the large-scale structures with both a velocity-shear-tensor code (VWEB) and a tidal-tensor code (PWEB). We find that the baryonic processes have almost no impact on large-scale structures - at least not when classified using the aforementioned techniques. More importantly, our results confirm that the gas component alone can be used to infer the filamentary structure of the universe in a practically unbiased way, which could be applied to cosmological constraints. In addition, the gas filaments are classified with their velocity (VWEB) and density (PWEB) fields, which can in principle be connected to radio observations, such as H I surveys. This will help us link radio observations to the dark matter distribution on large scales free of bias.
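    The velocity-shear-tensor ("V-web") classification used by codes such as VWEB boils down to counting how many eigenvalues of the shear tensor at each grid point exceed a threshold. A minimal sketch of that rule; the example eigenvalues and the threshold of 0.44 are illustrative assumptions, not values taken from this paper:

```python
# Sketch of the V-web classification rule: the number of shear-tensor
# eigenvalues above a threshold lambda_th assigns void/sheet/filament/knot.
# Eigenvalues here are hand-picked illustrations, not simulation output.

def classify_web(eigenvalues, lam_th=0.44):
    """Return the cosmic-web class from the 3 shear-tensor eigenvalues."""
    n = sum(1 for lam in eigenvalues if lam > lam_th)
    return ["void", "sheet", "filament", "knot"][n]

examples = {
    (-0.9, -0.5, -0.1): "void",      # no eigenvalue above threshold
    (0.8, -0.2, -0.6): "sheet",      # one above: collapse along one axis
    (1.1, 0.7, -0.3): "filament",    # two above
    (1.5, 1.0, 0.6): "knot",         # all three above
}
for eigs, expected in examples.items():
    assert classify_web(eigs) == expected
print("all example classifications match")
```

The same rule applies to the tidal-tensor (PWEB) classification, with the tidal-field eigenvalues in place of the shear-tensor ones.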

  10. Detonation States without a Transition in the Super Large Scale Gap Test

    Science.gov (United States)

    Sandusky, Harold; Church, Samantha; Felts, Joshua; Detonation Science Collaboration

    2017-06-01

    At or above the critical pressure for shock-to-detonation transition (SDT), there is a run-up in shock velocity before a distinct change to high-velocity detonation (HVD). Below that critical pressure, a slower supersonic wave, referred to as low-velocity detonation (LVD), sometimes propagates steadily with enough energy to punch a witness plate. This was observed for sample diameters ranging from 36.5 mm in the large scale gap test to 177.8 mm in the super large scale gap test (SLSGT). Recent SLSGTs on an extremely insensitive explosive with a critical diameter >100 mm exhibited HVD without SDT for no gap and LVD with decreasing velocity for longer gaps. These reactive shocks commenced from the donor input and continued steadily. This unique response suggests behavior more like a mass-deflagrating propellant. It is speculated that the large SLSGT diameter, in conjunction with the confinement of a steel tube, permits more time for shock reaction to occur before quenching by lateral rarefactions. Traditional GO/NOGO determinations do not apply for shock-insensitive materials that require evaluation with the largest of the standardized tests, which has implications for both hazard classification and booster requirements.

  11. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  12. Acoustic Studies of the Large Scale Ocean Circulation

    Science.gov (United States)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  13. Privacy Preserving Large-Scale Rating Data Publishing

    Directory of Open Access Journals (Sweden)

    Xiaoxun Sun

    2013-02-01

    Full Text Available Large scale rating data usually contains both ratings of sensitive and non-sensitive issues, and the ratings of sensitive issues belong to personal privacy. Even when survey participants do not reveal any of their ratings, their survey records are potentially identifiable by using information from other public sources. In order to protect the privacy in large-scale rating data, it is important to propose new privacy principles which consider the properties of the rating data. Moreover, given a privacy principle, how to efficiently determine whether the rating data satisfies the required privacy principle is crucial as well. Furthermore, if the privacy principle is not satisfied, an efficient method is needed to securely publish the large-scale rating data. In this paper, all these problems will be addressed.

  14. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact the control of the power system balance and thus cause deviations in the power system frequency in small or islanded power systems, or in tie-line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns secure and stable grid operation. To ensure stable power system operation, the evolving power system has to be analysed with improved analytical tools and techniques. This paper proposes techniques for active power balance control in future power systems with large scale wind power integration, where the power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...

  15. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    Science.gov (United States)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or are a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus as a helper for fossil-remnant models of large scale field origin in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.
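    The two basic properties above can be stated compactly. A minimal sketch in standard MHD notation (V is the volume, A the vector potential, B the magnetic field, E_M(k) and H_M(k) the magnetic energy and helicity spectra); the realizability bound is the standard spectral inequality, not a result specific to this primer:

```latex
% Magnetic helicity over a volume V, with \mathbf{B} = \nabla \times \mathbf{A}:
H_M = \int_V \mathbf{A} \cdot \mathbf{B} \, dV .
% Spectral realizability bound: at each wavenumber k,
k \, |H_M(k)| \le 2 \, E_M(k) ,
% so a fully helical field (equality above) attains the least energy for a
% given helicity by occupying the smallest available k, i.e. the largest scale.
```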

  16. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced-dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convective activity. We investigate, for individual sites, the probability that large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).
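    The dimensionality-reduction step can be illustrated with a plain (unsupervised) RBF-kernel PCA built from scratch; the study itself uses a supervised variant. The random stand-in "moisture-flux fields", the kernel width gamma, and the 2-component target are all illustrative assumptions:

```python
import numpy as np

# Minimal RBF-kernel PCA of the kind used to embed moisture-flux fields in a
# low-dimensional space before clustering. The "fields" below are random
# stand-ins for vertically integrated moisture-flux maps.

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))        # 40 flood events x 100 grid cells

def rbf_kernel_pca(X, n_components=2, gamma=0.01):
    # Pairwise squared distances -> RBF kernel matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * d2)
    # Double-center the kernel matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Leading eigenvectors of the centered kernel give the embedding
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

Z = rbf_kernel_pca(X)
print(Z.shape)   # one 2-D coordinate per flood event
```

The 2-D coordinates in Z would then be fed to a clustering algorithm to group events by their large-scale moisture-transport pattern.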

  17. Experimental investigation of large-scale vortices in a freely spreading gravity current

    Science.gov (United States)

    Yuan, Yeping; Horner-Devine, Alexander R.

    2017-10-01

    A series of laboratory experiments are presented to compare the dynamics of constant-source buoyant gravity currents propagating into laterally confined (channelized) and unconfined (spreading) environments. The plan-form structure of the spreading current and the vertical density and velocity structures on the interface are quantified using the optical thickness method and a combined particle image velocimetry and planar laser-induced fluorescence method, respectively. With lateral boundaries, the buoyant current thickness is approximately constant and Kelvin-Helmholtz instabilities are generated within the shear layer. The buoyant current structure is significantly different in the spreading case. As the current spreads laterally, nonlinear large-scale vortex structures are observed at the interface, which maintain a coherent shape as they propagate away from the source. These structures are continuously generated near the river mouth, have amplitudes close to the buoyant layer thickness, and propagate offshore at speeds approximately equal to the internal wave speed. The observed depth and propagation speed of the instabilities match well with the fastest growing mode predicted by linear stability analysis, but with a shorter wavelength. The spreading flows have much higher vorticity, which is aggregated within the large-scale structures. Secondary instabilities are generated on the leading edge of the braids between the large-scale vortex structures and ultimately break and mix on the lee side of the structures. Analysis of the vortex dynamics shows that lateral stretching intensifies the vorticity in the spreading currents, contributing to higher vorticity within the large-scale structures in the buoyant plume. The large-scale instabilities and vortex structures observed in the present study provide new insights into the origin of internal frontal structures frequently observed in coastal river plumes.
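    The comparison of the observed propagation speed with the internal wave speed can be illustrated with the standard two-layer estimate c = sqrt(g' h), where g' is the reduced gravity. The density and layer-thickness values below are illustrative, not taken from the experiments:

```python
import math

# Back-of-envelope two-layer internal wave speed c = sqrt(g' h), with
# reduced gravity g' = g * (rho_ambient - rho_plume) / rho_ambient.
# All numbers are illustrative lab-scale values.

g = 9.81                    # m/s^2
rho_ambient = 1025.0        # kg/m^3, salty ambient water
rho_plume = 1000.0          # kg/m^3, buoyant (fresh) layer
h = 0.05                    # m, buoyant layer thickness

g_prime = g * (rho_ambient - rho_plume) / rho_ambient
c = math.sqrt(g_prime * h)
print(f"g' = {g_prime:.4f} m/s^2, c = {c:.4f} m/s")
```

Interface structures with amplitudes of order h that travel at roughly this c are the signature, described above, of the large-scale vortex structures propagating at the internal wave speed.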

  18. Large-Scale Inverse Problems and Quantification of Uncertainty

    CERN Document Server

    Biegler, Lorenz; Ghattas, Omar

    2010-01-01

    Large-scale inverse problems and associated uncertainty quantification has become an important area of research, central to a wide range of science and engineering applications. Written by leading experts in the field, Large-scale Inverse Problems and Quantification of Uncertainty focuses on the computational methods used to analyze and simulate inverse problems. The text provides PhD students, researchers, advanced undergraduate students, and engineering practitioners with the perspectives of researchers in areas of inverse problems and data assimilation, ranging from statistics and large-sca

  19. Large-scale liquid scintillation detectors for solar neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Benziger, Jay B.; Calaprice, Frank P. [Princeton University Princeton, Princeton, NJ (United States)

    2016-04-15

    Large-scale liquid scintillation detectors are capable of providing spectral yields of the low energy solar neutrinos. These detectors require > 100 tons of liquid scintillator with high optical and radiopurity. In this paper requirements for low-energy neutrino detection by liquid scintillation are specified and the procedures to achieve low backgrounds in large-scale liquid scintillation detectors for solar neutrinos are reviewed. The designs, operations and achievements of Borexino, KamLAND and SNO+ in measuring the low-energy solar neutrino fluxes are reviewed. (orig.)

  20. Large Scale Anomalies of the Cosmic Microwave Background with Planck

    DEFF Research Database (Denmark)

    Frejsel, Anne Mette

    This thesis focuses on the large scale anomalies of the Cosmic Microwave Background (CMB) and their possible origins. The investigations consist of two main parts. The first part is on statistical tests of the CMB, and the consistency of both maps and power spectrum. We find that the Planck data ... Here we find evidence that the Planck CMB maps contain residual radiation in the loop areas, which can be linked to some of the large scale CMB anomalies: the point-parity asymmetry, the alignment of the quadrupole and octupole, and the dipole modulation.

  1. The survey of large-scale query classification

    Science.gov (United States)

    Zhou, Sanduo; Cheng, Kefei; Men, Lijun

    2017-04-01

    In recent years, a great deal of research has been done on query classification. This paper reviews recent research on query classification in detail, covering the sources of query logs, category systems, feature extraction methods, classification methods and evaluation methodology. It then discusses the issues of large-scale query classification and methods for solving them in combination with big data analysis systems. The review shows that several problems and challenges remain, such as the lack of an authoritative classification system and evaluation methodology, the efficiency of feature extraction methods, uncertain performance on large-scale query logs, and further query classification on big data platforms.

  2. The CLASSgal code for Relativistic Cosmological Large Scale Structure

    CERN Document Server

    Di Dio, Enea; Lesgourgues, Julien; Durrer, Ruth

    2013-01-01

    We present some accurate and efficient computations of large scale structure observables, obtained with a modified version of the CLASS code which is made publicly available. This code includes all relativistic corrections and computes both the power spectrum Cl(z1,z2) and the corresponding correlation function xi(theta,z1,z2) in linear perturbation theory. For Gaussian initial perturbations, these quantities contain the full information encoded in the large scale matter distribution at the level of linear perturbation theory. We illustrate the usefulness of our code for cosmological parameter estimation through a few simple examples.

  3. [Issues of large scale tissue culture of medicinal plant].

    Science.gov (United States)

    Lv, Dong-Mei; Yuan, Yuan; Zhan, Zhi-Lai

    2014-09-01

    In order to increase the yield and quality of medicinal plants and enhance the competitiveness of the medicinal plant industry in our country, this paper analyzes the status, problems and countermeasures of large-scale tissue culture of medicinal plants. Although biotechnology is one of the most efficient and promising means of producing medicinal plants, problems remain, such as the stability of the material, the safety of transgenic medicinal plants and the optimization of culture conditions. Establishing a perfect evaluation system according to the characteristics of each medicinal plant is the key measure to ensure the sustainable development of large-scale tissue culture of medicinal plants.

  4. Participatory Design of Large-Scale Information Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    In this article we discuss how to engage in large-scale information systems development by applying a participatory design (PD) approach that acknowledges the unique situated work practices conducted by the domain experts of modern organizations. We reconstruct the iterative prototyping approach into a PD process model that (1) emphasizes PD experiments as transcending traditional prototyping by evaluating fully integrated systems exposed to real work practices; (2) incorporates improvisational change management including anticipated, emergent, and opportunity-based change; and (3) extends initial ... and discusses three challenges to address when dealing with large-scale systems development.

  5. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In a future power system with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising with high wind power penetration. This paper presents a review of the electricity storage technologies relevant for large power systems. The paper also presents an estimation of the economic feasibility of electricity storage using the west Danish power market area as a case.

  6. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available This paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. A finite element model of the top flange connection system is established with the finite element analysis software MSC.Marc/Mentat and its fatigue strain is analysed; the flange fatigue load cases are simulated with the Bladed software; the flange fatigue load spectrum is obtained with the rain-flow counting method; and, finally, fatigue analysis of the top flange is carried out with the fatigue analysis software MSC.Fatigue and the Palmgren-Miner linear cumulative damage theory. The results provide a new approach to flange fatigue analysis of large-scale wind turbine generators and have practical engineering value.
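    The final step above, the Palmgren-Miner summation, can be sketched as follows. This is a minimal illustration, not the paper's actual computation: the S-N curve constants (N = C / S^m) and the load spectrum, which in practice would come from rain-flow counting of the simulated flange loads, are made-up values.

```python
# Palmgren-Miner linear cumulative damage from a fatigue load spectrum.
# S-N curve constants C, m and the spectrum are illustrative assumptions.

def cycles_to_failure(S, C=2.0e12, m=3.0):
    """Allowable cycles at stress range S (MPa) from an S-N curve N = C / S^m."""
    return C / S**m

def miner_damage(spectrum):
    """Cumulative damage D = sum(n_i / N_i); D >= 1 predicts failure."""
    return sum(n / cycles_to_failure(S) for S, n in spectrum)

# (stress range in MPa, applied cycle count) pairs, e.g. from rain-flow counting
spectrum = [(120.0, 2.0e5), (80.0, 1.0e6), (40.0, 5.0e6)]
D = miner_damage(spectrum)
print(f"cumulative damage D = {D:.3f}")   # D < 1: no predicted fatigue failure
```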

  7. Renormalization-group flow of the effective action of cosmological large-scale structures

    CERN Document Server

    Floerchinger, Stefan

    2017-01-01

    Following an approach of Matarrese and Pietroni, we derive the functional renormalization group (RG) flow of the effective action of cosmological large-scale structures. Perturbative solutions of this RG flow equation are shown to be consistent with standard cosmological perturbation theory. Non-perturbative approximate solutions can be obtained by truncating the a priori infinite set of possible effective actions to a finite subspace. Using for the truncated effective action a form dictated by dissipative fluid dynamics, we derive RG flow equations for the scale dependence of the effective viscosity and sound velocity of non-interacting dark matter, and we solve them numerically. Physically, the effective viscosity and sound velocity account for the interactions of long-wavelength fluctuations with the spectrum of smaller-scale perturbations. We find that the RG flow exhibits an attractor behaviour in the IR that significantly reduces the dependence of the effective viscosity and sound velocity on the input ...

  8. Dynamics of the large-scale circulation in turbulent Rayleigh–Bénard convection with modulated rotation

    NARCIS (Netherlands)

    Zhong, J.Q.; Sterl, S.H.; Li, H.M.

    2015-01-01

    We present measurements of the azimuthal rotation velocity $\dot{\theta}(t)$ and thermal amplitude $\delta(t)$ of the large-scale circulation in turbulent Rayleigh–Bénard convection with modulated rotation. Both $\dot{\theta}(t)$ and $\delta(t)$ exhibit clear ...

  9. Success Factors of Large Scale ERP Implementation in Thailand

    OpenAIRE

    Rotchanakitumnuai, Siriluck

    2010-01-01

    The objective of the study is to examine the determinants of success in large-scale ERP implementation. The results indicate that large-scale ERP implementation success consists of eight factors: project management competence, knowledge sharing, ERP system quality, understanding, user involvement, business process re-engineering, top management support, and organization readiness.

  10. Cost Overruns in Large-scale Transportation Infrastructure Projects

    DEFF Research Database (Denmark)

    Cantarelli, Chantal C; Flyvbjerg, Bent; Molin, Eric J. E

    2010-01-01

    Managing large-scale transportation infrastructure projects is difficult due to frequent misinformation about the costs which results in large cost overruns that often threaten the overall project viability. This paper investigates the explanations for cost overruns that are given in the literature...

  11. Fractals and the Large-Scale Structure in the Universe

    Indian Academy of Sciences (India)

    Fractals and the Large-Scale Structure in the Universe - Introduction and Basic Concepts. A K Mittal, T R Seshadri. General Article, Resonance – Journal of Science Education, Volume 7, Issue 2, February 2002, pp 6-19.

  12. New Visions for Large Scale Networks: Research and Applications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This paper documents the findings of the March 12-14, 2001 Workshop on New Visions for Large-Scale Networks: Research and Applications. The workshop's objectives were...

  13. Chain Analysis for large-scale Communication systems

    NARCIS (Netherlands)

    Grijpink, Jan|info:eu-repo/dai/nl/095130861

    2010-01-01

    The chain concept is introduced to explain how large-scale information infrastructures so often fail and sometimes even backfire. Next, the assessment framework of the doctrine of Chain-computerisation and its chain analysis procedure are outlined. In this procedure chain description precedes

  14. Water Implications of Large-Scale Land Acquisitions in Ghana

    Directory of Open Access Journals (Sweden)

    Timothy Olalekan Williams

    2012-06-01

    The paper offers recommendations which can help the government to achieve its stated objective of developing a "policy framework and guidelines for large-scale land acquisitions by both local and foreign investors for biofuels that will protect the interests of investors and the welfare of Ghanaian farmers and landowners".

  15. Origin of large-scale cell structure in the universe

    International Nuclear Information System (INIS)

    Zel'dovich, Y.B.

    1982-01-01

    A qualitative explanation is offered for the characteristic global structure of the universe, wherein ''black'' regions devoid of galaxies are surrounded on all sides by closed, comparatively thin, ''bright'' layers populated by galaxies. The interpretation rests on some very general arguments regarding the growth of large-scale perturbations in a cold gas

  16. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  17. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  18. Fractals and the Large-Scale Structure in the Universe

    Indian Academy of Sciences (India)

    Fractals and the Large-Scale Structure in the Universe - Is the Cosmological Principle Valid? A K Mittal, T R Seshadri. General Article, Resonance – Journal of Science Education, Volume 7, Issue 4, April 2002, pp 39-47.

  19. Evaluating Large-scale National Public Management Reforms

    DEFF Research Database (Denmark)

    Breidahl, Karen Nielsen; Gjelstrup, Gunnar; Hansen, Morten Balle

    This article explores differences and similarities between two evaluations of large-scale administrative reforms which were carried out in the 2000s: The evaluation of the Norwegian NAV reform (EVANAV) and the evaluation of the Danish Local Government Reform (LGR). We provide a comparative analysis...

  20. Invertebrates or iron: does large-scale opencast mining impact ...

    African Journals Online (AJOL)

    The results were, however, confounded by the fact that the resting eggs of pan inhabitants could remain dormant in the sediment for decades, suggesting that ... Similarly, the preservation of conservation areas and a landscape-wide management system were proposed to ensure that large-scale ecological processes are not ...

  1. Reconsidering Replication: New Perspectives on Large-Scale School Improvement

    Science.gov (United States)

    Peurach, Donald J.; Glazer, Joshua L.

    2012-01-01

    The purpose of this analysis is to reconsider organizational replication as a strategy for large-scale school improvement: a strategy that features a "hub" organization collaborating with "outlet" schools to enact school-wide designs for improvement. To do so, we synthesize a leading line of research on commercial replication to construct a…

  2. The large scale microwave background anisotropy in decaying particle cosmology

    International Nuclear Information System (INIS)

    Panek, M.

    1987-06-01

    We investigate the large-scale anisotropy of the microwave background radiation in cosmological models with decaying particles. The observed value of the quadrupole moment combined with other constraints gives an upper limit on the redshift of the decay, z_d < 3-5. 12 refs., 2 figs

  3. Resolute large scale mining company contribution to health services of

    African Journals Online (AJOL)

    Introduction: In 1995 Tanzanian Government reformed the mining industry and the new policy allowed an involvement of multinational companies but the communities living near new large scale gold mines were expected to benefit from the industry in terms of socio economic, health, education, employment, safe drinking ...

  4. Large Scale Magnetic Fields: Density Power Spectrum in Redshift ...

    Indian Academy of Sciences (India)

    ...magnetic fields to have significant impact on the large scale structure at present. Magnetic fields of a more recent ... are produced at the time of inflation in the very early universe. Larger surveys like the on-going ... fields and their impact on redshift space power spectrum and give our main results. In section 4 we summarize our ...

  5. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Background: Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory inter-related steps, from finding gene families, to generating alignments and phylogenetic trees, and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods: We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results: We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion: Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  6. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven

    2013-01-01

    Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation procedure with the dual variables.
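
The general scheme this abstract describes (independent local subproblems coordinated through a price-like dual variable) can be sketched generically. The quadratic unit costs, step size, and two-unit setup below are illustrative assumptions, not the paper's model:

```python
# Generic dual-decomposition sketch: each unit solves its own small problem
# for a given price lam (the dual variable); a coordinator raises or lowers
# lam until total production meets demand.

def local_solve(c, lam):
    # argmin_u  c*u**2 - lam*u  (unit's quadratic cost minus payment at price lam)
    return lam / (2.0 * c)

def dual_decomposition(costs, demand, step=0.1, iters=500):
    lam = 0.0
    for _ in range(iters):
        u = [local_solve(c, lam) for c in costs]  # solved independently per unit
        lam += step * (demand - sum(u))           # dual (sub)gradient ascent
    return lam, u

lam, u = dual_decomposition([1.0, 2.0], demand=3.0)
```

At the fixed point the price settles where aggregate production equals demand; here the cheaper unit (lower `c`) ends up supplying the larger share.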

  7. Large-scale Agricultural Land Acquisitions in West Africa | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Recent increases in commodity prices have led some governments and private investors to purchase or lease large tracts of land in foreign countries for producing their own food and biofuel. ... They will use the results to increase public awareness and knowledge about the consequences of large-scale land acquisitions.

  8. Description of a Large-Scale Micro-Teaching Program.

    Science.gov (United States)

    Webb, Clark; And Others

    This report describes the implementation of a large-scale program at Brigham Young University to provide for at least one microteaching experience for each of 730 students enrolled in a beginning education course. A definition of microteaching (the creation of a miniature teaching situation under controlled conditions) and the elements which make…

  9. Small and large scale genomic DNA isolation protocol for chickpea ...

    African Journals Online (AJOL)

    Small and large scale genomic DNA isolation protocol for chickpea ( Cicer arietinum L.), suitable for molecular marker and transgenic analyses. ... Chickpea is an important food legume crop with high nutritional value. Lack of appropriate DNA isolation protocol is a limiting factor for any molecular studies of this crop.

  10. The Large-Scale Structure of Scientific Method

    Science.gov (United States)

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  11. The interaction of large scale and mesoscale environment leading to ...

    Indian Academy of Sciences (India)

    Journal of Earth System Science; Volume 118; Issue 5. The interaction of large scale and mesoscale environment leading to formation of intense thunderstorms over Kolkata. Part I: Doppler radar and satellite observations. P Mukhopadhyay M Mahakur H A K Singh. Volume 118 Issue 5 October 2009 pp ...

  12. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    The objective of this project is to test whether the Food and Agriculture Organization's Voluntary Guidelines on the Responsible Governance of Tenure of Land, Fisheries and Forests in the Context of National Food Security can help increase accountability for large-scale land acquisitions in Mali, Nigeria, Uganda, and South ...

  13. Participatory Design and the Challenges of Large-Scale Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    With its 10th biannual anniversary conference, Participatory Design (PD) is leaving its teens and must now be considered ready to join the adult world. In this article we encourage the PD community to think big: PD should engage in large-scale information-systems development and opt for a PD...

  14. A Large-Scale Earth and Ocean Phenomenon

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education; Volume 10; Issue 2. Tsunamis - A Large-Scale Earth and Ocean Phenomenon. Satish R Shetye. General Article Volume 10 Issue 2 February 2005 pp 8-19. Fulltext. Click here to view fulltext PDF. Permanent link:

  15. Large-Scale Innovation and Change in UK Higher Education

    Science.gov (United States)

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  16. Factors Influencing Uptake of a Large Scale Curriculum Innovation.

    Science.gov (United States)

    Adey, Philip S.

    Educational research has all too often failed to be implemented on a large-scale basis. This paper describes the multiplier effect of a professional development program for teachers and for trainers in the United Kingdom, and how that program was developed, monitored, and evaluated. Cognitive Acceleration through Science Education (CASE) is a…

  17. Symmetry in stochasticity: Random walk models of large-scale ...

    Indian Academy of Sciences (India)

    This paper describes the insights gained from the excursion set approach, in which various questions about the phenomenology of large-scale structure formation can be mapped to problems associated with the first crossing distribution of appropriately defined barriers by random walks. Much of this is summarized in R K ...

  18. Solving large scale crew scheduling problems by using iterative partitioning

    NARCIS (Netherlands)

    E.J.W. Abbink (Erwin)

    2008-01-01

    textabstractThis paper deals with large-scale crew scheduling problems arising at the Dutch railway operator, Netherlands Railways (NS). NS operates about 30,000 trains a week. All these trains need a driver and a certain number of conductors. No available crew scheduling algorithm can solve such

  19. Solving Large Scale Crew Scheduling Problems in Practice

    NARCIS (Netherlands)

    E.J.W. Abbink (Erwin); L. Albino; T.A.B. Dollevoet (Twan); D. Huisman (Dennis); J. Roussado; R.L. Saldanha

    2010-01-01

    textabstractThis paper deals with large-scale crew scheduling problems arising at the Dutch railway operator, Netherlands Railways (NS). NS operates about 30,000 trains a week. All these trains need a driver and a certain number of guards. Some labor rules restrict the duties of a certain crew base

  20. The Large Scale Magnetic Field and Sunspot Cycles

    Indian Academy of Sciences (India)

    J. Astrophys. Astr. (2000) 21, 161-162. The Large Scale Magnetic Field and Sunspot Cycles. V. I. Makarov* & A. G. Tlatov, Kislovodsk Solar Station of the Pulkovo Observatory, Kislovodsk 357700, P.O. Box 145, Russia. *e-mail: makarov@gao.spb.ru. Key words: Sun: magnetic field—sunspots—solar cycle. Extended abstract.

  1. Large-Scale Networked Virtual Environments: Architecture and Applications

    Science.gov (United States)

    Lamotte, Wim; Quax, Peter; Flerackers, Eddy

    2008-01-01

    Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…

  2. Information Tailoring Enhancements for Large-Scale Social Data

    Science.gov (United States)

    2016-09-26

    ... improved usability and navigation, (iii) improved the computational framework of Scraawl, (iv) enhanced Named Entity Recognition (NER), and (v) ... Keywords: information tailoring, large-scale analysis, OSINT.

  3. Large-Scale Assessments and Educational Policies in Italy

    Science.gov (United States)

    Damiani, Valeria

    2016-01-01

    Despite Italy's extensive participation in most large-scale assessments, their actual influence on Italian educational policies is less easy to identify. The present contribution aims at highlighting and explaining reasons for the weak and often inconsistent relationship between international surveys and policy-making processes in Italy.…

  4. Large Scale Magnetic Fields: Density Power Spectrum in Redshift ...

    Indian Academy of Sciences (India)

    2016-01-27

    Our analysis shows that if these magnetic fields originated in the early universe then it is possible to construct models for which the shape of the power spectrum agrees with the large scale slope of the observed power spectrum. However, requiring compatibility with observed CMBR anisotropies, the ...

  5. The Large Scale Structure: Polarization Aspects R. F. Pizzo

    Indian Academy of Sciences (India)

    © Indian Academy of Sciences. The Large Scale Structure: Polarization Aspects. R. F. Pizzo. ASTRON, Postbus 2, 7990 AA Dwingeloo, The Netherlands. e-mail: pizzo@astron.nl. Abstract. Polarized radio emission is detected at various scales in the Universe. In this document, I will briefly review our knowledge on polarization ...

  6. Fractals and the Large-Scale Structure in the Universe

    Indian Academy of Sciences (India)

    A K Mittal and T R Seshadri. ... on lien from Harish-Chandra Research Institute, Allahabad. Areas of his interest include cosmic microwave background radiation, large scale structures in the Universe and application of fractals in these. During the last decade it has been argued by some investigators that the distribution of galaxies ...

  7. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  8. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting to summarize the collaboration activities for FY2003 on the Large Scale Computer Simulation Research was held January 15-16, 2004 at Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  9. Large Scale Survey Data in Career Development Research

    Science.gov (United States)

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  10. Large scale synthesis and characterization of Ni nanoparticles by ...

    Indian Academy of Sciences (India)

    Bulletin of Materials Science; Volume 31; Issue 1. Large scale synthesis and characterization of Ni nanoparticles by solution reduction method. Huazhi Wang Xinli Kou Jie Zhang Jiangong Li. Nanomaterials Volume 31 Issue 1 February 2008 pp 97-100 ...

  11. Planck intermediate results XLII. Large-scale Galactic magnetic fields

    DEFF Research Database (Denmark)

    Adam, R.; Ade, P. A. R.; Alves, M. I. R.

    2016-01-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured...

  12. Large-Scale Systems Control Design via LMI Optimization

    Czech Academy of Sciences Publication Activity Database

    Rehák, Branislav

    2015-01-01

    Roč. 44, č. 3 (2015), s. 247-253 ISSN 1392-124X Institutional support: RVO:67985556 Keywords : Combinatorial linear matrix inequalities * large-scale system * decentralized control Subject RIV: BC - Control Systems Theory Impact factor: 0.633, year: 2015

  13. Optimization of FTA technology for large scale plant DNA isolation ...

    African Journals Online (AJOL)

    2006-05-02

    Key words: FTA™, large-scale, DNA sampling, field set-up, marker-assisted selection. ... application. The FTA™ classic card (Whatman Inc., Clifton, NJ) is a Whatman paper that has been impregnated with a patented chemical formulation that lyses cells, ... bands for both normal agarose (data not shown) and ...

  14. Firebrands and spotting ignition in large-scale fires

    Science.gov (United States)

    Eunmo Koo; Patrick J. Pagni; David R. Weise; John P. Woycheese

    2010-01-01

    Spotting ignition by lofted firebrands is a significant mechanism of fire spread, as observed in many large-scale fires. The role of firebrands in fire propagation and the important parameters involved in spot fire development are studied. Historical large-scale fires, including wind-driven urban and wildland conflagrations and post-earthquake fires, are given as...

  15. A large-scale industrial CT's data transfer system

    International Nuclear Information System (INIS)

    Chen Xuesong

    2004-01-01

    The large-scale industrial CT generates a large amount of data when it works. To guarantee reliable real-time transfer of those data, the author designs a scheme using WLAN technology, and solves the bottleneck caused by the data-rate limitation by using multi-thread technology. (author)

  16. Large scale sodium-water reaction tests for Monju steam generators

    International Nuclear Information System (INIS)

    Sato, M.; Hiroi, H.; Hori, M.

    1976-01-01

    To demonstrate the safe design of the steam generator system of the prototype fast reactor Monju against the postulated large leak sodium-water reaction, a large scale test facility SWAT-3 was constructed. SWAT-3 is a 1/2.5 scale model of the Monju secondary loop on the basis of the iso-velocity modeling. Two tests have been conducted in SWAT-3 since its construction. The test items using SWAT-3 are discussed, and the description of the facility and the test results are presented

  17. Random access in large-scale DNA data storage.

    Science.gov (United States)

    Organick, Lee; Ang, Siena Dumas; Chen, Yuan-Jyue; Lopez, Randolph; Yekhanin, Sergey; Makarychev, Konstantin; Racz, Miklos Z; Kamath, Govinda; Gopalan, Parikshit; Nguyen, Bichlien; Takahashi, Christopher N; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Seelig, Georg; Ceze, Luis; Strauss, Karin

    2018-03-01

    Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data at large scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data) in more than 13 million DNA oligonucleotides, and show that we can recover each file individually and with no errors, using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.

  18. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of the tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension of the parameter vector. A new design matrix free algorithm is proposed for computing the penalized maximum likelihood estimate for GLAMs, which, in particular, handles nondifferentiable penalty functions. The proposed algorithm is implemented and available via the R package glamlasso. It combines several ideas ...
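
The design-matrix-free idea rests on a standard array identity: the tensor product design matrix never needs to be formed, because (A ⊗ B) vec(X) = vec(B X A^T) with column-major vec. A minimal NumPy sketch of that identity (a generic illustration with made-up sizes; glamlasso's actual internals differ):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # marginal design matrix for dimension 1
B = rng.standard_normal((5, 2))   # marginal design matrix for dimension 2
X = rng.standard_normal((2, 3))   # coefficient array

# Explicit tensor-product design matrix (what becomes infeasible at scale)
full = np.kron(A, B) @ X.reshape(-1, order="F")

# Matrix-free equivalent: two small matrix products, Kronecker never formed
free = (B @ X @ A.T).reshape(-1, order="F")

assert np.allclose(full, free)
```

The explicit route costs memory proportional to the product of all marginal sizes, while the matrix-free route only ever touches the small marginal matrices, which is what makes array models tractable at scale.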

  19. Mathematical programming methods for large-scale topology optimization problems

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana

    for the classical minimum compliance problem. Two of the state-of-the-art optimization algorithms are investigated and implemented for this structural topology optimization problem. A Sequential Quadratic Programming method (TopSQP) and an interior point method (TopIP) are developed exploiting the specific mathematical structure of the problem. In both solvers, information of the exact Hessian is considered. A robust iterative method is implemented to efficiently solve large-scale linear systems. Both TopSQP and TopIP have successful results in terms of convergence, number of iterations, and objective function values. Thanks to the use of the iterative method implemented, TopIP is able to solve large-scale problems with more than three million degrees of freedom.

  20. Large-scale innovation and change in UK higher education

    Directory of Open Access Journals (Sweden)

    Stephen Brown

    2013-09-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ technology to deliver such changes. Key lessons that emerged from these experiences are reviewed covering themes of pervasiveness, unofficial systems, project creep, opposition, pressure to deliver, personnel changes and technology issues. The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion-led bottom-up methods. It also argues that while some diminution of control over project outcomes is inherent in this approach, this is outweighed by potential benefits of lasting and widespread adoption of agreed changes.

  1. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed

    2017-03-16

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end, cellular networks are indeed a strong first mile candidate to accommodate the data tsunami to be generated by the IoT. However, IoT devices are required in the cellular paradigm to undergo random access procedures as a precursor to resource allocation. Such procedures impose a major bottleneck that hinders cellular networks' ability to support large-scale IoT. In this article, we shed light on the random access dilemma and present a case study based on experimental data as well as system-level simulations. Accordingly, a case is built for the latent need to revisit random access procedures. A call for action is motivated by listing a few potential remedies and recommendations.

  2. The Large-scale Effect of Environment on Galactic Conformity

    Science.gov (United States)

    Sun, Shuangpeng; Guo, Qi; Wang, Lan; Wang, Jie; Gao, Liang; Lacey, Cedric G.; Pan, Jun

    2018-04-01

    We use a volume-limited galaxy sample from the SDSS Data Release 7 to explore the dependence of galactic conformity on the large-scale environment, measured on ~4 Mpc scales. We find that the star formation activity of neighbour galaxies depends more strongly on the environment than on the activity of their primary galaxies. In under-dense regions most neighbour galaxies tend to be active, while in over-dense regions neighbour galaxies are mostly passive, regardless of the activity of their primary galaxies. At a given stellar mass, passive primary galaxies reside in higher density regions than active primary galaxies, leading to the apparently strong conformity signal. The dependence of the activity of neighbour galaxies on environment can be explained by the corresponding dependence of the fraction of satellite galaxies. Similar results are found for galaxies in a semi-analytical model, suggesting that no new physics is required to explain the observed large-scale conformity.

  3. Volume measurement study for large scale input accountancy tank

    International Nuclear Information System (INIS)

    Uchikoshi, Seiji; Watanabe, Yuichi; Tsujino, Takeshi

    1999-01-01

    The Large Scale Tank Calibration (LASTAC) facility, including an experimental tank with the same volume and structure as the input accountancy tank of the Rokkasho Reprocessing Plant (RRP), was constructed at the Nuclear Material Control Center of Japan. Demonstration experiments have been carried out to evaluate the precision of solution volume measurement and to establish a procedure for highly accurate pressure measurement in a large-scale tank with a dip-tube bubbler probe system, to be applied to the input accountancy tank of RRP. Solution volume in a tank is determined by substituting the measured solution level into a calibration function obtained in advance, which expresses the relation between the solution level and the corresponding volume in the tank. Precise solution volume measurement therefore needs a precise calibration function that is determined carefully. The LASTAC calibration experiments using pure water showed good reproducibility. (J.P.N.)
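
The calibration-function procedure the abstract describes can be illustrated with a toy example: pair known metered water additions with measured levels, fit a level-to-volume function, then read an unknown volume off the fit. All numbers below are invented, and a real tank calibration would use many more points and typically a higher-order or piecewise fit:

```python
import numpy as np

# Hypothetical calibration data: measured liquid levels paired with known
# cumulative volumes of metered water added to the tank.
levels  = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])   # m, from dip-tube pressure
volumes = np.array([1.2, 2.4, 3.6, 4.8, 6.0, 7.2])   # m^3, metered additions

# Fit the calibration function (linear here for simplicity)
volume_at = np.poly1d(np.polyfit(levels, volumes, deg=1))

# Volume measurement = substitute a measured level into the function
print(round(float(volume_at(1.75)), 3))   # -> 4.2
```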

  4. Model for large scale circulation of nuclides in nature, 1

    Energy Technology Data Exchange (ETDEWEB)

    Ohnishi, Teruaki

    1988-12-01

    A model for large-scale circulation of nuclides was developed, and a computer code named COCAIN was made which simulates this circulation system dynamically. The natural environment considered in the present paper consists of 2 atmospheres, 8 geospheres and 2 lithospheres. The biosphere is composed of 4 types of edible plants, 5 cattle and their products, 4 water biota and 16 human organs. The biosphere is assumed to receive nuclides from the natural environment mentioned above. With the use of COCAIN, two numerical case studies were carried out: one on nuclear pollution in nature by the radioactive nuclides originating from past nuclear bomb tests, and the other on the response of environment and biota to a pulse injection of nuclides into one compartment. From the former case study it was verified that this model can well explain the observations and properly simulate the large-scale circulation of nuclides in nature.

  5. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.

  6. Performance of Grey Wolf Optimizer on large scale problems

    Science.gov (United States)

    Gupta, Shubham; Deep, Kusum

    2017-01-01

    Numerous nature-inspired optimization techniques for solving nonlinear continuous optimization problems have been proposed in the literature; they can be applied to real-life problems where conventional techniques cannot. The Grey Wolf Optimizer is one such technique, which has been gaining popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large-scale optimization problems. The algorithm is implemented on five common scalable problems appearing in the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large-scale problems, except on Rosenbrock, which is a unimodal function.
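
The canonical Grey Wolf Optimizer update (each wolf encircles the three current best wolves, with a coefficient that decays linearly over the run) can be sketched as follows. This is the textbook formulation applied to the Sphere benchmark in 50 dimensions, not the authors' code, and the population size and iteration budget are arbitrary choices:

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def gwo(f, dim=50, n_wolves=30, iters=300, lb=-100.0, ub=100.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        order = np.argsort([f(x) for x in X])
        leaders = [X[i].copy() for i in order[:3]]   # alpha, beta, delta wolves
        a = 2.0 - 2.0 * t / iters                    # decays linearly from 2 to 0
        for i in range(n_wolves):
            pos = np.zeros(dim)
            for lead in leaders:
                A = a * (2 * rng.random(dim) - 1)    # A = 2*a*r1 - a
                C = 2 * rng.random(dim)
                pos += lead - A * np.abs(C * lead - X[i])
            X[i] = np.clip(pos / 3.0, lb, ub)        # average of the three pulls
    return min(f(x) for x in X)

best = gwo(sphere)
```

Early on |A| can exceed 1, which pushes wolves away from the leaders (exploration); as `a` shrinks, the pack collapses onto the best solutions found (exploitation).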

  7. Reliability Evaluation considering Structures of a Large Scale Wind Farm

    DEFF Research Database (Denmark)

    Shin, Je-Seok; Cha, Seung-Tae; Wu, Qiuwei

    2012-01-01

    Wind energy is one of the most widely used renewable energy resources. Wind power has been connected to the grid as large-scale wind farms made up of dozens of wind turbines, and the scale of wind farms has increased further recently. Due to the intermittent and variable wind source, reliability evaluation of a wind farm is necessarily required. Also, because a large-scale offshore wind farm has a long repair time and a high repair cost as well as a high investment cost, it is essential to take the economic aspect into account. One method to efficiently build and operate a wind farm is to construct a wind farm that is able to enhance its capability of delivering power instead of controlling an uncontrollable output of wind power. Therefore, this paper introduces a method to evaluate the reliability depending upon the structure of the wind farm and to reflect the result in the planning stage of the wind farm.

  8. System Recovery in Large-Scale Distributed Storage Systems

    OpenAIRE

    Aga, Svein

    2008-01-01

    This report aims to describe and improve a system recovery process in large-scale storage systems. Inevitably, a recovery process results in the system being loaded with internal replication of data, and will extensively utilize several storage nodes. Such internal load can be categorized and generalized into a maintenance workload class. Obviously, a storage system will have external clients which also introduce load into the system. This can be users altering their data, uploading new cont...

  9. Foundations of Large-Scale Multimedia Information Management and Retrieval

    CERN Document Server

    Chang, Edward Y

    2011-01-01

    "Foundations of Large-Scale Multimedia Information Management and Retrieval - Mathematics of Perception" covers knowledge representation and semantic analysis of multimedia data and scalability in signal extraction, data mining, and indexing. The book is divided into two parts: Part I - Knowledge Representation and Semantic Analysis focuses on the key components of mathematics of perception as it applies to data management and retrieval. These include feature selection/reduction, knowledge representation, semantic analysis, distance function formulation for measuring similarity, and

  10. Large-Scale Physical Separation of Depleted Uranium from Soil

    Science.gov (United States)

    2012-09-01

    ... unweathered depleted uranium rods illustrating the formation of uranyl oxides and salts. Unfired penetrator rods can range from 10 to 50 cm in length ... specific area ratio (as thin sections, fine particles, or molten states). Uranium in finely divided form is prone to ignition. Uranium also has an ... ERDC/EL TR-12-25, Army Range Technology Program: Large-Scale Physical Separation of Depleted Uranium from Soil. Environmental ...

  11. NASA: Assessments of Selected Large-Scale Projects

    Science.gov (United States)

    2011-03-01

    Selected Large-Scale Projects. Common Name: Orion. Project Update: The President proposed cancellation of the Constellation Program, including the Orion ... fiscal year 2010. NASA remains poised to leverage Constellation assets to contribute to future exploration beyond low-Earth orbit. Projects assessed include the Orbiting Carbon Observatory 2 (OCO-2), the Orion Crew Exploration Vehicle, the Radiation Belt Storm Probes (RBSP), and Soil Moisture Active and Passive (SMAP).

  12. Large scale 2D spectral compressed sensing in continuous domain

    KAUST Repository

    Cai, Jian-Feng

    2017-06-20

    We consider the problem of spectral compressed sensing in continuous domain, which aims to recover a 2-dimensional spectrally sparse signal from partially observed time samples. The signal is assumed to be a superposition of s complex sinusoids. We propose a semidefinite program for the 2D signal recovery problem. Our model is able to handle large scale 2D signals of size 500 × 500, whereas traditional approaches only handle signals of size around 20 × 20.
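    The signal model in the record above can be made concrete with a toy sketch (sizes, sparsity level, and sampling ratio are illustrative; the paper's semidefinite program itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2-D spectrally sparse signal: a superposition of s complex
# sinusoids with continuous (off-grid) frequencies on an n x n grid.
n, s = 32, 3
f = rng.uniform(0, 1, size=(s, 2))                        # 2-D frequencies
c = rng.standard_normal(s) + 1j * rng.standard_normal(s)  # amplitudes

t1, t2 = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
X = sum(c[k] * np.exp(2j * np.pi * (f[k, 0] * t1 + f[k, 1] * t2))
        for k in range(s))

# The compressed-sensing part: observe only a random subset of samples.
mask = rng.random((n, n)) < 0.3
y = np.where(mask, X, 0)
print(mask.sum(), "of", n * n, "samples observed")
```

    The recovery problem is then to reconstruct `X` (and the frequencies `f`) from `y` and `mask` alone.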

  13. How Large-Scale Research Facilities Connect to Global Research

    DEFF Research Database (Denmark)

    Lauto, Giancarlo; Valentin, Finn

    2013-01-01

    Policies for large-scale research facilities (LSRFs) often highlight their spillovers to industrial innovation and their contribution to the external connectivity of the regional innovation system hosting them. Arguably, the particular institutional features of LSRFs are conducive for collaborati...... with domestic universities or government laboratories. Policies conceiving LSRFs as “knowledge attractors” therefore should consider the complementarities between research at a LSRF and in its academic context at a regional or national level....

  14. Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism,

    Science.gov (United States)

    1985-10-07

    (Scanned report-form residue; no abstract is recoverable. Identifiable fragments: Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 545 Technology Square, Cambridge; "Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism", G. Agha et al.)

  15. Large-scale prediction of drug-target relationships

    DEFF Research Database (Denmark)

    Kuhn, Michael; Campillos, Mónica; González, Paula

    2008-01-01

    , but also provides a more global view on drug-target relations. Here we review recent attempts to apply large-scale computational analyses to predict novel interactions of drugs and targets from molecular and cellular features. In this context, we quantify the family-dependent probability of two proteins...... to bind the same ligand as function of their sequence similarity. We finally discuss how phenotypic data could help to expand our understanding of the complex mechanisms of drug action....

  16. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT--Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas--Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as ``reduce then sample'' and ``sample then reduce.'' In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  17. Experimental simulation of microinteractions in large scale explosions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety

    1998-01-01

    This paper presents data and analysis of recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large scale explosions. Specifically, the fragmentation behavior of a high temperature molten steel drop under high pressure (beyond critical) conditions is investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea, and the ESPROSE.m prediction of fragmentation rate. (author)

  18. Design study on sodium cooled large-scale reactor

    International Nuclear Information System (INIS)

    Murakami, Tsutomu; Hishida, Masahiko; Kisohara, Naoyuki

    2004-07-01

    In Phase 1 of the 'Feasibility Studies on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop type reactor was selected as a promising concept of sodium-cooled large-scale reactor, which has a possibility to fulfill the design requirements of the F/S. In Phase 2, design improvement for further cost reduction and establishment of the plant concept has been performed. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2003, which is the third year of Phase 2. In the JFY2003 design study, critical subjects related to safety, structural integrity and thermal hydraulics which were found in the last fiscal year have been examined and the plant concept has been modified. Furthermore, fundamental specifications of main systems and components have been set and the economy has been evaluated. In addition, as the interim evaluation of the candidate concepts of the FBR fuel cycle is to be conducted, cost effectiveness and achievability of the development goal were evaluated and the data for the three large-scale reactor candidate concepts were prepared. As a result of this study, the plant concept of the sodium-cooled large-scale reactor has been constructed, which has a prospect to satisfy the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and to resolve the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection for narrowing down candidate concepts at the end of Phase 2. (author)

  19. Primordial large-scale electromagnetic fields from gravitoelectromagnetic inflation

    International Nuclear Information System (INIS)

    Membiela, Federico Agustin; Bellini, Mauricio

    2009-01-01

    We investigate the origin and evolution of primordial electric and magnetic fields in the early universe, when the expansion is governed by a cosmological constant Λ_0. Using the gravitoelectromagnetic inflationary formalism with A_0 = 0, we obtain the power spectra of the large-scale magnetic fields and of the inflaton field fluctuations during inflation. A very important fact is that our formalism is naturally non-conformally invariant.

  20. Partitioning Large Scale Deep Belief Networks Using Dropout

    OpenAIRE

    Huang, Yanping; Zhang, Sai

    2015-01-01

    Deep learning methods have shown great promise in many practical applications, ranging from speech recognition, visual object recognition, to text processing. However, most of the current deep learning methods suffer from scalability problems for large-scale applications, forcing researchers or users to focus on small-scale problems with fewer parameters. In this paper, we consider a well-known machine learning model, deep belief networks (DBNs) that have yielded impressive classification per...

  1. Enhancing microelectronics education with large-scale student projects

    OpenAIRE

    Rumpf, Clemens; Lidtke, Aleksander; Weddell, Alex; Maunder, Rob

    2016-01-01

    This paper discusses the benefits of using large-scale projects, involving many groups of students with different backgrounds, in the education of undergraduate microelectronics engineering students. The benefits of involving students in large, industry-like projects are first briefly reviewed. The organisation of undergraduate programmes is presented, and it is described how students can be involved in such large projects, while maintaining compatibility with undergraduate programmes. The ge...

  2. Exploring the technical challenges of large-scale lifelogging

    OpenAIRE

    Gurrin, Cathal; Smeaton, Alan F.; Qiu, Zhengwei; Doherty, Aiden R.

    2013-01-01

    Ambiently and automatically maintaining a lifelog is an activity that may help individuals track their lifestyle, learning, health and productivity. In this paper we motivate and discuss the technical challenges of developing real-world lifelogging solutions, based on seven years of experience. The gathering, organisation, retrieval and presentation challenges of large-scale lifelogging are discussed and we show how this can be achieved and the benefits that may accrue.

  3. The Phoenix series large scale LNG pool fire experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) was conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.

  4. Accuracy control in ultra-large-scale electronic structure calculation

    OpenAIRE

    Hoshi, Takeo

    2007-01-01

    Numerical aspects are investigated in ultra-large-scale electronic structure calculation, with a focus on accuracy control methods in process (molecular-dynamics) calculations. Flexible control methods are proposed so as to control variational freedoms, automatically at each time step, within the framework of generalized Wannier state theory. The method is demonstrated in silicon cleavage simulation with 10^2-10^5 atoms. The idea is of general importance among process calculations and is also used...

  5. Large scale particle image velocimetry with helium filled soap bubbles

    Science.gov (United States)

    Bosbach, Johannes; Kühn, Matthias; Wagner, Claus

    2009-03-01

    The application of Particle Image Velocimetry (PIV) to measurement of flows on large scales is a challenging necessity especially for the investigation of convective air flows. Combining helium filled soap bubbles as tracer particles with high power quality switched solid state lasers as light sources allows conducting PIV on scales of the order of several square meters. The technique was applied to mixed convection in a full scale double aisle aircraft cabin mock-up for validation of Computational Fluid Dynamics simulations.
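    The core displacement estimate underlying PIV can be sketched on synthetic data (this illustrates only the generic FFT cross-correlation step, not the helium-bubble setup; window size and shift are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Two interrogation windows: the second is the first shifted by a
# known displacement, mimicking tracer motion between laser pulses.
w = 64
img1 = rng.random((w, w))
dx, dy = 3, -2
img2 = np.roll(np.roll(img1, dx, axis=0), dy, axis=1)

# FFT-based cross-correlation; the peak location gives the shift.
f1 = np.fft.fft2(img1 - img1.mean())
f2 = np.fft.fft2(img2 - img2.mean())
corr = np.real(np.fft.ifft2(f1.conj() * f2))
peak = np.unravel_index(np.argmax(corr), corr.shape)

# Map the circular peak index back to a signed displacement.
shift = [int(p) if p <= w // 2 else int(p) - w for p in peak]
print(shift)   # [3, -2]
```

    Real PIV tiles the images into many such windows to produce a full vector field.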

  6. Large Scale Density Estimation of Blue and Fin Whales (LSD)

    Science.gov (United States)

    2015-09-30

    (Approved for public release; distribution is unlimited.) ...sensors, or both. The goal of this research is to develop and implement a new method for estimating blue and fin whale density that is effective over... develop and implement a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse...

  7. EDITORIAL: Focus on Gravitational Lensing

    Science.gov (United States)

    Jain, Bhuvnesh

    2007-11-01

    relation of cluster light and mass. An interesting twist in cluster lensing was provided by the post-merger Bullet Cluster (identified as 1E0657-558). In this and other merging clusters, the lensing mass is displaced from the baryonic center of mass, presenting a challenge to theories that attempt to explain away dark matter by positing a modification to the law of gravity. Detailed modeling and multi-wavelength data on these systems will provide interesting limits on dark matter as well as the possibility of a major surprise. Other advances may come from the gravitational telescope effect of galaxy clusters: regions with very high magnification can be used to image proto-galaxies at z ~ 10. Statistical studies of galaxy and cluster lenses and of invisible, diffuse large-scale structures via weak lensing have come into their own in recent years. A census of the mass distribution at low redshift has been made using the technique of galaxy-galaxy lensing: the mean mass profiles of galaxies and clusters have been measured using the weak tangential shear imprinted on background galaxies. These can be correlated with a variety of luminous tracers to study galaxy/cluster properties at a level of detail not possible until recently. Equally impressive is the measurement of excess mass correlations out to ~30 Mpc from these halos, requiring measurements of shear signals below 0.01%. These measurements account for the total matter density inferred from the CMB plus other observations, thus providing a direct measure of dark matter in the present-day universe. Cosmic shear refers to the more challenging measurement of shear-shear correlations without the use of foreground objects to orient the shear. The first detections of such correlations were published in 2001; since then measurements from arcminute to degree scales have been made with much improved accuracy. 
Theoretical techniques of lensing tomography and advances in analysis methods to eliminate systematic errors have

  8. Study of a large scale neutron measurement channel

    International Nuclear Information System (INIS)

    Amarouayache, Anissa; Ben Hadid, Hayet.

    1982-12-01

    A large-scale measurement channel allows the processing of the signal coming from a single neutron sensor during three different running modes: pulses, fluctuations and current. The study described in this note includes three parts: - A theoretical study of the large-scale channel and a brief description of it are given, and the results obtained so far in this domain are presented. - The fluctuation mode is studied thoroughly and the improvements to be made are defined. The study of a linear fluctuation channel with automatic scale switching is described and the test results are given. In this large-scale channel, the data processing is analogue. - To become independent of the problems generated by analogue processing of the fluctuation signal, a digital data-processing method is tested and its validity is demonstrated. The results obtained on a test system built according to this method are given, and a preliminary plan for further research is defined [fr]

  9. Primordial quantum nonequilibrium and large-scale cosmic anomalies

    Science.gov (United States)

    Colin, Samuel; Valentini, Antony

    2015-08-01

    We study incomplete relaxation to quantum equilibrium at long wavelengths, during a preinflationary phase, as a possible explanation for the reported large-scale anomalies in the cosmic microwave background. Our scenario makes use of the de Broglie-Bohm pilot-wave formulation of quantum theory, in which the Born probability rule has a dynamical origin. The large-scale power deficit could arise from incomplete relaxation for the amplitudes of the primordial perturbations. We show, by numerical simulations for a spectator scalar field, that if the preinflationary era is radiation dominated then the deficit in the emerging power spectrum will have a characteristic shape (an inverse-tangent dependence on wave number k , with oscillations). It is found that our scenario is able to produce a power deficit in the observed region and of the observed (approximate) magnitude for an appropriate choice of cosmological parameters. We also discuss the large-scale anisotropy, which might arise from incomplete relaxation for the phases of the primordial perturbations. We present numerical simulations for phase relaxation, and we show how to define characteristic scales for amplitude and phase nonequilibrium. The extent to which the data might support our scenario is left as a question for future work. Our results suggest that we have a potentially viable model that might explain two apparently independent cosmic anomalies by means of a single mechanism.

  10. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

    Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show that, we adopted an auditory paradigm developed by Sussman, E., Ritter, W., and Vaughan, H. G. Jr. (1998). Predictability of stimulus deviance and the mismatch negativity. NeuroReport, 9, 4167-4170, Sussman, E., and Gumenyuk, V. (2005). Organization of sequential sounds in auditory memory. NeuroReport, 16, 1519-1523 to the visual domain by presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli being of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparing the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in human visual sensory system, revealed that visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity being present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity. 2010 Elsevier B.V. All rights reserved.

  11. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Quinby, Ted [National Renewable Energy Lab. (NREL), Golden, CO (United States); Caulfield, Emmet [Stanford Univ., CA (United States); Gerritsen, Margot [Stanford Univ., CA (United States); Diffendorfer, Jay [U.S. Geological Survey, Boulder, CO (United States); Haines, Seth [U.S. Geological Survey, Boulder, CO (United States)

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  12. Learning Short Binary Codes for Large-scale Image Retrieval.

    Science.gov (United States)

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms prove to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually the code length shorter than 100 b) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR can generate one bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost-values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve comparative performance as the state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
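    The "one bit per dimension, then rank and select" idea can be sketched in a few lines (the data, the median threshold, and the variance-based stand-in for MCR's cost function are all illustrative; the paper's actual cost function is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 64))          # 1000 samples, 64 dimensions

# One candidate bit per dimension: threshold at the per-dimension median.
bits = (X > np.median(X, axis=0)).astype(np.uint8)

# Stand-in cost: negative variance, so high-variance (more
# discriminative) dimensions get low cost.
cost = -X.var(axis=0)

# Keep only the k lowest-cost bits as the short binary code.
k = 16
keep = np.argsort(cost)[:k]
codes = bits[:, keep]
print(codes.shape)   # (1000, 16)
```

    Retrieval then ranks database items by Hamming distance between these short codes.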

  13. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    KAUST Repository

    Hao, Zhifeng

    2013-10-03

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large-scale, scalability and computational efficiency are considered as desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.
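    The bilateral rank-1 update at the heart of the abstract can be illustrated on a toy problem: minimizing ||X - M||_F^2 over positive semidefinite matrices of fixed trace, with a fixed mixing step in place of BILGO's exact coefficient search (everything here is illustrative, not the paper's algorithm verbatim):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
A = rng.standard_normal((n, n))
M = A @ A.T                                  # PSD target matrix
tau = np.trace(M)                            # trace budget

X = np.zeros((n, n))
for _ in range(200):
    G = 2 * (X - M)                          # gradient of ||X - M||_F^2
    _, V = np.linalg.eigh(-G)                # eigvecs of the descent direction
    v = V[:, -1]                             # leading eigenvector
    # Bilateral combination of the previous iterate and a rank-1 matrix;
    # BILGO would optimize the two coefficients, here eta is fixed.
    eta = 0.1
    X = (1 - eta) * X + eta * tau * np.outer(v, v)

print(np.linalg.norm(X - M))
```

    Because each iterate is a convex combination of PSD, trace-tau matrices, feasibility is maintained for free; only a leading eigenvector is needed per iteration.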

  14. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight to large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathlines data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathlines segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
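    A minimal sketch of the per-pixel list data structure described above (names, coordinates, and attributes are illustrative; the thesis builds this on the GPU, not in Python):

```python
from collections import defaultdict

# Each screen pixel keeps a list of the pathline segments that project
# onto it, so filtering and color-coding can run per pixel without
# touching the original flow data again.
pixel_lists = defaultdict(list)

def rasterize(segment_id, covered_pixels, depth, attribute):
    """Append one pathline segment to every pixel it covers."""
    for xy in covered_pixels:
        pixel_lists[xy].append((depth, segment_id, attribute))

rasterize(0, [(10, 12), (10, 13)], depth=0.4, attribute="fast")
rasterize(1, [(10, 12)], depth=0.2, attribute="slow")

# View-dependent exploration: sort a pixel's list by depth and take
# the front-most segment.
front = sorted(pixel_lists[(10, 12)])[0]
print(front)   # (0.2, 1, 'slow')
```

    On the GPU this per-pixel list is typically stored as a head-pointer texture plus a node buffer, but the access pattern is the same.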

  15. Robust regression for large-scale neuroimaging studies.

    Science.gov (United States)

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
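    The benefit of robust regression in the presence of gross outliers can be sketched with a standard IRLS/Huber estimator (one common robust method; the paper's full pipeline with analytic tests and RPBI is not reproduced, and all data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 3
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(n)
y[:10] += 20.0                               # gross outliers

# Iteratively reweighted least squares with Huber weights.
beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
delta = 1.345                                # standard Huber tuning constant
for _ in range(50):
    r = y - X @ beta
    s = np.median(np.abs(r)) / 0.6745 + 1e-12    # robust scale via MAD
    w = np.clip(delta / (np.abs(r / s) + 1e-12), None, 1.0)
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)   # weighted LS step
print(beta)
```

    The outlying subjects receive weights near zero, so the estimate stays close to the true coefficients where ordinary least squares would be pulled away.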

  16. Parallel clustering algorithm for large-scale biological data sets.

    Science.gov (United States)

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    Recent explosion of biological data brings a great challenge for the traditional clustering algorithms. With increasing scale of data sets, much larger memory and longer runtime are required for the cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, the time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose constructing procedure takes long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix constructing procedure and the affinity propagation algorithm. The memory-shared architecture is used to construct the similarity matrix, and the distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate way of data partition and reduction is designed in our method, in order to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies.
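    The responsibility/availability message passing at the heart of affinity propagation (the serial algorithm the paper parallelizes) can be sketched in dense NumPy; the data, damping factor, and median preference here are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
# Two well-separated 2-D blobs of 20 points each.
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
n = len(X)

# Similarity = negative squared distance; preference = median similarity.
S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(S, np.median(S))

R = np.zeros((n, n)); A = np.zeros((n, n)); damp = 0.7
for _ in range(200):
    # Responsibilities: r(i,k) = s(i,k) - max_{k'!=k} [a(i,k') + s(i,k')].
    AS = A + S
    idx = np.argmax(AS, axis=1)
    first = AS[np.arange(n), idx]
    AS[np.arange(n), idx] = -np.inf
    second = AS.max(axis=1)
    Rnew = S - first[:, None]
    Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
    R = damp * R + (1 - damp) * Rnew
    # Availabilities: a(i,k) = min(0, r(k,k) + sum of positive r(i',k)).
    Rp = np.maximum(R, 0); np.fill_diagonal(Rp, R.diagonal())
    Anew = Rp.sum(axis=0)[None, :] - Rp
    dA = Anew.diagonal().copy()
    Anew = np.minimum(Anew, 0); np.fill_diagonal(Anew, dA)
    A = damp * A + (1 - damp) * Anew

labels = np.argmax(A + R, axis=1)    # each point's chosen exemplar
print(len(np.unique(labels)))
```

    The O(n^2) similarity matrix and the matrix-shaped messages are exactly what makes the memory-shared/distributed split in the paper attractive at scale.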

  17. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains a time-consuming and manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  18. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    Full Text Available A new package for accelerating large-scale phase-field simulations was developed by using GPU based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using algorithms tailored to the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interaction were each solved on the GPU to test the performance of the package. Comparing results between the solver executed on a single CPU and the one on the GPU, the GPU version was found to be up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
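
    A minimal sketch of the semi-implicit Fourier method for the Allen-Cahn equation, in one dimension with NumPy on a CPU rather than CUDA. The double-well derivative f'(φ) = φ³ − φ and all parameter values are assumptions chosen for the sketch, not taken from the package.

```python
import numpy as np

def allen_cahn_step(phi, dt, M=1.0, kappa=0.01, L=2 * np.pi):
    # One semi-implicit Fourier step for phi_t = -M*(f'(phi) - kappa*phi_xx):
    # the nonlinear bulk term f'(phi) = phi**3 - phi is treated explicitly,
    # the stiff gradient term implicitly, so larger time steps stay stable.
    n = phi.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    rhs_hat = np.fft.fft(phi) - dt * M * np.fft.fft(phi**3 - phi)
    return np.real(np.fft.ifft(rhs_hat / (1.0 + dt * M * kappa * k**2)))

x = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)
phi = 0.1 * np.cos(x)                # small perturbation about phi = 0
for _ in range(200):                 # integrate to t = 20
    phi = allen_cahn_step(phi, dt=0.1)
# phi has relaxed toward the stable wells near +/-1
```

    Treating the stiff gradient term implicitly in Fourier space is what permits the large time steps that make such solvers fast on GPUs.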

  19. Foundational perspectives on causality in large-scale brain networks

    Science.gov (United States)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  20. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques based on models other than Newton's: a lower-order model, Broyden's method, and a higher-order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian, or that have an inaccurate Jacobian, to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, the modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and compute a step from a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
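
    A compact sketch of the kind of Broyden iteration evaluated in the report: one finite-difference Jacobian seeds the iteration, after which rank-1 secant updates replace further Jacobian evaluations. This is the dense textbook variant, not Sandia's limited-memory implementation; the seeding choice and the small test system are assumptions.

```python
import numpy as np

def fd_jacobian(F, x, eps=1e-7):
    # One-time finite-difference Jacobian, used only to seed Broyden.
    f0 = F(x)
    J = np.empty((x.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - f0) / eps
    return J

def broyden_solve(F, x0, tol=1e-10, max_iter=100):
    # Broyden's "good" method: rank-1 secant updates replace repeated
    # Jacobian evaluations, so the true Jacobian is never recomputed.
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)
    f = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        s = np.linalg.solve(B, -f)             # quasi-Newton step
        x = x + s
        f_new = F(x)
        y = f_new - f
        B += np.outer(y - B @ s, s) / (s @ s)  # secant (rank-1) update
        f = f_new
    return x

# demo system: x^2 + y^2 = 2 and x = y, with a root at (1, 1)
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 2.0, v[0] - v[1]])
root = broyden_solve(F, [1.5, 0.7])
```

    The secant update enforces B_new s = y exactly, which is why an inexpensive or even inaccurate initial Jacobian can still lead to convergence.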

  1. Arcs from gravitational lensing

    Science.gov (United States)

    Grossman, Scott A.; Narayan, Ramesh

    1988-01-01

    The proposal made by Paczynski (1987) that the arcs of blue light found recently in two cluster cores are gravitationally lensed elongated images of background galaxies is investigated. It is shown that lenses that are circularly symmetric in projection produce pairs of arcs, in conflict with the observations. However, more realistic asymmetric lenses produce single arcs, which can become as elongated as the observed ones whenever the background galaxy is located on or close to a cusp caustic. Detailed computer simulations of lensing by clusters using a reasonable model of the mass distribution are presented. Elongated and curved lensed images longer than 10 arcsec occur in 12 percent of the simulated clusters. It is concluded that the lensing hypothesis must be taken seriously.

  2. Large-scale computation of incompressible viscous flow by least-squares finite element method

    Science.gov (United States)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale/three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive-definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector be zero, is included to make the first-order system elliptic. Newton's method with simple substitution is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi-preconditioned conjugate gradient method, which avoids the formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven-cavity problem at Re = 10000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. Taylor-Goertler-like vortices are observed for Re = 1,000.
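
    The matrix-free Jacobi-preconditioned conjugate gradient solver mentioned above can be sketched as follows: the operator is passed as a function, so no element or global matrix is ever assembled. The 1D Laplacian test operator is an assumption standing in for the LSFEM system.

```python
import numpy as np

def jacobi_pcg(apply_A, diag_A, b, tol=1e-10, max_iter=500):
    # Conjugate gradients with Jacobi (diagonal) preconditioning.
    # apply_A is a callable v -> A @ v, so the solver is matrix-free:
    # only the action of A and its diagonal are required.
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = r / diag_A                   # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / diag_A
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # new search direction
        rz = rz_new
    return x

# SPD test operator: 1D Dirichlet Laplacian applied as a stencil
n = 64
def lap(v):
    out = 2.0 * v
    out[1:] -= v[:-1]
    out[:-1] -= v[1:]
    return out

b = np.ones(n)
x = jacobi_pcg(lap, np.full(n, 2.0), b)
```

    Because only `apply_A` is needed, the same solver applies unchanged whether the operator is a finite-element stencil evaluated on the fly or an assembled sparse matrix.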

  3. Cross-Spectral Signatures in Global Helioseismology Data of Large-Scale Flow in the Sun

    Science.gov (United States)

    Woodard, M. F.

    2005-05-01

    Large-scale flows in the Sun's interior have been studied using a variety of helioseismic techniques, including spectral analysis of spherical harmonic time series of photospheric velocity oscillations. Detailed maps of differential rotation have been obtained from measurements of the frequencies of resonance peaks in the power spectra. Flows can also affect power spectra in subtler ways, e.g., by their influence on the widths of resonance peaks. In addition to their spectral signature, flows and other aspherical perturbations also produce cross-spectral signatures, via the mode-coupling effect of a flow. Cross power spectra of time series of coefficients in the spherical-harmonic decomposition of SOHO/MDI medium-ℓ velocity images have been computed and are being compared with theoretical predictions. The results of a preliminary comparison of observed and theoretically predicted cross spectra for differential rotation and meridional circulation will be presented. A program to systematically map large-scale solar internal flow using cross-spectral data will be described. The author acknowledges useful discussions with colleagues, especially Doug Braun, Yuhong Fan, Aaron Birch, and Jesper Schou. He is also grateful to Jesper Schou for help in acquiring MDI data products and to NASA for support under contract NAS5-3114. The Solar Oscillations Investigation - Michelson Doppler Imager experiment on SOHO is supported by NASA contract NAG5-3077 at Stanford University. SOHO is a project of international cooperation between ESA and NASA.

  4. Experimental study of detonation of large-scale powder-droplet-vapor mixtures

    Science.gov (United States)

    Bai, C.-H.; Wang, Y.; Xue, K.; Wang, L.-F.

    2018-01-01

    Large-scale experiments were carried out to investigate the detonation performance of a 1600-m3 ternary cloud consisting of aluminum powder, fuel droplets, and vapor, which were dispersed by a central explosive in a cylindrically stratified configuration. High-frame-rate video cameras and pressure gauges were used to analyze the large-scale explosive dispersal of the mixture and the ensuing blast wave generated by the detonation of the cloud. Special attention was focused on the effect of the descending motion of the charge on the detonation performance of the dispersed ternary cloud. The charge was parachuted by an ensemble of apparatus from the designated height in order to achieve the required terminal velocity when the central explosive was detonated. A descending charge with a terminal velocity of 32 m/s produced a cloud with discernably increased concentration compared with that dispersed from a stationary charge, the detonation of which hence generates a significantly enhanced blast wave beyond the scaled distance of 6 m/kg^{1/3}. The results also show the influence of the descending motion of the charge on the jetting phenomenon and the distorted shock front.

  5. Turbulent boundary layer over 2D and 3D large-scale wavy walls

    Science.gov (United States)

    Chamorro, Leonardo P.; Hamed, Ali M.; Castillo, Luciano

    2015-11-01

    In this work, an experimental investigation of the developing and developed flow over two- and three-dimensional large-scale wavy walls was performed using high-resolution planar particle image velocimetry in a refractive-index-matching flume. The 2D wall is described by a sinusoidal wave in the streamwise direction with amplitude-to-wavelength ratio a/λx = 0.05. The 3D wall is defined with an additional wave superimposed on the 2D wall in the spanwise direction with a/λy = 0.1. The flow was characterized at Reynolds numbers of 4000 and 40000, based on the bulk velocity and the flume half height. Instantaneous velocity fields and time-averaged turbulence quantities reveal strong coupling between the large-scale topography and the turbulence dynamics near the wall. Turbulence statistics show the presence of a well-structured shear layer that enhances the turbulence for the 2D wavy wall, whereas the 3D wall exhibits different flow dynamics and significantly lower turbulence levels, with a reduction of about 30%. The likelihood of recirculation bubbles, the levels and spatial distribution of turbulence, and the rate of turbulent kinetic energy production are shown to be severely affected when a single spanwise mode is superimposed on the 2D wall. POD analysis was also performed to further understand distinctive features of the flow structures due to surface topography.

  6. Modification of large-scale motions in a turbulent pipe flow

    Science.gov (United States)

    Senshu, Kohei; Shinozaki, Hiroaki; Sakakibara, Jun

    2017-11-01

    We performed experiments to modify the flow structures in a fully developed turbulent flow in a straight round pipe. The modification of the flow was achieved by installing a short coaxial inner pipe, which can add a continuous suction or blowing disturbance through its outer surface. The experiments were conducted at a Reynolds number of 44,000 with seven different disturbance patterns. The wall static pressure was measured and the pipe friction coefficient was evaluated. The velocity distribution was measured with PIV and very large scale motions (VLSMs) were visualized. The pipe friction coefficient increased when the inner pipe was installed, while turbulence intensities over the cross section were reduced. Only a slight change in friction was observed when the disturbance was added. We decomposed the fluctuating velocity field in the azimuthal direction by a Fourier series expansion. As a result, we found that the contribution of the lower azimuthal mode numbers (m = 2, 3, 4) decreased while that of the higher modes increased. This was consistent with the observation of the visualized very large scale motions.
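
    The azimuthal Fourier decomposition used to attribute fluctuation energy to mode numbers m can be sketched as below; the synthetic signal and the sampling are illustrative assumptions, not the PIV data.

```python
import numpy as np

def azimuthal_mode_energy(u):
    # Fourier-decompose a fluctuating velocity signal sampled at n
    # equispaced azimuthal positions; return each mode m's share of
    # the total fluctuation energy (the mean, m = 0, is excluded).
    n = u.size
    c = np.fft.rfft(u) / n
    energy = np.abs(c) ** 2
    energy[1:] *= 2.0        # fold in the negative-m half of the spectrum
    energy[0] = 0.0          # drop the mean flow
    return energy / energy.sum()

theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
# synthetic fluctuation dominated by m = 3 with a weaker m = 8 component
u = 1.0 * np.cos(3 * theta) + 0.3 * np.sin(8 * theta + 0.4)
share = azimuthal_mode_energy(u)
# share[3] carries most of the energy; share[8] most of the remainder
```

    Applying this to each radial position and time step, and averaging the shares, gives the per-mode contributions whose redistribution the experiment reports.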

  7. On the Phenomenology of an Accelerated Large-Scale Universe

    Directory of Open Access Journals (Sweden)

    Martiros Khurshudyan

    2016-10-01

    Full Text Available In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe are discussed. Inflation, on the other hand, is the early-time accelerated era, so the universe is symmetric in the sense of accelerated expansion. The accelerated expansion of the universe is one of the long-standing problems in modern cosmology, and in physics in general. There are several well-defined approaches to solve this problem. One of them is the assumption that dark energy exists in the recent universe. It is believed that dark energy is responsible for antigravity, while dark matter has a gravitational nature and is responsible, in general, for structure formation. A different approach is an appropriate modification of general relativity including, for instance, f(R) and f(T) theories of gravity. On the other hand, attempts to build theories of quantum gravity, and assumptions about the existence of extra dimensions and the possible variability of the gravitational constant and the speed of light (among others), provide interesting modifications of general relativity applicable to problems of modern cosmology, too. In particular, two groups of cosmological models are discussed here. In the first group, the problem of the accelerated expansion of the large-scale universe is discussed involving a new idea, named varying ghost dark energy. The second group contains cosmological models addressing the same problem involving either new parameterizations of the equation-of-state parameter of dark energy (like a varying polytropic gas) or nonlinear interactions between dark energy and dark matter. Moreover, for cosmological models involving varying ghost dark energy, massless particle creation in an appropriate radiation-dominated universe (when the background dynamics is governed by general relativity) is demonstrated as well.
Exploring the nature of the accelerated expansion of the large-scale universe involving generalized

  8. Large scale obscuration and related climate effects open literature bibliography

    International Nuclear Information System (INIS)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called ''Nuclear Winter Controversy'' in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  9. Large-scale biophysical evaluation of protein PEGylation effects

    DEFF Research Database (Denmark)

    Vernet, Erik; Popa, Gina; Pozdnyakova, Irina

    2016-01-01

    PEGylation is the most widely used method to chemically modify protein biopharmaceuticals, but surprisingly limited public data is available on the biophysical effects of protein PEGylation. Here we report the first large-scale study, with site-specific mono-PEGylation of 15 different proteins...... and characterization of 61 entities in total using a common set of analytical methods. Predictions of molecular size were typically accurate in comparison with actual size determined by size-exclusion chromatography (SEC) or dynamic light scattering (DLS). In contrast, there was no universal trend regarding the effect...

  10. Enabling Large-Scale Biomedical Analysis in the Cloud

    Directory of Open Access Journals (Sweden)

    Ying-Chih Lin

    2013-01-01

    Full Text Available Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and the complexity of biomedical data collected from various sources. These planet-scale data pose serious challenges to storage and computing technologies. Cloud computing is a promising alternative because it jointly addresses storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications will facilitate biomedical research by making the vast amount of diverse data meaningful and usable.

  11. Generation Expansion Planning Considering Integrating Large-scale Wind Generation

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Ding, Yi; Østergaard, Jacob

    2013-01-01

    necessitated the inclusion of more innovative and sophisticated approaches in power system investment planning. A bi-level generation expansion planning approach considering large-scale wind generation was proposed in this paper. The first phase is investment decision, while the second phase is production...... optimization decision. A multi-objective PSO (MOPSO) algorithm was introduced to solve this optimization problem, which can accelerate the convergence and guarantee the diversity of Pareto-optimal front set as well. The feasibility and effectiveness of the proposed bi-level planning approach and the MOPSO...

  12. Less is more: regularization perspectives on large scale machine learning

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Deep learning based techniques provide a possible solution at the expense of theoretical guidance and, especially, of computational requirements. It is then a key challenge for large-scale machine learning to devise approaches guaranteed to be accurate and yet computationally efficient. In this talk, we will consider a regularization perspective on machine learning, appealing to classical ideas in linear algebra and inverse problems to dramatically scale up nonparametric methods such as kernel methods, often dismissed because of prohibitive costs. Our analysis derives optimal theoretical guarantees while providing experimental results on par with or outperforming state-of-the-art approaches.
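
    A sketch of the kind of scalable kernel method the talk alludes to: kernel ridge regression restricted to a random subset of landmark points (Nyström subsampling), which trades the O(n³) exact solve for roughly O(nm²) with m landmarks. The Gaussian kernel, regularization, and data below are illustrative assumptions, not the speaker's algorithm.

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between row-wise point sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def nystrom_krr_fit(X, y, m=40, lam=1e-3, sigma=1.0, seed=0):
    # Restrict the kernel ridge solution to the span of m random
    # landmarks Z: solve (Knm^T Knm + lam * Kmm) alpha = Knm^T y,
    # an m x m system instead of the full n x n one.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
    Z = X[idx]
    Knm = gauss_kernel(X, Z, sigma)
    Kmm = gauss_kernel(Z, Z, sigma)
    alpha = np.linalg.solve(Knm.T @ Knm + lam * Kmm, Knm.T @ y)
    return Z, alpha

def nystrom_krr_predict(Xtest, Z, alpha, sigma=1.0):
    return gauss_kernel(Xtest, Z, sigma) @ alpha

# fit a noisy sine with 400 samples but only 40 landmarks
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(400, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(400)
Z, alpha = nystrom_krr_fit(X, y, m=40, lam=1e-3, sigma=0.7, seed=1)
pred = nystrom_krr_predict(np.array([[0.5]]), Z, alpha, sigma=0.7)
```

    Here the number of landmarks m acts as a computational regularizer: shrinking m cuts cost, in the spirit of the talk's "less is more" message.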

  13. UAV Data Processing for Large Scale Topographical Mapping

    Directory of Open Access Journals (Sweden)

    W. Tampubolon

    2014-06-01

    Full Text Available Large scale topographical mapping in third world countries is a prominent challenge for the geospatial industry nowadays. On one side the demand is significantly increasing, while on the other hand it is constrained by the limited budgets available for mapping projects. Since the advent of Act Nr.4/yr.2011 about Geospatial Information in Indonesia, large scale topographical mapping has been a high priority for supporting nationwide development, e.g. detailed spatial planning. Usually large scale topographical mapping relies on conventional aerial survey campaigns in order to provide high resolution 3D geospatial data sources. Growing in popularity as a leisure hobby, aero models in the form of the so-called Unmanned Aerial Vehicle (UAV) offer alternative semi-photogrammetric aerial data acquisition possibilities suitable for a relatively small Area of Interest (AOI), i.e. <5,000 hectares. For detailed spatial planning purposes in Indonesia this area size can be used as a mapping unit, since planning usually concentrates on the sub-district (kecamatan) level. In this paper different camera and processing software systems are analyzed to identify the optimum UAV data acquisition campaign components in combination with the data processing scheme. The selected AOI covers the cultural heritage of Borobudur Temple as one of the Seven Wonders of the World. A detailed accuracy assessment concentrates on the object features of the temple in the first place. Feature compilation involving planimetric objects (2D) and digital terrain models (3D) will be integrated in order to provide Digital Elevation Models (DEM) as the main interest of the topographic mapping activity. In this research, incorporating the optimum number of GCPs in the UAV photo data processing increases the accuracy along with its high resolution of 5 cm Ground Sampling Distance (GSD).
Finally this result will be used as the benchmark for alternative

  14. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost...... and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...

  15. Large scale obscuration and related climate effects open literature bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called ''Nuclear Winter Controversy'' in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  16. Current status of large-scale cryogenic gravitational wave telescope

    International Nuclear Information System (INIS)

    Kuroda, K; Ohashi, M; Miyoki, S; Uchiyama, T; Ishitsuka, H; Yamamoto, K; Kasahara, K; Fujimoto, M-K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Nagano, S; Tsunesada, Y; Zhu, Zong-Hong; Shintomi, T; Yamamoto, A; Suzuki, T; Saito, Y; Haruyama, T; Sato, N; Higashi, Y; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Aso, Y; Ueda, K-I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Tagoshi, H; Nakamura, T; Sasaki, M; Tanaka, T; Oohara, K; Takahashi, H; Miyakawa, O; Tobar, M E

    2003-01-01

    The large-scale cryogenic gravitational wave telescope (LCGT) project is the proposed advancement of TAMA, which will be able to detect the coalescences of binary neutron stars occurring in our galaxy. LCGT intends to detect coalescence events within about 240 Mpc, the rate of which is expected to be from 0.1 to several events a year. LCGT has Fabry-Perot cavities with a 3 km baseline, and the mirrors are cooled down to a cryogenic temperature of 20 K. It is planned to be built underground in the Kamioka mine. This paper overviews the revision of the design and the current status of the R and D.

  17. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented.

  18. Optimization of large scale food production using Lean Manufacturing principles

    DEFF Research Database (Denmark)

    Engelund, Eva Høy; Friis, Alan; Breum, Gitte

    2009-01-01

    This paper discusses how the production principles of Lean Manufacturing (Lean) can be applied in a large-scale meal production. Lean principles are briefly presented, followed by a field study of how a kitchen at a Danish hospital has implemented Lean in the daily production. In the kitchen...... not be negatively affected by the rationalisation of production procedures. The field study shows that Lean principles can be applied in meal production and can result in increased production efficiency and systematic improvement of product quality without negative effects on the working environment. The results...

  19. Inflation in de Sitter spacetime and CMB large scale anomaly

    Science.gov (United States)

    Zhao, Dong; Li, Ming-Hua; Wang, Ping; Chang, Zhe

    2015-09-01

    The influence of cosmological constant-type dark energy in the early universe is investigated. This is accommodated by a new dispersion relation in de Sitter spacetime. We perform a global fit to explore the cosmological parameter space by using the CosmoMC package with the recently released Planck TT and WMAP polarization datasets. Using the results from the global fit, we compute a new CMB temperature-temperature (TT) spectrum. The obtained TT spectrum has lower power compared with that based on the ΛCDM model at large scales. Supported by National Natural Science Foundation of China (11375203)

  20. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate...... the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...

  1. Large scale PV plants - also in Denmark. Project report

    Energy Technology Data Exchange (ETDEWEB)

    Ahm, P. (PA Energy, Malling (Denmark)); Vedde, J. (SiCon. Silicon and PV consulting, Birkeroed (Denmark))

    2011-04-15

    Large scale PV (LPV) plants, plants with a capacity of more than 200 kW, have since 2007 constituted an increasing share of global PV installations. In 2009 large scale PV plants with a cumulative power of more than 1.3 GWp were connected to the grid. The necessary design data for LPV plants in Denmark are available or can be found, although irradiance data could be improved. There seem to be very few institutional barriers for LPV projects, but as so far no real LPV projects have been processed, these findings have to be regarded as preliminary. The fast growing number of very large scale solar thermal plants for district heating applications supports these findings. It has further been investigated how to optimize the layout of LPV plants. Under the Danish irradiance conditions, with several winter months with very low solar height, PV installations on flat surfaces will have to balance the requirements of physical space and cost against the loss of electricity production due to shadowing effects. The potential for LPV plants in Denmark is found in three main categories: PV installations on flat roofs of large commercial buildings, PV installations on other large scale infrastructure such as noise barriers, and ground mounted PV installations. The technical potential for all three categories is found to be significant, in the range of 50-250 km2. In terms of energy harvest, PV plants will under Danish conditions exhibit an overall efficiency of about 10 % in converting the energy content of the light, compared to about 0.3 % for biomass. The theoretical ground area needed to produce the present annual electricity consumption of Denmark at 33-35 TWh is about 300 km2. The Danish grid codes and the electricity safety regulations mention very little about PV and nothing about LPV plants. It is expected that LPV plants will be treated similarly to big wind turbines.
A number of LPV plant scenarios have been investigated in detail based on real commercial offers and

  2. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  3. Learning a Large Scale of Ontology from Japanese Wikipedia

    Science.gov (United States)

    Tamagawa, Susumu; Sakurai, Shinya; Tejima, Takuya; Morita, Takeshi; Izumi, Noriaki; Yamaguchi, Takahira

    This paper discusses how to learn a large-scale ontology from the Japanese Wikipedia. The learned ontology includes the following properties: rdfs:subClassOf (IS-A relationship), rdf:type (class-instance relationship), owl:Object/DatatypeProperty (Infobox triple), rdfs:domain (property domain), and skos:altLabel (synonym). Experimental case studies show that the learned Japanese Wikipedia Ontology outperforms existing general linguistic ontologies, such as EDR and Japanese WordNet, in terms of building cost and richness of structural information.

  4. Large-scale structure in the universe: Theory vs observations

    International Nuclear Information System (INIS)

    Kashlinsky, A.; Jones, B.J.T.

    1990-01-01

    A variety of observations constrain models of the origin of large scale cosmic structures. We review here the elements of current theories and comment in detail on which of the current observational data provide the principal constraints. We point out that enough observational data have accumulated to constrain (and perhaps determine) the power spectrum of primordial density fluctuations over a very large range of scales. We discuss the theories in the light of observational data and focus on the potential of future observations in providing even (and ever) tighter constraints. (orig.)

  5. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif

    2017-01-07

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  6. Computational Approach to large Scale Process Optimization through Pinch Analysis

    Directory of Open Access Journals (Sweden)

    Nasser Al-Azri

    2015-08-01

    Since its debut in the last quarter of the twentieth century, pinch technology has become a standard tool for efficient and cost-effective engineering process design. The method integrates mass and heat streams in such a way that waste and the external purchase of mass and utilities are minimized. Moreover, integrating process streams internally will minimize fuel consumption and hence carbon emissions to the atmosphere. This paper discusses a programmable approach to the design of mass and heat exchange networks that can be applied easily to large-scale engineering processes.
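A programmable formulation of pinch analysis can be illustrated with a minimal "problem table" cascade. The stream data and ΔTmin below are the classic four-stream textbook example, not values from the paper:

```python
# Minimal "problem table" cascade sketch for pinch analysis. Stream data and
# dT_min are the classic four-stream textbook example, not from the paper.
dT_min = 10.0
# (supply T, target T, heat-capacity flowrate CP [kW/K]); hot streams cool down
streams = [
    {"Ts": 170.0, "Tt": 60.0,  "CP": 3.0},   # hot
    {"Ts": 150.0, "Tt": 30.0,  "CP": 1.5},   # hot
    {"Ts": 20.0,  "Tt": 135.0, "CP": 2.0},   # cold
    {"Ts": 80.0,  "Tt": 140.0, "CP": 4.0},   # cold
]

def shifted(s):
    # Hot streams are shifted down, cold streams up, by dT_min / 2
    hot = s["Ts"] > s["Tt"]
    d = -dT_min / 2 if hot else dT_min / 2
    return s["Ts"] + d, s["Tt"] + d

temps = sorted({t for s in streams for t in shifted(s)}, reverse=True)

# Cascade the net heat surplus/deficit through each shifted interval
cascade, heat = [0.0], 0.0
for hi, lo in zip(temps, temps[1:]):
    net = 0.0
    for s in streams:
        a, b = shifted(s)
        if max(a, b) >= hi and min(a, b) <= lo:    # stream spans interval
            sign = 1 if s["Ts"] > s["Tt"] else -1  # hot adds, cold removes
            net += sign * s["CP"] * (hi - lo)
    heat += net
    cascade.append(heat)

hot_utility = -min(min(cascade), 0.0)         # minimum external heating
cold_utility = cascade[-1] + hot_utility      # minimum external cooling
print(hot_utility, cold_utility)  # 20.0 60.0 for this classic example
```

The most negative point of the cascade fixes the minimum hot utility, and the pinch sits at the shifted temperature where the corrected cascade reaches zero.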

  7. Infrastructure and interfaces for large-scale numerical software.

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, L.; Gropp, W. D.; Hovland, P. D.; McInnes, L. C.; Smith, B. F.

    1999-06-10

    The complexity of large-scale scientific simulations often necessitates the combined use of multiple software packages developed by different groups in areas such as adaptive mesh manipulations, scalable algebraic solvers, and optimization. Historically, these packages have been combined by using custom code. This practice inhibits experimentation with and comparison of multiple tools that provide similar functionality through different implementations. The ALICE project, a collaborative effort among researchers at Argonne National Laboratory, is exploring the use of component-based software engineering to provide better interoperability among numerical toolkits. They discuss some initial experiences in developing an infrastructure and interfaces for high-performance numerical computing.

  8. A Modeling & Simulation Implementation Framework for Large-Scale Simulation

    Directory of Open Access Journals (Sweden)

    Song Xiao

    2012-10-01

    Classical High Level Architecture (HLA) systems face development problems because they lack support for fine-grained component integration and interoperation in large-scale complex simulation applications. To address this issue, an extensible, reusable and composable simulation framework is proposed. To promote reusability from coarse-grained federates to fine-grained components, this paper proposes a modelling & simulation framework consisting of a component-based architecture, modelling methods, and simulation services that support and simplify the construction of complex simulation applications. Moreover, a standard process and simulation tools are developed to ensure the rapid and effective development of simulation applications.

  9. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    We propose a novel and efficient way of performing local image segmentation. For many applications a threshold on pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large-scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated…
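The idea of treating a segment as outliers under an estimated background distribution can be sketched as follows. This is an illustration under a Gaussian background assumption with synthetic data, not the authors' exact method:

```python
# Sketch of segmentation as outlier detection, assuming a Gaussian background
# model; the image and all parameters here are synthetic illustrations.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
image = rng.normal(100.0, 5.0, size=(64, 64))   # synthetic background
image[20:30, 20:30] += 40.0                     # a bright segment of interest

# Robust background estimate: median and MAD are barely biased by the segment
med = np.median(image)
sigma = 1.4826 * np.median(np.abs(image - med))  # MAD -> Gaussian std

# One-sided test per pixel, Bonferroni-corrected over all pixels
alpha = 0.01
thresh = med + sigma * NormalDist().inv_cdf(1.0 - alpha / image.size)
segment = image > thresh
print(int(segment.sum()))  # close to the 100 pixels of the inserted block
```

The data-driven threshold replaces a hand-tuned one: pixels are flagged only when they are implausible under the estimated background at the chosen significance level.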

  10. Large-Scale Experiments in a Sandy Aquifer in Denmark

    DEFF Research Database (Denmark)

    Jensen, Karsten Høgh; Bitsch, Karen Bue; Bjerg, Poul Løgstrup

    1993-01-01

    A large-scale natural gradient dispersion experiment was carried out in a sandy aquifer in the western part of Denmark using tritium and chloride as tracers. For both plumes a marked spreading was observed in the longitudinal direction, while the spreading in the transverse horizontal and transverse vertical directions was very small. The horizontal transport parameters of the advection-dispersion equation were investigated by applying an optimization model to observed breakthrough curves of tritium representing depth-averaged concentrations. No clear trend in dispersion parameters with travel…
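For context, fits of this kind are conventionally based on the one-dimensional advection-dispersion equation; the continuous-injection solution quoted here is a standard result, and an assumption about the authors' exact formulation:

```latex
\frac{\partial C}{\partial t}
  = -v \frac{\partial C}{\partial x}
  + D_L \frac{\partial^2 C}{\partial x^2},
\qquad
C(x,t) \approx \frac{C_0}{2}\,
  \operatorname{erfc}\!\left(\frac{x - v t}{2\sqrt{D_L t}}\right),
```

where $v$ is the mean pore-water velocity and $D_L$ the longitudinal dispersion coefficient; fitting $v$ and $D_L$ to the observed breakthrough curves is the optimization referred to above.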

  11. Large-scale sodium spray fire code validation (SOFICOV) test

    International Nuclear Information System (INIS)

    Jeppson, D.W.; Muhlestein, L.D.

    1985-01-01

    A large-scale sodium spray fire code validation test was performed in the HEDL 850 m³ Containment System Test Facility (CSTF) as part of the Sodium Spray Fire Code Validation (SOFICOV) program. 658 kg of sodium was sprayed into an air atmosphere over a period of 2400 s. The sodium spray droplet sizes and spray pattern distribution were estimated. The containment atmosphere temperature and pressure response, containment wall temperature response, and sodium reaction rate with oxygen were measured. These results are compared to post-test predictions using the SPRAY and NACOM computer codes.

  12. Reliability Calculation of Large-scale Complex Initiation Network

    Science.gov (United States)

    Li, Xinjian; Yang, Jun; Yan, Bingqiang; Zheng, Xiao

    2018-02-01

    A method is proposed to calculate the reliability of the bundle-series compound initiation network widely used for large-scale demolition blasting in China. The network is defined as reliable only when all the 2nd-level Nonel detonator joints outside the blasting holes are initiated. Based on this definition, a series of equations is derived to calculate the reliability of the complex initiation network, and a program written in Matlab solves them. The method shows good performance with far fewer computations than traditional approaches.
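The series/parallel reliability algebra behind such calculations can be sketched briefly. The topology (two detonators per joint, 200 joints in series) and the per-detonator reliability below are invented for illustration, not taken from the paper:

```python
# Hedged sketch of series/parallel reliability algebra for an initiation
# network; topology and per-detonator reliability are illustrative only.
def parallel(rs):   # a bundle fires if any branch fires
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

def series(rs):     # a chain fires only if every element fires
    p = 1.0
    for r in rs:
        p *= r
    return p

r_det = 0.999                      # single Nonel detonator (assumed value)
bundle = parallel([r_det] * 2)     # duplicated detonators at each joint
network = series([bundle] * 200)   # all 200 joints must initiate
print(round(network, 4))           # -> 0.9998
```

Duplication at each joint is what keeps a 200-element series network reliable: with single detonators the same chain would fire with probability 0.999^200 ≈ 0.82.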

  13. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred

    2010-08-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one billion elements. We investigate communication protocols for the GPU cluster to compensate for the slow Gigabit Ethernet network between the GPU compute nodes and to maintain overall efficiency. A diesel engine intake-port and a nozzle, meshed in different resolutions, give good real world examples for the scalability tests on the GPU cluster. © 2010 IEEE.

  14. Probing dark energy with lensing magnification in photometric surveys.

    Science.gov (United States)

    Schneider, Michael D

    2014-02-14

    I present an estimator for the angular cross correlation of two tracers of the cosmological large-scale structure that utilizes redshift information to isolate separate physical contributions. The estimator is derived by solving the Limber equation for a reweighting of the foreground tracer that nulls either clustering or lensing contributions to the cross correlation function. Applied to future photometric surveys, the estimator can enhance the measurement of gravitational lensing magnification effects to provide a competitive independent constraint on the dark energy equation of state.

  15. Dynamical links between small- and large-scale mantle heterogeneity: Seismological evidence

    Science.gov (United States)

    Frost, Daniel A.; Garnero, Edward J.; Rost, Sebastian

    2018-01-01

    We identify PKP•PKP scattered waves (also known as P′•P′) from earthquakes recorded at small-aperture seismic arrays at distances less than 65°. P′•P′ energy travels as a PKP wave through the core, up into the mantle, then scatters back down through the core to the receiver as a second PKP. P′•P′ waves are unique in that they allow scattering heterogeneities throughout the mantle to be imaged. We use array-processing methods to amplify low amplitude, coherent scattered energy signals and resolve their incoming direction. We deterministically map scattering heterogeneity locations from the core-mantle boundary to the surface. We use an extensive dataset with sensitivity to a large volume of the mantle and a location method allowing us to resolve and map more heterogeneities than have previously been possible, representing a significant increase in our understanding of small-scale structure within the mantle. Our results demonstrate that the distribution of scattering heterogeneities varies both radially and laterally. Scattering is most abundant in the uppermost and lowermost mantle, and a minimum in the mid-mantle, resembling the radial distribution of tomographically derived whole-mantle velocity heterogeneity. We investigate the spatial correlation of scattering heterogeneities with large-scale tomographic velocities, lateral velocity gradients, the locations of deep-seated hotspots and subducted slabs. In the lowermost 1500 km of the mantle, small-scale heterogeneities correlate with regions of low seismic velocity, high lateral seismic gradient, and proximity to hotspots. In the upper 1000 km of the mantle there is no significant correlation between scattering heterogeneity location and subducted slabs. Between 600 and 900 km depth, scattering heterogeneities are more common in the regions most remote from slabs, and close to hotspots. Scattering heterogeneities show an affinity for regions close to slabs within the upper 200 km of the

  16. Learning through Different Lenses

    Science.gov (United States)

    Jeweler, Sue; Barnes-Robinson, Linda

    2015-01-01

    When parents and teachers help gifted kids use the metaphor "learning through different lenses," amazing things happen: Horizons open up. Ideas are focused. Thoughts are magnified and clarified. They see the big picture. Metaphoric thinking offers new and exciting ways to see the world. Viewing the world through different lenses provides…

  17. Evaluating Unmanned Aerial Platforms for Cultural Heritage Large Scale Mapping

    Science.gov (United States)

    Georgopoulos, A.; Oikonomou, C.; Adamopoulos, E.; Stathopoulou, E. K.

    2016-06-01

    Large-scale mapping of limited areas is demanding, especially for cultural heritage sites. Optical and non-optical sensors, e.g. LiDAR units, are now built at sizes and weights that unmanned aerial platforms can lift. At the same time there is increasing emphasis on solutions that give users access to 3D information faster and cheaper. Considering the multitude of platforms and cameras, the advancement of algorithms, and the increase in available computing power, this challenge deserves, and is receiving, further investigation. In this paper a short review of today's UAS technologies is attempted, followed by a discussion of their applicability and advantages, which depend on their widely varying specifications. The on-board cameras available are also compared and evaluated for large-scale mapping. Furthermore, a thorough analysis, review, and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process dense and mostly unordered sequences of digital images, is conducted and presented. As a test data set, we use a rich optical and thermal data set from both fixed-wing and multi-rotor platforms over an archaeological excavation with adverse height variations, using different cameras. Dense 3D point clouds, digital terrain models and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.

  18. ANTITRUST ISSUES IN THE LARGE-SCALE FOOD DISTRIBUTION SECTOR

    Directory of Open Access Journals (Sweden)

    Enrico Adriano Raffaelli

    2014-12-01

    In light of the slow modernization of the Italian large-scale food distribution sector, its fragmentation at national level, the significant role of cooperatives at local level, and the alliances between food retail chains, the ICA has in recent years developed a strong interest in this sector. After analyzing the peculiarities of the Italian large-scale food distribution sector, this article presents the recent approach taken by the ICA toward the main antitrust issues in the sector. In the analysis of such issues, mainly the contractual relations between GDO retailers and their suppliers, the introduction of Article 62 of Law no. 27 dated 24th March 2012 is crucial: by facilitating and encouraging complaints by the interested parties, it should allow normal competitive dynamics to develop within the food distribution sector, where companies should be free to enter the market using the tools at their disposal, without undue restrictions.

  19. The combustion behavior of large scale lithium titanate battery

    Science.gov (United States)

    Huang, Peifeng; Wang, Qingsong; Li, Ke; Ping, Ping; Sun, Jinhua

    2015-01-01

    Safety remains a major obstacle to the large-scale application of lithium batteries, yet knowledge of battery combustion behavior is limited. To investigate the combustion behavior of large-scale lithium batteries, three 50 Ah Li(NixCoyMnz)O2/Li4Ti5O12 batteries at different states of charge (SOC) were heated until they caught fire. The variation in flame size is depicted to analyze the combustion behavior directly, while mass loss rate, temperature, and heat release rate are used to analyze the underlying reactions. Based on these observations, the combustion process is divided into three basic stages; at higher SOC it is more complicated, with sudden ejections of smoke, because the Li(NixCoyMnz)O2 material undergoes a phase change from a layered to a spinel structure. The critical ignition temperatures are 112-121 °C on the anode tab and 139-147 °C on the upper surface for all cells, but the heating time and combustion time become shorter as SOC increases. The results indicate that the battery fire hazard increases with SOC. Internal short circuits and the Li+ distribution are identified as the main causes of this difference. PMID:25586064

  20. Practical considerations for large-scale gut microbiome studies.

    Science.gov (United States)

    Vandeputte, Doris; Tito, Raul Y; Vanleeuwen, Rianne; Falony, Gwen; Raes, Jeroen

    2017-08-01

    First insights on the human gut microbiome have been gained from medium-sized, cross-sectional studies. However, given the modest portion of explained variance of currently identified covariates and the small effect size of gut microbiota modulation strategies, upscaling seems essential for further discovery and characterisation of the multiple influencing factors and their relative contribution. In order to guide future research projects and standardisation efforts, we here review currently applied collection and preservation methods for gut microbiome research. We discuss aspects such as sample quality, applicable omics techniques, user experience, and time and cost efficiency. In addition, we evaluate the protocols of a large-scale microbiome cohort initiative, the Flemish Gut Flora Project, to give an idea of the perspectives and pitfalls of large-scale faecal sampling studies. Although cryopreservation can be regarded as the gold standard, freezing protocols generally require more resources due to cold chain management. However, here we show that much can be gained from an optimised transport chain and sample aliquoting before freezing. Other protocols can be useful as long as they preserve the microbial signature of a sample such that relevant conclusions can be drawn regarding the research question, and the obtained data are stable and reproducible over time. © FEMS 2017.

  1. Large-scale stabilization control of input-constrained quadrotor

    Directory of Open Access Journals (Sweden)

    Jun Jiang

    2016-10-01

    The quadrotor has been the most popular aircraft in the last decade due to its excellent dynamics and continues to attract ever-increasing research interest. Delivering a quadrotor from a large fixed-wing aircraft is a promising application of quadrotors. In such an application, the quadrotor needs to switch from a highly unstable status, featured as large initial states, to a safe and stable flight status. This is the so-called large-scale stability control problem. In such an extreme scenario, the quadrotor is at risk of actuator saturation. This can cause the controller to update incorrectly and lead the quadrotor to spiral and crash. In this article, to safely control the quadrotor in such scenarios, the control input constraint is analyzed. The key states of a quadrotor dynamic model are selected, and a two-dimensional dynamic model is extracted based on a symmetrical body configuration. A generalized point-wise min-norm nonlinear control method is proposed based on the Lyapunov function, and large-scale stability control is hence achieved. An enhanced point-wise min-norm control is further provided to improve the attitude control performance, with altitude performance degrading slightly. Simulation results showed that the proposed control methods can stabilize the input-constrained quadrotor and that the enhanced method can improve the performance of the quadrotor in critical states.

  2. Detecting differential protein expression in large-scale population proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total. Another issue is that quantification is sometimes absent, especially for less abundant peptides, and such missingness itself carries information about peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms above. Our model performs robustly on both simulated data and proteomics data from a large clinical study. Because variation in patient sample quality and drift in instrument performance are unavoidable in clinical studies performed over the course of several years, we believe our approach will be useful for analyzing large-scale clinical proteomics data.

  3. Development of large-scale functional brain networks in children.

    Directory of Open Access Journals (Sweden)

    Kaustubh Supekar

    2009-07-01

    The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7-9 y) and 22 young-adults (ages 19-22 y). Comparison of network properties, including path-length, clustering-coefficient, hierarchy, and regional connectivity, revealed that although children and young-adults' brains have similar "small-world" organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young-adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity to elucidate key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism.
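The two "small-world" metrics named above, characteristic path length and clustering coefficient, can be computed on a toy graph. The ring lattice below is a standard illustration, not the paper's brain data:

```python
# Toy illustration of the two "small-world" metrics named above (path length
# and clustering coefficient) on a ring lattice; not the paper's brain data.
from collections import deque
from itertools import combinations

N = 8
# Ring lattice: each node linked to its two nearest neighbours on either side
adj = {i: {(i + o) % N for o in (-2, -1, 1, 2)} for i in range(N)}

def bfs_dists(src):
    # Breadth-first search gives shortest-path lengths in an unweighted graph
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Characteristic path length: mean shortest-path length over distinct pairs
d = [l for s in adj for n, l in bfs_dists(s).items() if n != s]
path_length = sum(d) / len(d)

# Clustering coefficient: fraction of a node's neighbour pairs that are linked
def clustering(u):
    pairs = list(combinations(adj[u], 2))
    return sum(1 for a, b in pairs if b in adj[a]) / len(pairs)

avg_clustering = sum(clustering(u) for u in adj) / len(adj)
print(path_length, avg_clustering)  # 10/7 ≈ 1.43 and 0.5 for this lattice
```

A "small-world" network keeps the high clustering of a lattice like this while a few long-range links pull the path length down, which is the regime the comparison in the abstract probes.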

  4. IP over optical multicasting for large-scale video delivery

    Science.gov (United States)

    Jin, Yaohui; Hu, Weisheng; Sun, Weiqiang; Guo, Wei

    2007-11-01

    In the IPTV systems, multicasting will play a crucial role in the delivery of high-quality video services, which can significantly improve bandwidth efficiency. However, the scalability and the signal quality of current IPTV can barely compete with the existing broadcast digital TV systems since it is difficult to implement large-scale multicasting with end-to-end guaranteed quality of service (QoS) in packet-switched IP network. China 3TNet project aimed to build a high performance broadband trial network to support large-scale concurrent streaming media and interactive multimedia services. The innovative idea of 3TNet is that an automatic switched optical networks (ASON) with the capability of dynamic point-to-multipoint (P2MP) connections replaces the conventional IP multicasting network in the transport core, while the edge remains an IP multicasting network. In this paper, we will introduce the network architecture and discuss challenges in such IP over Optical multicasting for video delivery.

  5. Remote Sensing Image Classification With Large-Scale Gaussian Processes

    Science.gov (United States)

    Morales-Alvarez, Pablo; Perez-Suay, Adrian; Molina, Rafael; Camps-Valls, Gustau

    2018-02-01

    Current remote sensing image classification problems have to deal with an unprecedented amount of heterogeneous and complex data sources. Upcoming missions will soon provide large data streams that will make land cover/use classification difficult. Machine learning classifiers can help here, and many methods are currently available. A popular kernel classifier is the Gaussian process classifier (GPC), since it approaches the classification problem with a solid probabilistic treatment, thus yielding confidence intervals for the predictions as well as results very competitive with state-of-the-art neural networks and support vector machines. However, its computational cost is prohibitive for large-scale applications and constitutes the main obstacle precluding wide adoption. This paper tackles this problem by introducing two novel efficient methodologies for Gaussian Process (GP) classification. We first include the standard random Fourier features approximation into GPC, which largely decreases its computational cost and permits large-scale remote sensing image classification. In addition, we propose a model which avoids randomly sampling a number of Fourier frequencies and instead learns the optimal ones within a variational Bayes approach. The performance of the proposed methods is illustrated in complex problems of cloud detection from multispectral imagery and infrared sounding data. Excellent empirical results support the proposal in both computational cost and accuracy.
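The random Fourier features approximation mentioned above replaces kernel evaluations with an explicit low-dimensional feature map. A minimal sketch, with illustrative sizes and lengthscale (not values from the paper):

```python
# Sketch of the random Fourier feature (RFF) approximation mentioned above:
# z(x)^T z(y) approximates the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 l^2)).
# All sizes and the lengthscale are illustrative choices, not from the paper.
import numpy as np

rng = np.random.default_rng(42)
d, D, l = 5, 20_000, 1.0          # input dim, number of features, lengthscale

W = rng.normal(0.0, 1.0 / l, size=(D, d))   # frequencies from the RBF spectrum
b = rng.uniform(0.0, 2 * np.pi, size=D)     # random phases

def z(x):
    # Explicit feature map whose inner product approximates the kernel
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.sum((x - y) ** 2) / (2 * l**2))
approx = float(z(x) @ z(y))
print(exact, approx)   # the two values agree closely for D this large
```

Because the kernel is replaced by a finite feature map, training reduces to a linear model in D dimensions, which is what makes large-scale GP classification tractable.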

  6. Quantitative approach to the topology of large-scale structure

    International Nuclear Information System (INIS)

    Gott, J.R. III; Weinberg, D.H.; Melott, A.L.; Kansas Univ., Lawrence)

    1987-01-01

    A quantitative measure of the topology of large-scale structure, the genus of density contours in a smoothed density distribution, is described and applied. For random-phase (Gaussian) density fields, the mean genus per unit volume exhibits a universal dependence on threshold density, with a normalizing factor that can be calculated from the power spectrum. If large-scale structure formed from the gravitational instability of small-amplitude density fluctuations, the topology observed today on suitable scales should follow the topology in the initial conditions. The technique is illustrated by applying it to simulations of galaxy clustering in a flat universe dominated by cold dark matter. The technique is also applied to a volume-limited sample of the CfA redshift survey and to a model in which galaxies reside on the surfaces of polyhedral bubbles. The topology of the evolved mass distribution and biased galaxy distribution in the cold dark matter models closely matches the topology of the density fluctuations in the initial conditions. The topology of the observational sample is consistent with the random phase, cold dark matter model. 22 references
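For a random-phase (Gaussian) field, the universal threshold dependence mentioned above has a standard closed form, quoted here for context (with $\nu$ the density threshold in units of the standard deviation):

```latex
g(\nu) \;=\; A\,(1-\nu^{2})\,e^{-\nu^{2}/2},
\qquad
A \;=\; \frac{1}{4\pi^{2}}
  \left(\frac{\langle k^{2}\rangle}{3}\right)^{3/2},
```

where $g$ is the genus per unit volume and $\langle k^{2}\rangle$ is the second moment of the smoothed power spectrum. In the usual convention the genus counts holes minus isolated regions, so contours at $|\nu| > 1$ (isolated clusters or voids) have negative genus, consistent with the sign of $(1-\nu^{2})$.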

  7. Some ecological guidelines for large-scale biomass plantations

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, W.; Cook, J.H.; Beyea, J. [National Audubon Society, Tavernier, FL (United States)

    1993-12-31

    The National Audubon Society sees biomass as an appropriate and necessary source of energy to help replace fossil fuels in the near future, but is concerned that large-scale biomass plantations could displace significant natural vegetation and wildlife habitat, and reduce national and global biodiversity. We support the development of an industry large enough to provide significant portions of our energy budget, but we see a critical need to ensure that plantations are designed and sited in ways that minimize ecological disruption, or even provide environmental benefits. We have been studying the habitat value of intensively managed short-rotation tree plantations. Our results show that these plantations support large populations of some birds, but not all of the species using the surrounding landscape, and indicate that their value as habitat can be increased greatly by including small areas of mature trees within them. We believe short-rotation plantations can benefit regional biodiversity if they can be deployed as buffers for natural forests, or as corridors connecting forest tracts. To realize these benefits, and to avoid habitat degradation, regional biomass plantation complexes (e.g., the plantations supplying all the fuel for a powerplant) need to be planned, sited, and developed as large-scale units in the context of the regional landscape mosaic.

  8. Large scale molecular dynamics simulations of nuclear pasta

    Science.gov (United States)

    Horowitz, C. J.; Berry, D.; Briggs, C.; Chapman, M.; Clark, E.; Schneider, A.

    2014-09-01

    We report large-scale molecular dynamics simulations of nuclear pasta using from 50,000 to more than 3,000,000 nucleons. We use a simple phenomenological two-nucleon potential that reproduces nuclear saturation. We find a complex ``nuclear waffle'' phase in addition to more conventional rod, plate, and sphere phases. We also find long-lived topological defects involving screw like dislocations that may reduce the electrical conductivity and thermal conductivity of lasagna phases. From MD trajectories we calculate a variety of quantities including static structure factor, dynamical response function, shear modulus and breaking strain. Supported in parts by DOE Grants No. DE-FG02-87ER40365 (Indiana University) and No. DE-SC0008808 (NUCLEI SciDAC Collaboration).

  9. State-of-the-art of large scale biogas plants

    International Nuclear Information System (INIS)

    Prisum, J.M.; Noergaard, P.

    1992-01-01

    A survey of the technological state of large scale biogas plants in Europe treating manure is given. 83 plants are in operation at present. Of these, 16 are centralised digestion plants. Transport costs at centralised digestion plants amount to between 25 and 40 percent of the total operational costs. Various transport equipment is used. Most large scale digesters are CSTRs, but serial, contact, 2-step, and plug-flow digesters are also found. Construction materials are mostly steel and concrete. Mesophilic digestion is most common (56%), thermophilic digestion is used in 17% of the plants, and combined mesophilic and thermophilic digestion is used in 28% of the centralised plants. Mixing of digester content is performed with gas injection, propellers, and gas-liquid displacement. Heating is carried out using external or internal heat exchangers. Heat recovery is only used in Denmark. Gas purification equipment is commonplace, but not often needed. Several plants use separation of the digested manure, often as part of a post-treatment/-purification process or for the production of 'compost'. Screens, sieve belt separators, centrifuges and filter presses are employed. The use of biogas varies considerably. In some cases, combined heat and power stations are supplying the grid and district heating systems. Other plants use only either the electricity or heat. (au)

  10. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    Second generation biorefineries transform agricultural wastes into biochemicals with higher added value, e.g. bioethanol, which is thought to become a primary component in liquid fuels [1]. Extensive endeavors have been conducted to make the production process feasible on a large scale … real-time monitoring. The Inbicon biorefinery converts wheat straw into bioethanol utilizing steam, enzymes, and genetically modified yeast. The biomass is first pretreated in a steam-pressurized, continuous thermal reactor where lignin is relocated and hemicellulose partially hydrolyzed such that cellulose becomes more accessible to enzymes. The biorefinery is integrated with a nearby power plant following the Integrated Biomass Utilization System (IBUS) principle for reducing steam costs [4]. During the pretreatment, by-products are also created, such as organic acids, furfural, and pseudo-lignin, which act…

  11. Survey of large-scale isotope applications: nuclear technology field

    Energy Technology Data Exchange (ETDEWEB)

    Dewitt, R.

    1977-01-21

    A preliminary literature survey of potential large-scale isotope applications was made according to topical fields; i.e., nuclear, biological, medical, environmental, agricultural, geological, and industrial. Other than the possible expansion of established large-scale isotope applications such as uranium, boron, lithium, and hydrogen, no new immediate isotope usage appears to be developing. Over the long term a change in emphasis for isotope applications was identified which appears to be more responsive to societal concerns for health, the environment, and the conservation of materials and energy. For gram-scale applications, a variety of isotopes may be required for use as nonradioactive 'activable' tracers. A more detailed survey of the nuclear field identified a potential need for large amounts (tons) of special isotopic materials for advanced reactor components and structures. As this need for special materials and the development of efficient separation methods progresses, the utilization of isotopes from nuclear wastes for beneficial uses should also progress.

  12. Large-scale impact cratering on the terrestrial planets

    International Nuclear Information System (INIS)

    Grieve, R.A.F.

    1982-01-01

    The crater densities on the earth and moon form the basis for a standard flux-time curve that can be used in dating unsampled planetary surfaces and constraining the temporal history of endogenic geologic processes. Abundant evidence is seen not only that impact cratering was an important surface process in planetary history but also that large impact events produced effects that were crustal in scale. By way of example, it is noted that the formation of multiring basins on the early moon was as important in defining the planetary tectonic framework as plate tectonics is on the earth. Evidence from several planets suggests that the effects of very-large-scale impacts go beyond the simple formation of an impact structure and serve to localize increased endogenic activity over an extended period of geologic time. Even though they no longer occur with the frequency and magnitude of early solar system history, large-scale impact events continue to affect the local geology of the planets. 92 references

  13. Large Scale Community Detection Using a Small World Model

    Directory of Open Access Journals (Sweden)

    Ranjan Kumar Behera

    2017-11-01

    In a social network, small or large communities within the network play a major role in deciding the functionalities of the network. Despite diverse definitions, communities in the network may be defined as groups of nodes that are more densely connected to each other than to nodes outside the group. Revealing such hidden communities is a challenging research problem. A real world social network follows the small world phenomenon, which indicates that any two social entities can be reachable in a small number of steps. In this paper, nodes are mapped into communities based on random walks in the network. However, uncovering communities in large-scale networks is a challenging task due to the unprecedented growth in the size of social networks. A good number of community detection algorithms based on random walks exist in the literature. When large-scale social networks are considered, however, these algorithms are observed to take considerably longer times. In this work, with the objective of improving the efficiency of such algorithms, a parallel programming framework, MapReduce, has been used for uncovering the hidden communities in social networks. The proposed approach has been compared with some standard existing community detection algorithms on both synthetic and real-world datasets in order to examine its performance, and it is observed that the proposed algorithm is more efficient than the existing ones.
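The random-walk mapping of nodes to communities can be sketched in a few lines (an illustrative toy, not the authors' MapReduce implementation; the graph, the choice of seed nodes, and the walk parameters below are all invented for the example):

```python
import random
from collections import Counter

def random_walk_communities(adj, seeds, n_walks=200, walk_len=5, seed=0):
    """Toy walk-based community assignment: each node is assigned to the
    seed node that its short random walks visit most often."""
    rng = random.Random(seed)
    labels = {}
    for node in adj:
        visits = Counter()
        for _ in range(n_walks):
            cur = node
            for _ in range(walk_len):
                cur = rng.choice(adj[cur])
                if cur in seeds:
                    visits[cur] += 1
        labels[node] = max(seeds, key=lambda s: visits[s])
    return labels

# Two 4-cliques bridged by a single edge (3 -- 4): two obvious communities.
edges = [(a, b) for a in range(4) for b in range(4) if a < b]
edges += [(a, b) for a in range(4, 8) for b in range(4, 8) if a < b]
edges += [(3, 4)]
adj = {n: [] for n in range(8)}
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

labels = random_walk_communities(adj, seeds={0, 7})
```

Because walks started inside a densely connected group rarely cross the single bridge, nodes 0-3 attach to seed 0 and nodes 4-7 to seed 7.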

  14. Literature Review: Herbal Medicine Treatment after Large-Scale Disasters.

    Science.gov (United States)

    Takayama, Shin; Kaneko, Soichiro; Numata, Takehiro; Kamiya, Tetsuharu; Arita, Ryutaro; Saito, Natsumi; Kikuchi, Akiko; Ohsawa, Minoru; Kohayagawa, Yoshitaka; Ishii, Tadashi

    2017-01-01

    Large-scale natural disasters, such as earthquakes, tsunamis, volcanic eruptions, and typhoons, occur worldwide. After the Great East Japan earthquake and tsunami, our medical support operation's experiences suggested that traditional medicine might be useful for treating the various symptoms of the survivors. However, little information is available regarding herbal medicine treatment in such situations. Considering that further disasters will occur, we performed a literature review and summarized the traditional medicine approaches for treatment after large-scale disasters. We searched PubMed and Cochrane Library for articles written in English, and Ichushi for those written in Japanese. Articles published before 31 March 2016 were included. Keywords "disaster" and "herbal medicine" were used in our search. Among studies involving herbal medicine after a disaster, we found two randomized controlled trials investigating post-traumatic stress disorder (PTSD), three retrospective investigations of trauma or common diseases, and seven case series or case reports of dizziness, pain, and psychosomatic symptoms. In conclusion, herbal medicine has been used to treat trauma, PTSD, and other symptoms after disasters. However, few articles have been published, likely due to the difficulty in designing high quality studies in such situations. Further study will be needed to clarify the usefulness of herbal medicine after disasters.

  15. Network placement optimization for large-scale distributed system

    Science.gov (United States)

    Ren, Yu; Liu, Fangfang; Fu, Yunxia; Zhou, Zheng

    2018-01-01

    The network geometry strongly influences the performance of a distributed system, i.e., its coverage capability, measurement accuracy and overall cost. The network placement optimization therefore represents an urgent issue in distributed measurement, especially in large-scale metrology. This paper presents an effective computer-assisted network placement optimization procedure for large-scale distributed systems and illustrates it with the example of a multi-tracker system. To obtain an optimal placement, the coverage capability and the coordinate uncertainty of the network are quantified. A placement optimization objective function is then developed in terms of coverage capability, measurement accuracy and overall cost, and a novel grid-based encoding approach for the genetic algorithm is proposed. The network placement is optimized by a global rough search followed by a local detailed search; an obvious advantage is that no specific initial placement is needed. Finally, a specific application illustrates that this placement optimization procedure can simulate the measurement results of a specific network and design the optimal placement efficiently.
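As a rough illustration of this kind of procedure (not the paper's implementation: the grid, coverage radius, cost weighting, and all genetic-algorithm settings below are invented), a grid-encoded placement can be evolved against a coverage-minus-cost objective:

```python
import random

def fitness(genome, sensors, targets, radius=1.5, cost=0.5):
    """Objective: each covered target scores +1, each active sensor costs `cost`."""
    active = [s for s, g in zip(sensors, genome) if g]
    covered = sum(
        any((sx - tx) ** 2 + (sy - ty) ** 2 <= radius ** 2 for sx, sy in active)
        for tx, ty in targets)
    return covered - cost * len(active)

def evolve(sensors, targets, pop_size=40, gens=60, seed=1):
    rng = random.Random(seed)
    n = len(sensors)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda g: fitness(g, sensors, targets), reverse=True)
        pop = scored[:pop_size // 2]            # elitist truncation selection
        while len(pop) < pop_size:
            a, b = rng.sample(scored[:10], 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.2:              # bit-flip mutation
                i = rng.randrange(n)
                child = child[:i] + [1 - child[i]] + child[i + 1:]
            pop.append(child)
    return max(pop, key=lambda g: fitness(g, sensors, targets))

# Candidate sensor sites on a 4x4 grid; measurement targets on a finer 6x6 grid.
sensors = [(x, y) for x in range(4) for y in range(4)]
targets = [(x * 0.6, y * 0.6) for x in range(6) for y in range(6)]
best = evolve(sensors, targets)
best_fit = fitness(best, sensors, targets)
```

The bitstring genome over grid cells plays the role of the paper's grid-based encoding; the real objective would also fold in coordinate-uncertainty terms.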

  16. DEMNUni: massive neutrinos and the bispectrum of large scale structures

    Science.gov (United States)

    Ruggeri, Rossana; Castorina, Emanuele; Carbone, Carmelita; Sefusatti, Emiliano

    2018-03-01

    The main effect of massive neutrinos on the large-scale structure consists in a few percent suppression of matter perturbations on all scales below their free-streaming scale. This effect is of particular importance as it allows one to constrain the value of the sum of the neutrino masses from measurements of the galaxy power spectrum. In this work, we present the first measurements of the next higher-order correlation function, the bispectrum, from N-body simulations that include massive neutrinos as particles. This is the simplest statistic characterising the non-Gaussian properties of the matter and dark-matter halo distributions. We investigate, in the first place, the suppression due to massive neutrinos on the matter bispectrum, comparing our measurements with the simplest perturbation theory predictions, and find that the approximation of neutrinos contributing at quadratic order in perturbation theory provides a good fit to the measurements in the simulations. On the other hand, as expected, a linear approximation for neutrino perturbations would lead to O(f_ν) errors on the total matter bispectrum at large scales. We then attempt an extension of previous results on the universality of linear halo bias in neutrino cosmologies to non-linear and non-local corrections, finding results consistent with the power spectrum analysis.
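A minimal FFT-based bispectrum estimator illustrates how the statistic picks up non-Gaussianity (a toy 1-D white-noise field with quadratic non-Gaussianity added by hand; the paper measures the full 3-D matter bispectrum in N-body simulations, with shell averaging this sketch omits):

```python
import numpy as np

def bispectrum_1d(field, k1, k2):
    """One-realization bispectrum estimate Re[d(k1) d(k2) d*(k1+k2)] for a
    real periodic 1-D field, using integer wavenumbers and DFT modes d = fft/N."""
    d = np.fft.fft(field) / field.size
    return float((d[k1] * d[k2] * np.conj(d[k1 + k2])).real)

rng = np.random.default_rng(42)
N, reps, k1, k2 = 256, 1000, 10, 20
b_gauss = b_ng = 0.0
for _ in range(reps):
    g = rng.standard_normal(N)      # Gaussian field: bispectrum averages to ~0
    ng = g + (g**2 - 1.0)           # quadratic (local-type) non-Gaussianity
    b_gauss += bispectrum_1d(g, k1, k2) / reps
    b_ng += bispectrum_1d(ng, k1, k2) / reps
```

Averaged over realizations, the Gaussian field's bispectrum is consistent with zero while the quadratically transformed field shows a clearly positive signal, mirroring the tree-level prediction B ∝ 2 f_nl [P(k1)P(k2) + perms].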

  17. The effective field theory of cosmological large scale structures

    Energy Technology Data Exchange (ETDEWEB)

    Carrasco, John Joseph M. [Stanford Univ., Stanford, CA (United States); Hertzberg, Mark P. [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States); Senatore, Leonardo [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s^2 ≈ 10^-6 c^2 and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)^4. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≃ 0.24 h Mpc^-1.
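The scaling of the leading sound-speed correction can be illustrated on a toy spectrum (a hedged sketch: the broken power law, the cs2 value, and k_NL are invented; only the functional form ΔP ∝ -2 c_s² (k/k_NL)² P_lin follows the standard EFT-of-LSS counterterm, and the paper's δ(k)⁴ calculation goes well beyond it):

```python
import numpy as np

# Leading EFT counterterm applied to a toy linear power spectrum:
#   P_EFT(k) = P_lin(k) - 2 * cs2 * (k / k_nl)^2 * P_lin(k)
k = np.linspace(0.01, 0.3, 30)                  # wavenumbers in h/Mpc
p_lin = 2e4 * k / (1 + (k / 0.02) ** 2) ** 1.5  # toy broken power law
cs2, k_nl = 1.0, 1.0                            # illustrative numbers only
p_eft = p_lin * (1.0 - 2.0 * cs2 * k**2 / k_nl**2)
frac = 1.0 - p_eft / p_lin                      # fractional correction, = 2 cs2 k^2
```

The correction is negligible on the largest scales and grows quadratically toward k_NL, which is the qualitative behavior that lets the EFT reach percent precision at mildly non-linear k.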

  18. Glass badge dosimetry system for large scale personal monitoring

    International Nuclear Information System (INIS)

    Norimichi Juto

    2002-01-01

    Glass Badge, using a silver-activated phosphate glass dosemeter, was specially developed for large scale personal monitoring, and dosimetry systems such as an automatic reader and a dose equivalent calculation algorithm were developed at the same time to achieve reliable personal monitoring. In large scale personal monitoring, both precision in dosimetry and confidence in handling large amounts of personal data become very important. The silver-activated phosphate glass dosemeter has excellent basic characteristics for dosimetry, such as homogeneous and stable sensitivity and negligible fading. Glass Badge was designed to measure photons in the 10 keV - 10 MeV range, beta rays in the 300 keV - 3 MeV range, and neutrons in the 0.025 eV - 15 MeV range by an included SSNTD. The developed Glass Badge dosimetry system has not only these basic characteristics but also many features to maintain good precision in dosimetry and data handling. In this presentation, features of the Glass Badge dosimetry system and examples of practical personal monitoring systems will be presented. (Author)

  19. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
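The tile low-rank idea can be illustrated with a truncated SVD of one off-diagonal tile (a sketch with an invented smooth kernel; HiCMA itself uses optimized TLR kernels and a tuned rank threshold, not numpy):

```python
import numpy as np

def compress_tile(tile, tol=1e-8):
    """Truncated-SVD compression of one tile: keep singular values above
    tol relative to the largest, storing the factors U*S and V^T (TLR-style)."""
    u, s, vt = np.linalg.svd(tile, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    return u[:, :r] * s[:r], vt[:r, :]      # shapes (m x r) and (r x n)

# A smooth kernel matrix has numerically low-rank off-diagonal blocks.
n = 200
x = np.linspace(0.0, 1.0, 2 * n)
K = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))   # toy covariance kernel
tile = K[:n, n:]                                     # one off-diagonal tile
usv, vt = compress_tile(tile)
rank = usv.shape[1]
err = np.linalg.norm(tile - usv @ vt) / np.linalg.norm(tile)
```

Storing the two thin factors costs 2·n·r entries instead of n², which is where the order-of-magnitude memory saving reported above comes from when r ≪ n.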

  20. Large Scale Organization of a Near Wall Turbulent Boundary Layer

    Science.gov (United States)

    Stanislas, Michel; Dekou Tiomajou, Raoul Florent; Foucaut, Jean Marc

    2016-11-01

    This study lies in the context of the investigation of large-scale coherent structures in a near wall turbulent boundary layer. An experimental database at high Reynolds numbers (Re θ = 9830 and Re θ = 19660) was obtained in the LML wind tunnel with stereo-PIV at 4 Hz and hot wire anemometry at 30 kHz. A Linear Stochastic Estimation procedure is used to reconstruct a three-component field resolved in space and time. Algorithms were developed to extract coherent structures from the reconstructed field. A sample 3D view of the structures is depicted in Figure 1. Uniform momentum regions are characterized by their mean hydraulic diameter in the YZ plane, their lifetime and their contribution to Reynolds stresses. The vortical motions are characterized by their position, radius, circulation and vorticity, in addition to their lifetime and their number computed at a fixed position from the wall. The spatial organization of the structures was investigated through a correlation of their respective indicator functions in the spanwise direction. The simplified large-scale model that arises is compared to the ones available in the literature. Streamwise low (green) and high (yellow) uniform momentum regions with positive (red) and negative (blue) vortical motions. This work was supported by Campus International pour la Sécurité et l'Intermodalité des Transports.
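The Linear Stochastic Estimation step can be illustrated with synthetic signals (a single-point LSE sketch with an invented correlation profile; the actual study estimates three velocity components from stereo-PIV fields conditioned on hot-wire data):

```python
import numpy as np

# Single-point LSE: estimate the velocity u(y, t) at probe locations from one
# reference (hot-wire-like) signal e(t), using coefficients
#   L(y) = <u(y) e> / <e^2>
# computed from a calibration record.
rng = np.random.default_rng(3)
T, ny = 5000, 16
e = rng.standard_normal(T)                     # reference time series
decay = np.exp(-np.arange(ny) / 4.0)           # imposed correlation profile
u = decay[None, :] * e[:, None] + 0.3 * rng.standard_normal((T, ny))

L = (u * e[:, None]).mean(axis=0) / (e**2).mean()   # LSE coefficients
u_hat = L[None, :] * e[:, None]                     # reconstructed field
```

The estimated coefficients recover the imposed correlation profile, and the reconstruction correlates strongly with the true signal close to the reference probe, degrading as the true correlation decays.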

  1. Study of carbon-based superconductor using large scale simulation

    International Nuclear Information System (INIS)

    Nakamura, Satoshi; Tejima, Syogo; Iizuka, Mikio; Nakamura, Hisashi

    2007-01-01

    Tachiki et al. theoretically proposed the vibronic mechanism of high-Tc superconductivity, based on the attractive electron-electron interaction originating from strong charge fluctuation by lattice vibration. This theory successfully explained the experimental results of neutron diffraction and angle-resolved photoelectron spectroscopy. On the basis of the theory by Tachiki et al., a theoretical study from a microscopic point of view and a large scale simulation were performed for B-doped diamond superconductivity. Three computer codes were developed for the simulation: 1. PVCRTMD (Parallel Vector Carbon Recursion Technique Molecular Dynamics) for molecular dynamics calculations with strong coupling; 2. LSDRF (Large Scale Dielectric Response Function) for the analysis of the effective interaction between electrons; 3. DEES (Dyson-Eliashberg Equation Solver) for the analysis of superconducting transition temperatures. The results of the simulation showed that the superconductivity of B-doped diamond was caused by an attractive interaction between electrons originating from strong electron-lattice interaction. (Y.K.)

  2. Kinematic morphology of large-scale structure: evolution from potential to rotational flow

    International Nuclear Information System (INIS)

    Wang, Xin; Szalay, Alex; Aragón-Calvo, Miguel A.; Neyrinck, Mark C.; Eyink, Gregory L.

    2014-01-01

    As an alternative way to describe the cosmological velocity field, we discuss the evolution of rotational invariants constructed from the velocity gradient tensor. Compared with the traditional divergence-vorticity decomposition, these invariants, defined as coefficients of the characteristic equation of the velocity gradient tensor, enable a complete classification of all possible flow patterns in the dark-matter comoving frame, including both potential and vortical flows. We show that this tool, first introduced in turbulence two decades ago, is very useful for understanding the evolution of the cosmic web structure, and in classifying its morphology. Before shell crossing, different categories of potential flow are highly associated with the cosmic web structure because of the coherent evolution of density and velocity. This correspondence is even preserved at some level when vorticity is generated after shell crossing. The evolution from the potential to vortical flow can be traced continuously by these invariants. With the help of this tool, we show that the vorticity is generated in a particular way that is highly correlated with the large-scale structure. This includes a distinct spatial distribution and different types of alignment between the cosmic web and vorticity direction for various vortical flows. Incorporating shell crossing into closed dynamical systems is highly non-trivial, but we propose a possible statistical explanation for some of the phenomena relating to the internal structure of the three-dimensional invariant space.
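The classification by characteristic-equation coefficients can be made concrete numerically (a minimal sketch; the sign convention λ³ + Pλ² + Qλ + R = 0 follows common turbulence usage and the example tensors are invented, not taken from the paper):

```python
import numpy as np

def invariants(A):
    """Rotational invariants of a velocity gradient tensor A, i.e. the
    coefficients of its characteristic equation lam^3 + P lam^2 + Q lam + R = 0."""
    P = -np.trace(A)
    Q = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
    R = -np.linalg.det(A)
    return P, Q, R

# Irrotational (potential) stretching vs. solid-body rotation, both trace-free:
stretch = np.diag([-1.0, -1.0, 2.0])           # pure strain, no vorticity
rotate = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 0.0]])          # rotation about the z axis

P_s, Q_s, R_s = invariants(stretch)            # strain-dominated: Q < 0
P_r, Q_r, R_r = invariants(rotate)             # rotation-dominated: Q > 0
```

The second invariant Q already separates the two regimes here (negative for strain-dominated, positive for rotation-dominated flow), which is the kind of distinction the invariant space exploits for classifying the cosmic velocity field.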

  3. Large eddy simulation of very-large-scale motions in the neutrally stratified atmospheric boundary layer

    Science.gov (United States)

    Fang, Jiannong; Porté-Agel, Fernando

    2014-05-01

    Large eddy simulation was used to investigate the very-large-scale motions (VLSM) in the neutrally stratified atmospheric boundary layer at a very high friction Reynolds number. The vertical height of the computational domain is Lz = 1000 m, which corresponds to the thickness of the boundary layer. The horizontal dimensions of the simulation domain are chosen to be Lx = 32Lz and Ly = 4Lz respectively, in order to contain a sufficient number of large-scale structures. The spatially coherent structures associated with VLSM are characterized through flow visualization and statistical analysis. The instantaneous velocity fields in streamwise/spanwise planes give evidence of streamwise-elongated zones of low speed fluid with negative streamwise velocity fluctuation, which are flanked on either side by similarly elongated high speed ones. The pre-multiplied power spectra and two-point correlations indicate that the scales of these streak-like structures are very large, up to 20Lz in the streamwise direction and Lz in the spanwise direction. These features are similar to what has been found in the logarithmic region of laboratory-scale boundary layers by direct numerical simulations and experiments conducted at low to moderate Reynolds numbers. The three-dimensional correlation map and conditional average of the three components of velocity further indicate that the low-speed and high-speed regions possess the same elongated ellipsoid-like structure, which is inclined upward along the streamwise direction, and they are accompanied by counter-rotating roll modes in the cross section perpendicular to the streamwise direction. These findings are in agreement with recent observations made from field campaigns in the atmospheric boundary layer.

  4. Red Geyser: A New Class of Galaxy with Large-scale AGN-driven Winds

    Science.gov (United States)

    Roy, Namrata; Bundy, Kevin; Cheung, Edmond; MaNGA Team

    2018-01-01

    A new class of quiescent (non-star-forming) galaxies harboring possible AGN-driven winds has been discovered using the spatially resolved optical spectroscopy from the ongoing SDSS-IV MaNGA (Sloan Digital Sky Survey-IV Mapping Nearby Galaxies at Apache Point Observatory) survey. These galaxies, named "red geysers", constitute 5%-10% of the local quiescent galaxy population and are characterized by narrow bisymmetric ionized gas emission patterns. These enhanced patterns are seen in equivalent width maps of Hα, [OIII] and other strong emission lines. They are co-aligned with the ionized gas velocity gradients but significantly misaligned with stellar velocity gradients. They also show very high gas velocity dispersions (~200 km/s). Considering these observations in light of models of the gravitational potential, Cheung et al. argued that red geysers host large-scale AGN-driven winds of ionized gas that may play a role in suppressing star formation at late times. In this work, we test the hypothesis that AGN activity is ultimately responsible for the red geyser phenomenon. We compare the nuclear radio activity of the red geysers to a matched control sample of galaxies of similar stellar mass, redshift, rest-frame NUV–r color and axis ratio, and additionally control for the presence of ionized gas. We have used 1.4 GHz radio continuum data from the VLA FIRST Survey to stack the radio flux from the red geyser and control samples. We find that the red geysers have a higher average radio flux than the control galaxies at > 3σ significance. Our sample is restricted to rest-frame NUV–r color > 5, thus ruling out possible radio emission due to star formation activity. We conclude that red geysers are associated with more active AGN, supporting a feedback picture in which episodic AGN activity drives large-scale but relatively weak ionized winds in many early-type galaxies.

  5. Cosmological Parameter Estimation with Large Scale Structure Observations

    CERN Document Server

    Di Dio, Enea; Durrer, Ruth; Lesgourgues, Julien

    2014-01-01

    We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift dependent angular power spectra of galaxy number counts, $C_\ell(z_1,z_2)$, calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard $P(k)$ analysis with the new $C_\ell(z_1,z_2)$ method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the $P(k)$ analysis. For spectroscopic redshifts, however, the large number of redshift bins that would be needed to fully profit from the redshift information is severely limited by shot noise. We also identify surveys which can measure the lensing contribution, and we study the monopole, $C_0(z_1,z_2)$.

  6. Large scale Brownian dynamics of confined suspensions of rigid particles

    Science.gov (United States)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar

    2017-12-01

    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. 
Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose
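For orientation, the Euler-Maruyama update underlying such schemes can be sketched for the simplest possible case, a single particle with constant mobility, in which the configuration-dependent stochastic drift term the authors address vanishes (all parameters below are illustrative, not from the paper):

```python
import numpy as np

# Overdamped Brownian dynamics, one Euler-Maruyama step with constant mobility M:
#   x_{n+1} = x_n + M F dt + sqrt(2 kT M dt) * W,   W ~ N(0, 1)
rng = np.random.default_rng(7)
M, kT, dt, nsamp = 1.0, 1.0, 1e-3, 200_000
F = 0.0                                        # force-free diffusion
x0 = np.zeros(nsamp)                           # ensemble of identical particles
noise = rng.standard_normal(nsamp)
x1 = x0 + M * F * dt + np.sqrt(2.0 * kT * M * dt) * noise

# Fluctuation-dissipation check: Var[dx] should equal 2 kT M dt.
var = x1.var()
```

When the mobility depends on configuration, as for particles near a no-slip wall, this update alone is biased, which is exactly why the random-finite-difference drift corrections of the traction and slip schemes above are needed.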

  7. The Effect of Large Scale Salinity Gradient on Langmuir Turbulence

    Science.gov (United States)

    Fan, Y.; Jarosz, E.; Yu, Z.; Jensen, T.; Sullivan, P. P.; Liang, J.

    2017-12-01

    Langmuir circulation (LC) is believed to be one of the leading-order causes of turbulent mixing in the upper ocean. It is important for momentum and heat exchange across the mixed layer (ML) and directly impacts the dynamics and thermodynamics in the upper ocean and lower atmosphere, including the vertical distributions of chemical, biological, optical, and acoustic properties. Based on Craik and Leibovich (1976) theory, large eddy simulation (LES) models have been developed to simulate LC in the upper ocean, yielding new insights that could not be obtained from field observations and turbulent closure models. Due to their high computational cost, LES models are usually limited to small domain sizes and cannot resolve large-scale flows. Furthermore, most LES models used in LC simulations use periodic boundary conditions in the horizontal direction, which assumes that the physical properties (i.e. temperature and salinity) and expected flow patterns in the area of interest are of a periodically repeating nature, so that the limited small LES domain is representative of the larger area. Using periodic boundary conditions can significantly reduce the computational effort, and periodicity is a good assumption for isotropic shear turbulence. However, LC is anisotropic (McWilliams et al 1997) and was observed to be modulated by crosswind tidal currents (Kukulka et al 2011). Using symmetrical domains, idealized LES studies also indicate LC could interact with oceanic fronts (Hamlington et al 2014) and standing internal waves (Chini and Leibovich, 2005). The present study expands our previous LES modeling investigations of Langmuir turbulence to real ocean conditions with large-scale environmental motion that features fresh water inflow into the study region. Large-scale gradient forcing is introduced to the NCAR LES model through scale separation analysis.
The model is applied to a field observation in the Gulf of Mexico in July 2016, when the measurement site was impacted by

  8. The predictability of large-scale wind-driven flows

    Directory of Open Access Journals (Sweden)

    A. Mahadevan

    2001-01-01

    The singular values associated with optimally growing perturbations to stationary and time-dependent solutions for the general circulation in an ocean basin provide a measure of the rate at which solutions with nearby initial conditions begin to diverge, and hence, a measure of the predictability of the flow. In this paper, the singular vectors and singular values of stationary and evolving examples of wind-driven, double-gyre circulations in different flow regimes are explored. By changing the Reynolds number in simple quasi-geostrophic models of the wind-driven circulation, steady, weakly aperiodic and chaotic states may be examined. The singular vectors of the steady state reveal some of the physical mechanisms responsible for optimally growing perturbations. In time-dependent cases, the dominant singular values show significant variability in time, indicating strong variations in the predictability of the flow. When the underlying flow is weakly aperiodic, the dominant singular values co-vary with integral measures of the large-scale flow, such as the basin-integrated upper ocean kinetic energy and the transport in the western boundary current extension. Furthermore, in a reduced gravity quasi-geostrophic model of a weakly aperiodic, double-gyre flow, the behaviour of the dominant singular values may be used to predict a change in the large-scale flow, a feature not shared by an analogous two-layer model. When the circulation is in a strongly aperiodic state, the dominant singular values no longer vary coherently with integral measures of the flow. Instead, they fluctuate in a very aperiodic fashion on mesoscale time scales. The dominant singular vectors then depend strongly on the arrangement of mesoscale features in the flow and the evolved forms of the associated singular vectors have relatively short spatial scales. These results have several implications. In weakly aperiodic, periodic, and stationary regimes, the mesoscale energy
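The link between singular values and optimal perturbation growth can be seen with a toy linear propagator (the matrix below is invented; the paper's quasi-geostrophic models are of course far richer):

```python
import numpy as np

# If perturbations evolve linearly over an interval, x(t) = A x(0), the largest
# singular value of A is the worst-case amplification over all initial
# perturbations of unit norm. A non-normal A can amplify perturbations
# transiently even when all its eigenvalues are stable (|lambda| < 1).
A = np.array([[0.9, 5.0],
              [0.0, 0.9]])                     # stable eigenvalues, non-normal
sigma = np.linalg.svd(A, compute_uv=False)
growth = sigma[0]                              # optimal one-interval growth
eig_mag = np.abs(np.linalg.eigvals(A))
```

Here every eigenvalue has magnitude 0.9, yet the leading singular value exceeds 5: nearby initial conditions can diverge strongly over one interval, which is exactly the predictability measure the abstract describes.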


  10. Climatological context for large-scale coral bleaching

    Science.gov (United States)

    Barton, A. D.; Casey, K. S.

    2005-12-01

    Large-scale coral bleaching was first observed in 1979 and has occurred throughout virtually all of the tropics since that time. Severe bleaching may result in the loss of live coral and in a decline of the integrity of the impacted coral reef ecosystem. Despite extensive scientific research and increased public awareness of coral bleaching, uncertainties remain about the past and future of large-scale coral bleaching. In order to reduce these uncertainties and place large-scale coral bleaching in a longer-term climatological context, specific criteria and methods for using historical sea surface temperature (SST) data to examine coral bleaching-related thermal conditions are proposed by analyzing three 132-year SST reconstructions: ERSST, HadISST1, and GISST2.3b. These methodologies are applied to case studies at Discovery Bay, Jamaica (77.27°W, 18.45°N), Sombrero Reef, Florida, USA (81.11°W, 24.63°N), Academy Bay, Galápagos, Ecuador (90.31°W, 0.74°S), Pearl and Hermes Reef, Northwest Hawaiian Islands, USA (175.83°W, 27.83°N), Midway Island, Northwest Hawaiian Islands, USA (177.37°W, 28.25°N), Davies Reef, Australia (147.68°E, 18.83°S), and North Male Atoll, Maldives (73.35°E, 4.70°N). The results of this study show that (1) the historical SST data provide a useful long-term record of thermal conditions in reef ecosystems, giving important insight into the thermal history of coral reefs, and (2) while coral bleaching and anomalously warm SSTs have occurred over much of the world in recent decades, case studies in the Caribbean, Northwest Hawaiian Islands, and parts of other regions such as the Great Barrier Reef exhibited SST conditions and cumulative thermal stress prior to 1979 that were comparable to those observed during the strong, frequent coral bleaching events since 1979. This climatological context and knowledge of past environmental conditions in reef ecosystems may foster a better understanding of how coral reefs will
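    A cumulative thermal stress metric of the kind this record analyzes can be sketched in a degree-heating-week style: positive SST anomalies above a climatological maximum (`mmm` below) summed over a trailing window. This is an illustrative form only; the study's exact criteria for the three SST reconstructions may differ.

    ```python
    import numpy as np

    def cumulative_thermal_stress(sst, mmm, window=12):
        """Degree-heating-week-style cumulative stress (illustrative):
        sum of positive weekly SST anomalies above the climatological
        maximum `mmm` over a trailing window of `window` weeks."""
        hotspot = np.clip(np.asarray(sst, dtype=float) - mmm, 0.0, None)
        kernel = np.ones(window)
        # trailing rolling sum; partial windows at the start of the record
        return np.convolve(hotspot, kernel, mode="full")[:len(hotspot)]

    # four consecutive weeks 1 °C above the climatological maximum
    stress = cumulative_thermal_stress([29.0, 29.0, 29.0, 29.0], mmm=28.0)
    ```

    Applied to a 132-year weekly reconstruction, a metric like this lets pre-1979 and post-1979 thermal stress be compared on the same footing.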

  11. The Effective Field Theory of Large Scale Structures at two loops

    International Nuclear Information System (INIS)

    Carrasco, John Joseph M.; Foreman, Simon; Green, Daniel; Senatore, Leonardo

    2014-01-01

    Large scale structure surveys promise to be the next leading probe of cosmological information. It is therefore crucial to reliably predict their observables. The Effective Field Theory of Large Scale Structures (EFTofLSS) provides a manifestly convergent perturbation theory for the weakly non-linear regime of dark matter, where correlation functions are computed in an expansion of the wavenumber k of a mode over the wavenumber associated with the non-linear scale, k_NL. Since most of the information is contained at high wavenumbers, it is necessary to compute higher order corrections to correlation functions. After the one-loop correction to the matter power spectrum, we estimate that the next leading one is the two-loop contribution, which we compute here. At this order in k/k_NL, there is only one counterterm in the EFTofLSS that must be included, though this term contributes both at tree-level and in several one-loop diagrams. We also discuss correlation functions involving the velocity and momentum fields. We find that the EFTofLSS prediction at two loops matches to percent accuracy the non-linear matter power spectrum at redshift zero up to k ∼ 0.6 h Mpc⁻¹, requiring just one unknown coefficient that needs to be fit to observations. Given that Standard Perturbation Theory stops converging at redshift zero at k ∼ 0.1 h Mpc⁻¹, our results demonstrate the possibility of accessing a factor of order 200 more dark matter quasi-linear modes than naively expected. If the remaining observational challenges to accessing these modes can be addressed with similar success, our results show that there is tremendous potential for large scale structure surveys to explore the primordial universe.
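    Schematically, the expansion described in this record takes the following form (a sketch only; the normalization of the counterterm and the precise loop integrands follow the paper's conventions, which are not reproduced here):

    ```latex
    P(k) \;=\; P_{11}(k) \;+\; P_{\text{1-loop}}(k) \;+\; P_{\text{2-loop}}(k)
    \;-\; 2\,c_s^2\,\frac{k^2}{k_{\rm NL}^2}\,P_{11}(k) \;+\; \dots
    ```

    Each loop order is suppressed by additional powers of k/k_NL, and the single coefficient c_s^2 is the one free parameter fit to observations.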

  12. Challenges and options for large scale integration of wind power

    International Nuclear Information System (INIS)

    Tande, John Olav Giaever

    2006-01-01

    Challenges and options for large scale integration of wind power are examined. Immediate challenges are related to weak grids. Assessment of system stability requires numerical simulation. Models are being developed - validation is essential. Coordination of wind and hydro generation is a key for allowing more wind power capacity in areas with limited transmission corridors. For the case-study grid, depending on technology and control, the allowable wind farm size increases from 50 to 200 MW. A real-life example from 8 January 2005 demonstrates that existing market-based mechanisms can handle large amounts of wind power. In wind integration studies it is essential to take account of the controllability of modern wind farms, the power system flexibility and the smoothing effect of geographically dispersed wind farms. Modern wind farms contribute to system adequacy - combining wind and hydro constitutes a win-win system (ml)

  13. Large Scale Simulation Platform for NODES Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Sotorrio, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Qin, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Min, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-27

    This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator, and includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate scalability under a scenario of 33% RPS in California, with an additional 17% of DERs coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating greater than 10k individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/Var control.

  14. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge amounts of training data to cover sufficient biological variability. Learning methods scaling badly with the number of training data points cannot be used in such scenarios. This may restrict the usage of many powerful classifiers having excellent generalization ability. We propose a cascaded classifier which... Features are learned from data rather than having a predefined feature set. We explore the deep learning approach of convolutional neural networks (CNN) for segmenting three-dimensional medical images. We propose a novel system integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of 3D...

  15. Towards large-scale plasma-assisted synthesis of nanowires

    Science.gov (United States)

    Cvelbar, U.

    2011-05-01

    Large quantities of nanomaterials, e.g. nanowires (NWs), are needed to overcome the high market price of nanomaterials and make nanotechnology widely available for general public use and applications to numerous devices. Therefore, there is an enormous need for new methods or routes for synthesis of those nanostructures. Here plasma technologies for synthesis of NWs, nanotubes, nanoparticles or other nanostructures might play a key role in the near future. This paper presents a three-dimensional problem of large-scale synthesis connected with the time, quantity and quality of nanostructures. Herein, four different plasma methods for NW synthesis are presented in contrast to other methods, e.g. thermal processes, chemical vapour deposition or wet chemical processes. The pros and cons are discussed in detail for the case of two metal oxides: iron oxide and zinc oxide NWs, which are important for many applications.

  16. Preliminary design study of a large scale graphite oxidation loop

    International Nuclear Information System (INIS)

    Epel, L.G.; Majeski, S.J.; Schweitzer, D.G.; Sheehan, T.V.

    1979-08-01

    A preliminary design study of a large scale graphite oxidation loop was performed in order to assess feasibility and to estimate capital costs. The nominal design operates at 50 atmospheres of helium and 1800 °F with a graphite specimen 30 inches long and 10 inches in diameter. It was determined that a simple single-walled design was not practical at this time because of a lack of commercially available thick-walled high-temperature alloys. Two alternative concepts, at reduced operating pressure, were investigated. Both were found to be readily fabricable to operate at 1800 °F, and capital cost estimates for these are included. A design concept which is outside the scope of this study was briefly considered

  17. Design Performance Standards for Large Scale Wind Farms

    DEFF Research Database (Denmark)

    Gordon, Mark

    2009-01-01

    This document presents, discusses and provides a general guide on electrical performance standard requirements for connection of large scale onshore wind farms into HV transmission networks. Experiences presented here refer mainly to technical requirements and issues encountered during the process of connection into the Eastern Australian power system under the Rules and guidelines set out by AEMC and NEMMCO (AEMO). Where applicable some international practices are also mentioned. Standards are designed to serve as a technical envelope under which wind farm proponents design the plant and maintain ongoing technical compliance of the plant during its operational lifetime. This report is designed to provide general technical information for the wind farm connection engineer to be aware of during the process of connection, registration and operation of wind power plants interconnected into the HV TSO...

  18. Electric vehicles and large-scale integration of wind power

    DEFF Research Database (Denmark)

    Liu, Wen; Hu, Weihao; Lund, Henrik

    2013-01-01

    ...with this imbalance and to reduce its high dependence on oil production. For this reason, it is interesting to analyse the extent to which transport electrification can further the renewable energy integration. This paper quantifies this issue in Inner Mongolia, where the share of wind power in the electricity supply was 6.5% in 2009 and which has the plan to develop large-scale wind power. The results show that electric vehicles (EVs) have the ability to balance the electricity demand and supply and to further the wind power integration. In the best case, the energy system with EV can increase wind power integration by 8%. The application of EVs benefits from saving both energy system cost and fuel cost. However, the negative consequences of decreasing energy system efficiency and increasing the CO2 emission should be noted when applying the hydrogen fuel cell vehicle (HFCV). The results also indicate...

  19. A mini review: photobioreactors for large scale algal cultivation.

    Science.gov (United States)

    Gupta, Prabuddha L; Lee, Seung-Mok; Choi, Hee-Jeong

    2015-09-01

    Microalgae cultivation has gained much interest in terms of the production of foods, biofuels, and bioactive compounds and offers a great potential option for cleaning the environment through CO2 sequestration and wastewater treatment. Although open pond cultivation is the most affordable option, it tends to offer insufficient control of growth conditions and carries a risk of contamination. In contrast, while posing minimal risk of contamination, closed photobioreactors offer better control of culture conditions, such as CO2 supply, water supply, optimal temperatures, efficient exposure to light, culture density, pH levels, and mixing rates. For large scale production of biomass, efficient photobioreactors are required. This review paper describes general design considerations pertaining to photobioreactor systems for cultivating microalgae for biomass production. It also discusses the current challenges in the design of photobioreactors for the production of low-cost biomass.

  20. Self-* and Adaptive Mechanisms for Large Scale Distributed Systems

    Science.gov (United States)

    Fragopoulou, P.; Mastroianni, C.; Montero, R.; Andrjezak, A.; Kondo, D.

    Large-scale distributed computing systems and infrastructures, such as Grids, P2P systems and desktop Grid platforms, are decentralized, pervasive, and composed of a large number of autonomous entities. The complexity of these systems is such that human administration is nearly impossible and centralized or hierarchical control is highly inefficient. These systems run in highly dynamic environments, where content, network topologies and workloads are continuously changing. Moreover, they are characterized by a high degree of volatility of their components and the need to provide efficient service management and to handle large amounts of data efficiently. This paper describes some of the areas for which adaptation emerges as a key feature, namely, the management of computational Grids, the self-management of desktop Grid platforms and the monitoring and healing of complex applications. It also elaborates on the use of bio-inspired algorithms to achieve self-management. Related future trends and challenges are described.

  1. Bonus algorithm for large scale stochastic nonlinear programming problems

    CERN Document Server

    Diwekar, Urmila

    2015-01-01

    This book presents the details of the BONUS algorithm and its real-world applications in areas like sensor placement in large scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming based on a sampling-based approach for uncertainty analysis and statistical reweighting to obtain probability information is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve dealing with optimization and uncertainty loops. There are two fundamental approaches used to solve such problems: the first uses decomposition techniques; the second identifies problem-specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions for the uncertain variables. Moreover, these ...

  2. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom

    2017-11-22

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data sources are typically noisy and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.

  3. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
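    The structure of a two-stage stochastic program with recourse can be illustrated with a toy example. The sketch below is a sample-average approximation of the classic newsvendor problem, not the decomposition-plus-importance-sampling method of this record: stage one picks an order quantity, stage two (the recourse) sells whatever demand materializes.

    ```python
    import numpy as np

    def saa_newsvendor(cost, price, demand_samples, n_grid=401):
        """Sample-average approximation (SAA) of a toy two-stage
        stochastic program: stage 1 chooses order quantity q, the
        stage-2 recourse sells min(q, demand). Illustrative stand-in
        for the large-scale stochastic LPs discussed above."""
        qs = np.linspace(0.0, float(np.max(demand_samples)), n_grid)
        avg_profit = [np.mean(price * np.minimum(q, demand_samples) - cost * q)
                      for q in qs]
        return qs[int(np.argmax(avg_profit))]

    rng = np.random.default_rng(1)
    demand = rng.uniform(0.0, 100.0, 20000)
    # critical-ratio optimum for uniform demand: q* = 100*(price-cost)/price = 50
    q_opt = saa_newsvendor(cost=1.0, price=2.0, demand_samples=demand)
    ```

    In realistic problems the stage-2 recourse is itself a linear program per scenario, which is where decomposition and importance sampling become essential.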

  4. Neural Correlates of Unconsciousness in Large-Scale Brain Networks.

    Science.gov (United States)

    Mashour, George A; Hudetz, Anthony G

    2018-03-01

    The biological basis of consciousness is one of the most challenging and fundamental questions in 21st century science. A related pursuit aims to identify the neural correlates and causes of unconsciousness. We review current trends in the investigation of physiological, pharmacological, and pathological states of unconsciousness at the level of large-scale functional brain networks. We focus on the roles of brain connectivity, repertoire, graph-theoretical techniques, and neural dynamics in understanding the functional brain disconnections and reduced complexity that appear to characterize these states. Persistent questions in the field, such as distinguishing true correlates, linking neural scales, and understanding differential recovery patterns, are also addressed.

  5. Tidal power plant may develop into large-scale industry

    International Nuclear Information System (INIS)

    2001-01-01

    Hammerfest was the first city in Norway with hydroelectric power production and the first city in Northern Europe to have electric street lights. Recently, technologists within the city's electricity supply industry have suggested that Hammerfest should pioneer the field of tidal energy. The idea is to create a new Norwegian large-scale industry. The technology is being developed by the company Hammerfest Stroem. A complete plant is planned to be installed in Kvalsundet. It will include turbine, generator, converters, transmission to land and delivery to the network. Once fully developed, in 2004, the plant will be sold. The company expects to install similar plants elsewhere in Norway and abroad. It is calculated that for a tidal current of 2.5 m/s, the worldwide potential is about 450 TWh

  6. Magnetization of fluid phonons and large-scale curvature perturbations

    CERN Document Server

    Giovannini, Massimo

    2014-01-01

    The quasinormal mode of a gravitating and magnetized fluid in a spatially flat, isotropic and homogeneous cosmological background is derived in the presence of the fluid sources of anisotropic stress and of the entropic fluctuations of the plasma. The obtained gauge-invariant description involves a system of two coupled differential equations whose physical content is analyzed in all the most relevant situations. The Cauchy problem of large-scale curvature perturbations during the radiation dominated stage of expansion can be neatly formulated and its general solution is shown to depend on five initial data assigned when the relevant physical wavelengths are larger than the particle horizon. The consequences of this approach are explored.

  7. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    ...sugarcane outgrower scheme on household income and asset stocks. Chapter 5 examines the wages and working conditions in ‘formal’ large-scale and ‘informal’ small-scale irrigated agriculture. The results in Chapter 2 show that moisture stress, the use of untested planting materials, and conflict over land... On the other hand, the results in Chapter 4 show that participation in a sugarcane outgrower scheme has a negative impact on households’ income and total asset stock. From the findings in Chapter 3 it can be concluded that outgrower-operated plots have higher productivity than factory-operated plantations..., whereas Chapter 4 indicates that sugarcane outgrowers’ easy access to credit and technology and their high productivity compared to the plantation does not necessarily improve their income and asset stocks, particularly when participation in outgrower schemes is mandatory, the buyer has monopsony market...

  8. Performance Health Monitoring of Large-Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rajamony, Ram [IBM Research, Austin, TX (United States)

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main components. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  9. Large-scale patterns in Rayleigh-Benard convection

    International Nuclear Information System (INIS)

    Hardenberg, J. von; Parodi, A.; Passoni, G.; Provenzale, A.; Spiegel, E.A.

    2008-01-01

    Rayleigh-Benard convection at large Rayleigh number is characterized by the presence of intense, vertically moving plumes. Both laboratory and numerical experiments reveal that the rising and descending plumes aggregate into separate clusters so as to produce large-scale updrafts and downdrafts. The horizontal scales of the aggregates reported so far have been comparable to the horizontal extent of the containers, but it has not been clear whether that represents a limitation imposed by domain size. In this work, we present numerical simulations of convection at sufficiently large aspect ratio to ascertain whether there is an intrinsic saturation scale for the clustering process when that ratio is large enough. From a series of simulations of Rayleigh-Benard convection with Rayleigh numbers between 10⁵ and 10⁸ and with aspect ratios up to 12π, we conclude that the clustering process has a finite horizontal saturation scale with at most a weak dependence on Rayleigh number in the range studied

  10. Testing Inflation with Large Scale Structure: Connecting Hopes with Reality

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez, Marcello [Univ. of Toronto, ON (Canada); Baldauf, T. [Inst. of Advanced Studies, Princeton, NJ (United States); Bond, J. Richard [Univ. of Toronto, ON (Canada); Canadian Inst. for Advanced Research, Toronto, ON (Canada); Dalal, N. [Univ. of Illinois, Urbana-Champaign, IL (United States); Putter, R. D. [Jet Propulsion Lab., Pasadena, CA (United States); California Inst. of Technology (CalTech), Pasadena, CA (United States); Dore, O. [Jet Propulsion Lab., Pasadena, CA (United States); California Inst. of Technology (CalTech), Pasadena, CA (United States); Green, Daniel [Univ. of Toronto, ON (Canada); Canadian Inst. for Advanced Research, Toronto, ON (Canada); Hirata, Chris [The Ohio State Univ., Columbus, OH (United States); Huang, Zhiqi [Univ. of Toronto, ON (Canada); Huterer, Dragan [Univ. of Michigan, Ann Arbor, MI (United States); Jeong, Donghui [Pennsylvania State Univ., University Park, PA (United States); Johnson, Matthew C. [York Univ., Toronto, ON (Canada); Perimeter Inst., Waterloo, ON (Canada); Krause, Elisabeth [Stanford Univ., CA (United States); Loverde, Marilena [Univ. of Chicago, IL (United States); Meyers, Joel [Univ. of Toronto, ON (Canada); Meeburg, Daniel [Univ. of Toronto, ON (Canada); Senatore, Leonardo [Stanford Univ., CA (United States); Shandera, Sarah [Pennsylvania State Univ., University Park, PA (United States); Silverstein, Eva [Stanford Univ., CA (United States); Slosar, Anze [Brookhaven National Lab. (BNL), Upton, NY (United States); Smith, Kendrick [Perimeter Inst., Waterloo, Toronto, ON (Canada); Zaldarriaga, Matias [Univ. of Toronto, ON (Canada); Assassi, Valentin [Cambridge Univ. (United Kingdom); Braden, Jonathan [Univ. of Toronto, ON (Canada); Hajian, Amir [Univ. of Toronto, ON (Canada); Kobayashi, Takeshi [Perimeter Inst., Waterloo, Toronto, ON (Canada); Univ. of Toronto, ON (Canada); Stein, George [Univ. of Toronto, ON (Canada); Engelen, Alexander van [Univ. of Toronto, ON (Canada)

    2014-12-15

    The statistics of primordial curvature fluctuations are our window into the period of inflation, where these fluctuations were generated. To date, the cosmic microwave background has been the dominant source of information about these perturbations. Large-scale structure is, however, from where drastic improvements should originate. In this paper, we explain the theoretical motivations for pursuing such measurements and the challenges that lie ahead. In particular, we discuss and identify theoretical targets regarding the measurement of primordial non-Gaussianity. We argue that when quantified in terms of the local (equilateral) template amplitude f_NL^loc (f_NL^eq), natural target levels of sensitivity are Δf_NL^{loc,eq} ≃ 1. We highlight that such levels are within reach of future surveys by measuring 2-, 3- and 4-point statistics of the galaxy spatial distribution. This paper summarizes a workshop held at CITA (University of Toronto) on October 23-24, 2014.

  11. Optimal Wind Energy Integration in Large-Scale Electric Grids

    Science.gov (United States)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates the effects of challenges that affect electric grid reliability and economic operations. These challenges are: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, transmission capacity expansion must be evaluated against methods that ensure optimal electric grid operation. The expansion of transmission line capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. Traditional questions requiring answers are "where" to add, "how much" capacity to add, and "at which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as building new transmission lines is now open to investors, whose main interest is to generate revenue. Adding new transmission capacity will help the system relieve transmission congestion, create

  12. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters

  13. A large-scale computer facility for computational aerodynamics

    International Nuclear Information System (INIS)

    Bailey, F.R.; Balhaus, W.F.

    1985-01-01

    The combination of computer system technology and numerical modeling has advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility, and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans

  14. Magnetic Properties of Large-Scale Nanostructured Graphene Systems

    DEFF Research Database (Denmark)

    Gregersen, Søren Schou

    The on-going progress in two-dimensional (2D) materials and nanostructure fabrication motivates the study of altered and combined materials. Graphene, the most studied material of the 2D family, displays unique electronic and spintronic properties. Exceptionally high electron mobilities, which surpass those in conventional materials such as silicon, make graphene a very interesting material for high-speed electronics. Simultaneously, long spin-diffusion lengths and spin lifetimes make graphene an eligible spin-transport channel. In this thesis, we explore fundamental features of nanostructured graphene systems using large-scale modeling techniques. Graphene perforations, or antidots, have received substantial interest in the prospect of opening large band gaps in the otherwise gapless graphene. Motivated by recent improvements of fabrication processes, such as forming graphene antidots and layer...

  15. Automatic Installation and Configuration for Large Scale Farms

    CERN Document Server

    Novák, J

    2005-01-01

    Since the early appearance of commodity hardware, the utilization of computers has risen rapidly, and they have become essential in all areas of life. It was soon realized that nodes are able to work cooperatively in order to solve new, more complex tasks. This concept materialized in coherent aggregations of computers called farms and clusters. Collective application of nodes, being efficient and economical, was soon adopted in education, research and industry. But maintenance, especially at large scale, emerged as a problem to be resolved. New challenges required new methods and tools, and development work was started to build farm management applications and frameworks. In the first part of the thesis, these systems are introduced. After a general description of the matter, a comparative analysis of different approaches and tools illustrates the practical aspects of the theoretical discussion. CERN, the European Organization for Nuclear Research, is the largest particle physics laboratory in the world....

  16. Large-scale visualization system for grid environment

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    The Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has been conducting R&D on distributed computing (grid computing) environments: Seamless Thinking Aid (STA), Information Technology Based Laboratory (ITBL) and Atomic Energy Grid InfraStructure (AEGIS). In this R&D, we have developed visualization technology suitable for distributed computing environments. As one of the visualization tools, we have developed the Parallel Support Toolkit (PST), which can execute the visualization process in parallel on a computer. We have now improved PST so that it can run simultaneously on multiple heterogeneous computers using the Seamless Thinking Aid Message Passing Interface (STAMPI). STAMPI, which we developed in this R&D, is an MPI library executable on a heterogeneous computing environment. The improvement realizes the visualization of extremely large-scale data and enables more efficient visualization processes in a distributed computing environment. (author)
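PST's internals are not described here, but the data-parallel pattern it embodies can be sketched: decompose the dataset into pieces, run the visualization step on each piece concurrently, then reduce the partial images into one. Below is a minimal sketch (a maximum-intensity projection split into z-slabs) using threads as a stand-in for the MPI processes STAMPI would coordinate; all function names are illustrative, not PST's API.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def mip_slab(volume, z0, z1):
    """Maximum-intensity projection of one z-slab of the volume."""
    return volume[z0:z1].max(axis=0)

def parallel_mip(volume, n_workers=4):
    """Decompose the volume into z-slabs, project each slab in parallel,
    then reduce the partial images with an element-wise maximum."""
    bounds = np.linspace(0, volume.shape[0], n_workers + 1, dtype=int)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(lambda b: mip_slab(volume, b[0], b[1]),
                         zip(bounds[:-1], bounds[1:]))
    return np.maximum.reduce(list(parts))

rng = np.random.default_rng(0)
vol = rng.random((64, 32, 32))
# The decomposed-and-reduced result matches the serial projection exactly.
assert np.array_equal(parallel_mip(vol), vol.max(axis=0))
```

The same decompose/compute/reduce structure carries over directly to distributed-memory MPI, where the reduction becomes a collective operation across hosts.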

  17. Multidimensional quantum entanglement with large-scale integrated optics.

    Science.gov (United States)

    Wang, Jianwei; Paesani, Stefano; Ding, Yunhong; Santagati, Raffaele; Skrzypczyk, Paul; Salavrakos, Alexia; Tura, Jordi; Augusiak, Remigiusz; Mančinska, Laura; Bacco, Davide; Bonneau, Damien; Silverstone, Joshua W; Gong, Qihuang; Acín, Antonio; Rottwitt, Karsten; Oxenløwe, Leif K; O'Brien, Jeremy L; Laing, Anthony; Thompson, Mark G

    2018-04-20

    The ability to control multidimensional quantum systems is central to the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control, and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimensions up to 15 × 15 on a large-scale silicon photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality, and controllability of our multidimensional technology, and further exploit these abilities to demonstrate previously unexplored quantum applications, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development of multidimensional quantum technologies. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
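As a quantitative footnote, the information content of such a state is easy to check: a maximally entangled bipartite state of two 15-dimensional qudits has Schmidt entropy log2(15) ≈ 3.91 ebits. The short sketch below (not the authors' analysis code) verifies this via a singular value decomposition of the state's coefficient matrix.

```python
import numpy as np

d = 15  # qudit dimension, as in the reported 15 x 15 entangled system
# Maximally entangled state |psi> = (1/sqrt(d)) * sum_i |i>|i>
psi = np.eye(d).reshape(d * d) / np.sqrt(d)
# Schmidt decomposition = SVD of the d x d coefficient matrix
coeffs = psi.reshape(d, d)
schmidt = np.linalg.svd(coeffs, compute_uv=False)
probs = schmidt**2                     # Schmidt probabilities, sum to 1
entropy = -np.sum(probs * np.log2(probs))  # entanglement entropy in ebits
print(round(entropy, 4))               # → 3.9069, i.e. log2(15)
```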

  18. Technologies and challenges in large-scale phosphoproteomics

    DEFF Research Database (Denmark)

    Engholm-Keller, Kasper; Larsen, Martin Røssel

    2013-01-01

    Phosphorylation, the reversible addition of a phosphate group to amino acid side chains of proteins, is a fundamental regulator of protein activity, stability, and molecular interactions. Most cellular processes, such as inter- and intracellular signaling, protein synthesis, degradation, and apoptosis, rely on phosphorylation. This PTM is thus involved in many diseases, rendering localization and assessment of the extent of phosphorylation of major scientific interest. MS-based phosphoproteomics, which aims at describing all phosphorylation sites in a specific type of cell, tissue, or organism, has become the main technique for discovery and characterization of phosphoproteins in a nonhypothesis-driven fashion. In this review, we describe methods for state-of-the-art MS-based analysis of protein phosphorylation as well as the strategies employed in large-scale phosphoproteomic experiments…

  19. A large-scale evaluation of computational protein function prediction.

    Science.gov (United States)

    Radivojac, Predrag; Clark, Wyatt T; Oron, Tal Ronnen; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kaßner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Boehm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas A; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo

    2013-03-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based critical assessment of protein function annotation (CAFA) experiment. Fifty-four methods representing the state of the art for protein function prediction were evaluated on a target set of 866 proteins from 11 organisms. Two findings stand out: (i) today's best protein function prediction algorithms substantially outperform widely used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is considerable need for improvement of currently available tools.
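CAFA's headline evaluation metric is the protein-centric Fmax: the maximum, over all decision thresholds, of the harmonic mean of average precision and average recall. A minimal sketch on toy data (the GO term names and scores below are made up for illustration):

```python
import numpy as np

def fmax(pred, truth, thresholds=np.linspace(0.01, 1.0, 100)):
    """Protein-centric F-max: maximize the harmonic mean of average
    precision and average recall over all decision thresholds."""
    best = 0.0
    for t in thresholds:
        precisions, recalls = [], []
        for protein, terms in truth.items():
            called = {g for g, s in pred.get(protein, {}).items() if s >= t}
            recalls.append(len(called & terms) / len(terms))
            if called:  # precision averaged only over proteins with a call
                precisions.append(len(called & terms) / len(called))
        if precisions:
            pr, rc = np.mean(precisions), np.mean(recalls)
            if pr + rc > 0:
                best = max(best, 2 * pr * rc / (pr + rc))
    return best

truth = {"P1": {"GO:1", "GO:2"}, "P2": {"GO:3"}}
pred = {"P1": {"GO:1": 0.9, "GO:2": 0.8, "GO:4": 0.2},
        "P2": {"GO:3": 0.7}}
print(fmax(pred, truth))  # → 1.0 (a threshold exists that is perfect here)
```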

  20. Safeguarding of large scale reprocessing and MOX plants

    International Nuclear Information System (INIS)

    Howsley, R.; Burrows, B.; Longevialle, H. de; Kuroi, H.; Izumi, A.

    1997-01-01

    In May 1997, the IAEA Board of Governors approved the final measures of the '93+2' safeguards strengthening programme, thus improving the international non-proliferation regime by enhancing the effectiveness and efficiency of safeguards verification. These enhancements are not, however, a revolution in current practices, but rather an important step in the continuous evolution of the safeguards system. The principles embodied in 93+2, for broader access to information and increased physical access, already apply, in a pragmatic way, to large-scale reprocessing and MOX fabrication plants. In these plants, qualitative measures and process monitoring play an important role, in addition to accountancy and material balance evaluations, in attaining the safeguards goals. This paper will reflect on the safeguards approaches adopted for these large bulk-handling facilities and draw analogies, conclusions and lessons for the forthcoming implementation of the 93+2 Programme. (author)

  1. Large-scale demonstration of waste solidification in saltstone

    International Nuclear Information System (INIS)

    McIntyre, P.F.; Oblath, S.B.; Wilhite, E.L.

    1988-05-01

    The saltstone lysimeters are a large-scale demonstration of a disposal concept for decontaminated salt solution resulting from in-tank processing of defense waste. The lysimeter experiment has provided data on the leaching behavior of large saltstone monoliths under realistic field conditions. The results will also be used to compare the effect of capping the wasteform on contaminant release. Biweekly monitoring of sump leachate from three lysimeters has continued on a routine basis for approximately 3 years. An uncapped lysimeter has shown the highest levels of nitrate and ⁹⁹Tc release. Gravel- and clay-capped lysimeters have shown levels equivalent to or slightly higher than background rainwater levels. Mathematical model predictions have been compared to lysimeter results. The models will be applied to predict the impact of saltstone disposal on groundwater quality. 9 refs., 5 figs., 3 tabs

  2. Coordinated SLNR based Precoding in Large-Scale Heterogeneous Networks

    KAUST Repository

    Boukhedimi, Ikram

    2017-03-06

    This work focuses on the downlink of large-scale two-tier heterogeneous networks composed of a macro cell overlaid by micro-cell networks. Our interest is in the design of coordinated beamforming techniques that mitigate the inter-cell interference. In particular, we consider the case in which the coordinating base stations (BSs) have imperfect knowledge of the channel state information. Under this setting, we propose a regularized SLNR-based precoding design in which the regularization factor is used to provide better resilience to channel estimation errors. Based on tools from random matrix theory, we provide an analytical study of the SINR and SLNR performance. These results are then exploited to propose a proper setting of the regularization factor. Simulation results are finally provided to validate our findings and confirm the performance of the proposed precoding scheme.
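A minimal numerical sketch of a regularized SLNR precoder follows. It is not the paper's RMT-optimized design: here the regularization factor is simply set to the noise power. For rank-one signal covariances, the SLNR-maximizing beamformer for user k has the closed form w_k ∝ (Σ_{j≠k} h_j h_j^H + α I)^{-1} h_k.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, sigma2 = 8, 4, 0.1     # BS antennas, users, noise power
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

def slnr(w, k):
    """Signal-to-leakage-plus-noise ratio of unit-norm beamformer w for user k."""
    leak = sum(abs(H[j] @ w) ** 2 for j in range(K) if j != k)
    return abs(H[k] @ w) ** 2 / (sigma2 + leak)

# Closed-form SLNR-maximizing beamformers; sigma2 plays the role of the
# regularization factor (the paper tunes this factor via random matrix theory).
W = []
for k in range(K):
    A = sum(np.outer(H[j].conj(), H[j]) for j in range(K) if j != k)
    w = np.linalg.solve(A + sigma2 * np.eye(M), H[k].conj())
    W.append(w / np.linalg.norm(w))

# Sanity check: the SLNR precoder never does worse than matched filtering.
for k in range(K):
    mf = H[k].conj() / np.linalg.norm(H[k])
    assert slnr(W[k], k) >= slnr(mf, k) - 1e-9
```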

  3. Ocean fertilization experiments may initiate a large scale phytoplankton bloom

    Science.gov (United States)

    Neufeld, Zoltán; Haynes, Peter H.; Garçon, Véronique; Sudre, Joël

    2002-06-01

    Oceanic plankton plays an important role in the marine food chain and, through its significant contribution to the global carbon cycle, can also influence the climate. A plankton bloom is a sudden, rapid increase of the population; it occurs naturally in the North Atlantic as a result of seasonal changes. Ocean fertilization experiments have shown that the supply of iron, an important trace element, can trigger a phytoplankton bloom in oceanic regions with low natural phytoplankton density. Here we use a simple mathematical model of the combined effects of stirring by ocean eddies and plankton evolution to consider the impact of a transient local perturbation, e.g. in the form of iron enrichment as in recent `ocean fertilization' experiments. The model not only explains aspects of the bloom observed in such experiments but predicts the unexpected outcome of a large-scale bloom that in its extent could be comparable to the spring bloom in the North Atlantic.
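The qualitative mechanism can be illustrated even without the stirring component: in an excitable system, a transient perturbation below a threshold decays, while one above it triggers a full transition. The cubic nonlinearity below is illustrative, not the authors' exact equations.

```python
def bloom(p0, a=0.2, dt=0.01, steps=20000):
    """Excitable plankton dynamics dP/dt = P(P - a)(1 - P):
    perturbations below the threshold a decay back to zero, while
    those above it trigger a full bloom toward carrying capacity P = 1."""
    p = p0
    for _ in range(steps):  # forward-Euler integration over t = 200
        p += dt * p * (p - a) * (1.0 - p)
    return p

assert bloom(0.1) < 0.01   # sub-threshold enrichment dies out
assert bloom(0.3) > 0.95   # super-threshold enrichment blooms
```

In the paper, chaotic stirring by ocean eddies competes with this local growth, which is what allows a finite fertilized patch to seed a much larger bloom.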

  4. Large-scale Ising-machines composed of magnetic neurons

    Science.gov (United States)

    Mizushima, Koichi; Goto, Hayato; Sato, Rie

    2017-10-01

    We propose Ising machines composed of magnetic neurons, that is, magnetic bits in a recording track. In large-scale machines, the sizes of both neurons and synapses need to be reduced, and neat and smart connections among neurons are also required to achieve all-to-all connectivity. These requirements can be fulfilled by adopting magnetic recording technologies such as race-track memories and skyrmion tracks, because the area of a magnetic bit is almost two orders of magnitude smaller than that of static random access memory, which has normally been used as a semiconductor neuron, and the smart connections among neurons can be realized using the read and write methods of these technologies.
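In software, the computation such a machine performs in hardware is single-spin-flip annealing on an all-to-all coupled Ising energy. A minimal sketch with random couplings and an illustrative linear cooling schedule:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32                                   # number of (magnetic) neurons
J = rng.standard_normal((n, n))
J = (J + J.T) / 2                        # symmetric all-to-all couplings
np.fill_diagonal(J, 0.0)

def energy(s):
    return -0.5 * s @ J @ s              # Ising energy E = -1/2 sum J_ij s_i s_j

s = rng.choice([-1, 1], n)               # random initial spin configuration
e0 = best = energy(s)
steps = 20000
for t in range(steps):
    temp = 2.0 * (1 - t / steps) + 1e-3  # linear cooling schedule
    i = rng.integers(n)
    dE = 2 * s[i] * (J[i] @ s)           # energy change from flipping spin i
    if dE < 0 or rng.random() < np.exp(-dE / temp):
        s[i] = -s[i]                     # Metropolis accept
        best = min(best, energy(s))
print(best <= e0)                        # annealing never loses ground on best-found
```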

  5. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    A number of studies have been conducted on implementing oxy-fuel combustion with flue gas recycle in conventional utility boilers as an effective effort toward carbon capture and storage. However, combustion under oxy-fuel conditions differs significantly from conventional air-fuel firing…, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which… the gray calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in a higher incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same…
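The WSGGM referred to above approximates the total gas emissivity as a weighted sum over a few gray gases plus a clear gas. A sketch with hypothetical coefficients (real oxy-fuel WSGG parameters are temperature-dependent fits taken from the literature, not the constants below):

```python
import numpy as np

# Illustrative (hypothetical) three-gray-gas WSGG coefficients: weights a_i
# and absorption coefficients kappa_i in 1/(m*atm) for an H2O/CO2 mixture.
a = np.array([0.4, 0.3, 0.2])        # gray-gas weights (remainder: clear gas)
kappa = np.array([0.5, 5.0, 50.0])

def emissivity(pL):
    """Total emissivity for partial-pressure path length pL (atm*m):
    epsilon = sum_i a_i * (1 - exp(-kappa_i * pL))."""
    return float(np.sum(a * (1.0 - np.exp(-kappa * pL))))

# Emissivity grows monotonically with path length and saturates below sum(a);
# a single gray gas cannot reproduce this curve over a wide range of pL,
# which is the root of the gray-calculation errors discussed above.
eps = [emissivity(pL) for pL in (0.1, 1.0, 10.0)]
assert all(x < y for x, y in zip(eps, eps[1:])) and eps[-1] < a.sum()
```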

  6. Complex modular structure of large-scale brain networks

    Science.gov (United States)

    Valencia, M.; Pastor, M. A.; Fernández-Seara, M. A.; Artieda, J.; Martinerie, J.; Chavez, M.

    2009-06-01

    Modular structure is ubiquitous among real-world networks from related proteins to social groups. Here we analyze the modular organization of brain networks at a large scale (voxel level) extracted from functional magnetic resonance imaging signals. By using a random-walk-based method, we unveil the modularity of brain webs and show modules with a spatial distribution that matches anatomical structures with functional significance. The functional role of each node in the network is studied by analyzing its patterns of inter- and intramodular connections. Results suggest that the modular architecture constitutes the structural basis for the coexistence of functional integration of distant and specialized brain areas during normal brain activities at rest.
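The paper detects modules with a random-walk-based method; as a simpler stand-in, the quantity such methods aim to optimize can be evaluated directly. The sketch below computes Newman's modularity Q for a hand-assigned partition of a toy two-module network:

```python
import numpy as np

def modularity(adj, labels):
    """Newman modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/2m] * delta(c_i, c_j)."""
    m2 = adj.sum()                       # 2m: each undirected edge counted twice
    k = adj.sum(axis=1)                  # node degrees
    same = labels[:, None] == labels[None, :]
    return ((adj - np.outer(k, k) / m2) * same).sum() / m2

# Toy network: two 5-cliques joined by a single bridge edge.
n = 10
adj = np.zeros((n, n))
adj[:5, :5] = 1
adj[5:, 5:] = 1
np.fill_diagonal(adj, 0)
adj[4, 5] = adj[5, 4] = 1
labels = np.array([0] * 5 + [1] * 5)     # assign each clique to its own module
print(round(modularity(adj, labels), 4)) # → 0.4524, i.e. 19/42
```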

  7. Visualizing large-scale uncertainty in astrophysical data.

    Science.gov (United States)

    Li, Hongwei; Fu, Chi-Wing; Li, Yinggang; Hanson, Andrew

    2007-01-01

    Visualization of uncertainty or error in astrophysical data is seldom available in simulations of astronomical phenomena, and yet almost all rendered attributes possess some degree of uncertainty due to observational error. Uncertainties associated with spatial location typically vary significantly with scale and thus introduce further complexity in the interpretation of a given visualization. This paper introduces effective techniques for visualizing uncertainty in large-scale virtual astrophysical environments. Building upon our previous transparently scalable visualization architecture, we develop tools that enhance the perception and comprehension of uncertainty across wide scale ranges. Our methods include a unified color-coding scheme for representing log-scale distances and percentage errors, an ellipsoid model to represent positional uncertainty, an ellipsoid envelope model to expose trajectory uncertainty, and a magic-glass design supporting the selection of ranges of log-scale distance and uncertainty parameters, as well as an overview mode and a scalable WIM tool for exposing the magnitudes of spatial context and uncertainty.

  8. Lightweight computational steering of very large scale molecular dynamics simulations

    International Nuclear Information System (INIS)

    Beazley, D.M.

    1996-01-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages
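The steering pattern described, a scripting layer that can inspect and modify a running simulation between timesteps, can be sketched in a few lines. The class and hook names below are hypothetical illustrations of the pattern, not the authors' API:

```python
class Simulation:
    """Toy stand-in for an MD run whose state a steering script can touch."""

    def __init__(self):
        self.temperature = 300.0
        self.step = 0

    def advance(self):
        self.step += 1               # stand-in for one MD timestep

sim = Simulation()
# A "script" queued by the user: at step 50, rescale the thermostat target.
commands = {50: lambda s: setattr(s, "temperature", 600.0)}

for _ in range(100):
    sim.advance()
    if sim.step in commands:         # steering hook consulted every step,
        commands[sim.step](sim)      # so the run never has to stop and restart

assert sim.step == 100 and sim.temperature == 600.0
```

In the system described above, the role of this hook is played by an embedded scripting interpreter (Tcl/Tk, Perl or Python) bound to the simulation's C data structures.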

  9. Large-scale coastal impact induced by a catastrophic storm

    DEFF Research Database (Denmark)

    Fruergaard, Mikkel; Andersen, Thorbjørn Joest; Johannessen, Peter N

    Catastrophic storms and storm surges induce rapid and substantial changes along sandy barrier coasts, potentially causing severe environmental and economic damage. Coastal impacts of modern storms are associated with washover deposition, dune erosion, barrier breaching, and coastline and shoreface erosion. Little is known, however, about the impact of major storms and their post-storm coastal recovery on the geologic and historic evolution of barrier systems. We apply high-resolution optically stimulated luminescence dating to a barrier system in the Wadden Sea (Denmark) and show that 5 to 8 meters of marine sand accumulated in an aggrading-prograding shoal and on a prograding shoreface during and within 3 to 4 decades (the “healing phase”) after the most destructive storm documented for the Wadden Sea. Furthermore, we show that the impact of this storm caused large-scale shoreline erosion and barrier…

  10. Large-Scale Advanced Prop-Fan (LAP) blade design

    Science.gov (United States)

    Violette, John A.; Sullivan, William E.; Turnberg, Jay E.

    1984-01-01

    This report covers the design analysis of a very thin, highly swept, propeller blade to be used in the Large-Scale Advanced Prop-Fan (LAP) test program. The report includes: design requirements and goals, a description of the blade configuration which meets requirements, a description of the analytical methods utilized/developed to demonstrate compliance with the requirements, and the results of these analyses. The methods described include: finite element modeling, predicted aerodynamic loads and their application to the blade, steady state and vibratory response analyses, blade resonant frequencies and mode shapes, bird impact analysis, and predictions of stalled and unstalled flutter phenomena. Summarized results include deflections, retention loads, stress/strength comparisons, foreign object damage resistance, resonant frequencies and critical speed margins, resonant vibratory mode shapes, calculated boundaries of stalled and unstalled flutter, and aerodynamic and acoustic performance calculations.

  11. Multidimensional quantum entanglement with large-scale integrated optics

    DEFF Research Database (Denmark)

    Wang, Jianwei; Paesani, Stefano; Ding, Yunhong

    2018-01-01

    The ability to control multidimensional quantum systems is key for the investigation of fundamental science and for the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimension up to 15 × 15 on a large-scale silicon-photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality and controllability of our multidimensional technology, and further exploit these abilities to demonstrate key quantum applications experimentally unexplored before, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development…

  12. Synchronization control for large-scale network systems

    CERN Document Server

    Wu, Yuanqing; Su, Hongye; Shi, Peng; Wu, Zheng-Guang

    2017-01-01

    This book provides recent advances in the analysis and synthesis of large-scale network systems (LSNSs) with sampled-data communication and non-identical nodes. The first chapter presents an introduction to synchronization of LSNSs and algebraic graph theory, as well as an overview of recent developments in LSNSs with sampled-data control or output regulation control. The main text of the book is organized into two parts - Part I: LSNSs with sampled-data communication and Part II: LSNSs with non-identical nodes. This monograph provides up-to-date advances and recent developments in the analysis and synthesis issues for LSNSs with sampled-data communication and non-identical nodes. It describes the construction of the adaptive reference generators in the first stage and the robust regulators in the second stage. Examples are presented to show the effectiveness of the proposed design techniques.

  13. Structural Quality of Service in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup

    Digitalization has created the base for co-existence and convergence in communications, leading to an increasing use of multi-service networks. This is for example seen in Fiber To The Home implementations, where a single fiber is used for virtually all means of communication, including TV, telephony and data. To meet the requirements of the different applications, and to handle the increased vulnerability to failures, the ability to design robust networks providing good Quality of Service is crucial. However, most planning of large-scale networks today is ad-hoc based, leading to highly complex networks lacking predictability and global structural properties. The thesis applies the concept of Structural Quality of Service to formulate desirable global properties, and it shows how regular graph structures can be used to obtain such properties.

  14. Computational solutions to large-scale data management and analysis.

    Science.gov (United States)

    Schadt, Eric E; Linderman, Michael D; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P

    2010-09-01

    Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist - such as cloud and heterogeneous computing - to successfully tackle our big data problems.

  15. Future hydrogen markets for large-scale hydrogen production systems

    International Nuclear Information System (INIS)

    Forsberg, Charles W.

    2007-01-01

    The cost of delivered hydrogen includes production, storage, and distribution. For equal production costs, large users (>10⁶ m³/day) will favor high-volume centralized hydrogen production technologies to avoid collection costs for hydrogen from widely distributed sources. Potential hydrogen markets were examined to identify and characterize those markets that will favor large-scale hydrogen production technologies. The two high-volume centralized hydrogen production technologies are nuclear energy and fossil energy with carbon dioxide sequestration. The potential markets for these technologies are: (1) production of liquid fuels (gasoline, diesel and jet) including liquid fuels with no net greenhouse gas emissions and (2) peak electricity production. The development of high-volume centralized hydrogen production technologies requires an understanding of the markets to (1) define hydrogen production requirements (purity, pressure, volumes, need for co-product oxygen, etc.); (2) define and develop technologies to use the hydrogen, and (3) create the industrial partnerships to commercialize such technologies. (author)

  16. Probing the dark side of the Universe with weak gravitational lensing effects

    International Nuclear Information System (INIS)

    Fu Li-Ping; Fan Zu-Hui

    2014-01-01

    Arising from gravitational deflections of light rays by large-scale structures in the Universe, weak-lensing effects have been recognized as one of the most important probes in cosmological studies. In this paper, we review the main progress in weak-lensing analyses, and discuss the challenges for future investigations aiming to understand the dark side of the Universe with unprecedented precision. (invited reviews)

  17. Large Scale Emerging Properties from Non Hamiltonian Complex Systems

    Directory of Open Access Journals (Sweden)

    Marco Bianucci

    2017-06-01

    The concept of “large scale” obviously depends on the phenomenon of interest. For example, in the foundation of thermodynamics from microscopic dynamics, the large spatial and time scales are of the order of fractions of a millimetre and microseconds, respectively, or less, and are defined in relation to the spatial and time scales of the microscopic systems. In large-scale oceanography or global climate dynamics, the scales of interest are of the order of thousands of kilometres in space and many years in time, and are compared to the local and daily/monthly scales of atmosphere and ocean dynamics. In all cases a Zwanzig projection approach is, at least in principle, an effective tool to obtain a class of universal smooth “large scale” dynamics for the few degrees of freedom of interest, starting from the complex dynamics of the whole (usually many-degrees-of-freedom) system. The projection approach leads to a very complex calculus with differential operators that is drastically simplified when the basic dynamics of the system of interest is Hamiltonian, as happens in foundation-of-thermodynamics problems. However, in geophysical fluid dynamics, biology, and most physical problems, the fundamental building-block equations of motion have a non-Hamiltonian structure. Thus, to continue to apply the useful projection approach in these cases as well, we exploit the generalization of the Hamiltonian formalism given by the Lie algebra of dissipative differential operators. In this way, we are able to deal analytically with the series of differential operators stemming from the projection approach applied to these general cases. We then apply this formalism to obtain some relevant results concerning the statistical properties of the El Niño Southern Oscillation (ENSO).

  18. Investigation of the large scale regional hydrogeological situation at Ceberg

    International Nuclear Information System (INIS)

    Boghammar, A.; Grundfelt, B.; Hartley, L.

    1997-11-01

    The present study forms part of the large-scale groundwater flow studies within the SR 97 project. The site of interest is Ceberg. Within the present study, two regional-scale groundwater models have been constructed: one large regional model with an areal extent of about 300 km² and one semi-regional model with an areal extent of about 50 km². Different types of boundary conditions have been applied to the models: topography-driven pressures, constant infiltration rates, non-linear infiltration combined with specified-pressure boundary conditions, and transfer of groundwater pressures from the larger model to the semi-regional model. The present model has shown that: - Groundwater flow paths are mainly local. Large-scale groundwater flow paths are only seen below the depth of the hypothetical repository (below 500 meters) and are very slow. - Locations of recharge and discharge to and from the site area are in the close vicinity of the site. - The low contrast between major structures and the rock mass means that the factor having the major effect on the flow paths is the topography. - A model sufficiently large to incorporate the recharge and discharge areas of the local site is on the order of kilometres in extent. - A uniform-infiltration-rate boundary condition does not give a good representation of the groundwater movements in the model. - A local site model may be located to cover the site area and a few kilometres of the surrounding region. In order to incorporate all recharge and discharge areas within the site model, the model will be somewhat larger than site-scale models at other sites. This is caused by the fact that the discharge areas are divided into three distinct areas to the east, south and west of the site. - Boundary conditions may be supplied to the site model by means of transferring groundwater pressures obtained with the semi-regional model
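The finding that topography dominates the flow paths can be illustrated with a minimal model: steady groundwater head obeys Laplace's equation, with the head at the water table following the surface topography. A toy 2-D vertical-section sketch with an illustrative grid and boundary values (not the SR 97 model setup):

```python
import numpy as np

# Topography-driven steady groundwater flow: solve Laplace's equation for
# hydraulic head h on a 2-D vertical section. The top boundary follows a
# sloping topography; sides and bottom are no-flow boundaries.
nx, nz = 60, 30
h = np.full((nz, nx), 95.0)
h[0, :] = np.linspace(100.0, 90.0, nx)   # head fixed along the sloping surface

for _ in range(5000):                    # Jacobi iteration
    h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                            + h[1:-1, :-2] + h[1:-1, 2:])
    h[-1, :] = h[-2, :]                  # no-flow bottom (dh/dz = 0)
    h[:, 0] = h[:, 1]                    # no-flow left side
    h[:, -1] = h[:, -2]                  # no-flow right side

# The Darcy flux q = -K * grad(h) then carries water from the high ground
# (left, h near 100) toward the valley (right, h near 90): heads under the
# high ground exceed those under the valley at every depth.
assert h[:, 5].mean() > h[:, -5].mean()
```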

  19. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a nuclear power plant. The consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need to take hydrogen explosion phenomena into account in risk management. Combustion modelling in large-scale geometries is thus one of the remaining severe-accident safety issues. At present there exists no combustion model that can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Model development must therefore focus on adapting existing approaches, or creating new ones, capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas-dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame-surface wrinkling factors. The results of the numerical simulations are presented together with comparisons, critical discussions and conclusions. (authors)
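The progress variable mentioned above reduces the chemistry to a single quantity c running from 0 (reactants) to 1 (products). A minimal homogeneous-ignition sketch with illustrative one-step Arrhenius parameters (this is not RDEM itself, which additionally transports the reactant/product interface through the geometry):

```python
import numpy as np

# Single-step reaction-progress model: dc/dt = A * (1 - c) * exp(-Ta / T),
# with the temperature interpolated as T = Tu + c * (Tb - Tu).
# All parameter values below are illustrative, not fitted to hydrogen-air.
A, Ta = 1.0e5, 8000.0        # pre-exponential factor (1/s), activation temperature (K)
Tu, Tb = 800.0, 2400.0       # unburnt / burnt gas temperatures (K)

c, dt = 1e-6, 1e-4
history = [c]
for _ in range(20000):       # integrate 2 s of physical time
    T = Tu + c * (Tb - Tu)
    c += dt * A * (1.0 - c) * np.exp(-Ta / T)
    history.append(c)

# Thermal runaway: a slow induction phase followed by a rapid, monotone
# transition to the fully burnt state c ≈ 1.
assert history[-1] > 0.99
```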

  20. Ecohydrological modeling for large-scale environmental impact assessment.

    Science.gov (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach-level accuracy similar to regional-scale models, thereby allowing for impact assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds, and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in the development of adaptive neuro-fuzzy inference system (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Probing Inflation Using Galaxy Clustering On Ultra-Large Scales

    Science.gov (United States)

    Dalal, Roohi; de Putter, Roland; Dore, Olivier

    2018-01-01

    A detailed understanding of curvature perturbations in the universe is necessary to constrain theories of inflation. In particular, measurements of the local non-Gaussianity parameter, f_NL^loc, enable us to distinguish between two broad classes of inflationary theories, single-field and multi-field inflation. While most single-field theories predict f_NL^loc ≈ -5/12 (n_s - 1), in multi-field theories f_NL^loc is not constrained to this value and is allowed to be observably large. Achieving σ(f_NL^loc) = 1 would give us discovery potential for detecting multi-field inflation, while finding f_NL^loc = 0 would rule out a good fraction of interesting multi-field models. We study the use of galaxy clustering on ultra-large scales to achieve this level of constraint on f_NL^loc. Upcoming surveys such as Euclid and LSST will give us galaxy catalogs from which we can construct the galaxy power spectrum and hence infer a value of f_NL^loc. We consider two possible methods of determining the galaxy power spectrum from a catalog of galaxy positions: the traditional Feldman-Kaiser-Peacock (FKP) power spectrum estimator, and an Optimal Quadratic Estimator (OQE). We implemented and tested each method using mock galaxy catalogs, and compared the resulting constraints on f_NL^loc. We find that the FKP estimator can measure f_NL^loc in an unbiased way, but there remains room for improvement in its precision. We also find that the OQE is not computationally fast, but remains a promising option due to its ability to isolate the power spectrum at large scales. We plan to extend this research to study alternative methods, such as pixel-based likelihood functions. We also plan to study the impact of general relativistic effects at these scales on our ability to measure f_NL^loc.
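
The FKP-style measurement described above can be caricatured in a few lines: grid a mock catalog, form the density contrast, and average |δ_k|² over modes. This toy version omits the survey window and FKP weights, and the box size, grid and galaxy count are arbitrary choices; for an unclustered mock it should recover the shot-noise level V/N_gal.

```python
import numpy as np

# Toy power-spectrum estimate from a mock catalog, in the spirit of the
# FKP estimator but without survey weights or window corrections.
rng = np.random.default_rng(0)
L, N, ngal = 1000.0, 64, 100_000           # Mpc/h box, grid cells, mock galaxies
pos = rng.uniform(0.0, L, size=(ngal, 3))  # unclustered (Poisson) mock catalog

# Density contrast delta = n/nbar - 1 on the grid
counts, _ = np.histogramdd(pos, bins=(N,) * 3, range=[(0.0, L)] * 3)
delta = counts / counts.mean() - 1.0

# P(k) = |delta_k|^2 / V, with delta_k = FFT * cell volume
dk = np.fft.fftn(delta) * (L / N) ** 3
power = np.abs(dk) ** 2 / L ** 3
pk_mean = power.ravel()[1:].mean()   # average over modes, skipping k = 0

# For a Poisson catalog this recovers the shot noise 1/nbar = V/ngal
print(pk_mean)   # ~1e4 (Mpc/h)^3
```

A real analysis would bin the power in shells of |k| and subtract this shot-noise term before fitting for the scale-dependent bias that carries the f_NL^loc signal.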

  2. Identifying large-scale brain networks in fragile X syndrome.

    Science.gov (United States)

    Hall, Scott S; Jiang, Heidi; Reiss, Allan L; Greicius, Michael D

    2013-11-01

    Fragile X syndrome (FXS) is an X-linked neurogenetic disorder characterized by a cognitive and behavioral phenotype resembling features of autism spectrum disorder. Until now, research has focused largely on identifying regional differences in brain structure and function between individuals with FXS and various control groups. Very little is known about the large-scale brain networks that may underlie the cognitive and behavioral symptoms of FXS. The objective was to identify large-scale, resting-state networks in FXS that differ from those of control individuals matched on age, IQ, and severity of behavioral and cognitive symptoms. This was a cross-sectional, in vivo neuroimaging study conducted in an academic medical center. Participants (aged 10-23 years) included 17 males and females with FXS and 16 males and females serving as controls. Analyses included univariate voxel-based morphometry, fractional amplitude of low-frequency fluctuations (fALFF) analysis, and group-independent component analysis with dual regression. Patients with FXS showed decreased functional connectivity in the salience, precuneus, left executive control, language, and visuospatial networks compared with controls. Decreased fALFF in the bilateral insular, precuneus, and anterior cingulate cortices also was found in patients with FXS compared with control participants. Furthermore, fALFF in the left insular cortex was significantly positively correlated with IQ in patients with FXS. Decreased gray matter density, resting-state connectivity, and fALFF converged in the left insular cortex in patients with FXS. Fragile X syndrome results in widespread reductions in functional connectivity across multiple cognitive and affective brain networks. Converging structural and functional abnormalities in the left insular cortex, a region also implicated in individuals diagnosed with autism spectrum disorder, suggests that insula integrity and connectivity may be compromised in FXS. This method could prove useful in establishing an imaging

  3. High Fidelity Simulations of Large-Scale Wireless Networks

    Energy Technology Data Exchange (ETDEWEB)

    Onunkwo, Uzoma [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Benz, Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    The worldwide proliferation of wirelessly connected devices continues to accelerate. There are tens of billions of wireless links across the planet, with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies not only provide convenience for mobile applications but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue, and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround times. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES implementations to poor scaling (e.g., the OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating the communication overhead associated with synchronization. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia's simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia's current highly regarded capabilities in large-scale emulation have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static, and (b) the nodes have fixed locations.
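
The discrete event simulation paradigm the abstract refers to reduces, at its core, to a priority queue of timestamped events whose handlers may schedule further events. A minimal sketch (the event names and delays are invented, not from any of the simulators mentioned):

```python
import heapq

# Minimal discrete event simulation (DES) loop of the kind network
# simulators are built on. Ties in time are broken by event name.
def simulate(events):
    """events: list of (time, name, action); runs them in time order.
    Each action(t) may return new (time, name, action) events to schedule."""
    heap = list(events)
    heapq.heapify(heap)
    log = []
    while heap:
        t, name, action = heapq.heappop(heap)
        log.append((t, name))
        for new_event in action(t):
            heapq.heappush(heap, new_event)
    return log

# A packet sent at t=0 arrives after a 1.5 ms propagation delay and is ACKed.
def send(t):
    return [(t + 1.5, "arrive", lambda t2: [(t2 + 1.5, "ack", lambda _: [])])]

log = simulate([(0.0, "send", send)])
print(log)   # [(0.0, 'send'), (1.5, 'arrive'), (3.0, 'ack')]
```

In PDES, this single queue is partitioned across processors, and the synchronization cost of keeping the partitions causally consistent is exactly the overhead the proposed tool targets.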

  4. An effective method of large scale ontology matching.

    Science.gov (United States)

    Diallo, Gayo

    2014-01-01

    We are currently facing a proliferation of heterogeneous biomedical data sources accessible through various knowledge-based applications. These data are annotated by increasingly extensive and widely disseminated knowledge organisation systems, ranging from simple terminologies and structured vocabularies to formal ontologies. In order to solve the interoperability issue that arises from the heterogeneity of these ontologies, an alignment task is usually performed. However, while significant effort has been made to provide tools that automatically align small ontologies containing hundreds or thousands of entities, little attention has been paid to the matching of large ontologies in the life sciences domain. We have designed and implemented ServOMap, an effective method for large-scale ontology matching. It is a fast and efficient high-precision system able to match input ontologies containing hundreds of thousands of entities. The system, which was included in the 2012 and 2013 editions of the Ontology Alignment Evaluation Initiative campaign, performed very well and was ranked among the top systems for large-ontology matching. Our approach relies on Information Retrieval (IR) techniques and on combining lexical and machine-learning contextual similarity computing to generate candidate mappings. It is particularly well adapted to the life sciences domain, as many ontologies in this domain benefit from synonym terms taken from the Unified Medical Language System, which can be used by our IR strategy. The ServOMap system we implemented is able to deal with hundreds of thousands of entities with efficient computation time.
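
The lexical candidate-generation step can be illustrated with a toy similarity measure. ServOMap itself relies on IR machinery (inverted indexes over labels and synonyms); the sketch below substitutes a simple Jaccard token overlap, and the entity identifiers and labels are hypothetical:

```python
# Sketch of the lexical step in ontology matching: score candidate
# mappings by token overlap between entity labels. A real system uses
# IR indexing; this toy uses Jaccard similarity. Labels are invented.

def tokens(label):
    return set(label.lower().replace("-", " ").split())

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def candidate_mappings(onto_a, onto_b, threshold=0.5):
    """Return (entity_a, entity_b, score) pairs above a similarity threshold."""
    out = []
    for ida, label_a in onto_a.items():
        for idb, label_b in onto_b.items():
            s = jaccard(label_a, label_b)
            if s >= threshold:
                out.append((ida, idb, round(s, 2)))
    return out

onto_a = {"A:01": "myocardial infarction", "A:02": "renal failure"}
onto_b = {"B:10": "myocardial infarction NOS", "B:11": "acute renal failure"}
print(candidate_mappings(onto_a, onto_b))
```

The quadratic loop here is precisely what does not scale to hundreds of thousands of entities; the IR indexing ServOMap uses exists to avoid comparing every pair.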

  5. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study.
Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent
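
The water-balance screening described above amounts to a simple consistency check per basin: flag runoff coefficients Q/P above 1 (more discharge than precipitation, e.g. from snow undercatch) and losses P - Q exceeding potential evaporation. All figures below are invented for illustration (mm/yr):

```python
# Pre-modelling data screening sketch: flag basins whose water balance
# cannot close. P = precipitation, Q = discharge, PET = potential
# evaporation, all in mm/yr. Values are invented for illustration.

def screen(basins):
    flags = {}
    for name, (P, Q, PET) in basins.items():
        problems = []
        if Q > P:
            problems.append("runoff coefficient > 1")
        if P - Q > PET:
            problems.append("losses exceed PET")
        flags[name] = problems
    return flags

basins = {
    "basin_a": (600.0, 750.0, 500.0),   # Q > P: suspect precipitation data
    "basin_b": (800.0, 100.0, 450.0),   # P - Q = 700 > PET: suspect losses
    "basin_c": (900.0, 400.0, 600.0),   # consistent
}
print(screen(basins))
```

Basins failing either test would be set aside as potentially disinformative before any model calibration or evaluation.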

  6. CMB Beam Systematics: Impact on Lensing Parameter Estimation

    OpenAIRE

    Miller, N. J.; Shimon, M.; Keating, B. G.

    2008-01-01

    The CMB's B-mode polarization provides a handle on several cosmological parameters most notably the tensor-to-scalar ratio, $r$, and is sensitive to parameters which govern the growth of large scale structure (LSS) and evolution of the gravitational potential. The primordial gravitational-wave- and secondary lensing-induced B-mode signals are very weak and therefore prone to various foregrounds and systematics. In this work we use Fisher-matrix-based estimations and apply, for the first time,...

  7. Observation of large-scale traveling ionospheric disturbances of auroral origin by global GPS networks

    Science.gov (United States)

    Afraimovich, Edward L.; Kosogorov, Eugene A.; Leonovich, Ludmila A.; Palamartchouk, Kirill S.; Perevalova, Natalia P.; Pirog, Olga M.

    2000-10-01

    The intention in this paper is to investigate the form and dynamics of large-scale traveling ionospheric disturbances (LS TIDs) of auroral origin. We have devised a technique for determining LS TID parameters using GPS arrays whose elements can be selected from a large set of GPS stations forming part of the international GPS network. The method was used to determine LS TID parameters during a strong magnetic storm of September 25, 1998. The North-American sector where many GPS stations are available, and also the time interval 00:00-06:00 UT characterized by a maximum value of the derivative Dst were used in the analysis. The study revealed that this period of time was concurrent with the formation of the main ionospheric trough (MIT) with a conspicuous southward wall in the range of geographic latitudes 50-60° and the front width of no less than 7500 km. The auroral disturbance-induced large-scale solitary wave with a duration of about 1 hour and the front width of at least 3700 km propagated in the equatorward direction to a distance of no less than 2000-3000 km with the mean velocity of about 300 m/s. The wave front behaved as if it `curled' to the west in longitude where the local time was around noon. Going toward the local nighttime, the propagation direction progressively approximated an equatorward direction.
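
The GPS-array principle behind such measurements can be illustrated with a plane-wave fit: given the arrival times of the disturbance at three stations, solve for the slowness vector and hence the phase speed and direction. The station geometry and the 300 m/s southward wave below are synthetic, chosen only to mirror the reported speed:

```python
import numpy as np

# Plane-wavefront fit across a three-station GPS array (synthetic data).
# Arrival time model: t_i = t0 + s . x_i, with slowness vector s = n/v.
stations = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0]])  # km, (east, north)
true_v = 0.3                                     # km/s, i.e. 300 m/s
slowness_true = np.array([0.0, -1.0 / true_v])   # s/km, wave moving equatorward
t = stations @ slowness_true                     # synthetic arrival delays (s)

# Least-squares solve of t_i = t0 + s . x_i for (t0, s)
A = np.hstack([np.ones((3, 1)), stations])
coef, *_ = np.linalg.lstsq(A, t, rcond=None)
s = coef[1:]
speed_m_s = 1000.0 / np.linalg.norm(s)
print(round(speed_m_s))   # 300
```

With more than three stations the same least-squares fit over-determines the slowness vector, which is how noisy TEC series from a large network still yield stable LS TID parameters.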

  8. Determining parameters of large-scale traveling ionospheric disturbances of auroral origin using GPS-arrays

    Science.gov (United States)

    Afraimovich, E. L.; Kosogorov, E. A.; Leonovich, L. A.; Palamartchouk, K. S.; Perevalova, N. P.; Pirog, O. M.

    2000-05-01

    The intention in this paper is to investigate the form and dynamics of large-scale traveling ionospheric disturbances (LS TIDs) of auroral origin. We have devised a technique for determining LS TID parameters using GPS-arrays whose elements can be selected from a large set of GPS stations forming part of the International GPS Service network. The method was used to determine LS TID parameters during a strong magnetic storm of September 25, 1998. The North-American sector where many GPS stations are available, and also the time interval 00:00-06:00 UT characterized by a maximum value of the derivative Dst were used in the analysis. The study revealed that this period of time was concurrent with the formation of the main ionospheric trough with a conspicuous southward wall in the range of geographic latitudes 50-60° and the front width of no less than 7500 km. The auroral disturbance-induced large-scale solitary wave with a duration of about 1 h and the front width of at least 3700 km propagated in the equatorward direction to a distance of no less than 2000-3000 km with the mean velocity of about 300 m/s. The wave front behaved as if it `curled' to the west in longitude where the local time was around afternoon. Going toward the local nighttime, the propagation direction progressively approximated an equatorward direction.

  9. Large-scale volumetric pressure from tomographic PTV with HFSB tracers

    Science.gov (United States)

    Schneiders, Jan F. G.; Caridi, Giuseppe C. A.; Sciacchitano, Andrea; Scarano, Fulvio

    2016-11-01

    The instantaneous volumetric pressure in the near-wake of a truncated cylinder is measured by use of tomographic particle tracking velocimetry (PTV) using helium-filled soap bubbles (HFSB) as tracers. The measurement volume is several orders of magnitude larger than that reported in tomographic experiments dealing with pressure from particle image velocimetry (PIV). The near-wake of a truncated cylinder installed on a flat plate (Re_D = 3.5 × 10⁴) features both wall-bounded turbulence and large-scale unsteady flow separation. The instantaneous pressure is calculated from the time-resolved 3D velocity distribution by invoking the momentum equation. The experiments are conducted simultaneously with surface pressure measurements intended for validation of the technique. The study shows that time-averaged pressure and root-mean-squared pressure fluctuations can be accurately measured both in the fluid domain and at the solid surface by large-scale tomographic PTV with HFSB as tracers, with significant reduction in manufacturing complexity for the wind-tunnel model and circumventing the need to install pressure taps or transducers. The measurement over a large volume eases the extension toward the free-stream regime, providing a reliable boundary condition for the solution of the Poisson equation for pressure. The work demonstrates, in the case of the flow past a truncated cylinder, the use of HFSB tracer particles for pressure measurement in air flows in a measurement volume that is two orders of magnitude larger than that of conventional tomographic PIV.
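
The pressure-from-velocity idea underlying the technique can be reduced to a 1-D steady inviscid sketch: integrate dp/dx = -ρ u du/dx from a known boundary value and check against the Bernoulli result. The velocity profile and constants below are synthetic, not measured data:

```python
import numpy as np

# 1-D sketch of pressure from velocity via the momentum equation,
# the same principle used (in 3-D, time-resolved form) in tomographic PTV.
rho = 1.2                        # kg/m^3, air
x = np.linspace(0.0, 1.0, 201)   # m
u = 10.0 + 5.0 * x               # m/s, synthetic accelerating flow

dpdx = -rho * u * np.gradient(u, x)   # steady inviscid momentum equation

# Trapezoidal integration from a known boundary pressure p(0) = 101325 Pa
increments = np.diff(x) * 0.5 * (dpdx[1:] + dpdx[:-1])
p = 101325.0 + np.cumsum(np.concatenate([[0.0], increments]))

# For this flow, Bernoulli gives p(x) = p0 - 0.5*rho*(u^2 - u0^2)
p_exact = 101325.0 - 0.5 * rho * (u**2 - u[0]**2)
print(np.max(np.abs(p - p_exact)))   # small discretisation/round-off error
```

In the experiment, the free-stream region of the large measurement volume plays the role of the known boundary value from which pressure is integrated inward.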

  10. The Effective Field Theory of Large Scale Structures at Two Loops

    CERN Document Server

    Carrasco, John Joseph M.; Green, Daniel; Senatore, Leonardo

    2014-01-01

    Large scale structure surveys promise to be the next leading probe of cosmological information. It is therefore crucial to reliably predict their observables. The Effective Field Theory of Large Scale Structures (EFTofLSS) provides a manifestly convergent perturbation theory for the weakly non-linear regime of dark matter, where correlation functions are computed in an expansion of the wavenumber k of a mode over the wavenumber associated with the non-linear scale k_nl. Since most of the information is contained at high wavenumbers, it is necessary to compute higher order corrections to correlation functions. After the one-loop correction to the matter power spectrum, we estimate that the next leading one is the two-loop contribution, which we compute here. At this order in k/k_nl, there is only one counterterm in the EFTofLSS that must be included, though this term contributes both at tree-level and in several one-loop diagrams. We also discuss correlation functions involving the velocity and momentum fields...

  11. Inception of Klebanoff streaks and large-scale motions in transitional and fully turbulent boundary layers

    Science.gov (United States)

    Lee, Jin; Zaki, Tamer

    2017-11-01

    Transitional boundary layers feature long coherent motions of streamwise velocity fluctuation, u', in both the laminar and turbulent regions. In the former, Klebanoff streaks amplify and become seats for breakdown to turbulence. In the fully turbulent region, large-scale motions contribute appreciably to the turbulence energy and shear stresses. Direct numerical simulation (DNS) of boundary-layer bypass transition over a flat plate with a leading edge is performed. Instantaneous realizations of spatially and temporally resolved fields are stored in a database. Structure identification techniques are used to identify these coherent flow structures. The inception rate, lifetime and amplification rate of Klebanoff streaks are evaluated in the laminar region, and conditional averaging is used to examine the early stages of streak formation. Structure identification and tracking is also used to study the inception of large-scale coherent motion in the nascent turbulent spots and fully turbulent boundary layer downstream. This work has been partially supported by the National Science Foundation (NSF, Grant 1605404). The computations were performed using the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by NSF (Grant ACI-1053575).

  12. Large scale integration of intermittent renewable energy sources in the Greek power sector

    International Nuclear Information System (INIS)

    Voumvoulakis, Emmanouil; Asimakopoulou, Georgia; Danchev, Svetoslav; Maniatis, George; Tsakanikas, Aggelos

    2012-01-01

    As a member of the European Union, Greece has committed to achieve ambitious targets for the penetration of renewable energy sources (RES) in gross electricity consumption by 2020. Large scale integration of RES requires a suitable mixture of compatible generation units, in order to deal with the intermittency of wind velocity and solar irradiation. The scope of this paper is to examine the impact of large scale integration of intermittent energy sources, required to meet the 2020 RES target, on the generation expansion plan, the fuel mix and the spinning reserve requirements of the Greek electricity system. We perform hourly simulation of the intermittent RES generation to estimate residual load curves on a monthly basis, which are then inputted in a WASP-IV model of the Greek power system. We find that the decarbonisation effort, with the rapid entry of RES and the abolishment of the grandfathering of CO₂ allowances, will radically transform the Greek electricity sector over the next 10 years, which has wide-reaching policy implications. - Highlights: ► Greece needs 8.8 to 9.3 GW additional RES installations by 2020. ► RES capacity credit varies between 12.2% and 15.3%, depending on interconnections. ► Without institutional changes, the reserve requirements will be more than double. ► New CCGT installed capacity will probably exceed the cost-efficient level. ► Competitive pressures should be introduced in segments other than day-ahead market.
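
The residual-load construction the paper relies on can be sketched directly: subtract simulated intermittent RES generation from hourly demand to obtain the load the conventional fleet must cover. All series below are invented toy profiles, not Greek system data:

```python
import numpy as np

# Residual load = demand - intermittent RES generation, hour by hour.
# All profiles are synthetic illustrations (MW over one day).
hours = 24
load = 6000 + 1500 * np.sin(np.linspace(0, 2 * np.pi, hours, endpoint=False))
wind = np.full(hours, 900.0)   # flat wind output for simplicity
solar = np.clip(
    1200 * np.sin(np.linspace(-np.pi / 2, 3 * np.pi / 2, hours, endpoint=False)),
    0, None,                   # zero at night
)

residual = load - wind - solar
print(residual.min(), residual.max())
```

Aggregating such hourly residual curves into monthly load-duration curves is what allows a capacity-expansion model like WASP-IV to size the conventional fleet and the spinning reserve.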

  13. Large-scale irregularities of the winter polar topside ionosphere according to data from Swarm satellites

    Science.gov (United States)

    Lukianova, R. Yu.; Bogoutdinov, Sh. R.

    2017-11-01

    An analysis of the electron density (N_e) measurements along the flyby trajectories over the high-latitude region of the Northern Hemisphere under winter conditions in 2014 and 2016 has shown that the main large-scale structure observed by Swarm satellites is the tongue of ionization (TOI). At the maximum of the solar cycle (F10.7 = 160), the average value of N_e in the TOI region at an altitude of 500 km was 8 × 10⁴ cm⁻³. Two years later, at F10.7 = 100, N_e ≈ 5 × 10⁴ cm⁻³ and N_e ≈ 2.5 × 10⁴ cm⁻³ were observed at altitudes of 470 and 530 km, respectively. During the dominance of the azimuthal component of the interplanetary magnetic field, the TOI has been observed mainly on the dawn or dusk side, depending on the sign of B_y. Simultaneous observations of the convective plasma drift velocity in the polar cap show the transpolar flow drifting toward dawn or dusk depending on B_y, which contributes to the generation of large-scale irregularities in the polar ionosphere.

  14. Inverting Gravitational Lenses

    Science.gov (United States)

    Newbury, P. R.; Spiteri, R. J.

    2002-02-01

    Gravitational lensing provides a powerful tool to study a number of fundamental questions in astrophysics. Fortuitously, one can begin to explore some non-trivial issues associated with this phenomenon without a lot of very sophisticated mathematics, making an elementary treatment of this topic tractable even to senior undergraduates. In this paper, we give a relatively self-contained outline of the basic concepts and mathematics behind gravitational lensing as a recent and exciting topic for courses in mathematical modeling or scientific computing. To this end, we have designed and made available some interactive software to aid in the simulation and inversion of gravitational lenses in a classroom setting.
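
For the simplest case of a point-mass lens, the lens equation β = θ - θ_E²/θ has two closed-form image solutions, which makes a compact classroom example in the spirit of the paper (angles in arbitrary but consistent units, e.g. arcseconds):

```python
import math

# Point-mass lens equation beta = theta - theta_E^2 / theta.
# Solving the quadratic theta^2 - beta*theta - theta_E^2 = 0 gives
# two images, one on each side of the lens.
def image_positions(beta, theta_E):
    root = math.sqrt(beta**2 + 4 * theta_E**2)
    return (beta + root) / 2, (beta - root) / 2

beta, theta_E = 0.5, 1.0        # source offset and Einstein radius (arcsec)
tp, tm = image_positions(beta, theta_E)
for theta in (tp, tm):
    assert abs((theta - theta_E**2 / theta) - beta) < 1e-12  # satisfies lens eq.
print(round(tp, 4), round(tm, 4))   # 1.2808 -0.7808
```

As β → 0 the two images merge into an Einstein ring of radius θ_E; the inversion problem discussed in the paper is the (much harder) task of recovering the lens model from observed image positions.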

  15. Stress-Detection Lenses

    Science.gov (United States)

    1996-01-01

    An Ames Research Center scientist invented an infrared lens used in sunglasses to filter out ultraviolet rays. This product finds its origins in research for military enemy detection. Through a Space Act Agreement, Optical Sales Corporation introduced the Hawkeye Lenses not only as sunglasses but as plant stress detection lenses. The lenses enhance the stressed part of the leaf, which has less chlorophyll than healthy leaves, through dyes that filter out certain wavelengths of light. Plant stress is visible earlier, at a stage when something can be done to save the plants.

  16. Lensing bias to CMB polarization measurements of compensated isocurvature perturbations

    Science.gov (United States)

    Heinrich, Chen

    2018-01-01

    Compensated isocurvature perturbations (CIPs) are opposite spatial fluctuations in the baryon and dark matter (DM) densities. They arise in the curvaton model and some models of baryogenesis. While the gravitational effects of baryon fluctuations are compensated by those of DM, leaving no observable impact on the cosmic microwave background (CMB) at first order, they modulate the sound horizon at recombination, thereby correlating CMB anisotropies at different multipoles. As a result, CIPs can be reconstructed using quadratic estimators, similarly to CMB detection of gravitational lensing. Because of these similarities, however, the CIP estimators are biased with lensing contributions that must be subtracted. These lensing contributions to CMB polarization measurements of CIPs are found to roughly triple the noise power of the total CIP estimator on large scales. In addition, the cross powers with temperature and E-mode polarization are contaminated by lensing-ISW (integrated Sachs-Wolfe) correlations and reionization-lensing correlations, respectively. For a cosmic-variance-limited temperature and polarization experiment measuring out to multipoles l_max = 2500, the lensing noise raises the detection threshold by a factor of 1.5, leaving a 2.7σ detection possible for the maximal CIP signal in the curvaton model.

  17. The epidemiology of microbial keratitis with silicone hydrogel contact lenses.

    Science.gov (United States)

    Stapleton, Fiona; Keay, Lisa; Edwards, Katie; Holden, Brien

    2013-01-01

    It was widely anticipated that after the introduction of silicone hydrogel lenses, the risk of microbial keratitis would be lower than with hydrogel lenses because of the reduction in hypoxic effects on the corneal epithelium. Large-scale epidemiological studies have confirmed that the absolute and relative risk of microbial keratitis is unchanged with overnight use of silicone hydrogel materials. The key findings include the following: (1) The risk of infection with 30 nights of silicone hydrogel use is equivalent to 6 nights of hydrogel extended wear; (2) Occasional overnight lens use is associated with a greater risk than daily lens use; (3) The rate of vision loss due to corneal infection with silicone hydrogel contact lenses is similar to that seen in hydrogel lenses; (4) The spectrum of causative organisms is similar to that seen in hydrogel lenses, and the material type does not impact the corneal location of presumed microbial keratitis; and (5) Modifiable risk factors for infection include overnight lens use, the degree of exposure, failing to wash hands before lens handling, and storage case hygiene practice. The lack of change in the absolute risk of disease would suggest that exposure to a large number of pathogenic organisms can overcome any advantages obtained from eliminating the hypoxic effects of contact lenses. Epidemiological studies remain important in the assessment of new materials and modalities. Consideration of an early adopter effect with studies involving new materials and modalities and further investigation of the impact of second-generation silicone hydrogel materials is warranted.

  18. Large-scale recumbent isoclinal folds in the footwall of the West Cycladic Detachment System (Greece)

    Science.gov (United States)

    Rice, A. Hugh N.; Grasemann, Bernhard

    2017-04-01

    quartz layers, only cropping out above the Flabouria Lithodeme south of Aghios Dimitrios, directly below the WCDS; (3) Mavrianou Lithodeme - mylonitic QCWM schists with lenses of BGC mylonites cropping out above the Flabouria Lithodeme along the west coast, 2.5-9 km N of Aghios Dimitrios. Thus, offshore in the 2.5 km north of Aghios Dimitrios, the Mavrianou Lithodeme is 'replaced' by the Rizou Lithodeme; these units are lithologically quite distinct. However, mylonitic outcrops of the Petroussa Lithodeme are very similar to the Mavrianou Lithodeme mylonites. A tentative structural solution is to argue that the Mavrianou Lithodeme is a large-scale isoclinal fold repetition of the Petroussa Lithodeme; southwards the fold amplitude decreases and dies out offshore north of Aghios Dimitrios; repetition of other lithodemes supports this solution. The origin of the fold is not known but the lithological repetition persists towards the central part of the island, where the transition from ENE-WSW trending Eocene exhumation deformation has not been fully overprinted by NNE-SSW trending Miocene deformation. Hence the fold may have formed as a large-scale structure during syn-orogenic Eocene exhumation of the Cycladic Blueschist Nappe and then been flattened and rotated during Miocene deformation in the footwall of the West Cycladic Detachment System.

  19. ACCRETION DISKS WITH A LARGE SCALE MAGNETIC FIELD AROUND BLACK HOLES

    Directory of Open Access Journals (Sweden)

    Gennady Bisnovatyi-Kogan

    2013-12-01

    We consider accretion disks around black holes at high luminosity, and the problem of the formation of a large-scale magnetic field in such disks, taking into account the non-uniform vertical structure of the disk. The structure of advective accretion disks is investigated, and conditions for the formation of optically thin regions in central parts of the accretion disk are found. The high electrical conductivity of the outer layers of the disk prevents outward diffusion of the magnetic field. This implies a stationary state with a strong magnetic field in the inner parts of the accretion disk close to the black hole, and zero radial velocity at the surface of the disk. The problem of jet collimation by magneto-torsion oscillations is investigated.

  20. Dark matter, long-range forces, and large-scale structure

    Science.gov (United States)

    Gradwohl, Ben-Ami; Frieman, Joshua A.

    1992-01-01

    If the dark matter in galaxies and clusters is nonbaryonic, it can interact with additional long-range fields that are invisible to experimental tests of the equivalence principle. We discuss the astrophysical and cosmological implications of a long-range force coupled only to the dark matter and find rather tight constraints on its strength. If the force is repulsive (attractive), the masses of galaxy groups and clusters (and the mean density of the universe inferred from them) have been systematically underestimated (overestimated). We explore the consequent effects on the two-point correlation function, large-scale velocity flows, and microwave background anisotropies, for models with initial scale-invariant adiabatic perturbations and cold dark matter.
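
A long-range force of the kind considered is commonly modeled as a Yukawa correction to the dark-matter-only potential, V(r) = -G m₁ m₂ (1 + α e^(-r/λ))/r. The sketch below (with arbitrary strength α and range λ, not values from the paper) shows the force enhanced by a factor (1 + α) well inside the range and returning to Newtonian far outside it:

```python
import math

# Yukawa-modified force between two dark matter particles, from
# V(r) = -G m1 m2 (1 + alpha * exp(-r/lam)) / r. alpha is the coupling
# strength and lam the range; both are illustrative free parameters.
def dm_force(m1, m2, r, alpha, lam, G=6.674e-11):
    yukawa = alpha * math.exp(-r / lam) * (1.0 + r / lam)
    return G * m1 * m2 / r**2 * (1.0 + yukawa)

def newton(m1, m2, r, G=6.674e-11):
    return G * m1 * m2 / r**2

m, alpha, lam = 1.0, 0.5, 1.0e22          # kg, strength, range in m (~0.3 Mpc)
print(dm_force(m, m, 1e19, alpha, lam) / newton(m, m, 1e19))   # ~1.5 inside range
print(dm_force(m, m, 1e24, alpha, lam) / newton(m, m, 1e24))   # ~1.0 outside range
```

A repulsive coupling (α < 0) weakens the effective attraction between dark matter particles on scales below λ, which is why dynamical mass estimates of groups and clusters would then be biased low, as the abstract argues.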

  1. Imaging the Chicxulub central crater zone from large scale seismic acoustic wave propagation and gravity modeling

    Science.gov (United States)

    Fucugauchi, J. U.; Ortiz-Aleman, C.; Martin, R.

    2017-12-01

Large complex craters are characterized by central uplifts that represent large-scale differential movement of deep basement from the transient cavity. Here we investigate the central sector of the large multiring Chicxulub crater, which has been surveyed by an array of marine, aerial and land-borne geophysical methods. Despite high contrasts in physical properties, conflicting results have been obtained for the central uplift, with seismic reflection surveys showing a lack of resolution in the central zone. We develop an integrated seismic and gravity model for the main structural elements, imaging the central basement uplift and the melt and breccia units. The 3-D velocity model built from interpolation of seismic data is validated using perfectly-matched-layer seismic acoustic wave propagation modeling, optimized at grazing incidence using a shift in the frequency domain. Modeling shows a significant lack of illumination in the central sector, masking the presence of the central uplift. Seismic energy remains trapped in an upper low-velocity zone corresponding to the sedimentary infill, melt/breccias and surrounding faulted blocks. After converting the seismic velocities into a volume of density values, we use massively parallel forward gravity modeling to constrain the size and shape of the central uplift, which lies at 4.5 km depth, providing a high-resolution image of the crater structure. The Bouguer anomaly and the gravity response of the modeled units show asymmetries corresponding to the crater structure and the distribution of post-impact carbonates, breccias, melt and target sediments.
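
    The forward gravity step described above can be illustrated with a minimal sketch. This is a deliberate simplification, not the authors' massively parallel implementation: each cell of the density volume is approximated as a point mass at its centre, and its vertical attraction is summed at each observation station. All function and parameter names are illustrative.

    ```python
    import numpy as np

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def forward_gravity_gz(density, cell_size, stations):
        """Vertical gravity (m/s^2) at surface stations from a 3-D density
        grid, approximating each cubic cell as a point mass at its centre.
        density:   (nx, ny, nz) array of density contrasts (kg/m^3)
        cell_size: edge length of a cubic cell (m)
        stations:  (m, 3) array of observation points (x, y, z), z up
        """
        nx, ny, nz = density.shape
        # cell-centre coordinates; z is negative downward from the surface
        xs = (np.arange(nx) + 0.5) * cell_size
        ys = (np.arange(ny) + 0.5) * cell_size
        zs = -(np.arange(nz) + 0.5) * cell_size
        X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
        mass = density * cell_size**3
        gz = np.empty(len(stations))
        for i, (sx, sy, sz) in enumerate(stations):
            dx, dy, dz = X - sx, Y - sy, Z - sz
            r3 = (dx**2 + dy**2 + dz**2) ** 1.5
            # downward attraction is positive for excess density below the station
            gz[i] = G * np.sum(mass * (sz - Z) / r3)
        return gz
    ```

    A real crater-scale model would use prism (rather than point-mass) formulas near the stations and exploit parallelism, but the bookkeeping of converting a velocity-derived density volume into a predicted Bouguer response is the same.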

  2. Impact of suspended coal dusts on methane deflagration properties in a large-scale straight duct.

    Science.gov (United States)

    Ajrash, Mohammed J; Zanganeh, Jafar; Moghtaderi, Behdad

    2017-09-15

Knowledge about flame deflagrations in mixtures of methane and dispersed coal dust assists in the prediction of fires and explosions, and in the design of adequate protective systems. This work addresses the lack of information on the role of hybrid (methane/coal dust) mixtures by employing a novel Large-Scale Straight Duct (LSSD) designed specifically for this purpose. The hybrid fuel was injected along the first 8 m of the 30 m long LSSD. The results revealed that a 30 g m⁻³ coal dust concentration boosted the flame travel distance from 6.5 m to 28.5 m and increased the overpressure rise profile to 0.135 bar. The overpressure rise (OPR), pressure wave velocity, flame intensity and flame velocity were all significantly boosted along the LSSD in the presence of 10 g m⁻³ or 30 g m⁻³ coal dust concentrations in the methane flame deflagrations. Finally, high-speed camera footage showed that the presence of the coal dust enhanced turbulence at the flame front. Consequently, the pressure wave and flame velocities were both increased when a 10 g m⁻³ coal dust concentration coexisted with a 9.5% methane concentration in the deflagration. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Flame deflagration in side-on vented detonation tubes: A large scale study.

    Science.gov (United States)

    Ajrash, Mohammed J; Zanganeh, Jafar; Moghtaderi, Behdad

    2018-03-05

Venting is often used in the process industries to reduce the possibility of dangerous rises in pressure levels and the severity of explosions. To date, the effectiveness of side-on venting on methane flame deflagration in large-scale operations has not been clearly addressed. This work explicitly investigates the influence of side-on venting on methane flame deflagrations at varied concentrations in a 30 m long Detonation Tube (DT). The results of this study demonstrate a significant correlation between the fire and explosion driving parameters, such as pressure rise and flame propagation velocity, and the vent location. Venting the explosion at a distance between 6.5 m and 20.5 m from the ignition source was observed to reduce the total explosion pressure by about 33% to 56%. For a methane concentration of 7.5%, the dynamic and static pressures were reduced by about 66% and 33%, respectively. The reduced pressure was observed to decelerate the flame by about 70%. Significant reductions in pressure rise and flame deflagration velocity were observed both upstream and downstream of the DT, relative to the location of the vent. For high methane concentrations, a vacuum effect was observed to draw the flame back into the vent and trigger a secondary pressure rise. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Large-Scale Patterns in a Minimal Cognitive Flocking Model: Incidental Leaders, Nematic Patterns, and Aggregates

    Science.gov (United States)

    Barberis, Lucas; Peruani, Fernando

    2016-12-01

We study a minimal cognitive flocking model, which assumes that the moving entities navigate using only the available instantaneous visual information. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and that lack velocity-velocity alignment. We show that this active system can exhibit various complex, large-scale, self-organized patterns, owing to the VC that breaks Newton's third law. Depending on parameter values, we observe the emergence of aggregates or milling-like patterns, the formation of moving, locally polar, files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, such as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance for the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.
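
    The interaction rule described above can be sketched as follows: a short-ranged, position-based attraction restricted to a vision cone around each particle's heading, with no velocity-alignment term. Function and parameter names are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def vc_forces(pos, theta, r_int=1.0, half_angle=np.pi / 2, strength=0.1):
        """One evaluation of the position-based attraction in a minimal
        cognitive flocking model (2-D sketch).  Particle i is attracted to
        neighbours j that lie within distance r_int AND inside a vision cone
        of half-angle `half_angle` about its heading theta[i].
        """
        n = len(pos)
        forces = np.zeros_like(pos)
        headings = np.column_stack([np.cos(theta), np.sin(theta)])
        for i in range(n):
            d = pos - pos[i]                       # vectors to all particles
            dist = np.linalg.norm(d, axis=1)
            with np.errstate(invalid="ignore", divide="ignore"):
                unit = d / dist[:, None]           # self row becomes NaN
            in_range = (dist > 0) & (dist < r_int)
            # inside the vision cone: angle to neighbour < half_angle
            cosang = unit @ headings[i]
            in_cone = cosang > np.cos(half_angle)  # NaN compares False
            sel = in_range & in_cone
            # j may see i without i seeing j: Newton's third law is broken
            forces[i] = strength * unit[sel].sum(axis=0)
        return forces
    ```

    Note the asymmetry: in the test below, particle 0 (heading along +x) sees particle 1 and is pulled toward it, while particle 1 does not see particle 0 and feels no force.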

  5. Anisotropic diffusion across an external magnetic field and large-scale fluctuations in magnetized plasmas.

    Science.gov (United States)

    Holod, I; Zagorodny, A; Weiland, J

    2005-04-01

The problem of random motion of charged particles in an external magnetic field is studied under the assumption that the Langevin sources produce anisotropic diffusion in velocity space and the friction force depends on the direction of particle motion. It is shown that in the case under consideration, the kinetic equation describing particle transitions in phase space reduces to an equation with a Fokker-Planck collision term in general form (anisotropic friction coefficient and nonzero off-diagonal elements of the diffusion tensor in velocity space). The solution of such an equation has been obtained and the explicit form of the transition probability found. Using the obtained transition probability, the mean-square particle displacements in configuration and velocity space were calculated and compared with the results of numerical simulations, showing good agreement. The obtained results are used to generalize the theory of large-scale fluctuations in plasmas to the case of anisotropic diffusion across an external magnetic field. Such diffusion is expected to be observed in the case of an anisotropic k spectrum of fluctuations generating random particle motion (for example, in the case of drift-wave turbulence).
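
    For reference, the collision term referred to above can be written schematically. This is the generic Fokker-Planck form with anisotropic friction and diffusion, not necessarily the exact notation of the paper:

    ```latex
    \frac{\partial f}{\partial t} =
      \frac{\partial}{\partial v_i}
      \left[ \beta_{ij}\, v_j\, f
           + D_{ij}\, \frac{\partial f}{\partial v_j} \right],
    ```

    where $\beta_{ij}$ is the direction-dependent friction tensor and $D_{ij}$ is the velocity-space diffusion tensor whose off-diagonal elements are retained, as described in the abstract.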

  6. Gravitational lensing of quasars

    CERN Document Server

    Eigenbrod, Alexander

    2013-01-01

    The universe, in all its richness, diversity and complexity, is populated by a myriad of intriguing celestial objects. Among the most exotic of them are gravitationally lensed quasars. A quasar is an extremely bright nucleus of a galaxy, and when such an object is gravitationally lensed, multiple images of the quasar are produced – this phenomenon of cosmic mirage can provide invaluable insights on burning questions, such as the nature of dark matter and dark energy. After presenting the basics of modern cosmology, the book describes active galactic nuclei, the theory of gravitational lensing, and presents a particular numerical technique to improve the resolution of astronomical data. The book then enters the heart of the subject with the description of important applications of gravitational lensing of quasars, such as the measurement of the famous Hubble constant, the determination of the dark matter distribution in galaxies, and the observation of the mysterious inner parts of quasars with much higher r...

  7. SPATIALLY RESOLVED GAS KINEMATICS WITHIN A Lyα NEBULA: EVIDENCE FOR LARGE-SCALE ROTATION

    Energy Technology Data Exchange (ETDEWEB)

    Prescott, Moire K. M. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen (Denmark); Martin, Crystal L. [Department of Physics, Broida Hall, Mail Code 9530, University of California, Santa Barbara, CA 93106 (United States); Dey, Arjun, E-mail: mkmprescott@dark-cosmology.dk [National Optical Astronomy Observatory, 950 North Cherry Avenue, Tucson, AZ 85719 (United States)

    2015-01-20

    We use spatially extended measurements of Lyα as well as less optically thick emission lines from an ≈80 kpc Lyα nebula at z ≈ 1.67 to assess the role of resonant scattering and to disentangle kinematic signatures from Lyα radiative transfer effects. We find that the Lyα, C IV, He II, and C III] emission lines all tell a similar story in this system, and that the kinematics are broadly consistent with large-scale rotation. First, the observed surface brightness profiles are similar in extent in all four lines, strongly favoring a picture in which the Lyα photons are produced in situ instead of being resonantly scattered from a central source. Second, we see low kinematic offsets between Lyα and the less optically thick He II line (∼100-200 km s{sup –1}), providing further support for the argument that the Lyα and other emission lines are all being produced within the spatially extended gas. Finally, the full velocity field of the system shows coherent velocity shear in all emission lines: ≈500 km s{sup –1} over the central ≈50 kpc of the nebula. The kinematic profiles are broadly consistent with large-scale rotation in a gas disk that is at least partially stable against collapse. These observations suggest that the Lyα nebula represents accreting material that is illuminated by an offset, hidden active galactic nucleus or distributed star formation, and that is undergoing rotation in a clumpy and turbulent gas disk. With an implied mass of M(

  8. Updating Geospatial Data from Large Scale Data Sources

    Science.gov (United States)

    Zhao, R.; Chen, J.; Wang, D.; Shang, Y.; Wang, Z.; Li, X.; Ai, T.

    2011-08-01

In the past decades, many geospatial databases have been established at national, regional and municipal levels around the world. It is now widely recognized that keeping these established geospatial databases up to date is critical to their value, so more and more effort has been devoted to their continuous updating. Currently, there exist two main types of methods for geospatial database updating: direct updating with remote sensing images or field survey materials, and indirect updating with other updated data, such as newly updated larger-scale data. The former method is fundamental, because the update data sources in both methods ultimately derive from field surveying and remote sensing; the latter is often more economical and faster. Therefore, after a larger-scale database is updated, the smaller-scale database should be updated correspondingly in order to keep multi-scale geospatial databases consistent. In this situation, it is very reasonable to apply map generalization technology to the process of geospatial database updating. This is recognized as one of the most promising methods of geospatial database updating, especially in a collaborative updating environment in which databases at different scales are produced and maintained separately by organizations at different levels, as in China. This paper focuses on applying digital map generalization to the updating of geospatial databases from larger-scale data in a collaborative updating environment for SDI. The requirements for applying map generalization to spatial database updating are analyzed first, and a brief review of geospatial data updating based on digital map generalization is then given.
Based on the requirements analysis and review, we analyze the key factors for implementing updating geospatial data from large scale including technical

  9. Alignment between galaxies and large-scale structure

    International Nuclear Information System (INIS)

    Faltenbacher, A.; Li Cheng; White, Simon D. M.; Jing, Yi-Peng; Mao Shude; Wang Jie

    2009-01-01

Based on the Sloan Digital Sky Survey DR6 (SDSS) and the Millennium Simulation (MS), we investigate the alignment between galaxies and large-scale structure. For this purpose, we develop two new statistical tools, namely the alignment correlation function and the cos(2θ)-statistic. The former is a two-dimensional extension of the traditional two-point correlation function and the latter is related to the ellipticity correlation function used for cosmic shear measurements. Both are based on the cross-correlation between a sample of galaxies with orientations and a reference sample which represents the large-scale structure. We apply the new statistics to the SDSS galaxy catalog. The alignment correlation function reveals an overabundance of reference galaxies along the major axes of red, luminous (L ≳ L∗) galaxies out to projected separations of 60 h⁻¹ Mpc. The signal increases with central galaxy luminosity. No alignment signal is detected for blue galaxies. The cos(2θ)-statistic yields very similar results. Starting from a MS semi-analytic galaxy catalog, we assign an orientation to each red, luminous, central galaxy, based on that of the central region of the host halo (with size similar to that of the stellar galaxy). As an alternative, we use the orientation of the host halo itself. We find a mean projected misalignment between a halo and its central region of ∼25 deg. The misalignment decreases slightly with increasing luminosity of the central galaxy. Using the orientations and luminosities of the semi-analytic galaxies, we repeat our alignment analysis on mock surveys of the MS. Agreement with the SDSS results is good if the central orientations are used. Predictions using the halo orientations as proxies for central galaxy orientations overestimate the observed alignment by more than a factor of 2. Finally, the large volume of the MS allows us to generate a two-dimensional map of the alignment correlation function, which shows the reference
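
    The cos(2θ)-statistic described above can be sketched in simplified form: for every pair of an oriented galaxy and a reference galaxy in a projected-separation bin, take the angle θ between the galaxy's major axis and the direction to the reference object, and average cos(2θ). This toy version omits the survey weighting and binning machinery of the real measurement; all names are illustrative.

    ```python
    import numpy as np

    def cos2theta_stat(shape_pos, shape_pa, ref_pos, r_min, r_max):
        """<cos 2θ> over pairs with projected separation in [r_min, r_max).
        shape_pos: (n, 2) positions of galaxies with measured orientations
        shape_pa:  (n,) position angles of their major axes (radians)
        ref_pos:   (m, 2) positions of the reference (large-scale structure) sample
        A positive result means reference objects lie preferentially along
        the major axes; negative means along the minor axes.
        """
        vals = []
        for (x, y), pa in zip(shape_pos, shape_pa):
            d = ref_pos - np.array([x, y])
            r = np.hypot(d[:, 0], d[:, 1])
            sel = (r >= r_min) & (r < r_max)
            # angle between the major axis and each separation vector
            phi = np.arctan2(d[sel, 1], d[sel, 0]) - pa
            vals.append(np.cos(2.0 * phi))
        vals = np.concatenate(vals)
        return vals.mean() if len(vals) else 0.0
    ```

    The factor of 2 makes the statistic insensitive to the 180° ambiguity of a position angle, the same convention used in ellipticity correlations for cosmic shear.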

  10. Reviving large-scale projects; La relance des grands chantiers

    Energy Technology Data Exchange (ETDEWEB)

    Desiront, A.

    2003-06-01

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890 foot long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to

  11. Food security through large scale investments in agriculture

    Science.gov (United States)

    Rulli, M.; D'Odorico, P.

    2013-12-01

Most of the human appropriation of freshwater resources is for food production. There is some concern that in the near future the finite freshwater resources available on Earth might not be sufficient to meet the increasing human demand for agricultural products. In the late 1700s Malthus argued that in the long run humanity would not have enough resources to feed itself. Malthus' analysis, however, did not account for the emergence of technological innovations that could increase the rate of food production. Modern and contemporary history has seen at least three major technological advances that have increased humans' access to food, namely the industrial revolution, the green revolution, and the intensification of global trade. Here we argue that a fourth revolution has just started to happen. It involves foreign direct investments in agriculture, which intensify the crop yields of potentially highly productive agricultural lands by introducing more modern technologies. The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of a lack of modern technology. It is expected that in the long run large-scale land acquisitions for commercial farming will bring the technology required to close the existing yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of verified land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with large-scale land acquisitions. We

  12. Large-Scale Spray Releases: Additional Aerosol Test Results

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used

  13. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the tool in design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements from the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economic and environmental performance are reported, based on the experience of the last decade. The measurements from the Marstal case are analysed, experience is extracted, and minor improvements to the plant design are proposed. For the detailed design and energy planning of CSDHPs, a computer simulation model is developed and validated against the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs in general. The meteorological reference data set, the Danish Reference Year, is applied to find the mean performance of the plant designs. To find the expected variability of the thermal performance of such plants, a method is proposed in which data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with the simulation tool, design studies are carried out, ranging from parameter analysis, through energy planning for a new settlement, to a proposal for combining plane solar collectors with high-performance solar collectors, exemplified by a trough collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also exposed the need to develop computer models for more advanced solar collector designs, and especially for the control and operation of CSDHPs. In the final chapter the CSDHP technology is put into perspective with respect to other possible technologies to find the relevance of the application

  14. Weakly oval electron lens

    International Nuclear Information System (INIS)

    Daumenov, T.D.; Alizarovskaya, I.M.; Khizirova, M.A.

    2001-01-01

A method of generating a weakly oval electrical field from an axially symmetric field is presented. Such a system may be designed using coaxial electrodes of cylindrical form with a built-in quadrupole doublet. A distinctive feature of this weakly oval lens is that it allows both mechanical and electronic adjustment. Such a lens can be useful for eliminating near-axis astigmatism in electron-optical systems.

  15. Locating inefficient links in a large-scale transportation network

    Science.gov (United States)

    Sun, Li; Liu, Like; Xu, Zhongzhi; Jie, Yang; Wei, Dong; Wang, Pu

    2015-02-01

Based on data from a geographical information system (GIS) and daily commuting origin-destination (OD) matrices, we estimated the distribution of traffic flow in the San Francisco road network and studied Braess's paradox in a large-scale transportation network with realistic travel demand. We measured the variation of total travel time ΔT when a road segment is closed, and found that |ΔT| follows a power-law distribution whether ΔT > 0 or ΔT < 0. This implies that most roads have a negligible effect on the efficiency of the road network, while the failure of a few crucial links would result in severe travel delays, and the closure of a few inefficient links would, counter-intuitively, reduce travel costs considerably. Generating three theoretical networks, we discovered that heterogeneously distributed travel demand may be the origin of the observed power-law distributions of |ΔT|. Finally, a genetic algorithm was used to pinpoint inefficient link clusters in the road network. We found that closing specific road clusters would further improve transportation efficiency.
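
    The quantity ΔT above can be illustrated with a deliberately simplified sketch: trips are routed on fixed-cost shortest paths, ignoring the congestion feedback that the actual study models. With fixed link costs a closure can only increase total travel time, so this toy version cannot reproduce ΔT < 0 (Braess's paradox, which requires flow-dependent costs and equilibrium assignment); it only shows the bookkeeping of closing a link and re-measuring T. All names are illustrative.

    ```python
    import heapq

    def shortest_time(graph, src, dst):
        """Dijkstra travel time from src to dst; graph: {u: {v: time}}."""
        dist = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                return d
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph.get(u, {}).items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return float("inf")

    def delta_T(graph, od_demand, edge):
        """Change in total travel time when directed link `edge` = (u, v)
        is closed.  od_demand: {(origin, destination): number_of_trips}.
        """
        def total_T(g):
            return sum(n * shortest_time(g, o, d)
                       for (o, d), n in od_demand.items())
        u, v = edge
        # rebuild the adjacency structure without the closed link
        closed = {a: {b: w for b, w in nbrs.items() if (a, b) != (u, v)}
                  for a, nbrs in graph.items()}
        return total_T(closed) - total_T(graph)
    ```

    In the study's setting, ΔT for each link is computed under congested equilibrium flows, which is what allows some closures to yield negative ΔT.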

  16. Open TG-GATEs: a large-scale toxicogenomics database

    Science.gov (United States)

    Igarashi, Yoshinobu; Nakatsu, Noriyuki; Yamashita, Tomoya; Ono, Atsushi; Ohno, Yasuo; Urushidani, Tetsuro; Yamada, Hiroshi

    2015-01-01

    Toxicogenomics focuses on assessing the safety of compounds using gene expression profiles. Gene expression signatures from large toxicogenomics databases are expected to perform better than small databases in identifying biomarkers for the prediction and evaluation of drug safety based on a compound's toxicological mechanisms in animal target organs. Over the past 10 years, the Japanese Toxicogenomics Project consortium (TGP) has been developing a large-scale toxicogenomics database consisting of data from 170 compounds (mostly drugs) with the aim of improving and enhancing drug safety assessment. Most of the data generated by the project (e.g. gene expression, pathology, lot number) are freely available to the public via Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System). Here, we provide a comprehensive overview of the database, including both gene expression data and metadata, with a description of experimental conditions and procedures used to generate the database. Open TG-GATEs is available from http://toxico.nibio.go.jp/english/index.html. PMID:25313160

  17. Large scale photovoltaic field trials. Second technical report: monitoring phase

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-09-15

    This report provides an update on the Large-Scale Building Integrated Photovoltaic Field Trials (LS-BIPV FT) programme commissioned by the Department of Trade and Industry (Department for Business, Enterprise and Industry; BERR). It provides detailed profiles of the 12 projects making up this programme, which is part of the UK programme on photovoltaics and has run in parallel with the Domestic Field Trial. These field trials aim to record the experience and use the lessons learnt to raise awareness of, and confidence in, the technology and increase UK capabilities. The projects involved: the visitor centre at the Gaia Energy Centre in Cornwall; a community church hall in London; council offices in West Oxfordshire; a sports science centre at Gloucester University; the visitor centre at Cotswold Water Park; the headquarters of the Insolvency Service; a Welsh Development Agency building; an athletics centre in Birmingham; a research facility at the University of East Anglia; a primary school in Belfast; and Barnstable civic centre in Devon. The report describes the aims of the field trials, monitoring issues, performance, observations and trends, lessons learnt and the results of occupancy surveys.

  18. Periodic cells for large-scale problem initialization

    Directory of Open Access Journals (Sweden)

    Ciantia Matteo O.

    2017-01-01

Full Text Available In geotechnical applications, the success of the discrete element method (DEM) in simulating fundamental aspects of soil behaviour has increased interest in its application to the direct simulation of engineering-scale boundary value problems (BVPs). The main problem is that the method remains relatively expensive in terms of computational cost, and a non-negligible part of that cost is related to specimen creation and initialization. As the response of soil is strongly dependent on its initial state (stress and porosity), attaining a specified initial state is a crucial part of a DEM model. Different procedures for controlled sample generation are available; however, applying the existing REV-oriented initialization procedures to such models is inefficient in terms of computational cost and challenging in terms of sample homogeneity. In this work a simple but efficient procedure to initialize large-scale DEM models is presented. Periodic cells are first generated with a sufficient number of particles matching a desired particle size distribution (PSD). The cells are then equilibrated at a low isotropic stress level at the target porosity. Once a cell is in equilibrium, it is replicated in space in order to fill the model domain; after the domain is filled, a few mechanical cycles are needed to re-equilibrate the large domain. The result is a large, homogeneous sample, equilibrated under the prescribed stress at the desired porosity. The method is applicable to both isotropic and anisotropic initial stress states, with stress magnitude varying in space.
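
    The replication step of the procedure above can be sketched as follows: an equilibrated periodic cell is copied on a regular grid of translations to fill the model domain. Names are illustrative; in a real DEM code this assembly would be followed by the re-equilibration cycles mentioned in the abstract to smooth the seams between replicas.

    ```python
    import numpy as np

    def tile_periodic_cell(cell_pos, radii, cell_size, reps):
        """Fill a large domain by replicating an equilibrated periodic cell.
        cell_pos:  (n, 3) particle centres inside one periodic cell
        radii:     (n,) particle radii, copied with each replica
        cell_size: (3,) edge lengths of the periodic cell
        reps:      (3,) number of copies along x, y, z
        Returns positions and radii of the assembled sample.
        """
        cell_size = np.asarray(cell_size, dtype=float)
        positions, all_radii = [], []
        for i in range(reps[0]):
            for j in range(reps[1]):
                for k in range(reps[2]):
                    # shift every particle of the cell by a whole-cell translation
                    shift = np.array([i, j, k]) * cell_size
                    positions.append(cell_pos + shift)
                    all_radii.append(radii)
        return np.vstack(positions), np.concatenate(all_radii)
    ```

    Because the cell is periodic, particles near one face mate correctly with the image particles of the neighbouring replica, which is what keeps the assembled sample nearly homogeneous before the final equilibration.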

  19. Large Scale Obscuration and Related Climate Effects Workshop: Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Zak, B.D.; Russell, N.A.; Church, H.W.; Einfeld, W.; Yoon, D.; Behl, Y.K. [eds.

    1994-05-01

A Workshop on Large Scale Obscuration and Related Climate Effects was held 29-31 January 1992, in Albuquerque, New Mexico. The objectives of the workshop were: to determine, through the use of expert judgement, the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for prioritizing further work on newly available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).

  20. Large-Scale Seismic Test Program at Hualien, Taiwan

    International Nuclear Information System (INIS)

    Tang, H.T.; Graves, H.L.; Yeh, Y.S.

    1991-01-01

The Large-Scale Seismic Test (LSST) Program at Hualien, Taiwan, is a follow-on to the soil-structure interaction (SSI) experiments at Lotung, Taiwan. The planned SSI studies will be performed at a stiff soil site in Hualien that historically has had slightly more destructive earthquakes than Lotung. The objectives of the LSST project are as follows: to obtain earthquake-induced SSI data at a stiff soil site having soil conditions similar to those of a prototypical nuclear power plant; to confirm the findings and methodologies validated against the Lotung soft-soil SSI data for prototypical plant condition applications; to further validate the technical basis of realistic SSI analysis approaches; and to further support the resolution of the USI A-40 Seismic Design Criteria issue. These objectives will be accomplished through an integrated and carefully planned experimental program consisting of: soil characterization, test model design and field construction, instrumentation layout and deployment, in-situ geophysical information collection, forced vibration tests, and synthesis of results and findings. The LSST is a joint effort among many interested parties. EPRI and Taipower are the organizers of the program and have the lead in planning and managing it.

  1. Boundary element method solution for large scale cathodic protection problems

    Science.gov (United States)

    Rodopoulos, D. C.; Gortsas, T. V.; Tsinopoulos, S. V.; Polyzos, D.

    2017-12-01

    Cathodic protection techniques are widely used to avoid corrosion in offshore structures. The Boundary Element Method (BEM) is an ideal method for solving such problems because it requires meshing only the boundary, not the whole domain of the electrolyte as the Finite Element Method does. This advantage becomes more pronounced in cathodic protection systems, since electrochemical reactions occur mainly on the surface of the metallic structure. The present work solves numerically a sacrificial cathodic protection problem for a large offshore platform. The solution of this large-scale problem is accomplished by means of “PITHIA Software”, a BEM package enhanced by Hierarchical Matrices (HM) and Adaptive Cross Approximation (ACA) techniques that drastically accelerate the computations and reduce memory requirements. The nonlinear polarization curves for steel and aluminium in seawater are employed as boundary conditions for the protected metallic surfaces and the aluminium anodes, respectively. The potential as well as the current density over the entire surface of the platform are effectively evaluated and presented.
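
    As a side note on the nonlinear boundary conditions mentioned above: a protected structure settles at the mixed potential where the current supplied by the sacrificial anodes balances the current drawn by the steel cathode. The sketch below finds that potential by bisection for purely hypothetical Tafel-type polarization curves; the parameter values are illustrative assumptions, not data from this work or from real seawater measurements.

```python
def anode_current(E):
    """Hypothetical Tafel curve for an aluminium sacrificial anode (A/m^2):
    the current it supplies grows as the potential E (V) rises above its
    corrosion potential. Parameters are illustrative only."""
    return 10.0 ** ((E - (-1.05)) / 0.06)

def cathode_demand(E):
    """Hypothetical Tafel curve for the protected steel surface: the current
    it draws grows as E falls below its corrosion potential."""
    return 10.0 ** ((-0.65 - E) / 0.12)

def mixed_potential(lo=-1.05, hi=-0.65, tol=1e-9):
    """Bisect for the potential at which anode supply equals cathode demand."""
    f = lambda E: anode_current(E) - cathode_demand(E)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

E_protect = mixed_potential()   # about -0.917 V for these made-up parameters
```

A full BEM solver enforces this kind of current balance pointwise over the whole discretized surface rather than at a single node.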

  2. Experimental study on dynamic behavior of large scale foundation, 1

    International Nuclear Information System (INIS)

    Hanada, Kazufumi; Sawada, Yoshihiro; Esashi, Yasuyuki; Ueshima, Teruyuki; Nakamura, Hideharu

    1983-01-01

    The large-sized, high-performance vibrating table in the Nuclear Power Engineering Test Center is installed on a large-scale concrete foundation 90.9 m long, 44.8 m wide and up to 21 m thick, weighing 150,000 tons. Through the experimental study of the behavior of this foundation, which rests on gravel ground, useful information should be obtained on the siting of nuclear power plants on Quaternary strata. The objective of the research is to grasp the vibration characteristics of the foundation during operation of the table, to evaluate the interaction between the foundation and the ground, and to evaluate an analytical method for numerically simulating the vibration behavior. In the present study, the vibration behavior of the foundation was clarified by measurement, and in order to predict the vibration behavior, the semi-infinite theory of elasticity was applied. The accuracy of this analytical method was demonstrated by comparison with the measured results. (Mori, K.)

  3. Enabling High Performance Large Scale Dense Problems through KBLAS

    KAUST Repository

    Abdelfattah, Ahmad

    2014-05-04

    KBLAS (KAUST BLAS) is a small library that provides highly optimized BLAS routines on systems accelerated with GPUs. KBLAS is entirely written in CUDA C and targets NVIDIA GPUs with compute capability 2.0 (Fermi) or higher. The current focus is on level-2 BLAS routines, namely the general matrix-vector multiplication (GEMV) kernel and the symmetric/Hermitian matrix-vector multiplication (SYMV/HEMV) kernel. KBLAS provides these two kernels in all four precisions (s, d, c, and z), with support for multi-GPU systems. Through advanced optimization techniques that target latency hiding and pushing memory bandwidth to the limit, KBLAS outperforms state-of-the-art kernels by 20-90%. Competitors include CUBLAS-5.5, MAGMABLAS-1.4.0, and CULA R17. The SYMV/HEMV kernel from KBLAS has been adopted by NVIDIA and should appear in CUBLAS-6.0. KBLAS has been used in large-scale simulations of multi-object adaptive optics.
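
    For reference, the GEMV operation named above computes y ← αAx + βy. It performs roughly 2mn flops while reading each of the mn matrix elements exactly once, which is why the kernel is memory-bound and why the abstract stresses pushing memory bandwidth to the limit. A plain-Python sketch of the semantics (not the KBLAS API) follows.

```python
def gemv(alpha, A, x, beta, y):
    """Reference y := alpha*A@x + beta*y for an m-by-n matrix A given as a
    list of rows. This fixes the semantics of the level-2 BLAS kernel that
    KBLAS optimizes on the GPU; it is not KBLAS's actual interface."""
    m, n = len(A), len(x)
    for i in range(m):
        acc = sum(A[i][j] * x[j] for j in range(n))
        y[i] = alpha * acc + beta * y[i]
    return y

# A = [[1, 2], [3, 4]], x = [1, 1], alpha = 1, beta = 0:
y = gemv(1.0, [[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0], 0.0, [0.0, 0.0])
# y == [3.0, 7.0]
```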

  4. Logistics of large scale commercial IVF embryo production.

    Science.gov (United States)

    Blondin, P

    2016-01-01

    The use of IVF in agriculture is growing worldwide. This can be explained by the development of better IVF media and techniques, development of sexed semen and the recent introduction of bovine genomics on farms. Being able to perform IVF on a large scale, with multiple on-farm experts to perform ovum pick-up and IVF laboratories capable of handling large volumes in a consistent and sustainable way, remains a huge challenge. To be successful, there has to be a partnership between veterinarians on farms, embryologists in the laboratory and animal owners. Farmers must understand the limits of what IVF can or cannot do under different conditions; veterinarians must manage expectations of farmers once strategies have been developed regarding potential donors; and embryologists must maintain fluent communication with both groups to make sure that objectives are met within predetermined budgets. The logistics of such operations can be very overwhelming, but the return can be considerable if done right. The present mini review describes how such operations can become a reality, with an emphasis on the different aspects that must be considered by all parties.

  5. Episodic memory in aspects of large-scale brain networks

    Directory of Open Access Journals (Sweden)

    Woorim Jeong

    2015-08-01

    Understanding human episodic memory in aspects of large-scale brain networks has become one of the central themes in neuroscience over the last decade. Traditionally, episodic memory was regarded as relying mostly on medial temporal lobe (MTL) structures. However, recent studies have suggested the involvement of a more widely distributed cortical network and the importance of its interactive roles in the memory process. Both direct and indirect neuro-modulations of the memory network have been tried in experimental treatments of memory disorders. In this review, we focus on the functional organization of the MTL and other neocortical areas in episodic memory. Task-related neuroimaging studies, together with lesion studies, suggest that specific sub-regions of the MTL are responsible for specific components of memory. However, recent studies have emphasized that connectivity within MTL structures, and even their network dynamics with other cortical areas, are essential in the memory process. Resting-state functional network studies have also revealed that memory function is subserved not only by the MTL system but also by a distributed network, particularly the default-mode network. Furthermore, researchers have begun to investigate memory networks throughout the entire brain, not restricted to specific resting-state networks. Altered patterns of functional connectivity among distributed brain regions were observed in patients with memory impairments. Recently, studies have shown that brain stimulation may impact memory through modulating functional networks, carrying future implications for a novel interventional therapy for memory impairment.

  6. FFTLasso: Large-Scale LASSO in the Fourier Domain

    KAUST Repository

    Bibi, Adel Aamer

    2017-11-09

    In this paper, we revisit the LASSO sparse representation problem, which has been studied and used in a variety of areas, ranging from signal processing and information theory to computer vision and machine learning. In the vision community, it has found its way into many important applications, including face recognition, tracking, super-resolution and image denoising, to name a few. Despite advances in efficient sparse algorithms, solving large-scale LASSO problems remains a challenge. To circumvent this difficulty, practitioners tend to downsample and subsample the problem (e.g. via dimensionality reduction) to maintain a manageably sized LASSO, which usually comes at the cost of solution accuracy. This paper proposes a novel circulant reformulation of the LASSO that lifts the problem to a higher dimension, where ADMM can be efficiently applied to its dual form. Because of this lifting, all optimization variables are updated using only basic element-wise operations, the most computationally expensive of which is a 1D FFT. In this way, there is no need for a linear system solver or explicit matrix-vector multiplication. Since all operations in our FFTLasso method are element-wise, the subproblems are completely independent and can be trivially parallelized (e.g. on a GPU). The attractive computational properties of FFTLasso are verified by extensive experiments on synthetic and real data and on the face recognition task. They demonstrate that FFTLasso scales much more effectively than a state-of-the-art solver.
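
    The core trick can be illustrated compactly: when the dictionary is circulant, multiplying by it (or by its transpose) is an element-wise product in the Fourier domain, so a proximal-gradient LASSO solver needs nothing heavier than a 1D FFT per iteration. The sketch below uses ISTA rather than the paper's dual ADMM formulation, purely to keep the example short; it illustrates the FFT-based matvec idea, not the FFTLasso algorithm itself.

```python
import numpy as np

def circ_matvec(c_hat, v):
    """Multiply by the circulant matrix whose first column has FFT c_hat."""
    return np.fft.ifft(c_hat * np.fft.fft(v)).real

def circ_rmatvec(c_hat, v):
    """Multiply by its transpose: conjugate the spectrum."""
    return np.fft.ifft(np.conj(c_hat) * np.fft.fft(v)).real

def lasso_ista_circulant(c, b, lam, iters=500):
    """min_x 0.5*||Cx - b||^2 + lam*||x||_1 for circulant C, by ISTA.
    Every iteration is two FFT-based matvecs plus element-wise updates;
    no linear system is solved and no explicit matrix is ever formed."""
    c_hat = np.fft.fft(c)
    L = np.max(np.abs(c_hat)) ** 2       # Lipschitz constant of the gradient
    x = np.zeros_like(b)
    for _ in range(iters):
        grad = circ_rmatvec(c_hat, circ_matvec(c_hat, x) - b)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```

With b generated from a sparse ground truth and a small penalty, the iterate drives the LASSO objective well below its value at zero, all without forming the n-by-n circulant matrix.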

  7. Static micromixers based on large-scale industrial mixer geometry.

    Science.gov (United States)

    Bertsch, A; Heimgartner, S; Cousseau, P; Renaud, P

    2001-09-01

    Mixing liquids at the micro-scale is difficult because the low Reynolds numbers in microchannels and microreactors rule out conventional mixing techniques, which rely on mechanical actuators to induce turbulence. Static mixers can be used to solve this mixing problem. This paper presents micromixers with geometries very close to those of conventional large-scale static mixers used in the chemical and food-processing industries. Two kinds of geometries have been studied. The first type is composed of a series of stationary rigid elements that form intersecting channels to split, rearrange and combine component streams. The second type is composed of a series of short helix elements arranged in pairs, each pair comprising a right-handed and a left-handed element arranged alternately in a pipe. Micromixers of both types were designed by CAD and manufactured with the integral microstereolithography process, a new microfabrication technique that allows the manufacture of complex three-dimensional objects in polymers. The realized mixers were tested experimentally. Numerical simulations of these micromixers using the computational fluid dynamics (CFD) program FLUENT are used to evaluate the mixing efficiency. With a low pressure drop and good mixing efficiency, these truly three-dimensional micromixers can be used for mixing reactants or liquids containing cells in many microTAS applications.
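
    The scale argument in the opening sentence is easy to quantify: the Reynolds number Re = ρvD/μ shrinks with the channel dimension, so flows that are strongly turbulent at industrial scale are creeping and laminar in a microchannel. The numbers below are representative assumptions for water, not measurements from the paper.

```python
def reynolds(rho, v, d, mu):
    """Reynolds number Re = rho*v*d/mu for a fluid of density rho (kg/m^3)
    moving at speed v (m/s) through a characteristic length d (m), with
    dynamic viscosity mu (Pa s)."""
    return rho * v * d / mu

# Water in a 100-micron microchannel at 1 mm/s (representative values):
re_micro = reynolds(1000.0, 1e-3, 100e-6, 1e-3)   # ~0.1, creeping flow
# Water in a 5 cm pipe at 1 m/s:
re_pipe = reynolds(1000.0, 1.0, 0.05, 1e-3)       # ~50000, fully turbulent
```

The microchannel value sits orders of magnitude below the ~2300 transition threshold for pipe flow, which is why static mixers that fold and recombine streams are needed instead of stirring.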

  8. Countercurrent tangential chromatography for large-scale protein purification.

    Science.gov (United States)

    Shinkazh, Oleg; Kanani, Dharmesh; Barth, Morgan; Long, Matthew; Hussain, Daniar; Zydney, Andrew L

    2011-03-01

    Recent advances in cell culture technology have created significant pressure on the downstream purification process, leading to a "downstream bottleneck" in the production of recombinant therapeutic proteins for the treatment of cancer, genetic disorders, and cardiovascular disease. Countercurrent tangential chromatography overcomes many of the limitations of conventional column chromatography by having the resin (in the form of a slurry) flow through a series of static mixers and hollow fiber membrane modules. The buffers used in the binding, washing, and elution steps flow countercurrent to the resin, enabling high-resolution separations while reducing the amount of buffer needed for protein purification. The results obtained in this study provide the first experimental demonstration of the feasibility of using countercurrent tangential chromatography for the separation of a model protein mixture containing bovine serum albumin and myoglobin using a commercially available anion exchange resin. Batch uptake/desorption experiments were used in combination with critical flux data for the hollow fiber filters to design the countercurrent tangential chromatography system. A two-stage batch separation yielded the purified target protein at >99% purity with 94% recovery. The results clearly demonstrate the potential of using countercurrent tangential chromatography for the large-scale purification of therapeutic proteins. Copyright © 2010 Wiley Periodicals, Inc.

  9. Fast large-scale object retrieval with binary quantization

    Science.gov (United States)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search an image database for images that contain a target object. Whereas state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates for local search within a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which adapts naturally to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and the ID of the box containing the SIFT feature, is compact and can be loaded into main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
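
    The indexing scheme described above can be shown with a toy example: each descriptor maps to a bit-vector, and the bit-vector serves as the key into an inverted file whose postings are (image, box) pairs. The per-dimension threshold test below is a naive stand-in for the paper's binary quantization of SIFT features; all names and values here are hypothetical.

```python
from collections import defaultdict

def quantize(desc, thresholds):
    """Map a real-valued descriptor to an integer bit-vector key: one bit
    per dimension, set when the component exceeds its threshold. A naive
    stand-in for the paper's learned binary quantization."""
    bits = 0
    for val, t in zip(desc, thresholds):
        bits = (bits << 1) | (1 if val > t else 0)
    return bits

class InvertedFile:
    """Inverted file keyed on bit-vectors; postings are (image, box) IDs."""
    def __init__(self, thresholds):
        self.thresholds = thresholds
        self.postings = defaultdict(list)

    def add(self, desc, image_id, box_id):
        self.postings[quantize(desc, self.thresholds)].append((image_id, box_id))

    def query(self, desc):
        return self.postings.get(quantize(desc, self.thresholds), [])

index = InvertedFile(thresholds=[0.5, 0.5, 0.5, 0.5])
index.add([0.9, 0.1, 0.8, 0.2], image_id=7, box_id=0)   # key 0b1010
index.add([0.6, 0.4, 0.7, 0.1], image_id=3, box_id=2)   # same key
hits = index.query([0.8, 0.3, 0.9, 0.0])
# hits == [(7, 0), (3, 2)]
```

Because the key is a small integer, the whole structure fits comfortably in main memory, which is the property the abstract highlights.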

  10. FRVT 2006 and ICE 2006 large-scale experimental results.

    Science.gov (United States)

    Phillips, P Jonathon; Scruggs, W Todd; O'Toole, Alice J; Flynn, Patrick J; Bowyer, Kevin W; Schott, Cathy L; Sharpe, Matthew

    2010-05-01

    This paper describes the large-scale experimental results from the Face Recognition Vendor Test (FRVT) 2006 and the Iris Challenge Evaluation (ICE) 2006. The FRVT 2006 looked at recognition from high-resolution still frontal face images and 3D face images, and measured performance for still frontal face images taken under controlled and uncontrolled illumination. The ICE 2006 evaluation reported verification performance for both left and right irises. The images in ICE 2006 intentionally represent a broader range of quality than the sensor would normally acquire, including images that did not pass the quality-control software embedded in the sensor. The FRVT 2006 results from controlled still and 3D images document at least an order-of-magnitude improvement in recognition performance over the FRVT 2002. The FRVT 2006 and the ICE 2006 compared recognition performance from high-resolution still frontal face images, 3D face images, and single-iris images. On the FRVT 2006 and the ICE 2006 data sets, recognition performance was comparable for high-resolution frontal face, 3D face, and iris images. In an experiment comparing humans and algorithms on matching face identity across changes in illumination in frontal face images, the best-performing algorithms were more accurate than humans on unfamiliar faces.

  11. Punishment sustains large-scale cooperation in prestate warfare

    Science.gov (United States)

    Mathew, Sarah; Boyd, Robert

    2011-01-01

    Understanding cooperation and punishment in small-scale societies is crucial for explaining the origins of human cooperation. We studied warfare among the Turkana, a politically uncentralized, egalitarian, nomadic pastoral society in East Africa. Based on a representative sample of 88 recent raids, we show that the Turkana sustain costly cooperation in combat at a remarkably large scale, at least in part through punishment of free-riders. Raiding parties comprise several hundred warriors, and participants are not kin or day-to-day interactants. Warriors incur a substantial risk of death and produce collective benefits. Cowardice and desertions occur and are punished by community-imposed sanctions, including collective corporal punishment and fines. Furthermore, Turkana norms governing warfare benefit the ethnolinguistic group, a population of half a million people, at the expense of smaller social groupings. These results challenge current views that punishment is unimportant in small-scale societies and that human cooperation evolved in small groups of kin and familiar individuals. Instead, they suggest that cooperation at the larger scale of ethnolinguistic units, enforced by third-party sanctions, could have a deep evolutionary history in the human species. PMID:21670285

  12. Protein homology model refinement by large-scale energy optimization.

    Science.gov (United States)

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  13. Large-scale structure phenomenology of viable Horndeski theories

    Science.gov (United States)

    Peirone, Simone; Koyama, Kazuya; Pogosian, Levon; Raveri, Marco; Silvestri, Alessandra

    2018-02-01

    Phenomenological functions Σ and μ, also known as Glight/G and Gmatter/G, are commonly used to parametrize modifications of the growth of large-scale structure in alternative theories of gravity. We study the values these functions can take in Horndeski theories, i.e., the class of scalar-tensor theories with second order equations of motion. We restrict our attention to models that are in broad agreement with tests of gravity and the observed cosmic expansion history. In particular, we require the speed of gravity to be equal to the speed of light today, as required by the recent detection of gravitational waves and electromagnetic emission from a binary neutron star merger. We examine the correlations between the values of Σ and μ analytically within the quasistatic approximation and numerically by sampling the space of allowed solutions. We confirm that the conjecture made in [L. Pogosian and A. Silvestri, Phys. Rev. D 94, 104014 (2016), 10.1103/PhysRevD.94.104014], that (Σ − 1)(μ − 1) ≥ 0 in viable Horndeski theories, holds very well. Along with that, we check the validity of the quasistatic approximation within different corners of Horndeski theory. Our results show that, even with the tight bound on the present-day speed of gravitational waves, there is room within Horndeski theories for nontrivial signatures of modified gravity at the level of linear perturbations.
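
    In a common quasistatic convention (e.g. that of Pogosian and Silvestri), the two functions modify the Poisson equations for the metric potential Ψ felt by non-relativistic matter and the lensing combination Φ + Ψ felt by light:

```latex
k^2 \Psi = -4\pi G\,\mu(a,k)\, a^2 \rho \Delta, \qquad
k^2 (\Phi + \Psi) = -8\pi G\,\Sigma(a,k)\, a^2 \rho \Delta,
```

so μ rescales the effective Newton constant for structure growth (hence Gmatter/G) while Σ rescales the one for lensing (hence Glight/G). General relativity corresponds to μ = Σ = 1, and the conjecture (Σ − 1)(μ − 1) ≥ 0 states that viable Horndeski models shift both functions away from unity in the same direction.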

  14. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    Energy Technology Data Exchange (ETDEWEB)

    Gabert, Kasimir [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Burns, Ian [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Elliott, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kallaher, Jenna [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vail, Adam [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-09-01

    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to “freeze time” and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  15. Large-Scale Candidate Gene Analysis of HDL Particle Features

    Science.gov (United States)

    Kaess, Bernhard M.; Tomaszewski, Maciej; Braund, Peter S.; Stark, Klaus; Rafelt, Suzanne; Fischer, Marcus; Hardwick, Robert; Nelson, Christopher P.; Debiec, Radoslaw; Huber, Fritz; Kremer, Werner; Kalbitzer, Hans Robert; Rose, Lynda M.; Chasman, Daniel I.; Hopewell, Jemma; Clarke, Robert; Burton, Paul R.; Tobin, Martin D.

    2011-01-01

    Background: HDL cholesterol (HDL-C) is an established marker of cardiovascular risk with significant genetic determination. However, HDL particles are not homogeneous, and refined HDL phenotyping may improve insight into the regulation of HDL metabolism. We therefore assessed HDL particles by NMR spectroscopy and conducted a large-scale candidate gene association analysis. Methodology/Principal Findings: We measured plasma HDL-C and determined mean HDL particle size and particle number by NMR spectroscopy in 2024 individuals from 512 British Caucasian families. Genotyping covered 49,094 SNPs in >2,100 cardiometabolic candidate genes/loci as represented on the HumanCVD BeadChip version 2. False discovery rates (FDR) were calculated to account for multiple testing. Analyses of classical HDL-C revealed significant associations; analyses of HDL particle size yielded additional associations in LIPC (hepatic lipase; rs261332: p = 6.1*10−9), PLTP (phospholipid transfer protein; rs4810479: p = 1.7*10−8) and FBLN5 (fibulin-5; rs2246416: p = 6.2*10−6). The associations of SGCD and fibulin-5 with HDL particle size could not be replicated in PROCARDIS (n = 3,078) and/or the Women's Genome Health Study (n = 23,170). Conclusions: We show that refined HDL phenotyping by NMR spectroscopy can detect known genes of HDL metabolism better than analyses of HDL-C. PMID:21283740

  16. Episodic memory in aspects of large-scale brain networks

    Science.gov (United States)

    Jeong, Woorim; Chung, Chun Kee; Kim, June Sic

    2015-01-01

    Understanding human episodic memory in aspects of large-scale brain networks has become one of the central themes in neuroscience over the last decade. Traditionally, episodic memory was regarded as relying mostly on medial temporal lobe (MTL) structures. However, recent studies have suggested the involvement of a more widely distributed cortical network and the importance of its interactive roles in the memory process. Both direct and indirect neuro-modulations of the memory network have been tried in experimental treatments of memory disorders. In this review, we focus on the functional organization of the MTL and other neocortical areas in episodic memory. Task-related neuroimaging studies, together with lesion studies, suggest that specific sub-regions of the MTL are responsible for specific components of memory. However, recent studies have emphasized that connectivity within MTL structures, and even their network dynamics with other cortical areas, are essential in the memory process. Resting-state functional network studies have also revealed that memory function is subserved not only by the MTL system but also by a distributed network, particularly the default-mode network (DMN). Furthermore, researchers have begun to investigate memory networks throughout the entire brain, not restricted to a specific resting-state network (RSN). Altered patterns of functional connectivity (FC) among distributed brain regions were observed in patients with memory impairments. Recently, studies have shown that brain stimulation may impact memory through modulating functional networks, carrying future implications for a novel interventional therapy for memory impairment. PMID:26321939

  17. Large-scale demonstration of D&D technologies

    International Nuclear Information System (INIS)

    Bhattacharyya, S.K.; Black, D.B.; Rose, R.W.

    1997-01-01

    It is becoming increasingly evident that new technologies will need to be utilized for decontamination and decommissioning (D&D) activities in order to assure safe and cost-effective operations. The magnitude of the international D&D problem is sufficiently large in anticipated cost (hundreds of billions of dollars) and in elapsed time (decades) that the utilization of new technologies should lead to substantial improvements in cost and safety performance. Adoption of new technologies in the generally highly contaminated D&D environments requires assurances that the technology will perform as advertised. Such assurances can be obtained from demonstrations of the technology in environments that are similar to the actual environments without being quite as contaminated and hazardous. The Large Scale Demonstration Project (LSDP) concept was designed to provide such a function. The first LSDP funded by the U.S. Department of Energy's Environmental Management Office (EM) was on the Chicago Pile 5 (CP-5) Reactor at Argonne National Laboratory. The project, conducted by a Strategic Alliance for Environmental Restoration, has completed demonstrations of 10 D&D technologies and is in the process of comparing their performance to baseline technologies. At the conclusion of the project, a catalog of performance comparisons of these technologies will be developed that will be suitable for use by future D&D planners

  18. Efficient Topology Estimation for Large Scale Optical Mapping

    CERN Document Server

    Elibol, Armagan; Garcia, Rafael

    2013-01-01

    Large-scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state of the art in large-area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...
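
    A minimal version of topology estimation can be sketched from the trajectory alone: flag non-time-consecutive image pairs whose camera positions lie closer than an assumed footprint radius. Real systems also verify the match from image content; this toy ignores that and is only meant to illustrate the idea.

```python
def overlapping_pairs(positions, radius, min_gap=2):
    """Candidate non-time-consecutive overlapping image pairs: indices
    (i, j) whose 2D camera positions lie within `radius` (an assumed
    image footprint size) and that are at least `min_gap` frames apart."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + min_gap, len(positions)):
            xi, yi = positions[i]
            xj, yj = positions[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                pairs.append((i, j))
    return pairs

# A two-leg lawnmower track: out along y=0, back along y=1.
track = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
pairs = overlapping_pairs(track, radius=1.5)
# pairs == [(0, 4), (0, 5), (1, 3), (1, 4), (1, 5), (2, 4)]
```

The cross-leg pairs found here are exactly the loop closures that make a mosaic globally consistent instead of drifting along the trajectory.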

  19. Flat-Land Large-Scale Electricity Storage (FLES)

    Directory of Open Access Journals (Sweden)

    Schalij R.

    2012-10-01

    Growth of renewable sources requires a smarter electricity grid, integrating multiple solutions for large-scale storage. Pumped storage is still the most valid option. The capacity of existing facilities is not sufficient to accommodate future renewable resources, and new locations for additional pumped storage capacity are scarce. Mountainous areas are mostly remote and do not allow construction of large facilities for ecological reasons. In the Netherlands, underground solutions have been studied for many years. The use of (former) coal mines was rejected after scientific research. Further research showed that solid rock formations below the (unstable) coal layers can be harnessed to excavate the lower water reservoir for pumped storage, making an innovative underground solution possible. A complete plan was developed, with a capacity of 1400 MW (8 GWh daily output) and a head of 1400 m. It is technically and economically feasible. Compared to conventional pumped storage it has significantly less impact on the environment, and less vulnerable locations are eligible. The reservoir on the surface (only one instead of two) is relatively small. It also offers a solution for other European countries. The Dutch studies provide a valuable basis for new locations.
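
    The quoted figures can be sanity-checked with the hydrostatic energy relation E = ρghV. Taking the 8 GWh daily output and the 1400 m head, and assuming a 90% generating efficiency (an assumption, not a number from the text), the lower reservoir must cycle roughly 2.3 million cubic metres of water per day, consistent with the claim that the surface reservoir can be relatively small.

```python
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def reservoir_volume_m3(energy_gwh, head_m, efficiency=0.9):
    """Water volume needed to deliver `energy_gwh` of electricity from
    `head_m` of head. The 0.9 generating efficiency is an illustrative
    assumption, not a figure from the source."""
    energy_j = energy_gwh * 3.6e12          # 1 GWh = 3.6e12 J
    return energy_j / (RHO * G * head_m * efficiency)

vol = reservoir_volume_m3(8, 1400)          # roughly 2.3 million m^3
```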

  20. Large-scale functional purification of recombinant HIV-1 capsid.

    Directory of Open Access Journals (Sweden)

    Magdeleine Hung

    During human immunodeficiency virus type-1 (HIV-1) virion maturation, capsid proteins undergo a major rearrangement to form a conical core that protects the viral nucleoprotein complexes. Mutations in the capsid sequence that alter the stability of the capsid core are deleterious to viral infectivity and replication. Recently, capsid assembly has become an attractive target for the development of a new generation of anti-retroviral agents. Drug screening efforts and subsequent structural and mechanistic studies require gram quantities of active, homogeneous and pure protein. Conventional means of laboratory purification of Escherichia coli-expressed recombinant capsid protein rely on column chromatography steps that are not amenable to large-scale production. Here we present a function-based purification of wild-type and quadruple-mutant capsid proteins which relies on the inherent propensity of capsid protein to polymerize and depolymerize. This method does not require the packing of sizable chromatography columns and can generate double-digit gram quantities of functionally and biochemically well-behaved protein with greater than 98% purity. We have used the purified capsid protein to characterize two known assembly inhibitors in our in-house developed polymerization assay and to measure their binding affinities. Our capsid purification procedure provides a robust method for purifying large quantities of a key protein in the HIV-1 life cycle, facilitating identification of next-generation anti-HIV agents.

  1. Development of a Large Scale, High Speed Wheel Test Facility

    Science.gov (United States)

    Kondoleon, Anthony; Seltzer, Donald; Thornton, Richard; Thompson, Marc

    1996-01-01

    Draper Laboratory, with its internal research and development budget, has for the past two years been funding a joint effort with the Massachusetts Institute of Technology (MIT) for the development of a large-scale, high-speed wheel test facility. This facility was developed to perform experiments and carry out evaluations of levitation and propulsion designs for MagLev systems currently under consideration. The facility was developed to rotate a large (2 meter) wheel at peripheral speeds greater than 100 meters/second. The rim of the wheel was constructed of a non-magnetic, non-conductive composite material to avoid the generation of errors from spurious forces. A sensor package containing a multi-axis force and torque sensor, mounted to the base of the station, provides signals of the lift and drag forces on the package being tested. Position tables mounted on the station allow for the introduction of errors in real time. A computer-controlled data acquisition system was developed around a Macintosh IIfx to record the test data and control the speed of the wheel. This paper describes the development of this test facility. A detailed description of the major components is presented. Recently completed tests, carried out on a novel electrodynamic (EDS) suspension system developed by MIT as part of this joint effort, are described and presented. Adaptation of this facility for linear motor and other propulsion and levitation testing is described.
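
    The headline numbers imply the required rotation rate: a 2 m diameter wheel reaches a 100 m/s rim speed at roughly 955 rpm, as the short check below shows.

```python
import math

def rim_speed(diameter_m, rpm):
    """Peripheral (rim) speed in m/s of a wheel of the given diameter
    spinning at the given revolutions per minute."""
    return math.pi * diameter_m * rpm / 60.0

def rpm_for_rim_speed(diameter_m, v):
    """Rotation rate (rpm) needed to reach peripheral speed v (m/s)."""
    return 60.0 * v / (math.pi * diameter_m)

rpm = rpm_for_rim_speed(2.0, 100.0)   # ~955 rpm for the quoted 100 m/s
```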

  2. Believability in simplifications of large scale physically based simulation

    KAUST Repository

    Han, Donghui

    2013-01-01

We verify two hypotheses which are assumed to be true only intuitively in many rigid body simulations. I: In large scale rigid body simulation, viewers may not be able to perceive distortion incurred by an approximated simulation method. II: Fixing objects under a pile of objects does not affect the visual plausibility. The visual plausibility of scenarios simulated with these hypotheses assumed true is measured using subjective ratings from viewers. As expected, analysis of the results supports the truthfulness of the hypotheses under certain simulation environments. However, our analysis discovered four factors which may affect the validity of these hypotheses: the number of collisions simulated simultaneously, the homogeneity of colliding object pairs, the distance from the simulated scene to the camera position, and the simulation method used. We also try to find an objective metric of visual plausibility in eye-tracking data collected from viewers. Analysis of these results indicates that eye-tracking does not present a suitable proxy for measuring plausibility or distinguishing between types of simulations. © 2013 ACM.

  3. Large scale structures in liquid crystal/clay colloids

    International Nuclear Information System (INIS)

    Duijneveldt, Jeroen S van; Klein, Susanne; Leach, Edward; Pizzey, Claire; Richardson, Robert M

    2005-01-01

Suspensions of three different clays in K15, a thermotropic liquid crystal, have been studied by optical microscopy and small angle x-ray scattering. The three clays were claytone AF, a surface treated natural montmorillonite; laponite RD, a synthetic hectorite; and mined sepiolite. The claytone and laponite were sterically stabilized, whereas sepiolite formed a relatively stable suspension in K15 without any surface treatment. Micrographs of the different suspensions revealed that all three suspensions contained large scale structures. The nature of these aggregates was investigated using small angle x-ray scattering. For the clays with sheet-like particles, claytone and laponite, the flocs contain a mixture of stacked and single platelets. The basal spacing in the stacks was independent of particle concentration in the suspension and the phase of the solvent. The number of platelets in the stack and their percentage in the suspension varied with concentration and the aspect ratio of the platelets. The lath-shaped sepiolite did not show any tendency to organize into ordered structures. Here the aggregates are networks of randomly oriented single rods.

  4. Contextual Compression of Large-Scale Wind Turbine Array Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clyne, John [National Center for Atmospheric Research (NCAR)]

    2017-12-04

Data sizes are becoming a critical issue, particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed to be the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while providing the user control over where data loss, and thus reduction in accuracy, occurs in the analysis. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.
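The block-wise saliency idea above can be illustrated with a toy one-level Haar transform: detail coefficients are kept only in blocks flagged as salient (e.g. turbine wakes) and dropped elsewhere. A minimal plain-Python sketch; the one-dimensional signal, block size and saliency set are illustrative assumptions, not the NREL model, which uses multi-level wavelets on 3-D fields.

```python
def haar_forward(block):
    """One-level Haar transform of an even-length block: (averages, details)."""
    avg = [(block[i] + block[i + 1]) / 2 for i in range(0, len(block), 2)]
    det = [(block[i] - block[i + 1]) / 2 for i in range(0, len(block), 2)]
    return avg, det

def haar_inverse(avg, det):
    out = []
    for a, d in zip(avg, det):
        out.extend([a + d, a - d])
    return out

def compress(signal, block_size, salient):
    """Keep detail coefficients only in salient blocks; context blocks are lossy."""
    blocks = [signal[i:i + block_size] for i in range(0, len(signal), block_size)]
    stored = []
    for k, b in enumerate(blocks):
        avg, det = haar_forward(b)
        if k not in salient:
            det = [0.0] * len(det)   # drop details outside salient regions
        stored.append((avg, det))
    return stored

def reconstruct(stored):
    out = []
    for avg, det in stored:
        out.extend(haar_inverse(avg, det))
    return out

signal = [1.0, 1.2, 0.9, 1.1, 5.0, 9.0, 2.0, 8.0]  # block 1 plays the "wake"
stored = compress(signal, 4, salient={1})
approx = reconstruct(stored)
```

The salient block reconstructs losslessly, while the context block retains only pairwise averages, mirroring the high/low-fidelity split described in the abstract.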

  5. Bio-inspired wooden actuators for large scale applications.

    Science.gov (United States)

    Rüggeberg, Markus; Burgert, Ingo

    2015-01-01

    Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small scale applications down to micrometer size have been developed, but up-scaling remains challenging due to either limitations in mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and a good machinability to reach large scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for an autonomous and solar powered movement of a tracker for solar modules.

  6. Multiple Spectral Components in Large-Scale Jets

    Science.gov (United States)

    Meyer, Eileen; Georganopoulos, Markos; Petropoulou, Maria; Breiding, Peter

    2018-01-01

One of the most striking discoveries of the Chandra X-ray Observatory is the population of bright X-ray emitting jets hosted by powerful quasars. Most of these jets show hard X-ray spectra which require a spectral component separate from the radio-optical synchrotron emission, which usually peaks at or before the infrared. Though the origin of this high-energy spectral component has been a matter of debate for nearly two decades, it is still not understood, with major implications for our understanding of particle acceleration in jets, as well as the total energy carried by them. Until recently the prevailing interpretation for the second component has been inverse-Compton upscattering of the CMB by a still highly relativistic jet at kpc scales. I will briefly describe the recent work calling the IC/CMB model into serious question (including X-ray variability, UV polarization, gamma-ray upper limits, and proper motions), and present new results, based on new ALMA, HST, and Chandra observations, which suggest that more than two distinct spectral components may be present in some large-scale jets, and that these multiple components appear to arise in jets across the full range in jet power, and not just in the most powerful sources. These results are very difficult to reconcile with simple models of jet emission, and I will discuss these failures and some possible directions for the future, including hadronic models.

  7. Results of large scale thyroid dose reconstruction in Ukraine

    International Nuclear Information System (INIS)

    Likhtarev, I.; Sobolev, B.; Kairo, I.; Tabachny, L.; Jacob, P.; Proehl, G.; Goulko, G.

    1996-01-01

In 1993, the Ukrainian Ministry on Chernobyl Affairs initiated a large scale reconstruction of thyroid exposures to radioiodine after the Chernobyl accident. The objective was to provide the state policy on social compensations with a scientific background. About 7000 settlements from five contaminated regions have received certificates of thyroid exposure since then. Certificates contain estimates of the average thyroid dose from ¹³¹I for seven age groups. The primary dose estimates used about 150,000 direct measurements of ¹³¹I activity in the thyroid glands of inhabitants of the Chernigiv, Kiev, Zhytomyr, and Vinnytsa regions. Parameters of the assumed intake function were related to environmental and questionnaire data. The dose reconstruction for the remaining territory was based on empirical relations between intake-function parameters and the ¹³⁷Cs deposition. The relationship was specified by the distance and the direction to the Chernobyl Nuclear Power Plant. The relations were first derived for territories with direct measurements and then extended to other areas using daily iodine releases and atmospheric transport routes. The results of the dose reconstruction made it possible to delineate zones on the territory of Ukraine according to the average levels of thyroid exposure. These zones underlie the policy of post-accident health care and social compensations. Another important application of the thyroid dose reconstruction is the assessment of the radiation risk of thyroid cancer among people exposed during childhood due to the Chernobyl accident.

  8. Reorganizing Complex Network to Improve Large-Scale Multiagent Teamwork

    Directory of Open Access Journals (Sweden)

    Yang Xu

    2014-01-01

Full Text Available Large-scale multiagent teamwork has become popular in various domains. As in human social infrastructure, agents coordinate with only some of the others, in a peer-to-peer complex network structure. Their organization has been proven a key factor influencing their performance. To expedite team performance, we identify three key factors. First, complex-network effects may promote team performance. Second, coordination interactions are routed from their sources to capable agents; although they can be transferred across the network via different paths, their sources and sinks depend on the intrinsic nature of the team and are independent of the network connections. Third, agents involved in the same plan often form a subteam and communicate with each other more frequently. Therefore, if the interactions between agents can be statistically recorded, we can set up an integrated network-adjustment algorithm combining the three key factors. Based on our abstracted teamwork simulations and the coordination statistics, we implemented the adaptive reorganization algorithm. The experimental results support our design: the reorganized network is more capable of coordinating heterogeneous agents.

  9. LARGE SCALE METHOD FOR THE PRODUCTION AND PURIFICATION OF CURIUM

    Science.gov (United States)

    Higgins, G.H.; Crane, W.W.T.

    1959-05-19

A large-scale process for the production and purification of Cm-242 is described. Aluminum slugs containing Am are irradiated and declad in a NaOH–NaNO₃ solution at 85 to 100 °C. The resulting slurry is filtered and washed with NaOH, NH₄OH, and H₂O. Recovery of Cm from the filtrate and washings is effected by an Fe(OH)₃ precipitation. The precipitates are then combined and dissolved in HCl, and refractory oxides are centrifuged out. These oxides are then fused with Na₂CO₃ and dissolved in HCl. The solution is evaporated and LiCl solution added. The Cm, rare earths, and anionic impurities are adsorbed on a strong-base anion exchange resin. Impurities are eluted with LiCl–HCl solution; rare earths and Cm are eluted by HCl. Other ion exchange steps further purify the Cm. The Cm is then precipitated as fluoride and used in this form or further purified and processed. (T.R.H.)

  10. Large Scale Software Building with CMake in ATLAS

    Science.gov (United States)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector’s trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the above mentioned software packages. This also makes it possible to develop and test new and modified packages on top of existing releases. The system also allows one to detect and execute partial rebuilds of the release based on single package changes. The build system makes use of CPack for building RPM packages out of the software releases, and CTest for running unit and integration tests. We report on the migration and integration of the ATLAS software to CMake and show working examples of this large scale project in production.

  11. Comprehensive large-scale assessment of intrinsic protein disorder.

    Science.gov (United States)

    Walsh, Ian; Giollo, Manuel; Di Domenico, Tomás; Ferrari, Carlo; Zimmermann, Olav; Tosatto, Silvio C E

    2015-01-15

Intrinsically disordered regions are key for the function of numerous proteins. Due to the difficulties in experimental disorder characterization, many computational predictors have been developed with various disorder flavors. Their performance is generally measured on small sets mainly from experimentally solved structures, e.g. Protein Data Bank (PDB) chains. MobiDB has only recently started to collect disorder annotations from multiple experimental structures. MobiDB annotates disorder for UniProt sequences, allowing us to conduct the first large-scale assessment of fast disorder predictors on 25,833 different sequences with X-ray crystallographic structures. In addition to a comprehensive ranking of predictors, this analysis produced the following interesting observations. (i) The predictors cluster according to their disorder definition, with a consensus giving more confidence. (ii) Previous assessments appear over-reliant on data annotated at the PDB chain level, and performance is lower on entire UniProt sequences. (iii) Long disordered regions are harder to predict. (iv) Depending on the structural and functional types of the proteins, differences in prediction performance of up to 10% are observed. The datasets are available at http://mobidb.bio.unipd.it/lsd. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
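An assessment like this ultimately reduces to scoring per-residue binary labels against predictions. A minimal sketch of one metric commonly used for such rankings, balanced accuracy, which is robust to the strong order/disorder class imbalance; the labels below are invented for illustration, not MobiDB data.

```python
def balanced_accuracy(true, pred):
    """Mean of sensitivity and specificity over per-residue binary labels
    (1 = disordered, 0 = ordered)."""
    tp = sum(1 for t, p in zip(true, pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(true, pred) if t == 0 and p == 0)
    pos = sum(true)
    neg = len(true) - pos
    return 0.5 * (tp / pos + tn / neg)

# Hypothetical annotations and predictions for one short sequence
true_labels = [1, 1, 0, 0, 0, 0]
predicted   = [1, 0, 0, 0, 0, 1]
score = balanced_accuracy(true_labels, predicted)  # 0.5 * (1/2 + 3/4) = 0.625
```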

  12. An integrated system for large scale scanning of nuclear emulsions

    Energy Technology Data Exchange (ETDEWEB)

    Bozza, Cristiano, E-mail: kryss@sa.infn.it [University of Salerno and INFN, via Ponte Don Melillo, Fisciano 84084 (Italy); D’Ambrosio, Nicola [Laboratori Nazionali del Gran Sasso, S.S. 17 BIS km 18.910, Assergi (AQ) 67010 (Italy); De Lellis, Giovanni [University of Napoli and INFN, Complesso Universitario di Monte Sant' Angelo, via Cintia Ed. G, Napoli 80126 (Italy); De Serio, Marilisa [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); Di Capua, Francesco [INFN Napoli, Complesso Universitario di Monte Sant' Angelo, via Cintia Ed. G, Napoli 80126 (Italy); Di Crescenzo, Antonia [University of Napoli and INFN, Complesso Universitario di Monte Sant' Angelo, via Cintia Ed. G, Napoli 80126 (Italy); Di Ferdinando, Donato [INFN Bologna, viale B. Pichat 6/2, Bologna 40127 (Italy); Di Marco, Natalia [Laboratori Nazionali del Gran Sasso, S.S. 17 BIS km 18.910, Assergi (AQ) 67010 (Italy); Esposito, Luigi Salvatore [Laboratori Nazionali del Gran Sasso, now at CERN, Geneva (Switzerland); Fini, Rosa Anna [INFN Bari, via E. Orabona 4, Bari 70125 (Italy); Giacomelli, Giorgio [University of Bologna and INFN, viale B. Pichat 6/2, Bologna 40127 (Italy); Grella, Giuseppe [University of Salerno and INFN, via Ponte Don Melillo, Fisciano 84084 (Italy); Ieva, Michela [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); Kose, Umut [INFN Padova, via Marzolo 8, Padova (PD) 35131 (Italy); Longhin, Andrea; Mauri, Nicoletta [INFN Laboratori Nazionali di Frascati, via E. Fermi 40, Frascati (RM) 00044 (Italy); Medinaceli, Eduardo [University of Padova and INFN, via Marzolo 8, Padova (PD) 35131 (Italy); Monacelli, Piero [University of L' Aquila and INFN, via Vetoio Loc. Coppito, L' Aquila (AQ) 67100 (Italy); Muciaccia, Maria Teresa; Pastore, Alessandra [University of Bari and INFN, via E. Orabona 4, Bari 70125 (Italy); and others

    2013-03-01

The European Scanning System, developed to analyse nuclear emulsions at high speed, has been completed with the development of a high-level software infrastructure to automate and support large-scale emulsion scanning. In one year, an average installation is capable of performing data taking and online analysis on a total surface ranging from a few m² to tens of m², acquiring many billions of tracks, corresponding to several TB. This paper focuses on the procedures that have been implemented and on their impact on physics measurements. The system proved robust, reliable, fault-tolerant and user-friendly, and seldom needs assistance. A dedicated relational database system is the backbone of the whole infrastructure, storing the data themselves and not only catalogues of data files, as is common practice; this is a unique case among high-energy physics DAQ systems. The logical organisation of the system is described and a summary is given of the physics measurements that are readily available through automated processing.

  13. Global Wildfire Forecasts Using Large Scale Climate Indices

    Science.gov (United States)

    Shen, Huizhong; Tao, Shu

    2016-04-01

Using weather readings, fire early warning can provide forecasts 4-6 hours in advance to minimize fire loss. The benefit would be dramatically enhanced if a relatively accurate long-term projection could also be provided. Here we present a novel method for predicting global fire season severity (FSS) at least three months in advance using multiple large-scale climate indices (CIs). The predictive ability proves effective for various geographic locations and resolutions. Globally, as well as in most continents, the El Niño Southern Oscillation (ENSO) is the dominant driving force controlling interannual FSS variability, whereas other CIs also play indispensable roles. We found that a moderate El Niño event is responsible for 465 (272-658, interquartile range) Tg of carbon release and an annual increase of 29,500 (24,500-34,800) deaths from inhalation exposure to air pollutants, with Southeast Asia accounting for half of the deaths. Both the intercorrelation and the interaction of WPs and CIs are revealed, suggesting possible climate-induced modification of fire responses to weather conditions. Our models can benefit fire management in response to climate change.

  14. Implicit solvers for large-scale nonlinear problems

    International Nuclear Information System (INIS)

    Keyes, D E; Reynolds, D; Woodward, C S

    2006-01-01

    Computational scientists are grappling with increasingly complex, multi-rate applications that couple such physical phenomena as fluid dynamics, electromagnetics, radiation transport, chemical and nuclear reactions, and wave and material propagation in inhomogeneous media. Parallel computers with large storage capacities are paving the way for high-resolution simulations of coupled problems; however, hardware improvements alone will not prove enough to enable simulations based on brute-force algorithmic approaches. To accurately capture nonlinear couplings between dynamically relevant phenomena, often while stepping over rapid adjustments to quasi-equilibria, simulation scientists are increasingly turning to implicit formulations that require a discrete nonlinear system to be solved for each time step or steady state solution. Recent advances in iterative methods have made fully implicit formulations a viable option for solution of these large-scale problems. In this paper, we overview one of the most effective iterative methods, Newton-Krylov, for nonlinear systems and point to software packages with its implementation. We illustrate the method with an example from magnetically confined plasma fusion and briefly survey other areas in which implicit methods have bestowed important advantages, such as allowing high-order temporal integration and providing a pathway to sensitivity analyses and optimization. Lastly, we overview algorithm extensions under development motivated by current SciDAC applications
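The implicit formulations discussed above reduce each time step to solving a nonlinear system F(u) = 0 with Newton's method. A minimal two-unknown sketch using a finite-difference Jacobian and a direct 2x2 solve; production Newton-Krylov codes never form the Jacobian, instead feeding matrix-free Jacobian-vector products to a Krylov iteration. The system F below is an arbitrary illustration, not taken from the paper.

```python
import math

def F(v):
    # Illustrative nonlinear system: circle of radius 2 intersected with y = x
    x, y = v
    return [x * x + y * y - 4.0, x - y]

def jacobian_fd(fun, v, eps=1e-7):
    """Finite-difference Jacobian. At scale, only the matrix-free product
    J*w ~ (F(v + eps*w) - F(v)) / eps is ever evaluated."""
    f0 = fun(v)
    cols = []
    for j in range(len(v)):
        vp = list(v)
        vp[j] += eps
        fj = fun(vp)
        cols.append([(fj[i] - f0[i]) / eps for i in range(len(f0))])
    J = [[cols[j][i] for j in range(len(v))] for i in range(len(f0))]
    return J, f0

def solve2x2(A, b):
    # Direct solve stands in for the inner Krylov iteration
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def newton(fun, v, tol=1e-10, maxit=50):
    for _ in range(maxit):
        J, f0 = jacobian_fd(fun, v)
        if max(abs(c) for c in f0) < tol:
            break
        dv = solve2x2(J, [-c for c in f0])
        v = [vi + di for vi, di in zip(v, dv)]
    return v

root = newton(F, [1.0, 2.0])   # converges toward (sqrt(2), sqrt(2))
```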

  15. Parallel Framework for Dimensionality Reduction of Large-Scale Datasets

    Directory of Open Access Journals (Sweden)

    Sai Kiranmayee Samudrala

    2015-01-01

    Full Text Available Dimensionality reduction refers to a set of mathematical techniques used to reduce complexity of the original high-dimensional data, while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify key components underlying the spectral dimensionality reduction techniques, and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate applicability of our framework we perform dimensionality reduction of 75,000 images representing morphology evolution during manufacturing of organic solar cells in order to identify how processing parameters affect morphology evolution.
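At the core of the spectral techniques mentioned above is a large eigenproblem. A minimal serial sketch of that kernel, power iteration for the leading principal component of a point set; the paper's framework parallelizes kernels of this kind across thousands of cores, and the data here are illustrative.

```python
def top_principal_component(points, iters=200):
    """Power iteration on the sample covariance matrix: the eigen-solve at the
    heart of spectral dimensionality reduction (dense and serial in this toy)."""
    n, d = len(points), len(points[0])
    mean = [sum(p[j] for p in points) / n for j in range(d)]
    X = [[p[j] - mean[j] for j in range(d)] for p in points]
    C = [[sum(X[k][i] * X[k][j] for k in range(n)) / n for j in range(d)]
         for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

pts = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
v = top_principal_component(pts)   # points lie on y = x, so v is along (1, 1)
```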

  16. LARGE-SCALE INDICATIVE MAPPING OF SOIL RUNOFF

    Directory of Open Access Journals (Sweden)

    E. Panidi

    2017-11-01

Full Text Available In our study we estimate relationships between quantitative parameters of relief, the soil runoff regime, and the spatial distribution of radioactive pollutants in the soil. The study is conducted on a test arable area located in the basin of the upper Oka River (Orel region, Russia). Previously we collected a rich set of soil samples, which makes it possible to investigate the redistribution of Chernobyl-origin cesium-137 in soil material and, as a consequence, the soil runoff magnitude at the sampling points. Here we describe and discuss the technique applied to large-scale mapping of the soil runoff. The technique is based upon measurement of cesium-137 radioactivity in different relief structures. The key stages are: allocation of the places for soil sampling points (we used very high resolution space imagery as supporting data); soil sample collection and analysis; calibration of the mathematical model (using the estimated background value of cesium-137 radioactivity); and automated compilation of a predictive map of the studied territory (a digital elevation model is used for this purpose, and cesium-137 radioactivity can be predicted using quantitative parameters of the relief). The maps can be used as supporting data for precision agriculture and for recultivation or melioration purposes.

  18. Origin of the large scale structures of the universe

    International Nuclear Information System (INIS)

    Oaknin, David H.

    2004-01-01

We revisit the statistical properties of the primordial cosmological density anisotropies that, at the time of matter-radiation equality, seeded the gravitational development of large scale structures in the otherwise homogeneous and isotropic Friedmann-Robertson-Walker flat universe. Our analysis shows that random fluctuations of the density field at the instant of equality, with comoving wavelengths shorter than the causal horizon at that time, can naturally account, when globally constrained to conserve the total mass (energy) of the system, for the observed scale invariance of the anisotropies over cosmologically large comoving volumes. Statistical systems with similar features are generically known as glasslike or latticelike. Obviously, these conclusions conflict with the widely accepted understanding of the primordial structures reported in the literature, which requires an epoch of inflationary cosmology to precede the standard expansion of the universe. The origin of the conflict must be found in the widespread, but unjustified, claim that scale-invariant mass (energy) anisotropies at the instant of equality over comoving volumes of cosmological size, larger than the causal horizon at the time, must be generated by fluctuations in the density field with comparably large comoving wavelengths.

  19. Large scale high strain-rate tests of concrete

    Directory of Open Access Journals (Sweden)

    Kiefer R.

    2012-08-01

Full Text Available This work presents the stages of development of innovative equipment, based on Hopkinson bar techniques, for performing large scale dynamic tests of concrete specimens. The activity is centered at the recently upgraded HOPLAB facility, which is basically a split Hopkinson bar with a total length of approximately 200 m and bar diameters of 72 mm. By pre-tensioning and suddenly releasing a steel cable, force pulses of up to 2 MN, 250 μs rise time and 40 ms duration can be generated and applied to the specimen tested. Dynamic compression loading was treated first, and several modifications to the basic configuration have been introduced. Twin incident and transmitter bars have been installed, with strong steel plates at their ends where large specimens can be accommodated. A series of calibration and qualification tests has been conducted, and the first real tests on concrete cylindrical specimens of 20 cm diameter and up to 40 cm length have commenced. Preliminary results from the analysis of the recorded signals indicate proper Hopkinson bar testing conditions and reliable functioning of the facility.
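For reference, classical one-wave split-Hopkinson analysis converts the recorded transmitted and reflected strain pulses into specimen stress and strain rate via sigma_s = E_b (A_b / A_s) eps_t and d(eps_s)/dt = -2 c0 eps_r / L_s. A sketch with assumed bar properties and pulse amplitudes; all numbers are illustrative, not HOPLAB measurements.

```python
import math

# Assumed steel-bar properties and geometry (illustrative values only)
E_bar = 210e9                    # bar Young's modulus, Pa
rho   = 7850.0                   # bar density, kg/m^3
c0    = math.sqrt(E_bar / rho)   # elastic wave speed in the bar, ~5170 m/s
d_bar, d_spec = 0.072, 0.20      # bar and specimen diameters, m
L_spec = 0.40                    # specimen length, m
A_bar  = math.pi * d_bar ** 2 / 4
A_spec = math.pi * d_spec ** 2 / 4

# Sampled strain-pulse amplitudes from the transmitter and incident bars
eps_t = 4.0e-4    # transmitted strain
eps_r = -6.0e-4   # reflected strain

stress = E_bar * (A_bar / A_spec) * eps_t    # specimen stress, Pa (~10.9 MPa)
strain_rate = -2.0 * c0 * eps_r / L_spec     # specimen strain rate, 1/s (~15.5)
```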

  20. Parallel Index and Query for Large Scale Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
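The bitmap-index approach underlying FastBit can be sketched in a few lines: one bitset per value bin, with a range query answered by OR-ing the covered bitmaps and reading off the set bits. This toy version (Python ints as bitsets, hypothetical energy values) omits the compressed bitmap encodings and adaptive binning of the real library.

```python
def build_bitmap_index(values, bins):
    """One bitmap (a Python int used as a bitset) per half-open bin [b, b+1)."""
    bitmaps = [0] * (len(bins) - 1)
    for row, v in enumerate(values):
        for b in range(len(bins) - 1):
            if bins[b] <= v < bins[b + 1]:
                bitmaps[b] |= 1 << row
                break
    return bitmaps

def query(bitmaps, bins, lo, hi):
    """Rows whose bin lies entirely inside [lo, hi): OR the covered bitmaps."""
    mask = 0
    for b in range(len(bins) - 1):
        if bins[b] >= lo and bins[b + 1] <= hi:
            mask |= bitmaps[b]
    return [row for row in range(mask.bit_length()) if (mask >> row) & 1]

# Hypothetical per-particle energies and bin edges
energies = [0.5, 3.2, 7.9, 2.1, 9.4, 4.4]
bins = [0, 2, 4, 6, 8, 10]
idx = build_bitmap_index(energies, bins)
hits = query(idx, bins, 2, 6)   # rows with energy in [2, 6)
```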

  1. A new large-scale plasma source with plasma cathode

    International Nuclear Information System (INIS)

    Yamauchi, K.; Hirokawa, K.; Suzuki, H.; Satake, T.

    1996-01-01

A new large-scale plasma source (200 mm diameter) with a plasma cathode has been investigated. The plasma has good spatial uniformity, operates at low electron temperature, and is highly ionized under a relatively low gas pressure of about 10⁻⁴ Torr. The plasma source consists of a plasma chamber and a plasma cathode generator. The plasma chamber has an anode which is 200 mm in diameter and 150 mm in length, is made of 304 stainless steel, and acts as a plasma expansion cup. A filament-cathode-like plasma, the "plasma cathode", is placed on the central axis of this source. To improve the plasma spatial uniformity in the plasma chamber, a disk-shaped floating electrode is placed between the plasma chamber and the plasma cathode. The 200 mm diameter plasma is measured using Langmuir probes. As a result, the discharge voltage is relatively low (30-120 V), the plasma space potential is almost equal to the discharge voltage and can be easily controlled, the electron temperature is several electron volts, the plasma density is about 10¹⁰ cm⁻³, and the plasma density varies by about 10% over a 100 mm diameter. (Author)
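For context, the electron temperature quoted from such Langmuir-probe measurements is commonly obtained from the slope of the exponential (electron-retardation) region of the I-V characteristic: Te[eV] = (V2 - V1) / ln(I2 / I1) for two points on that region. A sketch with illustrative probe readings, not the measured data of this paper.

```python
import math

# Two sample points on the exponential part of a hypothetical probe I-V trace
V1, I1 = -10.0, 0.8e-3   # probe bias (V) and collected current (A)
V2, I2 = -4.0, 5.9e-3

# Electron temperature in eV from the logarithmic slope of the trace
Te_eV = (V2 - V1) / math.log(I2 / I1)   # ~3 eV, "several electron volts"
```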

  2. Large-scale navigational map in a mammal.

    Science.gov (United States)

    Tsoar, Asaf; Nathan, Ran; Bartan, Yoav; Vyssotski, Alexei; Dell'Omo, Giacomo; Ulanovsky, Nachum

    2011-09-13

Navigation, the ability to reach desired goal locations, is critical for animals and humans. Animal navigation has been studied extensively in birds, insects, and some marine vertebrates and invertebrates, yet we are still far from elucidating the underlying mechanisms in other taxonomic groups, especially mammals. Here we report a systematic study of the mechanisms of long-range mammalian navigation. High-resolution global positioning system tracking of bats revealed high, fast, and very straight commuting flights of Egyptian fruit bats (Rousettus aegyptiacus) from their cave to remote fruit trees. Bats returned to the same individual trees night after night. When displaced 44 km south, bats homed directly to one of two goal locations (familiar fruit tree or cave), ruling out beaconing, route-following, or path-integration mechanisms. Bats released 84 km south, within a deep natural crater, were initially disoriented (but eventually left the crater toward the home direction and homed successfully), whereas bats released at the crater-edge top homed directly, suggesting navigation guided primarily by distal visual landmarks. Taken together, these results provide evidence for a large-scale "cognitive map" that enables navigation of a mammal within its visually familiar area, and they also demonstrate the ability to home back when translocated outside the visually familiar area.

  3. Safety aspects of large-scale combustion of hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Edeskuty, F.J.; Haugh, J.J.; Thompson, R.T.

    1986-01-01

    Recent hydrogen-safety investigations have studied the possible large-scale effects of phenomena such as the accumulation of combustible hydrogen-air mixtures in large, confined volumes. Of particular interest are safe methods for the disposal of the hydrogen and the pressures which can arise from its confined combustion. Consequently, tests of the confined combustion of hydrogen-air mixtures were conducted in a 2100 m^3 volume. These tests show that continuous combustion, as the hydrogen is generated, is a safe method for its disposal. It has also been seen that, for hydrogen concentrations up to 13 vol%, it is possible to predict the maximum pressures that can occur upon ignition of premixed hydrogen-air atmospheres. In addition, information has been obtained concerning the survivability of the equipment needed to recover from an accident involving hydrogen combustion. An accident involving the inadvertent mixing of hydrogen and oxygen gases in a tube trailer gave evidence that, under the proper conditions, hydrogen combustion can transition to detonation. If detonation occurs, the resulting pressures are much higher, although short in duration.

  4. Characterizing unknown systematics in large scale structure surveys

    International Nuclear Information System (INIS)

    Agarwal, Nishant; Ho, Shirley; Myers, Adam D.; Seo, Hee-Jong; Ross, Ashley J.; Bahcall, Neta; Brinkmann, Jonathan; Eisenstein, Daniel J.; Muna, Demitri; Palanque-Delabrouille, Nathalie; Yèche, Christophe; Pâris, Isabelle; Petitjean, Patrick; Schneider, Donald P.; Streblyanska, Alina; Weaver, Benjamin A.

    2014-01-01

    Photometric large scale structure (LSS) surveys probe the largest volumes in the Universe, but are inevitably limited by systematic uncertainties. Imperfect photometric calibration leads to biases in our measurements of the density fields of LSS tracers such as galaxies and quasars, and, as a result, in cosmological parameter estimation. Earlier studies have proposed using cross-correlations between different redshift slices or cross-correlations between different surveys to reduce the effects of such systematics. In this paper we develop a method to characterize unknown systematics. We demonstrate that while we do not have sufficient information to correct for unknown systematics in the data, we can obtain an estimate of their magnitude. We define a parameter to estimate contamination from unknown systematics using cross-correlations between different redshift slices and propose discarding bins in the angular power spectrum that lie outside a certain contamination tolerance level. We show that this method improves estimates of the bias using simulated data and further apply it to photometric luminous red galaxies in the Sloan Digital Sky Survey as a case study.

  5. Characterizing unknown systematics in large scale structure surveys

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Nishant; Ho, Shirley [McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213 (United States); Myers, Adam D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Seo, Hee-Jong [Berkeley Center for Cosmological Physics, LBL and Department of Physics, University of California, Berkeley, CA 94720 (United States); Ross, Ashley J. [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth, PO1 3FX (United Kingdom); Bahcall, Neta [Princeton University Observatory, Peyton Hall, Princeton, NJ 08544 (United States); Brinkmann, Jonathan [Apache Point Observatory, P.O. Box 59, Sunspot, NM 88349 (United States); Eisenstein, Daniel J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Muna, Demitri [Department of Astronomy, Ohio State University, Columbus, OH 43210 (United States); Palanque-Delabrouille, Nathalie; Yèche, Christophe [CEA, Centre de Saclay, Irfu/SPP, F-91191 Gif-sur-Yvette (France); Pâris, Isabelle [Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago (Chile); Petitjean, Patrick [Université Paris 6 et CNRS, Institut d' Astrophysique de Paris, 98bis blvd. Arago, 75014 Paris (France); Schneider, Donald P. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Streblyanska, Alina [Instituto de Astrofisica de Canarias (IAC), E-38200 La Laguna, Tenerife (Spain); Weaver, Benjamin A., E-mail: nishanta@andrew.cmu.edu [Center for Cosmology and Particle Physics, New York University, New York, NY 10003 (United States)

    2014-04-01

    Photometric large scale structure (LSS) surveys probe the largest volumes in the Universe, but are inevitably limited by systematic uncertainties. Imperfect photometric calibration leads to biases in our measurements of the density fields of LSS tracers such as galaxies and quasars, and as a result in cosmological parameter estimation. Earlier studies have proposed using cross-correlations between different redshift slices or cross-correlations between different surveys to reduce the effects of such systematics. In this paper we develop a method to characterize unknown systematics. We demonstrate that while we do not have sufficient information to correct for unknown systematics in the data, we can obtain an estimate of their magnitude. We define a parameter to estimate contamination from unknown systematics using cross-correlations between different redshift slices and propose discarding bins in the angular power spectrum that lie outside a certain contamination tolerance level. We show that this method improves estimates of the bias using simulated data and further apply it to photometric luminous red galaxies in the Sloan Digital Sky Survey as a case study.
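The intuition behind the cross-correlation diagnostic can be shown with a toy calculation: density fluctuations in well-separated redshift slices are nearly uncorrelated, so a significant cross-correlation between slices flags a shared contaminant such as a calibration pattern. A hedged numpy sketch, with random vectors standing in for pixelized overdensity maps (the contamination amplitude `eps` and the known systematic template are assumptions made so the toy can check itself; the paper's point is that the cross-power itself signals contamination even when the template is unknown):

```python
import numpy as np

rng = np.random.default_rng(42)
npix = 100_000

# Independent "cosmological" signals in two disjoint redshift slices
s1 = rng.normal(0.0, 1.0, npix)
s2 = rng.normal(0.0, 1.0, npix)

# A shared systematic template (e.g. a calibration residual), unit variance
sys_map = rng.normal(0.0, 1.0, npix)
eps = 0.3                      # hypothetical contamination amplitude

d1 = s1 + eps * sys_map        # observed overdensity maps
d2 = s2 + eps * sys_map

# Cross-correlation between slices isolates the shared contaminant:
# <d1 d2> = eps^2 <sys^2>, since <s1 s2> = <s1 sys> = <s2 sys> = 0
cross = np.mean(d1 * d2)
eps_est = np.sqrt(max(cross, 0.0) / np.mean(sys_map ** 2))
print(f"true eps = {eps}, estimated eps = {eps_est:.3f}")
```

The estimated amplitude recovers the injected contamination to within sampling noise, illustrating why power-spectrum bins with cross-correlation above a tolerance level can be discarded.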

  6. HTS cables open the window for large-scale renewables

    International Nuclear Information System (INIS)

    Geschiere, A; Willen, D; Piga, E; Barendregt, P

    2008-01-01

    In a realistic approach to future energy consumption, the effects of sustainable power sources and of growing welfare with increased use of electricity need to be considered. These factors lead to an increased transfer of electric energy over the networks. A dominant part of the energy need will come from expanded large-scale renewable sources. To use them efficiently across Europe, large energy transits between different countries are required. Bottlenecks in the existing infrastructure will be avoided by strengthening the network. For environmental reasons, more infrastructure will be built underground. Nuon is studying HTS technology as a component to solve these challenges. This technology offers a tremendously large power transport capacity as well as the possibility to reduce short-circuit currents, making integration of renewables easier. Furthermore, power transport will be possible at lower voltage levels, giving the opportunity to upgrade the existing network while re-using it. This will result in large cost savings while meeting future energy challenges. In a 6 km backbone structure in Amsterdam, Nuon wants to install a 50 kV HTS Triax cable for a significant increase of the transport capacity, while developing its capabilities. Nevertheless, several barriers have to be overcome.

  7. Large scale instabilities and dynamics of the magnetotail plasma sheet

    International Nuclear Information System (INIS)

    Birn, J.; Schindler, K.

    1986-01-01

    The stability properties of the magnetotail current sheet against large-scale modes are reviewed in the framework of ideal MHD, resistive MHD, and collisionless Vlasov theory. It appears that small deviations from a plane sheet pinch (in particular, a magnetic field component normal to the sheet) are important for explaining the transition of the tail from a quiet stable state to an unstable dynamic state. The tail is found to be essentially stable in ideal MHD but unstable in resistive MHD, while both stable and unstable configurations are found within collisionless theory. The results favor an interpretation in which the onset of magnetotail dynamics, leading to a sudden thinning of the plasma sheet and the ejection of a plasmoid, is caused by the onset of a collisionless instability that either directly leads to the growth of a collisionless tearing mode or, via microscopic turbulence, to the growth of a resistive mode. The actual onset conditions have not yet been fully explored by rigorous methods. The onset may be triggered by local conditions as well as by boundary conditions at the ionosphere or at the magnetopause (resulting from solar wind conditions). 53 refs., 5 figs.

  8. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing trend of computational requirements in JAERI, introduction of a high-speed computer with vector processing capability (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes for a pipelined-vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed for large-scale nuclear codes by examining the following items: 1) the present pattern of computational load in JAERI is analyzed by compiling computer utilization statistics; 2) vector processing efficiency is estimated for the ten most heavily used nuclear codes by analyzing their dynamic behavior on a scalar machine; 3) vector processing efficiency is measured for five other nuclear codes using current vector processors, the FACOM 230-75 APU and the CRAY-1; and 4) the effectiveness of applying a high-speed vector processor to nuclear codes is evaluated, taking account of the characteristics of JAERI jobs. Problems with vector processors are also discussed from the viewpoints of code performance and ease of use. (author)
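The payoff from vectorizing such codes follows Amdahl-type scaling: only the vectorizable fraction of the runtime is accelerated, which is why the assessment above profiles each code's dynamic behaviour on a scalar machine first. A small illustrative calculation; the fractions and the per-loop speedup factor below are hypothetical, not figures from the report:

```python
def vector_speedup(f_vec, v):
    """Overall speedup when a fraction f_vec of scalar runtime is
    vectorized with per-loop speedup v (Amdahl's law): the scalar
    remainder (1 - f_vec) is untouched, the rest runs v times faster."""
    return 1.0 / ((1.0 - f_vec) + f_vec / v)

# Hypothetical profiles: fraction of runtime spent in vectorizable loops
for f in (0.5, 0.8, 0.95):
    print(f"f_vec = {f:.2f} -> overall speedup {vector_speedup(f, 10):.2f}x")
```

Even with a 10x vector unit, a code that spends only half its time in vectorizable loops gains less than 2x overall, which is why per-code profiling matters before procurement.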

  9. Large-Scale Seismic Test Program at Hualien, Taiwan

    International Nuclear Information System (INIS)

    Tang, H.T.; Graves, H.L.; Chen, P.C.

    1992-01-01

    The Large-Scale Seismic Test (LSST) Program at Hualien, Taiwan, is a follow-on to the soil-structure interaction (SSI) experiments at Lotung, Taiwan. The planned SSI studies will be performed at a stiff-soil site in Hualien that historically has experienced slightly more destructive earthquakes than Lotung. The LSST is a joint effort among many interested parties. The Electric Power Research Institute (EPRI) and Taipower are the organizers of the program and have the lead in planning and managing it. Other organizations participating in the LSST program are the US Nuclear Regulatory Commission, the Central Research Institute of Electric Power Industry, the Tokyo Electric Power Company, the Commissariat A L'Energie Atomique, Electricite de France, and Framatome. The LSST was initiated in January 1990 and is envisioned to be five years in duration. Based on the assumption of stiff soil, confirmed by soil boring and geophysical results, the test model was designed to provide data needed for SSI studies covering free-field input, nonlinear soil response, non-rigid-body SSI, torsional response, kinematic interaction, spatial incoherency, and other effects. Taipower had the lead in the design of the test model and received significant input from other LSST members. Questions raised by LSST members concerned embedment effects, model stiffness, base shear, and openings for equipment. This paper describes progress in site preparation, design and construction of the model, and development of an instrumentation plan.

  10. Large Scale Synthesis of Carbon Nanofibres on Sodium Chloride Support

    Directory of Open Access Journals (Sweden)

    Ravindra Rajarao

    2012-06-01

    Large-scale synthesis of carbon nanofibres (CNFs) on a sodium chloride support has been achieved. CNFs were synthesized by the chemical vapour deposition method at 680 °C using metal oxalates (Ni, Co and Fe) as catalyst precursors; upon pyrolysis, these precursors yield catalyst nanoparticles directly. Sodium chloride was used as the catalyst support, chosen for its non-toxic and water-soluble nature. Problems such as detrimental effects on the CNFs and on the environment, and even cost, have been avoided by using a water-soluble support. The structure of the products was characterized by scanning electron microscopy, transmission electron microscopy and Raman spectroscopy. The purity of the as-grown and purified products was determined by thermal analysis and the X-ray diffraction method. Here we report yields of 7600, 7000 and 6500 wt% for CNFs synthesized over nickel, cobalt and iron oxalate, respectively; long, curved and worm-shaped CNFs were obtained on the Ni, Co and Fe catalysts, respectively. The lengthy calcination and reduction process for catalyst preparation is avoided in this method. This synthesis route is simple and economical and can therefore be used for CNF synthesis in industry.

  11. Large-scale spatial population databases in infectious disease research

    Directory of Open Access Journals (Sweden)

    Linard Catherine

    2012-03-01

    Modelling studies on the spatial distribution and spread of infectious diseases are becoming increasingly detailed and sophisticated, with global risk mapping and epidemic modelling studies now popular. Yet, in deriving estimates of populations at risk of disease, these spatial models must rely on existing global and regional datasets on population distribution, which are often based on outdated and coarse-resolution data. Moreover, a variety of different methods have been used to model population distribution at large spatial scales. In this review we describe the main global gridded population datasets that are freely available to health researchers, compare their construction methods, and highlight the uncertainties inherent in these population datasets. We review their application in past studies on disease risk and dynamics, and discuss how the choice of dataset can affect results. Finally, we highlight how the lack of contemporary, detailed and reliable data on human population distribution in low-income countries is proving a barrier to obtaining accurate large-scale estimates of populations at risk and to constructing reliable models of disease spread, and we suggest research directions required to further reduce these barriers.

  12. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high-throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay-as-you-go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model, and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability, with competitive assembly quality, compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure instead of a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of the traditional HPC cluster.
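The de Bruijn graph at the core of assemblers like GiGA breaks reads into overlapping k-mers and links their (k-1)-mer prefixes and suffixes; contigs are then read off unambiguous paths. A toy single-machine sketch of that idea (GiGA itself distributes the graph over Hadoop/Giraph; this only illustrates the construction, with made-up reads):

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Build a de Bruijn graph: an edge from each k-mer's (k-1)-mer
    prefix to its (k-1)-mer suffix. Sets deduplicate repeated k-mers."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def walk_unambiguous(graph, start):
    """Extend a contig while each node has exactly one successor."""
    contig, node = start, start
    seen = {start}
    while len(graph.get(node, ())) == 1:
        nxt = next(iter(graph[node]))
        if nxt in seen:          # guard against cycles
            break
        contig += nxt[-1]        # each step adds one base
        seen.add(nxt)
        node = nxt
    return contig

reads = ["ACGTAC", "CGTACG", "GTACGT"]
g = de_bruijn(reads, 4)
print(walk_unambiguous(g, "ACG"))   # -> ACGTAC
```

Real assemblers add error correction, coverage thresholds, and tip/bubble removal; the distributed versions partition exactly this graph across workers, which is why a vertex-centric framework like Giraph fits the problem.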

  13. Discriminant WSRC for Large-Scale Plant Species Recognition

    Directory of Open Access Journals (Sweden)

    Shanwen Zhang

    2017-01-01

    In sparse-representation-based classification (SRC) and weighted SRC (WSRC), solving the global sparse representation problem is time-consuming. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, comprising two stages. First, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen by the maximum similarity between the test sample and the typical sample of each similar class. Second, the weighted sparse representation of the test image is calculated with respect to the chosen subdictionary, and the leaf category is assigned through the minimum reconstruction error. Unlike traditional SRC and its improved variants, the test sample is sparsely represented on a subdictionary whose base elements are the training samples of the selected similar class, instead of on a generic overcomplete dictionary over the entire training set. The complexity of solving the sparse representation problem is thus reduced. Moreover, DWSRC accommodates newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity and a high recognition rate, and can be clearly interpreted.
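The two-stage scheme can be sketched without a full l1 solver: pick the subdictionary whose "typical" (here, mean) sample is most similar to the test vector, then classify by class-wise reconstruction error within it. This toy version substitutes a ridge-regularized least-squares code with locality weights for the paper's weighted l1 sparse coding; that substitution, and all data below, are assumptions made to keep the sketch dependency-free:

```python
import numpy as np

def choose_subdictionary(x, subdicts):
    """Stage 1: pick the subdictionary (dict name -> sample matrix, rows =
    samples) whose mean sample has the highest cosine similarity to x."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(subdicts, key=lambda k: cos(x, subdicts[k].mean(axis=0)))

def classify(x, D, labels, lam=0.1):
    """Stage 2: weighted ridge coding on the chosen subdictionary D
    (rows = training samples); assign the minimum-reconstruction-error
    class, as in SRC."""
    w = np.exp(-np.linalg.norm(D - x, axis=1))   # locality weights
    A = (D * w[:, None]).T                       # weighted atoms as columns
    alpha = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ x)
    errs = {}
    for c in set(labels):
        mask = np.array([lbl == c for lbl in labels])
        a_c = np.where(mask, alpha, 0.0)         # keep class-c coefficients
        errs[c] = np.linalg.norm(x - A @ a_c)
    return min(errs, key=errs.get)

# Hypothetical 2-D "leaf features", two classes
D = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = ["a", "a", "b", "b"]
print(classify(np.array([1.0, 0.05]), D, labels))
```

The structural point survives the simplification: coding against a small, pre-selected subdictionary is much cheaper than coding against the full training set, and adding a new species only adds a new subdictionary.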

  14. The BAHAMAS project: the CMB-large-scale structure tension and the roles of massive neutrinos and galaxy formation

    Science.gov (United States)

    McCarthy, Ian G.; Bird, Simeon; Schaye, Joop; Harnois-Deraps, Joachim; Font, Andreea S.; van Waerbeke, Ludovic

    2018-02-01

    Recent studies have presented evidence for tension between the constraints on Ωm and σ8 from the cosmic microwave background (CMB) and measurements of large-scale structure (LSS). This tension can potentially be resolved by appealing to extensions of the standard model of cosmology and/or untreated systematic errors in the modelling of LSS, of which baryonic physics has been frequently suggested. We revisit this tension using, for the first time, carefully-calibrated cosmological hydrodynamical simulations, which thus capture the back reaction of the baryons on the total matter distribution. We have extended the BAHAMAS simulations to include a treatment of massive neutrinos, which currently represents the best motivated extension to the standard model. We make synthetic thermal Sunyaev-Zel'dovich effect, weak galaxy lensing, and CMB lensing maps and compare to observed auto- and cross-power spectra from a wide range of recent observational surveys. We conclude that: i) in general there is tension between the primary CMB and LSS when adopting the standard model with minimal neutrino mass; ii) after calibrating feedback processes to match the gas fractions of clusters, the remaining uncertainties in the baryonic physics modelling are insufficient to reconcile this tension; and iii) if one accounts for internal tensions in the Planck CMB dataset (by allowing the lensing amplitude, ALens, to vary), invoking a non-minimal neutrino mass, typically of 0.2-0.4 eV, can resolve the tension. This solution is fully consistent with separate constraints from the primary CMB and baryon acoustic oscillations.

  15. Large Scale Magnetic Fields: Density Power Spectrum in Redshift ...

    Indian Academy of Sciences (India)

    … g1(t)g2(t)⟨v_dz(−k) v_cz(k)⟩ (equation (22)). Here P(k) = ⟨δ(k)δ(−k)⟩ is the real-space power spectrum; it is derived in Appendix A. f(t) gives the evolution of the density perturbations (equation (11) and Fig. 1). The correlations involving the divergence part of the velocity fields can be readily written using the continuity equation (equation (2)).

  16. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    Science.gov (United States)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew

    2017-11-01

    We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy-galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.
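A core ingredient of the halo occupation distribution models that Halotools implements is the mean central-galaxy occupation as a function of halo mass. The standard Zheng et al. (2007) "erf" form is sketched below as a generic illustration of that functional form, not of Halotools' API; the parameter values are hypothetical:

```python
import math

def mean_ncen(log10_mhalo, log10_mmin=12.0, sigma_logm=0.25):
    """Mean number of central galaxies per halo in the Zheng07 HOD form:
    <Ncen> = 0.5 * [1 + erf((log10 M - log10 Mmin) / sigma_logM)].
    Rises smoothly from 0 to 1 around the characteristic mass Mmin."""
    return 0.5 * (1.0 + math.erf((log10_mhalo - log10_mmin) / sigma_logm))

for logm in (11.0, 12.0, 13.0):
    print(f"log10 M = {logm}: <Ncen> = {mean_ncen(logm):.3f}")
```

In a package like Halotools this mean occupation is one interchangeable component; swapping it, or adding satellite occupation and profile components, changes the mock galaxy population without touching the rest of the pipeline.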

  17. Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools

    Energy Technology Data Exchange (ETDEWEB)

    Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew

    2017-10-18

    We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy–galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.

  18. North Atlantic explosive cyclones and large scale atmospheric variability modes

    Science.gov (United States)

    Liberato, Margarida L. R.

    2015-04-01

    Extreme windstorms are among the major natural catastrophes in the extratropics, one of the most costly natural hazards in Europe, and are responsible for substantial economic damage and even fatalities. During the last decades Europe witnessed major damage from winter storms such as Lothar (December 1999), Kyrill (January 2007), Klaus (January 2009), Xynthia (February 2010), Gong (January 2013) and Stephanie (February 2014), which exhibited uncommon characteristics. In fact, most of these storms crossed the Atlantic towards Europe, experiencing explosive development at unusually low latitudes along the edge of the dominant North Atlantic storm track and reaching Iberia with uncommon intensity (Liberato et al., 2011; 2013; Liberato 2014). Results show that the explosive cyclogenesis of most of these storms at such low latitudes is driven by: (i) the southerly displacement of a very strong polar jet stream; and (ii) the presence of an atmospheric river (AR), that is, a (sub)tropical moisture export over the western and central (sub)tropical Atlantic which converges into the cyclogenesis region and then moves along with the storm towards Iberia. Previous studies have pointed to a link between the North Atlantic Oscillation (NAO) and intense European windstorms. On the other hand, the NAO exerts a decisive control on the average latitudinal location of the jet stream over the North Atlantic basin (Woollings et al. 2010). In this work the link between North Atlantic explosive cyclogenesis, atmospheric rivers and large-scale atmospheric variability modes is reviewed and discussed. Liberato MLR (2014) The 19 January 2013 windstorm over the North Atlantic: Large-scale dynamics and impacts on Iberia. Weather and Climate Extremes, 5-6, 16-28. doi: 10.1016/j.wace.2014.06.002. Liberato MLR, Pinto JG, Trigo IF, Trigo RM (2011) Klaus - an exceptional winter storm over Northern Iberia and Southern France. Weather 66:330-334. doi: 10.1002/wea.755. Liberato

  19. Large-scale CO2 storage — Is it feasible?

    Science.gov (United States)

    Johansen, H.

    2013-06-01

    CCS is generally estimated to have to account for about 20% of the reduction of CO2 emissions to the atmosphere. This paper focuses on the technical aspects of CO2 storage, even if the CCS challenge is equally dependent upon finding viable international solutions to a wide range of economic, political and cultural issues. It has already been demonstrated that it is technically possible to store adequate amounts of CO2 in the subsurface (Sleipner, In Salah, Snøhvit). The large-scale storage challenge (several gigatons of CO2 per year) is more an issue of minimizing cost without compromising safety, and of establishing international regulations. The storage challenge may be split into four main parts: 1) finding reservoirs with adequate storage capacity, 2) making sure that the sealing capacity above the reservoir is sufficient, 3) building the infrastructure for transport, drilling and injection, and 4) setting up and performing the necessary monitoring activities. More than 150 years of worldwide experience from the production of oil and gas is an important source of competence for CO2 storage. The storage challenge is, however, different in three important aspects: 1) the storage activity results in a pressure increase in the subsurface, 2) there is no production of fluids that gives important feedback on reservoir performance, and 3) the monitoring requirement will have to extend much longer into the future than is needed during oil and gas production. An important property of CO2 is that its behaviour in the subsurface is significantly different from that of oil and gas. CO2 in contact with water is reactive and corrosive, and may cause great damage to both man-made and natural materials if proper precautions are not taken. On the other hand, the long-term effect of most of these reactions is that a large amount of CO2 will become immobilized and permanently stored as solid carbonate minerals. The reduced opportunity for direct monitoring of fluid samples close to the

  20. Large-scale CO2 storage — Is it feasible?

    Directory of Open Access Journals (Sweden)

    Johansen H.

    2013-06-01

    CCS is generally estimated to have to account for about 20% of the reduction of CO2 emissions to the atmosphere. This paper focuses on the technical aspects of CO2 storage, even if the CCS challenge is equally dependent upon finding viable international solutions to a wide range of economic, political and cultural issues. It has already been demonstrated that it is technically possible to store adequate amounts of CO2 in the subsurface (Sleipner, In Salah, Snøhvit). The large-scale storage challenge (several gigatons of CO2 per year) is more an issue of minimizing cost without compromising safety, and of establishing international regulations. The storage challenge may be split into four main parts: 1) finding reservoirs with adequate storage capacity, 2) making sure that the sealing capacity above the reservoir is sufficient, 3) building the infrastructure for transport, drilling and injection, and 4) setting up and performing the necessary monitoring activities. More than 150 years of worldwide experience from the production of oil and gas is an important source of competence for CO2 storage. The storage challenge is, however, different in three important aspects: 1) the storage activity results in a pressure increase in the subsurface, 2) there is no production of fluids that gives important feedback on reservoir performance, and 3) the monitoring requirement will have to extend much longer into the future than is needed during oil and gas production. An important property of CO2 is that its behaviour in the subsurface is significantly different from that of oil and gas. CO2 in contact with water is reactive and corrosive, and may cause great damage to both man-made and natural materials if proper precautions are not taken. On the other hand, the long-term effect of most of these reactions is that a large amount of CO2 will become immobilized and permanently stored as solid carbonate minerals. The reduced opportunity for direct monitoring of fluid samples