WorldWideScience

Sample records for air reveals large-scale

  1. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.;

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  2. Large scale air monitoring: Biological indicators versus air particulate matter

    International Nuclear Information System (INIS)

    Biological indicator organisms have been widely used for monitoring and banking purposes for many years. Although the complexity of the interactions between bioorganisms and their environment is generally not easily comprehensible, environmental quality assessment using the bioindicator approach offers some convincing advantages over direct analysis of soil, water, or air. Direct measurement of air particulates is restricted to experienced laboratories with access to expensive sampling equipment. Additionally, the amount of material collected is generally just enough for one determination per sampling, so no multidimensional characterization may be possible. Further, fluctuations in air masses have a pronounced effect on the results of air filter sampling. Combining the integrating property of bioindicators with the worldwide availability and uniform matrix characteristics of air particulates, as a prerequisite for global monitoring of air pollution, is discussed. A new approach for sampling urban dust using large-volume filtering devices installed in the air conditioners of large hotel buildings is assessed. A first experiment was initiated to collect air particulates (300 to 500 g each) from a number of hotels over a period of three to four months, by successive vacuum cleaning of used inlet filters from high-volume air-conditioning installations, reflecting average concentrations per three months in different large cities. This approach is expected to be upgraded and applied for global monitoring. Highly positively correlated elements were found in lichen, such as K/S, Zn/P, and the rare earth elements (REE), and a significant negative correlation between Fig and Cu was observed in these samples. The ratio of element concentrations in dust to those in Usnea spp. is highest for Cr, Zn, and Fe (400-200) and lowest for elements such as Ca, Rb, and Sr (20-10). (author)

  3. Conclusions of the NATO ARW on Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  4. Running Large-Scale Air Pollution Models on Parallel Computers

    DEFF Research Database (Denmark)

    Georgiev, K.; Zlatev, Z.

    2000-01-01

    Proceedings of the 23rd NATO/CCMS International Technical Meeting on Air Pollution Modeling and Its Application, held 28 September - 2 October 1998, in Varna, Bulgaria.

  5. Dynamic behaviour of air valves in a large-scale pipeline apparatus

    OpenAIRE

    Bergant, Anton; Kruisbrink, Arno; Arregui, F.

    2015-01-01

    This paper describes an experimental programme on the dynamic behaviour of air valves performed in a large-scale pipeline apparatus. Dynamic flow tests were performed at large (full) scale, since previous quasi-steady flow tests at small scale did not lead to realistic results. Investigations in a large-scale pipeline apparatus lead to a better understanding of the physical processes associated with the dynamic performance of air valves. Float type air valves of nominal diameter of 50 and 100...

  6. Changes in large-scale air circulation and connection with climate variables in Romania

    Science.gov (United States)

    Stefan, Sabina; Barbu, Nicu

    2016-04-01

    The aim of this paper is to analyse the relationship between climate variables (seasonal mean air temperature, T2m, and seasonal precipitation amount, PP) and large-scale air circulation. To this end, air circulation types were derived from the GrossWetterTypen (GWT) and WetterLagenKlassifikation (WLK) catalogues developed within the COST733 framework. Daily air circulation types are divided into 18 groups according to the GWT catalogue and 40 groups according to the WLK catalogue, and for each type the winter (DJF) and summer (JJA) occurrence frequencies were calculated. Pearson correlation coefficients between the climate variables and the circulation-type frequencies were then computed. The results reveal that in wintertime the GWT circulation types capture the T2m variability better than the WLK types, whereas in summer the WLK circulation types capture it better than the GWT types. This is due to the seasonal variability of the horizontal extension of air masses. We found that T2m is positively correlated with anticyclonic circulation types and negatively correlated with cyclonic types, while PP is positively correlated with cyclonic circulation and negatively correlated with anticyclonic types. Additionally, the significance of trends in the climate variables, as well as in the circulation types, was analysed with the non-parametric Mann-Kendall test, and changes in the trends were detected with the non-parametric Pettitt test. The trend analysis shows that some of the anticyclonic circulation types present an upward tendency and some of the cyclonic types a downward tendency. This is an important result because it explains the upward trend of T2m and the downward trend of PP.
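The statistical workflow this abstract describes (Pearson correlation between seasonal circulation-type frequencies and a climate variable, plus a Mann-Kendall trend test) can be sketched as follows. This is an illustrative sketch on synthetic data, not the authors' code: the series and numbers are invented, and the Mann-Kendall implementation below omits the tie correction.

```python
import numpy as np
from scipy.stats import pearsonr, norm

def mann_kendall(series):
    """Mann-Kendall trend test (no tie correction): returns (S, z, two-sided p)."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    return s, z, p

# Hypothetical seasonal series, one value per winter (stand-ins for real data):
rng = np.random.default_rng(0)
anticyclonic_freq = rng.normal(40, 5, 50)              # days per season with anticyclonic types
t2m = 0.3 * anticyclonic_freq + rng.normal(0, 1, 50)   # warmer winters under anticyclonic flow

r, p_corr = pearsonr(anticyclonic_freq, t2m)           # positive correlation expected
s, z, p_trend = mann_kendall(np.arange(50) * 0.02 + rng.normal(0, 0.1, 50))  # warming-like trend
print(r, p_corr, p_trend)
```

On these synthetic inputs the correlation is strongly positive and the upward trend is significant; the Pettitt change-point test mentioned in the abstract would be applied to the same seasonal series.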

  7. The application of liquid air energy storage for large scale long duration solutions to grid balancing

    OpenAIRE

    Brett Gareth; Barnett Matthew

    2014-01-01

    Liquid Air Energy Storage (LAES) provides large scale, long duration energy storage at the point of demand in the 5 MW/20 MWh to 100 MW/1,000 MWh range. LAES combines mature components from the industrial gas and electricity industries assembled in a novel process and is one of the few storage technologies that can be delivered at large scale, with no geographical constraints. The system uses no exotic materials or scarce resources and all major components have a proven lifetime of 25+ years....

  8. The influence of large-scale structures on entrainment in a decelerating transient turbulent jet revealed by large eddy simulation

    Science.gov (United States)

    Hu, Bing; Musculus, Mark P. B.; Oefelein, Joseph C.

    2012-04-01

    To provide a better understanding of the fluid mechanical mechanisms governing entrainment in decelerating jets, we performed a large eddy simulation (LES) of a transient air jet. The ensemble-averaged LES calculations agree well with the available measurements of centerline velocity, and they reveal a region of increased entrainment that grows as it propagates downstream during deceleration. Within the temporal and spatial domains of the simulation, entrainment during deceleration temporarily increases by roughly a factor of two over that of the quasi-steady jet, and thereafter decays to a level lower than the quasi-steady jet. The LES results also provide large-structure flow details that lend insight into the effects of deceleration on entrainment. The simulations show greater growth and separation of large vortical structures during deceleration. Ambient fluid is engulfed into the gaps between the large-scale structures, causing large-scale indentations in the scalar jet boundary. The changes in the growth and separation of large structures during deceleration are attributed to changes in the production and convection of vorticity. Both the absolute and normalized scalar dissipation rates decrease during deceleration, implying that changes in small-scale mixing during deceleration do not play an important role in the increased entrainment. Hence, the simulations predict that entrainment in combustion devices may be controlled by manipulating the fuel-jet boundary conditions, which affect structures at large scales much more than at small scales.

  9. Overlapping communities reveal rich structure in large-scale brain networks during rest and task conditions.

    Science.gov (United States)

    Najafi, Mahshid; McMenamin, Brenton W; Simon, Jonathan Z; Pessoa, Luiz

    2016-07-15

    Large-scale analysis of functional MRI data has revealed that brain regions can be grouped into stable "networks" or communities. In many instances, the communities are characterized as relatively disjoint. Although recent work indicates that brain regions may participate in multiple communities (for example, hub regions), the extent of community overlap is poorly understood. To address these issues, here we investigated large-scale brain networks based on "rest" and task human functional MRI data by employing a mixed-membership Bayesian model that allows each brain region to belong to all communities simultaneously with varying membership strengths. The approach allowed us to 1) compare the structure of disjoint and overlapping communities; 2) determine the relationship between functional diversity (how diverse is a region's functional activation repertoire) and membership diversity (how diverse is a region's affiliation to communities); 3) characterize overlapping community structure; 4) characterize the degree of non-modularity in brain networks; 5) study the distribution of "bridges", including bottleneck and hub bridges. Our findings revealed the existence of dense community overlap that was not limited to "special" hubs. Furthermore, the findings revealed important differences between community organization during rest and during specific task states. Overall, we suggest that dense overlapping communities are well suited to capture the flexible and task dependent mapping between brain regions and their functions. PMID:27129758
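The abstract's "membership diversity" (how diverse a region's affiliation to communities is) is not defined in detail here; one natural reading is the normalized entropy of a region's membership vector from the mixed-membership model. The sketch below assumes that reading, and the membership vectors are invented for illustration.

```python
import numpy as np

def membership_diversity(memberships):
    """Normalized entropy of a region's community-membership vector.
    0 = the region belongs to a single community; 1 = uniform membership across all."""
    m = np.asarray(memberships, dtype=float)
    m = m / m.sum()                      # normalize to a probability vector
    nz = m[m > 0]                        # 0 * log(0) is taken as 0
    entropy = -(nz * np.log(nz)).sum()
    return entropy / np.log(len(m))      # divide by max entropy for len(m) communities

# Hypothetical 4-community membership vectors:
hub = membership_diversity([0.25, 0.25, 0.25, 0.25])       # dense overlap: diversity near 1
unimodal = membership_diversity([0.97, 0.01, 0.01, 0.01])  # essentially disjoint: near 0
print(hub, unimodal)
```

A region with dense community overlap scores near 1, while a region affiliated almost entirely with one community scores near 0, which is one way to quantify the contrast between overlapping and disjoint community structure discussed above.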

  10. Large-Scale Atmospheric Variability in AIRS CO2 and O3

    Science.gov (United States)

    Li, Q.; Jiang, X.; Chahine, M.; Yung, Y.; Olsen, E.; Chen, L.

    2006-12-01

    We present a modeling analysis of carbon dioxide (CO2) and ozone (O3) from AIRS, with results from two atmospheric chemistry and transport models (CTMs), in the context of large-scale atmospheric transport. AIRS data from selected periods in 2003 are retrieved by applying the Vanishing Partial Derivative (VPD) method (Chahine et al. [GRL, 2005] and the presentation by Chahine et al., this meeting). Corresponding model results are simulated by 2-D and 3-D atmospheric CTMs. The AIRS-retrieved and model-simulated CO2 mixing ratios, averaged over 300-500 hPa, are compared with the Matsueda et al. observations in the tropics between 9 and 13 km (see the presentation by Jiang et al., this meeting). The latitudinal distributions of O3, both retrieved and simulated, are compared with ozonesonde data. Both comparisons show reasonable agreement. We then examine the spatiotemporal variabilities of CO2 and O3 and their correlation, both in the AIRS data and in the model results. Our objective is to better understand the AIRS-observed atmospheric variability in CO2 that is associated with underlying large-scale atmospheric transport, particularly the stratosphere-troposphere exchange (STE) at northern high latitudes in spring and the Asian monsoon summer circulation over South Asia.

  11. Thermal System Analysis and Optimization of Large-Scale Compressed Air Energy Storage (CAES)

    Directory of Open Access Journals (Sweden)

    Zhongguang Fu

    2015-08-01

    As an important solution to issues regarding peak load and renewable energy resources on grids, large-scale compressed air energy storage (CAES) power generation technology has recently become a popular research topic in the area of large-scale industrial energy storage. At present, the combination of high-expansion-ratio turbines with advanced gas turbine technology is an important breakthrough in energy storage technology. In this study, a new gas turbine power generation system is coupled with current CAES technology, and the thermodynamic cycle is optimized by calculating the parameters of the thermodynamic system. Results show that the thermal efficiency of the new system increases by at least 5% over that of the existing system.

  12. Progress of Large-Scale Air-Sea Interaction Studies in China

    Institute of Scientific and Technical Information of China (English)

    蒲书箴; 赵进平; 于卫东; 赵永平; 杨波

    2004-01-01

    This paper summarizes the progress of large-scale air-sea interaction studies achieved in China in the four-year period from July 1998 to July 2002, covering seven aspects of air-sea interaction: air-sea interaction related to the tropical Pacific Ocean, monsoon-related air-sea interaction, air-sea interaction in the North Pacific Ocean, air-sea interaction in the Indian Ocean, air-sea interactions in the global oceans, field experiments, and oceanic cruise surveys. More attention is paid to the first two aspects, because a large number of the papers in the reference literature used in preparing this paper concentrate on the tropical Pacific Ocean, such as the ENSO process with its climatic effects and dynamics, and on monsoon-related air-sea interaction. The literature also involves various phenomena on different temporal and spatial scales, such as intraseasonal, annual, interannual, and interdecadal variabilities in the atmosphere/ocean interaction system, reflecting the contemporary themes of this four-year period at the beginning of the era from post-TOGA to CLIVAR studies. Summarizing the great progress in this area from such a large quantity of literature is admittedly a difficult task, although the authors have tried very hard.

  13. A Feasibility Study on Operating Large Scale Compressed Air Energy Storage in Porous Formations

    Science.gov (United States)

    Wang, B.; Pfeiffer, W. T.; Li, D.; Bauer, S.

    2015-12-01

    Compressed air energy storage (CAES) in porous formations has been considered a promising option for large-scale energy storage for decades. This study aims at analyzing the feasibility of operating large-scale CAES in porous formations and evaluating the performance of underground porous gas reservoirs. To address these issues quantitatively, a hypothetical CAES scenario with a typical anticline structure in northern Germany was numerically simulated. Because of the rapid growth in photovoltaics, the extraction periods of the daily cycle were set to the early morning and the late afternoon, in order to bypass the massive solar energy production around noon. The gas turbine scenario was defined with reference to the specifications of the Huntorf CAES power plant. The numerical simulations involved two stages, i.e. initial fill and cyclic operation, and both were carried out using the Eclipse E300 simulator (Schlumberger). Pressure loss in the gas wells was post-analyzed using an analytical solution. The exergy concept was applied to evaluate the amount of energy that can be stored in the specific porous formation. The simulation results show that porous formations are a feasible option for large-scale CAES. The initial fill with shut-in periods determines the spatial distribution of the gas phase and helps to achieve higher gas saturation around the wells, and thus higher deliverability. The performance evaluation shows that the overall exergy flow of the stored compressed air is also determined by the permeability, which directly affects the deliverability of the gas reservoir and thus the number of wells required.

  14. Maps on large-scale air quality concentrations in the Netherlands. Report on 2008

    International Nuclear Information System (INIS)

    In the Netherlands, the number of locations where the European limit values for particulate matter and nitrogen dioxide will be exceeded is expected to decrease by 70-90% in the periods up to 2011 and 2015, respectively. Based on standing and proposed Dutch and European policies, the limit value for particulate matter (from 2011 onwards) and for nitrogen dioxide (from 2015 onwards) is expected to be exceeded at only a small number of locations in the Netherlands. These locations are situated mainly in the Randstad, in the vicinity of motorways around the large cities and in the busiest streets in large cities. Whether the limit values will actually be exceeded also depends on local policies and meteorological fluctuations. This estimate is based on large-scale concentration maps (GCN maps) of air quality components and on additional local contributions. The concentration maps provide the best possible estimate of large-scale air quality. The degree of uncertainty in the local concentrations of particulate matter and nitrogen dioxide is estimated at approximately 20%. This report presents the methods used to produce the GCN maps and the emissions included. It also shows the differences with respect to the maps of 2007. These maps are used by local, provincial and other authorities. MNP emphasises that the uncertainties in the concentrations must be kept in mind when using these maps for planning, or when comparing concentrations with limit values. This also applies to the selection of local measures to improve air quality. The concentration maps are available online at http://www.mnp.nl/gcn.html

  15. Maps on large-scale air quality concentrations in the Netherlands. Report on 2009

    International Nuclear Information System (INIS)

    In the Netherlands, the number of locations where the European limit values for particulate matter and nitrogen dioxide concentrations could be exceeded is lower than was estimated last year. Based on standing and proposed national and European policies, the limit value for particulate matter (from 2011 onwards) may be exceeded at only a few locations in the Netherlands. These locations are situated mainly in the Randstad area, in the vicinity of motorways around the large cities, and close to stables in agricultural areas. The limit value for nitrogen dioxide (from 2015 onwards) may be exceeded along 100 kilometres of roads in cities and along 50 kilometres of motorways. Whether the limit values will actually be exceeded also depends on local policies and meteorological fluctuations. This estimate was based on large-scale concentration maps (GCN maps) of air quality components, and on additional local contributions. The concentration maps provided the best possible estimate of large-scale air quality. The degree of uncertainty in local concentrations of particulate matter and nitrogen dioxide was estimated at approximately 15 to 20%. This report presents the methods and emissions used for producing the GCN maps. It also shows the differences with respect to the maps of 2008. These maps are used by local, provincial and other authorities to define additional local measures. PBL emphasises that uncertainties in the concentrations must be kept in mind when using these maps for planning, or when comparing concentrations with limit values. This also applies to the selection of local measures to improve air quality. The concentration maps are available online at http://www.pbl.nl/gcn

  16. The application of liquid air energy storage for large scale long duration solutions to grid balancing

    Directory of Open Access Journals (Sweden)

    Brett Gareth

    2014-01-01

    Liquid Air Energy Storage (LAES) provides large-scale, long-duration energy storage at the point of demand in the 5 MW/20 MWh to 100 MW/1,000 MWh range. LAES combines mature components from the industrial gas and electricity industries, assembled in a novel process, and is one of the few storage technologies that can be delivered at large scale, with no geographical constraints. The system uses no exotic materials or scarce resources, and all major components have a proven lifetime of 25+ years. The system can also integrate low-grade waste heat to increase power output. Highview Power Storage, founded in 2005, is a UK-based developer of LAES. The company has taken the concept from academic analysis, through laboratory testing, and in 2011 commissioned the world's first fully integrated system at pilot-plant scale (300 kW/2.5 MWh), hosted at SSE's (Scottish & Southern Energy) 80 MW biomass plant in Greater London and partly funded by a Department of Energy and Climate Change (DECC) grant. Highview is now working with commercial customers to deploy multi-MW commercial reference plants in the UK and abroad.

  17. An air-liquid contactor for large-scale capture of CO2 from air.

    Science.gov (United States)

    Holmes, Geoffrey; Keith, David W

    2012-09-13

    We present a conceptually simple method for optimizing the design of a gas-liquid contactor for the capture of carbon dioxide from ambient air, or 'air capture'. We apply the method to a slab-geometry contactor that uses components, designs and fabrication methods derived from cooling towers. We use mass transfer data appropriate for capture using a strong NaOH solution, combined with engineering and cost data derived from engineering studies performed by Carbon Engineering Ltd, and find that the total costs for air contacting alone (no regeneration) can be of the order of $60 per tonne of CO2. We analyse the reasons why our cost estimate diverges from those of other recent reports and conclude that the divergence arises from fundamental design choices rather than from differences in costing methodology. Finally, we review the technology risks and conclude that they can be readily addressed by prototype testing.

  18. Relationship between climate extremes in Romania and their connection to large-scale air circulation

    Science.gov (United States)

    Barbu, Nicu; Ştefan, Sabina

    2015-04-01

    The aim of this paper is to investigate the connection between climate extremes (temperature and precipitation) in Romania and large-scale air circulation. Daily observational data of maximum air temperature and precipitation amount for the period 1961-2010 were used to compute two seasonal indices quantifying the frequency of extremes: the frequency of very warm days (FTmax90; daily maximum temperature ≥ 90th percentile) and the frequency of very wet days (FPp90; daily precipitation amount ≥ 90th percentile). Seasonal frequencies of circulation types were calculated from daily circulation types determined using two objective catalogues (GWT, GrossWetterTypen, and WLK, WetterLagenKlassifikation) from the COST733 Action. Daily reanalysis data sets (sea level pressure, geopotential height at 925 and 500 hPa, u and v wind components at 700 hPa, and precipitable water content for the entire atmospheric column) produced by NCEP/NCAR, with 2.5° x 2.5° lat/lon spatial resolution, were used to determine the circulation types. The explained variance (EV), which relates the variance among circulation types to the total variance of the variable under consideration and thus quantifies the discriminatory power of a classification, was used to select the optimal domain size for FTmax90 and FPp90. The relationships between climate extremes in Romania and large-scale air circulation were investigated using a multiple linear regression model (MLRM), with FTmax90 and FPp90 as predictands and the circulation types as predictors. Collinearity and multicollinearity analyses were performed to select independent predictors for the MLRM. The study period was divided in two: 1961-2000 was used to train the MLRM, and 2001-2010 to validate it. The analytical relationship obtained with the MLRM can be used for future projection
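The train/validate setup described in this abstract (a multiple linear regression fitted on 1961-2000 and validated on 2001-2010, with circulation-type frequencies as predictors) might look like the following sketch. All data here are synthetic placeholders: the real predictors would be the GWT/WLK type frequencies and the predictands FTmax90/FPp90.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical predictors: seasonal frequencies of 5 circulation types over 50 seasons (1961-2010).
freq = rng.poisson(18, size=(50, 5)).astype(float)
true_beta = np.array([0.4, -0.2, 0.0, 0.3, -0.1])
# Hypothetical predictand (e.g. FTmax90): linear response to type frequencies plus noise.
ftmax90 = freq @ true_beta + rng.normal(0, 1.0, 50)

# Train on the first 40 seasons (1961-2000), as in the study.
X_train = np.column_stack([np.ones(40), freq[:40]])        # intercept + predictors
beta, *_ = np.linalg.lstsq(X_train, ftmax90[:40], rcond=None)

# Validate on the last 10 seasons (2001-2010).
X_val = np.column_stack([np.ones(10), freq[40:]])
pred = X_val @ beta
rmse = np.sqrt(np.mean((pred - ftmax90[40:]) ** 2))
print(beta.round(2), rmse)
```

In practice the predictor columns would first be screened for collinearity (as the abstract notes) before fitting, e.g. by dropping types whose frequencies are strongly intercorrelated.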

  19. Observational Features of Large-Scale Structures as Revealed by the Catastrophe Model of Solar Eruptions

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Large-scale magnetic structures are the main carriers of major eruptions in the solar atmosphere. These structures are rooted in the photosphere and are driven by the unceasing motion of the photospheric material through a series of equilibrium configurations. The motion brings energy into the coronal magnetic field until the system ceases to be in equilibrium. The catastrophe theory for solar eruptions indicates that loss of mechanical equilibrium constitutes the main trigger mechanism of major eruptions, usually showing up as solar flares, eruptive prominences, and coronal mass ejections (CMEs). Magnetic reconnection, which takes place at the very beginning of the eruption as a result of plasma instabilities/turbulence inside the current sheet, converts magnetic energy into the heating and kinetic energy that are responsible for solar flares, and for accelerating both plasma ejecta (flows and CMEs) and energetic particles. The various manifestations are thus related to one another, and the physics behind these relationships is catastrophe and magnetic reconnection. This work reports on recent progress in both theoretical research and observations of eruptive phenomena showing the above manifestations. We start by displaying the properties of large-scale structures in the corona and the related magnetic fields prior to an eruption, and show various morphological features of the disrupting magnetic fields. Then, in the framework of the catastrophe theory, we look into the physics behind those features, investigated in a succession of previous works, and discuss the approaches they used.

  20. Large-scale mitochondrial DNA analysis of the domestic goat reveals six haplogroups with high diversity.

    Directory of Open Access Journals (Sweden)

    Saeid Naderi

    BACKGROUND: From the beginning of domestication, the transportation of domestic animals resulted in genetic and demographic processes that explain their present distribution and genetic structure. Studying present-day genetic diversity thus helps to better understand the history of domestic species. METHODOLOGY/PRINCIPAL FINDINGS: The genetic diversity of domestic goats has been characterized with 2430 individuals from all over the Old World, including 946 new individuals from regions poorly studied until now (mainly the Fertile Crescent). These individuals represented 1540 haplotypes for the HVI segment of the mitochondrial DNA (mtDNA) control region. This large-scale study allowed the establishment of a clear nomenclature of the goat maternal haplogroups. Only five of the six previously defined groups of haplotypes were divergent enough to be considered different haplogroups. Moreover, a new mitochondrial group has been localized around the Fertile Crescent. All groups showed very high haplotype diversity. Most of this diversity was distributed among groups and within geographic regions. The weak geographic structure may result from the worldwide distribution of the dominant A haplogroup (more than 90% of the individuals). The large-scale distribution of the other haplogroups (except one) may be related to human migration. The recent fragmentation of local goat populations into discrete breeds is not detectable with mitochondrial markers. The estimation of demographic parameters from mismatch analyses showed that all groups had a recent demographic expansion, corresponding roughly to the period when domestication took place. But even with a large data set it remains difficult to give relative dates of expansion for different haplogroups, because of large confidence intervals. CONCLUSIONS/SIGNIFICANCE: We propose standard criteria for the definition of the different haplogroups based on the result of mismatch analysis and on the use of sequences of

  1. scMRI reveals large-scale brain network abnormalities in autism.

    Directory of Open Access Journals (Sweden)

    Brandon A Zielinski

    Autism is a complex neurological condition characterized by childhood onset of dysfunction in multiple cognitive domains including socio-emotional function, speech and language, and processing of internally versus externally directed stimuli. Although gross brain anatomic differences in autism are well established, recent studies investigating regional differences in brain structure and function have yielded divergent and seemingly contradictory results. How regional abnormalities relate to the autistic phenotype remains unclear. We hypothesized that autism exhibits distinct perturbations in network-level brain architecture, and that cognitive dysfunction may be reflected by abnormal network structure. Network-level anatomic abnormalities in autism have not been previously described. We used structural covariance MRI to investigate network-level differences in gray matter structure within two large-scale networks strongly implicated in autism, the salience network and the default mode network, in autistic subjects and age-, gender-, and IQ-matched controls. We report specific perturbations in brain network architecture in the salience and default-mode networks consistent with clinical manifestations of autism. Extent and distribution of the salience network, involved in social-emotional regulation of environmental stimuli, is restricted in autism. In contrast, posterior elements of the default mode network have increased spatial distribution, suggesting a 'posteriorization' of this network. These findings are consistent with a network-based model of autism, and suggest a unifying interpretation of previous work. Moreover, we provide evidence of specific abnormalities in brain network architecture underlying autism that are quantifiable using standard clinical MRI.

  2. Diversity and relationships of cocirculating modern human rotaviruses revealed using large-scale comparative genomics.

    Science.gov (United States)

    McDonald, Sarah M; McKell, Allison O; Rippinger, Christine M; McAllen, John K; Akopov, Asmik; Kirkness, Ewen F; Payne, Daniel C; Edwards, Kathryn M; Chappell, James D; Patton, John T

    2012-09-01

    Group A rotaviruses (RVs) are 11-segmented, double-stranded RNA viruses and are primary causes of gastroenteritis in young children. Despite their medical relevance, the genetic diversity of modern human RVs is poorly understood, and the impact of vaccine use on circulating strains remains unknown. In this study, we report the complete genome sequence analysis of 58 RVs isolated from children with severe diarrhea and/or vomiting at Vanderbilt University Medical Center (VUMC) in Nashville, TN, during the years spanning community vaccine implementation (2005 to 2009). The RVs analyzed include 36 G1P[8], 18 G3P[8], and 4 G12P[8] Wa-like genogroup 1 strains with VP6-VP1-VP2-VP3-NSP1-NSP2-NSP3-NSP4-NSP5/6 genotype constellations of I1-R1-C1-M1-A1-N1-T1-E1-H1. By constructing phylogenetic trees, we identified 2 to 5 subgenotype alleles for each gene. The results show evidence of intragenogroup gene reassortment among the cocirculating strains. However, several isolates from different seasons maintained identical allele constellations, consistent with the notion that certain RV clades persisted in the community. By comparing the genes of VUMC RVs to those of other archival and contemporary RV strains for which sequences are available, we defined phylogenetic lineages and verified that the diversity of the strains analyzed in this study reflects that seen in other regions of the world. Importantly, the VP4 and VP7 proteins encoded by VUMC RVs and other contemporary strains show amino acid changes in or near neutralization domains, which might reflect antigenic drift of the virus. Thus, this large-scale, comparative genomic study of modern human RVs provides significant insight into how this pathogen evolves during its spread in the community. PMID:22696651

  3. Diversity and Relationships of Cocirculating Modern Human Rotaviruses Revealed Using Large-Scale Comparative Genomics

    Science.gov (United States)

    McDonald, Sarah M.; McKell, Allison O.; Rippinger, Christine M.; McAllen, John K.; Akopov, Asmik; Kirkness, Ewen F.; Payne, Daniel C.; Edwards, Kathryn M.; Chappell, James D.; Patton, John T.

    2012-01-01

    Group A rotaviruses (RVs) are 11-segmented, double-stranded RNA viruses and are primary causes of gastroenteritis in young children. Despite their medical relevance, the genetic diversity of modern human RVs is poorly understood, and the impact of vaccine use on circulating strains remains unknown. In this study, we report the complete genome sequence analysis of 58 RVs isolated from children with severe diarrhea and/or vomiting at Vanderbilt University Medical Center (VUMC) in Nashville, TN, during the years spanning community vaccine implementation (2005 to 2009). The RVs analyzed include 36 G1P[8], 18 G3P[8], and 4 G12P[8] Wa-like genogroup 1 strains with VP6-VP1-VP2-VP3-NSP1-NSP2-NSP3-NSP4-NSP5/6 genotype constellations of I1-R1-C1-M1-A1-N1-T1-E1-H1. By constructing phylogenetic trees, we identified 2 to 5 subgenotype alleles for each gene. The results show evidence of intragenogroup gene reassortment among the cocirculating strains. However, several isolates from different seasons maintained identical allele constellations, consistent with the notion that certain RV clades persisted in the community. By comparing the genes of VUMC RVs to those of other archival and contemporary RV strains for which sequences are available, we defined phylogenetic lineages and verified that the diversity of the strains analyzed in this study reflects that seen in other regions of the world. Importantly, the VP4 and VP7 proteins encoded by VUMC RVs and other contemporary strains show amino acid changes in or near neutralization domains, which might reflect antigenic drift of the virus. Thus, this large-scale, comparative genomic study of modern human RVs provides significant insight into how this pathogen evolves during its spread in the community. PMID:22696651
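    The persistence analysis described above amounts to grouping strains by their full 11-segment allele constellation and looking for identical constellations across seasons. A minimal sketch follows; the strain names and allele calls are hypothetical illustrations, not data from the study.

    ```python
    from collections import defaultdict

    # Hypothetical subgenotype allele calls, one integer per genome segment,
    # for a few strains sampled in different seasons.
    strains = {
        "VU05-06-01": (1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1),
        "VU08-09-14": (1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1),  # same constellation, later season
        "VU06-07-07": (2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1),
    }

    # Group strains by their complete allele constellation.
    by_constellation = defaultdict(list)
    for name, alleles in strains.items():
        by_constellation[alleles].append(name)

    # Constellations shared by multiple strains suggest a clade persisting in the community.
    persistent = [names for names in by_constellation.values() if len(names) > 1]
    ```

    Grouping on the full tuple, rather than segment by segment, is what distinguishes a persistent clade from an intragenogroup reassortant that shares only some alleles.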

  4. Large-scale Models Reveal the Two-component Mechanics of Striated Muscle

    Directory of Open Access Journals (Sweden)

    Robert Jarosch

    2008-12-01

    Full Text Available This paper provides a comprehensive explanation of striated muscle mechanics and contraction on the basis of filament rotations. Helical proteins, particularly the coiled-coils of tropomyosin, myosin and α-actinin, shorten their H-bonds cooperatively and produce torque and filament rotations when the Coulombic net-charge repulsion of their highly charged side-chains is diminished by interaction with ions. The classical “two-component model” of active muscle differentiated a “contractile component” which stretches the “series elastic component” during force production. The contractile components are the helically shaped thin filaments of muscle that shorten the sarcomeres by clockwise drilling into the myosin cross-bridges with torque decrease (= force-deficit). Muscle stretch means drawing out the thin filament helices off the cross-bridges under passive counterclockwise rotation with torque increase (= stretch activation). Since each thin filament is anchored by four elastic α-actinin Z-filaments (provided with force-regulating sites for Ca2+ binding), the thin filament rotations change the torsional twist of the four Z-filaments as the “series elastic components”. Large-scale models simulate the changes of structure and force in the Z-band by the different Z-filament twisting stages A, B, C, D, E, F and G. Stage D corresponds to the isometric state. The basic phenomena of muscle physiology, i.e. latency relaxation, the Fenn effect, the force-velocity relation, the length-tension relation, unexplained energy, shortening heat, the Huxley-Simmons phases, etc., are explained and interpreted with the help of the model experiments.

  5. Large-scale transcriptome analyses reveal new genetic marker candidates of head, neck, and thyroid cancer

    DEFF Research Database (Denmark)

    Reis, Eduardo M; Ojopi, Elida P B; Alberto, Fernando L;

    2005-01-01

    curation, pointing to 788 putatively new alternative splicing isoforms, the majority (75%) being insertion events. A subset of 34 new splicing isoforms (5% of 788 events) was selected and 23 (68%) were confirmed by reverse transcription-PCR and DNA sequencing. Putative new genes were revealed, including...... with detailed clinical data about tumor origin, the information reported here is now publicly available on a dedicated Web site as a resource for further biological investigation. This first in silico reconstruction of the head, neck, and thyroid transcriptomes points to a wealth of new candidate markers...

  6. Revealing the Large-Scale Structures of Interstellar Gas Associated with the Magellanic SNR N132D

    CERN Document Server

    Sano, H; Yoshiike, S; Fukuda, T; Tachihara, K; Inutsuka, S; Kawamura, A; Fujii, K; Mizuno, N; Inoue, T; Onishi, T; Acero, F; Vink, J

    2015-01-01

    We report preliminary results on the large-scale distribution of interstellar gas toward the Magellanic supernova remnant N132D, using Mopra and Chandra archival datasets. We identified a cavity-like CO structure along the X-ray shell toward its southern half. The total mass of the associated molecular gas is $\sim10^4 M_\odot$, an order of magnitude smaller than the value from the previous study. Further observations using ALMA, ASTE, and Mopra will reveal the detailed spatial structures and their physical conditions.

  7. Large-scale transcriptome data reveals transcriptional activity of fission yeast LTR retrotransposons

    Directory of Open Access Journals (Sweden)

    Willerslev Eske

    2010-03-01

    Full Text Available Abstract Background Retrotransposons are transposable elements that proliferate within eukaryotic genomes through a process involving reverse transcription. The numbers of retrotransposons within genomes and differences between closely related species may yield insight into the evolutionary history of the elements. Less is known about the ongoing dynamics of retrotransposons, as analysis of genome sequences will only reveal insertions of retrotransposons that are fixed - or near fixation - in the population or strain from which genetic material has been extracted for sequencing. One prerequisite for retrotransposition is transcription of the elements. Given their intrinsic sequence redundancy, transcriptome-level analyses of transposable elements are scarce. We have used recently published transcriptome data from the fission yeast Schizosaccharomyces pombe to assess the ability to detect and describe transcriptional activity from Long Terminal Repeat (LTR) retrotransposons. LTR retrotransposons are normally flanked by two LTR sequences. However, the majority of LTR sequences in S. pombe exist as solitary LTRs, i.e. as single terminal repeat sequences not flanking a retrotransposon. Transcriptional activity was analysed for both full-length LTR retrotransposons and solitary LTRs. Results Two independent sets of transcriptome data reveal the presence of full-length, polyadenylated transcripts from LTR retrotransposons in S. pombe during growth phase in rich medium. The redundancy of retrotransposon sequences makes it difficult to assess which elements are transcriptionally active, but the data strongly indicate that only a subset of the LTR retrotransposons contribute significantly to the detected transcription. A considerable level of reverse-strand transcription is also detected. Equal levels of transcriptional activity are observed from both strands of solitary LTR sequences.
Transcriptome data collected during meiosis suggests that transcription

  8. Important aspects of Eastern Mediterranean large-scale variability revealed from data of three fixed observatories

    Science.gov (United States)

    Bensi, Manuel; Velaoras, Dimitris; Cardin, Vanessa; Perivoliotis, Leonidas; Pethiakis, George

    2015-04-01

    Long-term variations of temperature and salinity observed in the Adriatic and Aegean Seas seem to be regulated by larger-scale circulation modes of the Eastern Mediterranean (EMed) Sea, such as the recently discovered feedback mechanisms described by the BiOS (Bimodal Oscillating System) and the internal thermohaline pump theories. These theories are the result of interpreting many years of observations, highlighting possible interactions between two key regions of the EMed. Although repeated oceanographic cruises carried out in the past or planned for the future are a very useful tool for understanding the interaction between the two basins (e.g. alternating dense water formation, salt ingressions), recent long time series of high-frequency (up to 1 h) sampling have added valuable information to the interpretation of internal mechanisms in both areas (i.e. mesoscale eddies, evolution of fast internal processes, etc.). During the last 10 years, three deep observatories were deployed and maintained in the Adriatic, Ionian, and Aegean Seas: they are, respectively, the E2-M3A, the Pylos, and the E1-M3A. All are part of the largest European network of Fixed Point Open Ocean Observatories (FixO3, http://www.fixo3.eu/). Herein, from the analysis of temperature, salinity, and potential density time series collected at the three sites from the surface down to the intermediate and deep layers, we will discuss the almost perfectly anti-correlated behavior between the Adriatic and the Aegean Seas. Our data, collected almost continuously since 2006, reveal that these observatories represent well the thermohaline variability of their own areas. Interestingly, temperature and salinity in the intermediate layer suddenly increased in the South Adriatic from the end of 2011, exactly when they started decreasing in the Aegean Sea. Moreover, Pylos data used together with additional ones (e.g. absolute dynamic topography, temperature and salinity data from other platforms) collected

  9. Large scale RNAi reveals the requirement of nuclear envelope breakdown for nuclear import of human papillomaviruses.

    Directory of Open Access Journals (Sweden)

    Inci Aydin

    2014-05-01

    Full Text Available A two-step, high-throughput RNAi silencing screen was used to identify host cell factors required during human papillomavirus type 16 (HPV16) infection. Analysis of validated hits implicated a cluster of mitotic genes and revealed a previously undetermined mechanism for import of the viral DNA (vDNA) into the nucleus. In interphase cells, viruses were endocytosed, routed to the perinuclear area, and uncoated, but the vDNA failed to be imported into the nucleus. Upon nuclear envelope perforation in interphase cells, HPV16 infection occurred. During mitosis, the vDNA and L2 associated with host cell chromatin on the metaphase plate. Hence, we propose that HPV16 requires nuclear envelope breakdown during mitosis for access of the vDNA to the nucleoplasm. The results accentuate the value of genes found by RNAi screens for investigation of viral infections. The list of cell functions required during HPV16 infection will, moreover, provide a resource for future virus-host cell interaction studies.

  10. Acoustic telemetry reveals large-scale migration patterns of walleye in Lake Huron

    Science.gov (United States)

    Hayden, Todd A.; Holbrook, Christopher; Fielder, David G.; Vandergoot, Christopher S.; Bergstedt, Roger A.; Dettmers, John M.; Krueger, Charles C.; Cooke, Steven J.

    2014-01-01

    Fish migration in large freshwater lacustrine systems such as the Laurentian Great Lakes is not well understood. The walleye (Sander vitreus) is an economically and ecologically important native fish species throughout the Great Lakes. In Lake Huron, walleye has recently undergone a population expansion as a result of recovery of the primary stock, stemming from changing food web dynamics. During 2011 and 2012, we used acoustic telemetry to document the timing and spatial scale of walleye migration in Lake Huron and Saginaw Bay. Spawning walleye (n = 199) collected from a tributary of Saginaw Bay were implanted with acoustic tags, and their migrations were documented using acoustic receivers (n = 140) deployed throughout U.S. nearshore waters of Lake Huron. Three migration pathways were described using multistate mark-recapture models. Models were evaluated using the Akaike Information Criterion. Fish sex did not influence migratory behavior but did affect migration rate, and walleye were detected on all acoustic receiver lines. Most (95%) tagged fish migrated downstream from the riverine tagging and release location to Saginaw Bay, and 37% of these fish emigrated from Saginaw Bay into Lake Huron. Remarkably, 8% of walleye that emigrated from Saginaw Bay were detected at the acoustic receiver line located farthest from the release location, more than 350 km away. Most (64%) walleye returned to the Saginaw River in 2012, presumably for spawning. Our findings reveal that fish from this stock use virtually the entirety of U.S. nearshore waters of Lake Huron.
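    The model-evaluation step above reduces to comparing AIC = 2k - 2 ln L across candidate models and keeping the lowest score. A minimal sketch, with invented parameter counts and log-likelihoods rather than the study's fitted mark-recapture models:

    ```python
    def aic(k, log_lik):
        """Akaike Information Criterion: 2 * n_parameters - 2 * log-likelihood."""
        return 2 * k - 2 * log_lik

    # Hypothetical candidate models: (name, n_parameters, maximized log-likelihood).
    models = [
        ("sex-independent", 4, -512.3),
        ("sex-dependent rate", 6, -505.1),
        ("fully sex-dependent", 9, -504.8),
    ]

    # Rank candidates by AIC; the lowest score balances fit against complexity.
    scored = sorted((aic(k, ll), name) for name, k, ll in models)
    best = scored[0][1]
    ```

    With these invented numbers, the extra parameters of the fully sex-dependent model buy too little likelihood, so the intermediate model wins, echoing the finding that sex affected migration rate but not pathway.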

  11. Global dynamic topography observations reveal limited influence of large-scale mantle flow

    Science.gov (United States)

    Hoggard, M. J.; White, N.; Al-Attar, D.

    2016-06-01

    Convective circulation of the Earth's mantle maintains some fraction of surface topography that varies with space and time. Most predictive models show that this dynamic topography has peak amplitudes of about ±2 km, dominated by wavelengths of 10⁴ km. Here, we test these models against our comprehensive observational database of 2,120 spot measurements of dynamic topography that were determined by analysing oceanic seismic surveys. These accurate measurements have typical peak amplitudes of ±1 km and wavelengths of approximately 10³ km, and are combined with limited continental constraints to generate a global spherical harmonic model, the robustness of which has been carefully tested and benchmarked. Our power spectral analysis reveals significant discrepancies between observed and predicted dynamic topography. At longer wavelengths (such as 10⁴ km), observed dynamic topography has peak amplitudes of about ±500 m. At shorter wavelengths (such as 10³ km), significant dynamic topography is still observed. We show that these discrepancies can be explained if short-wavelength dynamic topography is generated by temperature-driven density anomalies within a sub-plate asthenospheric channel. Stratigraphic observations from adjacent continental margins show that these dynamic topographic signals evolve quickly with time. More rapid temporal and spatial changes in vertical displacement of the Earth's surface have direct consequences for fields as diverse as mantle flow, oceanic circulation and long-term climate change.
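    The wavelength-by-wavelength comparison above rests on a degree-by-degree power spectrum of a spherical harmonic model. The sketch below uses random coefficients, not the paper's model; the mapping from degree to wavelength is the usual rough 2πR/l relation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    l_max = 30

    # Hypothetical spherical harmonic coefficients a_lm for degrees l = 0..l_max;
    # row l holds the 2l+1 order coefficients, zero-padded to a rectangular array.
    coeffs = np.zeros((l_max + 1, 2 * l_max + 1))
    for l in range(l_max + 1):
        coeffs[l, : 2 * l + 1] = rng.normal(scale=1.0 / (l + 1), size=2 * l + 1)

    # Power per degree: S_l = sum over m of a_lm^2.
    power = (coeffs ** 2).sum(axis=1)

    # Degree l corresponds roughly to wavelength 2*pi*R / l on a sphere of radius R.
    R = 6371.0  # km, Earth radius
    wavelength_km = [2 * np.pi * R / l for l in range(1, l_max + 1)]
    ```

    Comparing observed and predicted spectra degree by degree is what localizes the mismatch to long (10⁴ km) versus short (10³ km) wavelengths.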

  12. Large-scale analysis by SAGE reveals new mechanisms of v-erbA oncogene action

    Directory of Open Access Journals (Sweden)

    Faure Claudine

    2007-10-01

    Full Text Available Abstract Background: The v-erbA oncogene, carried by the Avian Erythroblastosis Virus, derives from the c-erbAα proto-oncogene that encodes the nuclear receptor for triiodothyronine (T3R). v-ErbA transforms erythroid progenitors in vitro by blocking their differentiation, supposedly by interference with T3R and RAR (Retinoic Acid Receptor). However, the v-ErbA target genes involved in its transforming activity still remain to be identified. Results: Using Serial Analysis of Gene Expression (SAGE), we identified 110 genes deregulated by v-ErbA and potentially implicated in the transformation process. Bioinformatic analysis of promoter sequences and transcriptional assays point to a potential role of c-Myb in the v-ErbA effect. Furthermore, grouping of the newly identified target genes by function revealed both expected (chromatin/transcription) and unexpected (protein metabolism) functions potentially deregulated by v-ErbA. We then focused our study on 15 of the new v-ErbA target genes and demonstrated by real-time PCR that, for the majority, their expression was activated neither by T3, nor by RA, nor during differentiation. This was unexpected based upon the previously known role of v-ErbA. Conclusion: This paper suggests the involvement of a wealth of new, unanticipated mechanisms of v-ErbA action.

  13. The Dynamics of Sea Straits Reveals Large-Scale Modes of Variability

    Science.gov (United States)

    Rubino, Angelo; Androsov, Alexey; Zanchettin, Davide; Voltzinger, Naum

    2016-04-01

    Using a very high resolution 3D numerical model, we investigate the tidal dynamics in the Strait of Messina. We show that different stratifications at the southern boundaries, consistent with observed stratifications in the Ionian approaches to the Strait, yield different mean sea level heights. On this basis we search for long-term variations in sea level heights measured at the tidal stations of Catania, Messina and Marseille, and associate them with the concomitant phase of dominant modes of interannual-to-decadal climate variability in the Euro-Mediterranean sector. We focus on the atmospheric North Atlantic Oscillation (NAO) and on the Adriatic-Ionian Bimodal Oscillating System (BiOS) to illustrate the variability in sea level teleconnections during the last four decades. In particular, observations reveal a strong imprint of both NAO and BiOS on all sea level records in the 21st century, when NAO and BiOS are inversely correlated. In the 1990s, a well-known period of persistent positive NAO anomalies, the NAO imprint on sea level becomes weaker compared to the most recent period, although it remains clear on decadal trends, while the BiOS shows very weak positive variability. In the 1970s and early 1980s, when the NAO was in a neutral phase with weak variability, the NAO imprint on sea levels is weakest, and sea levels in Marseille and Sicily anticorrelate with each other, in contrast to the positive correlations found during the later periods. Based on this observational evidence, we discuss how our modeling results provide a basis for understanding the local dynamics that contributed to the observed decadal variability.
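    The teleconnection analysis described above boils down to correlating station sea-level records against a climate index and against each other. The series below are synthetic stand-ins with the qualitative structure the abstract describes (Marseille tracking the index, a Sicilian station anticorrelating with it), not real data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 40  # four decades of annual means

    # Synthetic NAO index and sea-level anomalies with added noise.
    nao = rng.normal(size=n)
    marseille = 0.8 * nao + 0.2 * rng.normal(size=n)
    sicily = -0.8 * nao + 0.2 * rng.normal(size=n)

    # Pairwise Pearson correlations: index imprint and inter-station relation.
    r_marseille = np.corrcoef(nao, marseille)[0, 1]
    r_sicily = np.corrcoef(nao, sicily)[0, 1]
    r_stations = np.corrcoef(marseille, sicily)[0, 1]
    ```

    With these synthetic series the two stations anticorrelate, mirroring the 1970s-80s regime in the abstract; in the 21st-century regime both would instead carry the same NAO/BiOS imprint.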

  14. The impact of large scale biomass production on ozone air pollution in Europe

    NARCIS (Netherlands)

    Beltman, J.B.; Hendriks, C.; Tum, M.; Schaap, M.

    2013-01-01

    Tropospheric ozone contributes to the removal of air pollutants from the atmosphere but is itself a pollutant that is harmful to human health and vegetation. Biogenic isoprene emissions are important ozone precursors, and therefore future changes in land use that change isoprene emissions are likely

  15. The impact of large scale biomass production on ozone air pollution in Europe

    OpenAIRE

    Joost B Beltman; Hendriks, Carlijn; Tum, Markus; Schaap, Martijn

    2013-01-01

    Tropospheric ozone contributes to the removal of air pollutants from the atmosphere but is itself a pollutant that is harmful to human health and vegetation. Biogenic isoprene emissions are important ozone precursors, and therefore future changes in land use that change isoprene emissions are likely to affect atmospheric ozone concentrations. Here, we use the chemical transport model LOTOS-EUROS (dedicated to the regional modeling of trace gases in Europe) to study a scenario in which 5% of t...

  16. Large scale Tesla coil guided discharges initiated by femtosecond laser filamentation in air

    OpenAIRE

    Arantchouk, Leonid; Point, Guillaume; Brelet, Yohann; Prade, Bernard; Carbonnel, Jérôme; André, Yves-Bernard; Mysyrowicz, André; Houard, Aurélien

    2014-01-01

    The guiding of meter scale electric discharges produced in air by a Tesla coil is realized in laboratory using a focused terawatt laser pulse undergoing filamentation. The influence of the focus position, the laser arrival time or the gap length is studied to determine the best conditions for efficient laser guiding. Discharge parameters such as delay, jitter and resistance are characterized. An increase of the discharge length by a factor 5 has been achieved with the laser filaments, corresp...

  17. Microbial responses of forest soil to moderate anthropogenic air pollution - a large scale field survey

    International Nuclear Information System (INIS)

    There is a need to introduce soil microbiological methods into long-term ecological monitoring programs. For this purpose, we studied the effects of moderate anthropogenic air pollution (polluted versus less polluted districts), forest site type (Calluna (CT), Vaccinium (VT) and Myrtillus (MT)), and the amount of organic matter, measured as carbon content, on soil respiration activity and ATP content. The main sources of local air pollutants (SO2 and NOx) in the polluted district were the capital region and an oil refinery. Humus (F/H-layer) and underlying 0 to 5 cm mineral soil samples were collected from 193 study plots located in the 5300 km² study area. We found that the soil respiration rate in humus layer samples was lower in the polluted district than in the less polluted one (16.0 versus 19.5 μL CO2 h⁻¹ g⁻¹ dw, respectively), but the difference occurred only in the dry, coarse-textured CT forest site type. The mineral soil respiration rate and the mineral soil and humus layer ATP content were not affected by the air pollution. Most of the variation in the biological variables was explained primarily by the soil carbon content, secondly by the forest site type and thirdly by the area division. 38 refs., 1 fig., 6 tabs

  18. Large scale Tesla coil guided discharges initiated by femtosecond laser filamentation in air

    Science.gov (United States)

    Arantchouk, L.; Point, G.; Brelet, Y.; Prade, B.; Carbonnel, J.; André, Y.-B.; Mysyrowicz, A.; Houard, A.

    2014-07-01

    The guiding of meter-scale electric discharges produced in air by a Tesla coil is realized in the laboratory using a focused terawatt laser pulse undergoing filamentation. The influence of the focus position, the laser arrival time, or the gap length is studied to determine the best conditions for efficient laser guiding. Discharge parameters such as delay, jitter, and resistance are characterized. An increase of the discharge length by a factor of 5 has been achieved with the laser filaments, corresponding to a mean breakdown field of 2 kV/cm for a 1.8 m gap length. Consecutive guided discharges at a repetition rate of 10 Hz are also reported.
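    The quoted mean breakdown field is simply the applied voltage divided by the gap length, E = V / d. Taking the reported field and gap at face value, the implied breakdown voltage works out as:

    ```python
    # Mean breakdown field E = V / d. With E = 2 kV/cm and d = 1.8 m (180 cm),
    # the implied breakdown voltage across the guided gap is:
    field_kv_per_cm = 2.0
    gap_cm = 180.0
    voltage_kv = field_kv_per_cm * gap_cm  # 360 kV
    ```

    For comparison, the breakdown field of unguided air gaps at this scale is roughly an order of magnitude higher, which is why the factor-of-5 length increase from laser filaments is notable.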

  19. The iBeetle large-scale RNAi screen reveals gene functions for insect development and physiology.

    Science.gov (United States)

    Schmitt-Engel, Christian; Schultheis, Dorothea; Schwirz, Jonas; Ströhlein, Nadi; Troelenberg, Nicole; Majumdar, Upalparna; Dao, Van Anh; Grossmann, Daniela; Richter, Tobias; Tech, Maike; Dönitz, Jürgen; Gerischer, Lizzy; Theis, Mirko; Schild, Inga; Trauner, Jochen; Koniszewski, Nikolaus D B; Küster, Elke; Kittelmann, Sebastian; Hu, Yonggang; Lehmann, Sabrina; Siemanowski, Janna; Ulrich, Julia; Panfilio, Kristen A; Schröder, Reinhard; Morgenstern, Burkhard; Stanke, Mario; Buchholz, Frank; Frasch, Manfred; Roth, Siegfried; Wimmer, Ernst A; Schoppmeier, Michael; Klingler, Martin; Bucher, Gregor

    2015-01-01

    Genetic screens are powerful tools to identify the genes required for a given biological process. However, for technical reasons, comprehensive screens have been restricted to very few model organisms. Therefore, although deep sequencing is revealing the genes of ever more insect species, the functional studies predominantly focus on candidate genes previously identified in Drosophila, which is biasing research towards conserved gene functions. RNAi screens in other organisms promise to reduce this bias. Here we present the results of the iBeetle screen, a large-scale, unbiased RNAi screen in the red flour beetle, Tribolium castaneum, which identifies gene functions in embryonic and postembryonic development, physiology and cell biology. The utility of Tribolium as a screening platform is demonstrated by the identification of genes involved in insect epithelial adhesion. This work transcends the restrictions of the candidate gene approach and opens fields of research not accessible in Drosophila.

  20. Large-scale cellular-resolution gene profiling in human neocortex reveals species-specific molecular signatures

    Science.gov (United States)

    Zeng, Hongkui; Shen, Elaine H.; Hohmann, John G.; Oh, Wook Seung; Bernard, Amy; Royall, Joshua J.; Glattfelder, Katie J.; Sunkin, Susan M.; Morris, John A.; Guillozet-Bongaarts, Angela L.; Smith, Kimberly A.; Ebbert, Amanda J.; Swanson, Beryl; Kuan, Leonard; Page, Damon T.; Overly, Caroline C.; Lein, Ed S.; Hawrylycz, Michael J.; Hof, Patrick R.; Hyde, Thomas M.; Kleinman, Joel E.; Jones, Allan R.

    2012-01-01

    Summary Although there have been major advances in elucidating the functional biology of the human brain, relatively little is known of its cellular and molecular organization. Here we report a large-scale characterization of the expression of ~1,000 genes important for neural functions, by in situ hybridization with cellular resolution in visual and temporal cortices of adult human brains. These data reveal diverse gene expression patterns and remarkable conservation of each individual gene’s expression among individuals (95%), cortical areas (84%), and between human and mouse (79%). A small but substantial number of genes (21%) exhibited species-differential expression. Distinct molecular signatures, comprised of genes both common between species and unique to each, were identified for each major cortical cell type. The data suggest that gene expression profile changes may contribute to differential cortical function across species, in particular, a shift from corticosubcortical to more predominant corticocortical communications in the human brain. PMID:22500809

  1. Systems Perturbation Analysis of a Large-Scale Signal Transduction Model Reveals Potentially Influential Candidates for Cancer Therapeutics

    Science.gov (United States)

    Puniya, Bhanwar Lal; Allen, Laura; Hochfelder, Colleen; Majumder, Mahbubul; Helikar, Tomáš

    2016-01-01

    Dysregulation in signal transduction pathways can lead to a variety of complex disorders, including cancer. Computational approaches such as network analysis are important tools for understanding system dynamics as well as for identifying critical components that could be further explored as therapeutic targets. Here, we performed perturbation analysis of a large-scale signal transduction model in extracellular environments that stimulate cell death, growth, motility, and quiescence. Each of the model’s components was perturbed under both loss-of-function and gain-of-function mutations. Using 1,300 simulations under both types of perturbations across various extracellular conditions, we identified the most and least influential components based on the magnitude of their influence on the rest of the system. Based on the premise that the most influential components might serve as better drug targets, we characterized them for biological functions, housekeeping genes, essential genes, and druggable proteins. The most influential components under all environmental conditions were enriched with several biological processes. The inositol pathway was found to be most influential under inactivating perturbations, whereas the kinase and small cell lung cancer pathways were identified as the most influential under activating perturbations. The most influential components were enriched with essential genes and druggable proteins. Moreover, known cancer drug targets were also classified among the influential components based on the affected components in the network. Additionally, the systemic perturbation analysis of the model revealed a network motif of most influential components that affect each other. Furthermore, our analysis predicted novel combinations of cancer drug targets with various effects on other most influential components. We found that the combinatorial perturbation consisting of PI3K inactivation and overactivation of IP3R1 can lead to increased activity levels of apoptosis
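    Loss- and gain-of-function perturbations of a logical model, as described above, can be mimicked by clamping a node OFF or ON and observing how the rest of the network settles. The three-node network below is a toy illustration, not the authors' 1,300-simulation model; node names and rules are invented.

    ```python
    # Toy Boolean network: each node's next state is a function of the current state.
    RULES = {
        "PI3K": lambda s: s["PI3K"],     # treated as an input node
        "AKT":  lambda s: s["PI3K"],     # activated by PI3K
        "APOP": lambda s: not s["AKT"],  # apoptosis inhibited by AKT
    }

    def simulate(clamp=None, steps=20):
        """Synchronously update all nodes; `clamp` pins a node to a fixed value."""
        state = {"PI3K": True, "AKT": True, "APOP": False}
        for _ in range(steps):
            state = {node: rule(state) for node, rule in RULES.items()}
            if clamp:
                state.update(clamp)  # perturbation overrides the rule each step
        return state

    baseline = simulate()
    loss_pi3k = simulate(clamp={"PI3K": False})  # loss-of-function perturbation
    ```

    In this toy, clamping PI3K off drives apoptosis on, qualitatively matching the abstract's prediction that PI3K inactivation increases apoptosis activity; ranking nodes by how many downstream states flip is the influence measure in miniature.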

  2. Large-scale expression analysis reveals distinct microRNA profiles at different stages of human neurodevelopment.

    Directory of Open Access Journals (Sweden)

    Brandon Smith

    Full Text Available BACKGROUND: MicroRNAs (miRNAs) are short non-coding RNAs predicted to regulate one third of protein coding genes via mRNA targeting. In conjunction with key transcription factors, such as the repressor REST (RE1 silencing transcription factor), miRNAs play crucial roles in neurogenesis, which requires a highly orchestrated program of gene expression to ensure the appropriate development and function of diverse neural cell types. Whilst previous studies have highlighted select groups of miRNAs during neural development, there remains a need for amenable models in which miRNA expression and function can be analyzed over the duration of neurogenesis. PRINCIPAL FINDINGS: We performed large-scale expression profiling of miRNAs in human NTera2/D1 (NT2) cells during retinoic acid (RA)-induced transition from progenitors to fully differentiated neural phenotypes. Our results revealed dynamic changes of miRNA patterns, resulting in distinct miRNA subsets that could be linked to specific neurodevelopmental stages. Moreover, the cell-type specific miRNA subsets were very similar in NT2-derived differentiated cells and human primary neurons and astrocytes. Further analysis identified miRNAs as putative regulators of REST, as well as candidate miRNAs targeted by REST. Finally, we confirmed the existence of two predicted miRNAs, pred-MIR191 and pred-MIR222, associated with SLAIN1 and FOXP2, respectively, and provided some evidence of their potential co-regulation. CONCLUSIONS: In the present study, we demonstrate that regulation of miRNAs occurs in precise patterns indicative of their roles in cell fate commitment, progenitor expansion and differentiation into neurons and glia. Furthermore, the similarity between our NT2 system and primary human cells suggests their roles in molecular pathways critical for human in vivo neurogenesis.

  3. Captured metagenomics: large-scale targeting of genes based on ‘sequence capture’ reveals functional diversity in soils

    OpenAIRE

    Manoharan, Lokeshwaran; Kushwaha, Sandeep K; Hedlund, Katarina; Ahrén, Dag

    2015-01-01

    Microbial enzyme diversity is a key to understand many ecosystem processes. Whole metagenome sequencing (WMG) obtains information on functional genes, but it is costly and inefficient due to large amount of sequencing that is required. In this study, we have applied a captured metagenomics technique for functional genes in soil microorganisms, as an alternative to WMG. Large-scale targeting of functional genes, coding for enzymes related to organic matter degradation, was applied to two agric...

  4. Large-Scale Disasters

    Science.gov (United States)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  5. Analysis and experimental study on formation conditions of large-scale barrier-free diffuse atmospheric pressure air plasmas in repetitive pulse mode

    Science.gov (United States)

    Li, Lee; Liu, Lun; Liu, Yun-Long; Bin, Yu; Ge, Ya-Feng; Lin, Fo-Chang

    2014-01-01

    Atmospheric air diffuse plasmas have enormous application potential in various fields of science and technology. Without a dielectric barrier, generating large-scale air diffuse plasmas remains a challenging issue. This paper discusses and analyses the formation mechanism of cold homogeneous plasma. It is proposed that generating stable diffuse atmospheric plasmas in open air must meet three conditions: high transient power with low average power, excitation in a low average E-field with a locally high E-field region, and multiple overlapping electron avalanches. Accordingly, an experimental configuration for generating large-scale barrier-free diffuse air plasmas is designed. Based on runaway electron theory, a low duty-ratio, high-voltage repetitive nanosecond pulse generator is chosen as the discharge excitation source. Using wire electrodes with a small curvature radius, gaps with a highly non-uniform E-field are structured. Experimental results show that volume-scalable, barrier-free, homogeneous air non-thermal plasmas have been obtained in the gaps between the copper-wire electrodes. The area of the air cold plasmas has reached hundreds of square centimeters. The proposed formation conditions for large-scale barrier-free diffuse air plasmas are shown to be reasonable and feasible.

  6. Captured metagenomics: large-scale targeting of genes based on 'sequence capture' reveals functional diversity in soils.

    Science.gov (United States)

    Manoharan, Lokeshwaran; Kushwaha, Sandeep K; Hedlund, Katarina; Ahrén, Dag

    2015-12-01

    Microbial enzyme diversity is key to understanding many ecosystem processes. Whole metagenome sequencing (WMG) obtains information on functional genes, but it is costly and inefficient due to the large amount of sequencing required. In this study, we have applied a captured metagenomics technique to functional genes in soil microorganisms, as an alternative to WMG. Large-scale targeting of functional genes, coding for enzymes related to organic matter degradation, was applied to two agricultural soil communities through captured metagenomics. Captured metagenomics uses custom-designed, hybridization-based oligonucleotide probes that enrich functional genes of interest in metagenomic libraries, where only probe-bound DNA fragments are sequenced. The captured metagenomes were highly enriched with targeted genes while maintaining their target diversity, and their taxonomic distribution correlated well with traditional ribosomal sequencing. The captured metagenomes were highly enriched with genes related to organic matter degradation; at least five times more than similar, publicly available soil WMG projects. This target enrichment technique also preserves the functional representation of the soils, thereby facilitating comparative metagenomics projects. Here, we present the first study that applies the captured metagenomics approach at large scale, and this novel method allows deep investigation of central ecosystem processes by studying functional gene abundances.

  7. Genome resequencing in Populus: Revealing large-scale genome variation and implications on specialized-trait genomics

    Energy Technology Data Exchange (ETDEWEB)

    Muchero, Wellington [ORNL; Labbe, Jessy L [ORNL; Priya, Ranjan [University of Tennessee, Knoxville (UTK); DiFazio, Steven P [West Virginia University, Morgantown; Tuskan, Gerald A [ORNL

    2014-01-01

    To date, Populus ranks among the few plant species with a complete genome sequence and other highly developed genomic resources. With the first genome sequence among all tree species, Populus has been adopted as a suitable model organism for genomic studies in trees. However, far from being just a model species, Populus is a key renewable economic resource that plays a significant role in providing raw materials for the biofuel and pulp and paper industries. Therefore, aside from leading frontiers of basic tree molecular biology and ecological research, Populus leads frontiers in addressing global economic challenges related to fuel and fiber production. The latter fact suggests that research aimed at improving the quality and quantity of Populus as a raw material will likely drive the pursuit of more targeted and deeper research to unlock the economic potential tied to the molecular biology processes that drive this tree species. Advances in genome sequence-driven technologies, such as resequencing of individual genotypes, which in turn facilitates large-scale SNP discovery and identification of large-scale polymorphisms, are key determinants of future success in these initiatives. In this treatise we discuss the implications of genome sequence-enabled technologies for Populus genomic and genetic studies of complex and specialized traits.

  8. The effect of vent size and congestion in large-scale vented natural gas/air explosions

    OpenAIRE

    Tomlin, GB; Johnson, DM; Cronin, P; Phylaktou, HN; Andrews, GE

    2015-01-01

    A typical building consists of a number of rooms, often with windows of different sizes and failure pressures and obstructions in the form of furniture and décor, separated by partition walls with interconnecting doorways. Consequently, the maximum pressure developed in a gas explosion depends upon the individual characteristics of the building. In this research, a large-scale experimental programme has been undertaken at the DNV GL Spadeadam Test Site to determine the effects of ven...

  9. Large-scale network analysis of imagination reveals extended but limited top-down components in human visual cognition.

    Directory of Open Access Journals (Sweden)

    Verkhlyutov V.M.

    2014-12-01

    Full Text Available We investigated whole-brain functional magnetic resonance imaging (fMRI) activation in a group of 21 healthy adult subjects during perception, imagination and remembering of two dynamic visual scenarios. Activation of the posterior parts of the cortex prevailed when watching videos. The cognitive tasks of imagination and remembering were accompanied by a predominant activity in the anterior parts of the cortex. An independent component analysis identified seven large-scale cortical networks with relatively invariant spatial distributions across all experimental conditions. The time course of their activation over experimental sessions was task-dependent. These detected networks can be interpreted as a recombination of resting state networks. Both central and peripheral networks were identified within the primary visual cortex. The central network around the caudal pole of BA17 and centers of other visual areas was activated only by direct visual stimulation, while the peripheral network responded to the presentation of visual information as well as to the cognitive tasks of imagination and remembering. The latter result explains the particular susceptibility of peripheral and twilight vision to cognitive top-down influences that often result in false-alarm detections.

  10. Airborne observations of large scale accumulations of air traffic emissions in the North Atlantic flight corridor within a stagnant anticyclone

    Energy Technology Data Exchange (ETDEWEB)

    Schlager, H.; Schulte, P.; Ziereis, H.; Schumann, U. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Wessling (Germany). Inst. fuer Physik der Atmosphaere; Arnold, F. [Max-Planck-Inst. fuer Kernphysik, Heidelberg (Germany); Ovarlez, J. [Centre National de la Recherche Scientifique, 91 - Palaiseau (France). Lab. de Meteorologie; Velthoven, P. van [Koninklijk Nederlands Meteorologisch Inst., De Bilt (Netherlands)

    1997-12-31

    Vertical and horizontal trace gas distributions were measured west of Ireland during a 7-day period in June 1995 within an extended stagnant anticyclone located in the North Atlantic flight corridor. Four successive flights (21, 24, 26 and 28 June) were performed with the DLR Falcon research aircraft, including observations of NO, O{sub 3}, HNO{sub 3}, CO{sub 2}, and meteorological parameters. NO volume mixing ratios in the upper troposphere from vertical profile measurements, averaged over the corridor height range, increased by 74 pptv (60%) during the observation period. Averaged NO concentrations measured along constant-level flight legs at 10.7 km from 50 to 54 deg N increased by 87 pptv. Corresponding simulations with a 3-D chemistry-transport model of KNMI reveal similar increases of NO{sub x} for the measuring area over the 7-day period for a model run with air traffic emissions, an increase not obtained in a run without air traffic NO{sub x}. (author) 15 refs.

  11. Targeted Sequencing Reveals Large-Scale Sequence Polymorphism in Maize Candidate Genes for Biomass Production and Composition.

    Directory of Open Access Journals (Sweden)

    Moses M Muraya

    Full Text Available A major goal of maize genomic research is to identify sequence polymorphisms responsible for phenotypic variation in traits of economic importance. Large-scale detection of sequence variation is critical for linking genes, or genomic regions, to phenotypes. However, due to its size and complexity, it remains expensive to generate whole genome sequences of sufficient coverage for divergent maize lines, even with access to next generation sequencing (NGS) technology. Because methods involving reduction of genome complexity, such as genotyping-by-sequencing (GBS), assess only a limited fraction of sequence variation, targeted sequencing of selected genomic loci offers an attractive alternative. We therefore designed a sequence capture assay to target 29 Mb of genomic regions and surveyed a total of 4,648 genes possibly affecting biomass production in 21 diverse inbred maize lines (7 flints, 14 dents). Captured and enriched genomic DNA was sequenced using the 454 NGS platform to 19.6-fold average depth coverage, and a broad evaluation of read alignment and variant calling methods was performed to select optimal procedures for variant discovery. Sequence alignment with the B73 reference and de novo assembly identified 383,145 putative single nucleotide polymorphisms (SNPs), of which 42,685 were non-synonymous alterations and 7,139 caused frameshifts. Presence/absence variation (PAV) of genes was also detected. We found that substantial sequence variation exists among the genomic regions targeted in this study, and it was particularly evident within coding regions. This diversification has the potential to broaden functional diversity and generate phenotypic variation that may lead to new adaptations and the modification of important agronomic traits. Further, the annotated SNPs identified here will serve as useful genetic tools and as candidates in searches for phenotype-altering DNA variation. In summary, we demonstrated that sequencing of captured DNA is a powerful

  12. Large scale analysis of pediatric antiviral CD8+ T cell populations reveals sustained, functional and mature responses

    Directory of Open Access Journals (Sweden)

    Northfield John

    2006-12-01

    Full Text Available Abstract Background Cellular immunity plays a crucial role in cytomegalovirus (CMV) infection, and substantial populations of CMV-specific T cells accumulate throughout life. However, although CMV infection occurs during childhood, relatively little is known about the typical quantity and quality of T cell responses in pediatric populations. Methods One thousand and thirty-six people (Male/Female = 594/442; Age 0–19 yr, 959 subjects; 20–29 yr, 77 subjects) were examined for HLA typing. All 1036 subjects were tested for the HLA-A2 antigen. Of the 1036 subjects, 887 were also tested for the HLA-A23 and -A24 antigens. In addition, 50 elderly people (Male/Female = 11/39; Age 60–92 yr) were also tested for the HLA-A2 antigen. We analyzed the CD8+ T cell responses to CMV, comparing these responses between children and young adults. The frequencies, phenotype and function of CD8+ T cells for two immunodominant epitopes from pp65 were measured. Results We observed consistently high-frequency and phenotypically "mature" (CD27 low, CD28 low, CD45RA+) CMV-specific CD8+ T cell responses in children, including those studied in the first year of life. These CD8+ T cells retained functionality across all age groups, and showed evidence of memory "inflation" only in later adult life. Conclusion CMV consistently elicits a very strong CD8+ T cell response in infants, and large pools of CMV-specific CD8+ T cells are maintained throughout childhood. The presence of CMV may considerably mould the CD8+ T cell compartment over time, but the relative frequencies of CMV-specific cells do not show evidence of a population-level increase during childhood and adulthood. This contrasts with the marked expansion ("inflation") of such CD8+ T cells in older adults. This study indicates that large-scale analysis of peptide-specific T cell responses in infants is readily possible. The robust nature of the responses observed suggests vaccine strategies aimed at priming and boosting CD8+ T cells against

  13. Research and development of large-scale cryogenic air separation in China

    Institute of Scientific and Technical Information of China (English)

    Xiao-bin ZHANG; Jian-ye CHEN; Lei YAO; Yong-hua HUANG; Xue-jun ZHANG; Li-min QIU

    2014-01-01

    With the rapid growth in demand for industrial gas in the steel and chemical industries, there has been significant emphasis placed on the development of China's large-scale air separation technology. Currently, the maximum capacity of a single unit has reached 120 000 Nm3/h (oxygen), at a specific power consumption of 0.38 kWh/m3. This paper reviews the current state of the art for large-scale cryogenic air separation systems being deployed in China. A brief introduction to the history and establishment of the large-scale cryogenic air separation industry is presented. Taking the present mainstream large-scale air separation unit operating at 60 000 Nm3/h (oxygen) as an example, the technological parameters and features of the key equipment involved, including the molecular sieve adsorber, air compressor unit, plate-fin heat exchanger, turbo-expander and distillation column, are described in detail. The 80 000-120 000 Nm3/h air separation processes and equipment under development are also introduced. A summary of the existing problems and future developments of these systems in China is discussed.
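    The capacity and specific-energy figures quoted in this record imply a substantial electrical load for a single unit. The sketch below is a back-of-the-envelope check using only the numbers from the abstract (and it treats the quoted kWh/m3 as applying directly to the Nm3/h capacity, ignoring any normal-condition correction):

    ```python
    # Rough check of the implied compressor load of one large air separation
    # unit, using only the figures quoted in the abstract above.
    capacity = 120_000          # oxygen output of a single unit, Nm3/h
    specific_energy = 0.38      # specific power consumption, kWh/m3

    power_mw = capacity * specific_energy / 1000  # kWh/h -> kW -> MW
    print(f"Implied electrical load: {power_mw:.1f} MW")  # -> 45.6 MW
    ```

    At 0.38 kWh/m3, a 120 000 Nm3/h unit draws on the order of 45 MW, which is why specific power consumption is the headline efficiency metric for these plants.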

  14. Large-scale profiling of signalling pathways reveals an asthma specific signature in bronchial smooth muscle cells

    Science.gov (United States)

    Alexandrova, Elena; Nassa, Giovanni; Corleone, Giacomo; Buzdin, Anton; Aliper, Alexander M.; Terekhanova, Nadezhda; Shepelin, Denis; Zhavoronkov, Alexander; Tamm, Michael; Milanesi, Luciano; Weisz, Alessandro

    2016-01-01

    Background Bronchial smooth muscle (BSM) cells from asthmatic patients maintain in vitro a distinct hyper-reactive ("primed") phenotype, characterized by increased release of pro-inflammatory factors and mediators, as well as hyperplasia and/or hypertrophy. This "primed" phenotype helps in understanding the pathogenesis of asthma, as changes in BSM function are essential for the manifestation of allergic and inflammatory responses and airway wall remodelling. Objective To identify signalling pathways in cultured primary BSMs of asthma patients and non-asthmatic subjects by genome-wide profiling of differentially expressed mRNAs and activated intracellular signalling pathways (ISPs). Methods Transcriptome profiling by cap-analysis-of-gene-expression (CAGE), which permits selection of preferentially capped mRNAs most likely to be translated into proteins, was performed in human BSM cells from asthmatic (n=8) and non-asthmatic (n=6) subjects, and the OncoFinder tool was then exploited for identification of ISP deregulations. Results CAGE revealed >600 RNAs differentially expressed in asthma vs control cells (p≤0.005), with asthma samples showing a high degree of similarity among them. Comprehensive ISP activation analysis revealed that, among the 269 pathways analysed, 145 were significantly altered, with down-regulated apoptosis-promoting pathways and up-regulated ones affecting cell growth and proliferation, inflammatory response, control of smooth muscle contraction and hypoxia-related signalling. Conclusions These first-time results can now be exploited toward the development of novel therapeutic strategies targeting ISP signatures linked to asthma pathophysiology. PMID:26863634

  15. Satellite chlorophyll fluorescence measurements reveal large-scale decoupling of photosynthesis and greenness dynamics in boreal evergreen forests.

    Science.gov (United States)

    Walther, Sophia; Voigt, Maximilian; Thum, Tea; Gonsamo, Alemu; Zhang, Yongguang; Köhler, Philipp; Jung, Martin; Varlagin, Andrej; Guanter, Luis

    2016-09-01

    Mid-to-high latitude forests play an important role in the terrestrial carbon cycle, but the representation of photosynthesis in boreal forests by current modelling and observational methods is still challenging. In particular, the applicability of existing satellite-based proxies of greenness to indicate photosynthetic activity is hindered by small annual changes in green biomass of the often evergreen tree population and by the confounding effects of background materials such as snow. As an alternative, satellite measurements of sun-induced chlorophyll fluorescence (SIF) can be used as a direct proxy of photosynthetic activity. In this study, the start and end of the photosynthetically active season of the main boreal forests are analysed using spaceborne SIF measurements retrieved from the GOME-2 instrument and compared to that of green biomass, proxied by vegetation indices including the Enhanced Vegetation Index (EVI) derived from MODIS data. We find that photosynthesis and greenness show a similar seasonality in deciduous forests. In high-latitude evergreen needleleaf forests, however, the length of the photosynthetically active period indicated by SIF is up to 6 weeks longer than the green biomass changing period proxied by EVI, with SIF showing a start-of-season approximately 1 month earlier than that of EVI. On average, the photosynthetic spring recovery as signalled by SIF occurs as soon as air temperatures exceed the freezing point (2-3 °C) and when the snow on the ground has not yet completely melted. These findings are supported by model data of gross primary production and a number of other studies which evaluated in situ observations of CO2 fluxes, meteorology and the physiological state of the needles. Our results demonstrate the sensitivity of space-based SIF measurements to light-use efficiency of boreal forests and their potential for an unbiased detection of photosynthetic activity even under the challenging conditions interposed by evergreen
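    A minimal way to quantify the SIF-versus-EVI lag described in this record is a threshold-crossing start-of-season estimate on each time series. The sketch below uses synthetic curves and a simple amplitude-fraction threshold; actual phenology retrievals from GOME-2 SIF or MODIS EVI use more elaborate smoothing and curve fitting, so this is an illustration, not the study's method.

    ```python
    import numpy as np

    def start_of_season(values, threshold_frac=0.3):
        """Index (day) where the series first exceeds a fixed fraction of its
        seasonal amplitude -- a simple threshold-crossing proxy for SOS."""
        v = np.asarray(values, dtype=float)
        threshold = v.min() + threshold_frac * (v.max() - v.min())
        return int(np.argmax(v >= threshold))

    # Two synthetic seasonal ramps: "photosynthesis" starting ~30 days
    # before "greenness", mimicking the SIF-vs-EVI lag described above.
    days = np.arange(365)
    sif = np.clip((days - 90) / 60, 0, 1)
    evi = np.clip((days - 120) / 60, 0, 1)
    lag = start_of_season(evi) - start_of_season(sif)  # -> 30 days
    ```

    Applied to the two satellite products on a per-pixel basis, the difference of such SOS estimates yields exactly the kind of lag map the study reports for evergreen needleleaf forests.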

  16. Laminar and dorsoventral molecular organization of the medial entorhinal cortex revealed by large-scale anatomical analysis of gene expression.

    Directory of Open Access Journals (Sweden)

    Helen L Ramsden

    2015-01-01

    Full Text Available Neural circuits in the medial entorhinal cortex (MEC) encode an animal's position and orientation in space. Within the MEC, spatial representations, including grid and directional firing fields, have a laminar and dorsoventral organization that corresponds to a similar topography of neuronal connectivity and cellular properties. Yet, in part due to the challenges of integrating anatomical data at the resolution of cortical layers and borders, we know little about the molecular components underlying this organization. To address this, we develop a new computational pipeline for high-throughput analysis and comparison of in situ hybridization (ISH) images at laminar resolution. We apply this pipeline to ISH data for over 16,000 genes in the Allen Brain Atlas and validate our analysis with RNA sequencing of MEC tissue from adult mice. We find that differential gene expression delineates the borders of the MEC with neighboring brain structures and reveals its laminar and dorsoventral organization. We propose a new molecular basis for distinguishing the deep layers of the MEC and show that their similarity to corresponding layers of neocortex is greater than that of superficial layers. Our analysis identifies ion channel-, cell adhesion- and synapse-related genes as candidates for functional differentiation of MEC layers and for encoding of spatial information at different scales along the dorsoventral axis of the MEC. We also reveal laminar organization of genes related to disease pathology and suggest that a high metabolic demand predisposes layer II to neurodegenerative pathology. In principle, our computational pipeline can be applied to high-throughput analysis of many forms of neuroanatomical data. Our results support the hypothesis that differences in gene expression contribute to functional specialization of superficial layers of the MEC and dorsoventral organization of the scale of spatial representations.

  17. Large-Scale Sidereal Anisotropy of Galactic Cosmic-Ray Intensity Observed by the Tibet Air Shower Array

    CERN Document Server

    Amenomori, M; Cui, S W; Danzengluobu; Ding, L K; Ding, X H; Feng Cun Feng; Feng, Z Y; Gao, X Y; Geng, Q X; Guo, H W; He, H H; He, M; Hibino, K; Hotta, N; Haibing, H; Hu, H B; Huang, J; Huang, Q; Jia, H Y; Kajino, F; Kasahara, K; Katayose, Y; Kato, C; Kawata, K; Labaciren; Le, G M; Li, J Y; Lü, H; Lu, S L; Meng, X R; Mizutani, K; Mu, J; Munakata, K; Nagai, A; Nanjo, H; Nishizawa, M; Ohnishi, M; Ohta, I; Onuma, H; Ouchi, T; Ozawa, S; Ren, J R; Saitô, T; Sakata, M; Sasaki, T; Shibata, M; Shiomi, A; Shirai, T; Sugimoto, H; Takashima, M; Takita, M; Tan, Y H; Tateyama, N; Torii, S; Tsuchiya, H; Udo, S; Utsugi, T; Wang, H; Wang, X; Wang, Y G; Wu, H R; Xue Liang; Yamamoto, Y; Yan, C T; Yang, X C; Yasue, S I; Ye, Z H; Yu, G C; Yuan, A F; Yuda, T; Zhang, H M; Zhang, J L; Zhang, N J; Zhang, X Y; Yi Zhang; Zhang, Y; Zhaxi Sang Zhu; Zhou, X X

    2005-01-01

    We present the large-scale sidereal anisotropy of galactic cosmic-ray intensity in the multi-TeV region observed with the Tibet-III air shower array during the period from 1999 through 2003. The sidereal daily variation of cosmic rays observed in this experiment shows an excess of relative intensity around $4\sim7$ hours local sidereal time, as well as a deficit around 12 hours local sidereal time. While the amplitude of the excess is not significant when averaged over all declinations, the excess in individual declination bands becomes larger and clearer as the viewing direction moves toward the south. The maximum phase of the excess intensity changes from $\sim$7 hours in the northern hemisphere to $\sim$4 hours at the equatorial region. We also show that both the amplitude and the phase of the first harmonic vector of the daily variation are remarkably independent of primary energy in the multi-TeV region. This is the first result determining the energy and declination dependences of the full 24-hour profiles of t...
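    The "first harmonic vector" referred to in this record (an amplitude and a phase in local sidereal time) can be extracted from a binned 24-hour intensity profile by a plain Fourier projection. The sketch below is illustrative only, using synthetic counts; it is not the Tibet-III analysis code, and the 0.1% modulation depth is an assumed example value.

    ```python
    import numpy as np

    def first_harmonic(counts):
        """Amplitude (fractional) and phase (hours) of the 24 h first harmonic
        of a profile binned in local sidereal time, via discrete Fourier
        projection of the relative deviation from the mean."""
        counts = np.asarray(counts, dtype=float)
        n = counts.size
        rel = counts / counts.mean() - 1.0        # relative intensity deviation
        theta = 2 * np.pi * np.arange(n) / n      # bin centres in radians
        a = 2 / n * np.sum(rel * np.cos(theta))   # cosine projection
        b = 2 / n * np.sum(rel * np.sin(theta))   # sine projection
        amplitude = np.hypot(a, b)
        phase_hours = (np.arctan2(b, a) % (2 * np.pi)) * 24 / (2 * np.pi)
        return amplitude, phase_hours

    # Synthetic daily variation: 0.1% modulation peaking at 6 h sidereal time
    hours = np.arange(24)
    profile = 1000.0 * (1 + 0.001 * np.cos(2 * np.pi * (hours - 6) / 24))
    amp, phase = first_harmonic(profile)          # amp ~ 0.001, phase ~ 6 h
    ```

    Fitting this harmonic vector separately in each declination band, and in each primary-energy bin, is what allows the declination and energy dependences described above to be compared.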

  18. Large scale air pollution estimation method combining land use regression and chemical transport modeling in a geostatistical framework

    NARCIS (Netherlands)

    Akita, Yasuyuki; Baldasano, Jose M.; Beelen, Rob; Cirach, Marta; De Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L.; De Nazelle, Audrey

    2014-01-01

    In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also

  19. Genome-wide DNA methylation analysis of neuroblastic tumors reveals clinically relevant epigenetic events and large-scale epigenomic alterations localized to telomeric regions.

    Science.gov (United States)

    Buckley, Patrick G; Das, Sudipto; Bryan, Kenneth; Watters, Karen M; Alcock, Leah; Koster, Jan; Versteeg, Rogier; Stallings, Raymond L

    2011-05-15

    The downregulation of specific genes through DNA hypermethylation is a major hallmark of cancer, although the extent and genomic distribution of hypermethylation occurring within cancer genomes is poorly understood. We report on the first genome-wide analysis of DNA methylation alterations in different neuroblastic tumor subtypes and cell lines, revealing higher order organization and clinically relevant alterations of the epigenome. The methylation status of 33,485 discrete loci representing all annotated CpG islands and RefSeq gene promoters was assessed in primary neuroblastic tumors and cell lines. A comparison of genes that were hypermethylated exclusively in the clinically favorable ganglioneuroma/ganglioneuroblastoma tumors revealed that nine genes were associated with poor clinical outcome when overexpressed in the unfavorable neuroblastoma (NB) tumors. Moreover, an integrated DNA methylation and copy number analysis identified 80 genes that were recurrently concomitantly deleted and hypermethylated in NB, with 37 reactivated by 5-aza-deoxycytidine. Lower expression of four of these genes was correlated with poor clinical outcome, further implicating their inactivation in aggressive disease pathogenesis. Analysis of genome-wide hypermethylation patterns revealed 70 recurrent large-scale blocks of contiguously hypermethylated promoters/CpG islands, up to 590 kb in length, with a distribution bias toward telomeric regions. Genome-wide hypermethylation events in neuroblastic tumors are extensive and frequently occur in large-scale blocks with a significant bias toward telomeric regions, indicating that some methylation alterations have occurred in a coordinated manner. Our results indicate that methylation contributes toward the clinicopathological features of neuroblastic tumors, revealing numerous genes associated with poor patient survival in NB.

  20. Interaction of a light gas stratified layer with an air jet coming from below : large scale experiments and scaling issues

    International Nuclear Information System (INIS)

    In the frame of the OECD/SETH-2 project, an experimental program is being conducted in parallel in the PANDA facility at the Paul Scherrer Institute (Switzerland) and in the MISTRA facility at the Commissariat a l'Energie Atomique (France). The main objective of the programme is to generate a high-quality experimental database for validating 3D Computational Fluid Dynamics codes. Part of the program focuses on gas stratification break-up induced by mass sources. Similar tests have been performed in both the PANDA and MISTRA facilities. The idea behind this was to address the possible scaling effects of the phenomena involved in the erosion of a stratified layer of a helium/air mixture (40:60 by volume), located at the top of the facility, by an air jet coming from below. Depending on the interaction Froude number, different regimes have been recorded, including pure diffusive mixing, global dilution and slow erosion processes. Regarding the time scale, a small interaction Froude number leads to a mixing process driven by molecular diffusion. When the interaction Froude number is increased to large values, the dilution process can be described by a global time scale based on volumetric mixing, provided that the air entrainment by the jet is taken into account. The intermediate case with two layers is more complicated and a single time scale cannot be derived. These test results can be regarded as a good basis for CFD model verification. (authors)
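    One way to make the "interaction Froude number" concrete is the usual densimetric form, Fr = U / sqrt(g' L), with reduced gravity g' built from the density deficit of the light layer. Both this definition and every number below are illustrative assumptions for the 40:60 helium/air case, not values taken from the SETH-2 reports.

    ```python
    import math

    def interaction_froude(u_jet, rho_ambient, rho_layer, length_scale, g=9.81):
        """Densimetric Froude number Fr = U / sqrt(g' L), where the reduced
        gravity g' = g * (rho_ambient - rho_layer) / rho_ambient reflects the
        buoyancy of the light (helium-rich) layer relative to the air below."""
        g_reduced = g * (rho_ambient - rho_layer) / rho_ambient
        return u_jet / math.sqrt(g_reduced * length_scale)

    # Illustrative values: a 40:60 helium/air mixture is roughly 0.79 kg/m3
    # at room temperature, against ~1.20 kg/m3 for the ambient air below.
    fr = interaction_froude(u_jet=1.0, rho_ambient=1.20, rho_layer=0.79,
                            length_scale=0.3)
    ```

    In this framing, Fr << 1 corresponds to the diffusion-dominated regime described above, while Fr >> 1 corresponds to global dilution; the two-layer erosion case falls in between.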

  1. Large Scale Variability of Mid-Tropospheric Carbon Dioxide as Observed by the Atmospheric Infrared Sounder (AIRS) on the NASA EOS Aqua Platform

    Science.gov (United States)

    Pagano, Thomas S.; Olsen, Edward T.

    2012-01-01

    The Atmospheric Infrared Sounder (AIRS) is a hyperspectral infrared instrument on the EOS Aqua spacecraft, launched on May 4, 2002. AIRS has 2378 infrared channels ranging from 3.7 microns to 15.4 microns and a 13.5 km footprint. AIRS, in conjunction with the Advanced Microwave Sounding Unit (AMSU), produces temperature profiles with 1 K/km accuracy, water vapor profiles (20%/2 km), infrared cloud height and fraction, and trace gas amounts for CO2, CO, SO2, O3 and CH4 in the mid to upper troposphere. AIRS' wide swath of +/-49.5 deg enables daily coverage of over 95% of the Earth's surface. AIRS data are used for weather forecasting, validating climate model distributions and processes, and observing long-range transport of greenhouse gases. In this study, we examine the large-scale and regional horizontal variability in the AIRS mid-tropospheric carbon dioxide product as a function of season and associate the observed variability with known atmospheric transport processes, and sources and sinks of CO2.

  2. Interaction of a light gas stratified layer with an air jet coming from below: Large scale experiments and scaling issues

    Energy Technology Data Exchange (ETDEWEB)

    Studer, E., E-mail: etienne.studer@cea.fr [CEA/DEN/DANS/DM2S/SFME/LTMF 91191 Gif-sur-Yvette (France); Brinster, J.; Tkatschenko, I. [CEA/DEN/DANS/DM2S/SFME/LEEF 91191 Gif-sur-Yvette (France); Mignot, G.; Paladino, D.; Andreani, M. [Thermal-hydraulics Laboratory Paul Scherrer Institute CH-5232 Villigen (Switzerland)

    2012-12-15

    In the frame of the OECD/SETH-2 project, an experimental programme is being conducted in parallel in the PANDA facility at the Paul Scherrer Institute and in the MISTRA facility at the Commissariat a l'Energie Atomique. Part of the programme focuses on gas stratification break-up induced by mass sources, and similar tests have been performed in both facilities, so that the scaling effects of the phenomena involved in the erosion of a gas stratified layer can be addressed. Depending on the interaction Froude number, different regimes have been identified, including pure diffusive mixing, global dilution or slow erosion processes. These phenomena are driven by different time scales. A small value of this non-dimensional number leads to a mixing process driven by molecular diffusion. When the interaction Froude number is increased to large values, the dilution process can be described by a global time scale based on volumetric mixing, provided that the air entrainment by the jet is taken into account. The intermediate case with two layers is more complicated and a single time scale cannot be derived. These test results with high-quality measurements can be regarded as a good basis for CFD model verification.

  3. Generation of large-scale, barrier-free diffuse plasmas in air at atmospheric pressure using array wire electrodes and nanosecond high-voltage pulses

    Science.gov (United States)

    Teng, Yun; Li, Lee; Liu, Yun-Long; Liu, Lun; Liu, Minghai

    2014-10-01

    This paper introduces a method to generate large-scale diffuse plasmas using a repetitive nanosecond pulse generator and a parallel-array wire-electrode configuration. We investigated barrier-free diffuse plasmas produced in the open air in parallel and cross-parallel array line-line electrode configurations. We found that, when the distance between the wire-electrode pair is small, the discharges were almost extinguished, while glow-like diffuse plasmas with little discharge weakening were obtained in an appropriate range of line-line distances and with a cathode-grounded cross-electrode configuration. As an example, we produced a large-scale, stable diffuse plasma with a volume as large as 18 × 15 × 15 cm3, and this discharge region can be further expanded. Additionally, using optical and electrical measurements, we showed that the electron temperature was higher than the gas temperature, which was almost the same as room temperature. An array configuration with more wire electrodes also helped to prevent the transition from diffuse discharge to arc discharge. Comparing the current waveforms of configurations with 1 cell and 9 cells, we found that adding cells significantly increased the conduction current and the electrical energy delivered to the electrode gaps.
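    The electrical energy delivered to the gaps, used above to compare the 1-cell and 9-cell configurations, is typically obtained by integrating the product of the measured voltage and current waveforms over a pulse. The sketch below applies the trapezoidal rule to a synthetic triangular pulse; the waveform shape and the 20 kV / 50 A peak values are illustrative assumptions, not data from the experiment.

    ```python
    import numpy as np

    def pulse_energy(t, v, i):
        """Energy per pulse, E = integral of v(t) * i(t) dt, evaluated with
        the trapezoidal rule on sampled waveforms."""
        p = np.asarray(v, dtype=float) * np.asarray(i, dtype=float)
        t = np.asarray(t, dtype=float)
        return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))

    # Illustrative 100 ns triangular pulse with 20 kV and 50 A peaks
    t = np.linspace(0.0, 100e-9, 201)
    shape = 1.0 - np.abs(t - 50e-9) / 50e-9      # rises 0 -> 1 -> falls to 0
    energy_j = pulse_energy(t, 20e3 * shape, 50.0 * shape)  # ~0.033 J
    ```

    Summing such per-pulse energies over the repetition rate gives the average power, which is the "high transient power with low average power" trade-off that makes diffuse (rather than arc) discharges possible.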

  4. A large-scale identification of direct targets of the tomato MADS box transcription factor RIPENING INHIBITOR reveals the regulation of fruit ripening.

    Science.gov (United States)

    Fujisawa, Masaki; Nakano, Toshitsugu; Shima, Yoko; Ito, Yasuhiro

    2013-02-01

    The fruit ripening developmental program is specific to plants bearing fleshy fruits and dramatically changes fruit characteristics, including color, aroma, and texture. The tomato (Solanum lycopersicum) MADS box transcription factor RIPENING INHIBITOR (RIN), one of the earliest acting ripening regulators, is required for both ethylene-dependent and -independent ripening regulatory pathways. Recent studies have identified two dozen direct RIN targets, but many more RIN targets remain to be identified. Here, we report the large-scale identification of direct RIN targets by chromatin immunoprecipitation coupled with DNA microarray analysis (ChIP-chip) targeting the predicted promoters of tomato genes. Our combined ChIP-chip and transcriptome analysis identified 241 direct RIN target genes that contain a RIN binding site and exhibit RIN-dependent positive or negative regulation during fruit ripening, suggesting that RIN has both activator and repressor roles. Examination of the predicted functions of RIN targets revealed that RIN participates in the regulation of lycopene accumulation, ethylene production, chlorophyll degradation, and many other physiological processes. Analysis of the effect of ethylene using 1-methylcyclopropene revealed that the positively regulated subset of RIN targets includes ethylene-sensitive and -insensitive transcription factors. Intriguingly, ethylene is involved in the upregulation of RIN expression during ripening. These results suggest that tomato fruit ripening is regulated by the interaction between RIN and ethylene signaling.

  5. A Revised Method of Presenting Wavenumber-Frequency Power Spectrum Diagrams That Reveals the Asymmetric Nature of Tropical Large-scale Waves

    Science.gov (United States)

    Chao, Winston C.; Yang, Bo; Fu, Xiouhua

    2007-01-01

    The popular method of presenting wavenumber-frequency power spectrum diagrams for studying tropical large-scale waves in the literature is shown to give an incomplete presentation of these waves. The so-called "convectively-coupled Kelvin (mixed Rossby-gravity) waves" are presented as existing only in the symmetric (antisymmetric) component of the diagrams. This is obviously not consistent with the published composite/regression studies of "convectively-coupled Kelvin waves," which illustrate the asymmetric nature of these waves. The cause of this inconsistency is revealed in this note and a revised method of presenting the power spectrum diagrams is proposed. When this revised method is used, "convectively-coupled Kelvin waves" do show anti-symmetric components, and "convectively-coupled mixed Rossby-gravity waves (also known as Yanai waves)" do show a hint of symmetric components. These results bolster a published proposal that these waves be called "chimeric Kelvin waves," "chimeric mixed Rossby-gravity waves," etc. This revised method of presenting power spectrum diagrams offers a more rigorous means of comparing the General Circulation Models (GCM) output with observations by calling attention to the capability of GCMs in correctly simulating the asymmetric characteristics of the equatorial waves.
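    The symmetric/antisymmetric split at the heart of the revised presentation can be sketched directly. The toy array, the plain FFT without tapering, and the absence of red-background removal are simplifying assumptions rather than the authors' full procedure:

```python
import numpy as np

def sym_antisym(field):
    """Split field(time, lat, lon) into equatorially symmetric/antisymmetric parts.
    The latitude axis is assumed ordered south-to-north, symmetric about the equator."""
    flipped = field[:, ::-1, :]
    sym = 0.5 * (field + flipped)
    anti = 0.5 * (field - flipped)
    return sym, anti

def wk_power(component):
    """Wavenumber-frequency power: FFT over time (axis 0) and longitude (axis 2),
    then average the power over latitude."""
    spec = np.fft.fft2(component, axes=(0, 2))
    return (np.abs(spec) ** 2).mean(axis=1)

rng = np.random.default_rng(0)
field = rng.standard_normal((64, 9, 128))   # (time, lat, lon) toy data
sym, anti = sym_antisym(field)
power_sym = wk_power(sym)
# The decomposition is exact: symmetric + antisymmetric recovers the field.
assert np.allclose(sym + anti, field)
```

    Composite/regression studies work with the full field; the diagrams discussed here show only `wk_power(sym)` or `wk_power(anti)`, which is why a wave with both components appears incomplete in either panel alone.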

  6. Sound to language: different cortical processing for first and second languages in elementary school children as revealed by a large-scale study using fNIRS.

    Science.gov (United States)

    Sugiura, Lisa; Ojima, Shiro; Matsuba-Kurita, Hiroko; Dan, Ippeita; Tsuzuki, Daisuke; Katura, Takusige; Hagiwara, Hiroko

    2011-10-01

    A large-scale study of 484 elementary school children (6-10 years) performing word repetition tasks in their native language (L1-Japanese) and a second language (L2-English) was conducted using functional near-infrared spectroscopy. Three factors presumably associated with cortical activation, language (L1/L2), word frequency (high/low), and hemisphere (left/right), were investigated. L1 words elicited significantly greater brain activation than L2 words, regardless of semantic knowledge, particularly in the superior/middle temporal and inferior parietal regions (angular/supramarginal gyri). The greater L1-elicited activation in these regions suggests that they are phonological loci, reflecting processes tuned to the phonology of the native language, while phonologically unfamiliar L2 words were processed like nonword auditory stimuli. The activation was bilateral in the auditory and superior/middle temporal regions. Hemispheric asymmetry was observed in the inferior frontal region (right dominant), and in the inferior parietal region with interactions: low-frequency words elicited more right-hemispheric activation (particularly in the supramarginal gyrus), while high-frequency words elicited more left-hemispheric activation (particularly in the angular gyrus). The present results reveal the strong involvement of a bilateral language network in children's brains depending more on right-hemispheric processing while acquiring unfamiliar/low-frequency words. A right-to-left shift in laterality should occur in the inferior parietal region, as lexical knowledge increases irrespective of language.

  7. Large Scale Glazed

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    ...in new types of large-scale and very thin, glazed concrete façades in building. If such façades are introduced in an architectural context, they will have a distinctive impact on the visual expression of the building. The question is what kind; that is what I will attempt to answer in this article through observation......World-famous architects today challenge the exposure of concrete in their architecture, and it is my hope to be able to complement these efforts. I try to develop new aesthetic potentials for concrete and ceramics at large scales that have not been seen before in the ceramic area. It is expected to result...

  8. Large scale full-length cDNA sequencing reveals a unique genomic landscape in a lepidopteran model insect, Bombyx mori.

    Science.gov (United States)

    Suetsugu, Yoshitaka; Futahashi, Ryo; Kanamori, Hiroyuki; Kadono-Okuda, Keiko; Sasanuma, Shun-ichi; Narukawa, Junko; Ajimura, Masahiro; Jouraku, Akiya; Namiki, Nobukazu; Shimomura, Michihiko; Sezutsu, Hideki; Osanai-Futahashi, Mizuko; Suzuki, Masataka G; Daimon, Takaaki; Shinoda, Tetsuro; Taniai, Kiyoko; Asaoka, Kiyoshi; Niwa, Ryusuke; Kawaoka, Shinpei; Katsuma, Susumu; Tamura, Toshiki; Noda, Hiroaki; Kasahara, Masahiro; Sugano, Sumio; Suzuki, Yutaka; Fujiwara, Haruhiko; Kataoka, Hiroshi; Arunkumar, Kallare P; Tomar, Archana; Nagaraju, Javaregowda; Goldsmith, Marian R; Feng, Qili; Xia, Qingyou; Yamamoto, Kimiko; Shimada, Toru; Mita, Kazuei

    2013-09-01

    The establishment of a complete genomic sequence of silkworm, the model species of Lepidoptera, laid a foundation for its functional genomics. A more complete annotation of the genome will benefit functional and comparative studies and accelerate extensive industrial applications for this insect. To realize these goals, we embarked upon a large-scale full-length cDNA collection from 21 full-length cDNA libraries derived from 14 tissues of the domesticated silkworm and performed full sequencing by primer walking for 11,104 full-length cDNAs. The average intron size was large (1,904 bp), resulting from a high accumulation of transposons. Using gene models predicted by GLEAN and published mRNAs, we identified 16,823 gene loci on the silkworm genome assembly. Orthology analysis of 153 species, including 11 insects, revealed that among the three Lepidoptera, which included the Monarch and Heliconius butterflies, the 403 silkworm-specific genes were composed mainly of genes for protective immunity, hormone-related functions, and characteristic structural proteins. Analysis of testis-/ovary-specific genes revealed distinctive features of sexual dimorphism, including a depletion of ovary-specific genes on the Z chromosome in contrast to an enrichment of testis-specific genes. More than 40% of genes expressed in specific tissues mapped to tissue-specific chromosomal clusters. The newly obtained FL-cDNA sequences enabled us to annotate the genome of this lepidopteran model insect more accurately, enhancing genomic and functional studies of Lepidoptera and comparative analyses with other insect orders, and yielding new insights into the evolution and organization of lepidopteran-specific genes.

  9. Effects of sex and proficiency in second language processing as revealed by a large-scale fNIRS study of school-aged children.

    Science.gov (United States)

    Sugiura, Lisa; Ojima, Shiro; Matsuba-Kurita, Hiroko; Dan, Ippeita; Tsuzuki, Daisuke; Katura, Takusige; Hagiwara, Hiroko

    2015-10-01

    Previous neuroimaging studies in adults have revealed that first and second languages (L1/L2) share similar neural substrates, and that proficiency is a major determinant of the neural organization of L2 in the lexical-semantic and syntactic domains. However, little is known about neural substrates of children in the phonological domain, or about sex differences. Here, we conducted a large-scale study (n = 484) of school-aged children using functional near-infrared spectroscopy and a word repetition task, which requires a great extent of phonological processing. We investigated cortical activation during word processing, emphasizing sex differences, to clarify similarities and differences between L1 and L2, and proficiency-related differences during early L2 learning. L1 and L2 shared similar neural substrates with decreased activation in L2 compared to L1 in the posterior superior/middle temporal and angular/supramarginal gyri for both sexes. Significant sex differences were found in cortical activation within language areas during high-frequency word but not during low-frequency word processing. During high-frequency word processing, widely distributed areas including the angular/supramarginal gyri were activated in boys, while more restricted areas, excluding the angular/supramarginal gyri were activated in girls. Significant sex differences were also found in L2 proficiency-related activation: activation significantly increased with proficiency in boys, whereas no proficiency-related differences were found in girls. Importantly, cortical sex differences emerged with proficiency. Based on previous research, the present results indicate that sex differences are acquired or enlarged during language development through different cognitive strategies between sexes, possibly reflecting their different memory functions.

  11. ON THE RELATIONSHIP BETWEEN AMBIENT DOSE EQUIVALENT AND ABSORBED DOSE IN AIR IN THE CASE OF LARGE-SCALE CONTAMINATION OF THE ENVIRONMENT BY RADIOACTIVE CESIUM

    Directory of Open Access Journals (Sweden)

    V. P. Ramzaev

    2015-01-01

    by a value of 0.52 Sv/Sv. This value is valid for the remote period after the severe radiation accident that had resulted in large-scale contamination of the environment by radioactive cesium. The findings of this study are discussed in comparison with results obtained by other researches shortly after the Chernobyl and Fukushima accidents.

  12. A Large-Scale Genetic Analysis Reveals a Strong Contribution of the HLA Class II Region to Giant Cell Arteritis Susceptibility

    NARCIS (Netherlands)

    David Carmona, F.; Mackie, Sarah L.; Martin, Jose-Ezequiel; Taylor, John C.; Vaglio, Augusto; Eyre, Stephen; Bossini-Castillo, Lara; Castaneda, Santos; Cid, Maria C.; Hernandez-Rodriguez, Jose; Prieto-Gonzalez, Sergio; Solans, Roser; Ramentol-Sintas, Marc; Francisca Gonzalez-Escribano, M.; Ortiz-Fernandez, Lourdes; Morado, Inmaculada C.; Narvaez, Javier; Miranda-Filloy, Jose A.; Beretta, Lorenzo; Lunardi, Claudio; Cimmino, Marco A.; Gianfreda, Davide; Santilli, Daniele; Ramirez, Giuseppe A.; Soriano, Alessandra; Muratore, Francesco; Pazzola, Giulia; Addimanda, Olga; Wijmenga, Cisca; Witte, Torsten; Schirmer, Jan H.; Moosig, Frank; Schoenau, Verena; Franke, Andre; Palm, Oyvind; Molberg, Oyvind; Diamantopoulos, Andreas P.; Carette, Simon; Cuthbertson, David; Forbess, Lindsy J.; Hoffman, Gary S.; Khalidi, Nader A.; Koening, Curry L.; Langford, Carol A.; McAlear, Carol A.; Moreland, Larry; Monach, Paul A.; Pagnoux, Christian; Seo, Philip; Spiera, Robert; Sreih, Antoine G.; Warrington, Kenneth J.; Ytterberg, Steven R.; Gregersen, Peter K.; Pease, Colin T.; Gough, Andrew; Green, Michael; Hordon, Lesley; Jarrett, Stephen; Watts, Richard; Levy, Sarah; Patel, Yusuf; Kamath, Sanjeet; Dasgupta, Bhaskar; Worthington, Jane; Koeleman, Bobby P. C.; de Bakker, Paul I. W.; Barrett, Jennifer H.; Salvarani, Carlo; Merkel, Peter A.; Gonzalez-Gay, Miguel A.; Morgan, Ann W.; Martin, Javier

    2015-01-01

    We conducted a large-scale genetic analysis on giant cell arteritis (GCA), a polygenic immune-mediated vasculitis. A case-control cohort, comprising 1,651 case subjects with GCA and 15,306 unrelated control subjects from six different countries of European ancestry, was genotyped by the Immunochip array.

  13. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors......The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out......). Simulation programs are proposed as a control supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSDHP technology is put into perspective with respect to alternatives, and a short discussion on the barriers and breakthrough of the technology is given....

  14. Cooking practices, air quality, and the acceptability of advanced cookstoves in Haryana, India: an exploratory study to inform large-scale interventions

    Directory of Open Access Journals (Sweden)

    Rupak Mukhopadhyay

    2012-09-01

    Background: In India, approximately 66% of households rely on dung or woody biomass as fuels for cooking. These fuels are burned under inefficient conditions, leading to household air pollution (HAP) and exposure to smoke containing toxic substances. Large-scale intervention efforts need to be informed by careful piloting to address multiple methodological and sociocultural issues. This exploratory study provides preliminary data for such an exercise from Palwal District, Haryana, India. Methods: Traditional cooking practices were assessed through semi-structured interviews in participating households. Philips and Oorja, two brands of commercially available advanced cookstoves with small blowers to improve combustion, were deployed in these households. Concentrations of particulate matter (PM) with a diameter <2.5 μm (PM2.5) and carbon monoxide (CO) related to traditional stove use were measured using real-time and integrated personal and microenvironmental samplers to optimize protocols for evaluating exposure reduction. Qualitative data on the acceptability of advanced stoves and objective measures of stove usage were also collected. Results: Twenty-eight of the thirty-two participating households had outdoor primary cooking spaces. Twenty households had liquefied petroleum gas (LPG) but preferred traditional stoves, as the cost of LPG was higher and meals cooked on traditional stoves were perceived to taste better. Kitchen area concentrations and kitchen personal concentrations assessed during cooking events were very high, with respective mean PM2.5 concentrations of 468 and 718 µg/m3. Twenty-four hour outdoor concentrations averaged 400 µg/m3. Twenty-four hour personal CO concentrations ranged between 0.82 and 5.27 ppm. The Philips stove was used more often and for more hours than the Oorja. Conclusions: The high PM and CO concentrations reinforce the need for interventions that reduce HAP exposure in the aforementioned community. Of the two

  15. Large scale tracking algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
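    For contrast with the multi-hypothesis approach, the simplest association baseline can be sketched in a few lines. This greedy, gated nearest-neighbour step is a generic illustration (the gate value and point format are invented), not an algorithm evaluated in the report:

```python
import math

def associate(tracks, detections, gate=5.0):
    """Match each track (id -> (x, y)) to at most one detection within `gate`.
    Returns (assignments, unmatched_detection_indices)."""
    assignments = {}
    used = set()
    for tid, (tx, ty) in tracks.items():
        best, best_d = None, gate
        for i, (dx, dy) in enumerate(detections):
            if i in used:
                continue
            d = math.hypot(dx - tx, dy - ty)
            if d < best_d:              # keep the closest detection inside the gate
                best, best_d = i, d
        if best is not None:
            assignments[tid] = best
            used.add(best)
    unmatched = [i for i in range(len(detections)) if i not in used]
    return assignments, unmatched

tracks = {1: (0.0, 0.0), 2: (10.0, 0.0)}
detections = [(0.5, 0.2), (10.3, -0.1), (50.0, 50.0)]
print(associate(tracks, detections))    # detection 2 falls outside every gate
```

    Greedy gating stays linear in the number of track-detection pairs, which is why it avoids the combinatorial explosion, but unlike a multi-hypothesis tracker it cannot revisit an ambiguous assignment for closely spaced objects.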

  16. Large scale traffic simulations

    Energy Technology Data Exchange (ETDEWEB)

    Nagel, K.; Barrett, C.L. [Los Alamos National Lab., NM (United States); Santa Fe Institute, NM (United States)]; Rickert, M. [Los Alamos National Lab., NM (United States); Universitaet zu Koeln (Germany)]

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual person's behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed of much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches are dependent both on the specific questions and on the prospective user community. The approaches reach from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.
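    The single-bit, vehicle-based "particle" updates described above are in the spirit of cellular-automaton traffic models. A minimal single-lane sketch of that idea follows; the road length, speed limit, and braking probability are illustrative values, not parameters from the implementations the paper reviews:

```python
import random

def step(road, length, vmax=5, p_slow=0.3):
    """One parallel update; `road` maps cell index -> current speed (circular road)."""
    cells = sorted(road)
    new = {}
    for i, pos in enumerate(cells):
        v = road[pos]
        leader = cells[(i + 1) % len(cells)]
        gap = (leader - pos - 1) % length      # empty cells ahead of this vehicle
        v = min(v + 1, vmax, gap)              # accelerate, limited by headway
        if v > 0 and random.random() < p_slow:
            v -= 1                             # random braking
        new[(pos + v) % length] = v            # advance
    return new

random.seed(1)
road = {0: 0, 10: 0, 20: 0, 55: 0}             # four stopped vehicles on 100 cells
for _ in range(50):
    road = step(road, length=100)
print(len(road))                               # vehicle count is conserved: 4
```

    Because each update touches every vehicle once with integer arithmetic, this scheme parallelizes and vectorizes well, which is what makes million-vehicle real-time rates plausible on the hardware the paper discusses.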

  17. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abroad.

  18. Volunteer Conservation Action Data Reveals Large-Scale and Long-Term Negative Population Trends of a Widespread Amphibian, the Common Toad (Bufo bufo)

    Science.gov (United States)

    Petrovan, Silviu O.

    2016-01-01

    Rare and threatened species are the most frequent focus of conservation science and action. With the ongoing shift from single-species conservation towards the preservation of ecosystem services, there is a greater need to understand abundance trends of common species, because declines in common species can disproportionately impact ecosystem function. We used volunteer-collected data in two European countries, the United Kingdom (UK) and Switzerland, since the 1970s to assess national and regional trends for one of Europe’s most abundant amphibian species, the common toad (Bufo bufo). Millions of toads were moved by volunteers across roads during this period in an effort to protect them from road traffic. For Switzerland, we additionally estimated trends for the common frog (Rana temporaria), a similarly widespread and common amphibian species. We used state-space models to account for variability in detection and effort and included only populations with at least 5 years of data: 153 populations for the UK and 141 for Switzerland. Common toads declined continuously in each decade in both countries since the 1980s. Given the declines, this common species almost qualifies for International Union for the Conservation of Nature (IUCN) red-listing over this period despite volunteer conservation efforts. Reasons for the declines and wider impacts remain unknown. By contrast, common frog populations were stable or increasing in Switzerland, although there was evidence of declines after 2003. “Toads on Roads” schemes are vital citizen conservation action projects, and the data from such projects can be used for large-scale trend estimation of widespread amphibians. We highlight the need for increased research into the status of common amphibian species in addition to conservation efforts focusing on rare and threatened species. PMID:27706154
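    The trend estimation rests on state-space models; a minimal local-level filter conveys the idea. The variance values, the toy counts, and the omission of the detection and effort terms are all simplifications relative to the authors' models:

```python
import math

def local_level_filter(y, q=0.01, r=0.1, m0=0.0, p0=1.0):
    """Kalman filter for: y[t] = level[t] + N(0, r), level[t] = level[t-1] + N(0, q)."""
    m, p = m0, p0
    levels = []
    for obs in y:
        p = p + q                  # predict: the latent level follows a random walk
        k = p / (p + r)            # Kalman gain
        m = m + k * (obs - m)      # correct with the new observation
        p = (1 - k) * p
        levels.append(m)
    return levels

counts = [120, 108, 95, 97, 80, 74, 71, 60, 55, 49]   # invented yearly toad counts
smoothed = local_level_filter([math.log(c) for c in counts])
print(smoothed[-1] < smoothed[0])  # filtered level declines across the series: True
```

    Filtering the log counts makes the latent level a proportional growth rate, so year-to-year observation noise is smoothed out while a sustained decline still pulls the level down.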

  19. Discreteness and Large Scale Surjections

    OpenAIRE

    Austin, Kyle

    2015-01-01

    We study the concept of coarse disjointness and large scale $n$-to-$1$ functions. As a byproduct, we obtain an Ostrand-type characterization of asymptotic dimension for coarse structures. It is shown that properties like finite asymptotic dimension, coarse finitism, large scale weak paracompactness, etc. are all invariants of coarsely $n$-to-$1$ functions. Metrizability of large scale structures is also investigated.

  20. A Large-Scale Genetic Analysis Reveals a Strong Contribution of the HLA Class II Region to Giant Cell Arteritis Susceptibility

    Science.gov (United States)

    Carmona, F. David; Mackie, Sarah L.; Martín, Jose-Ezequiel; Taylor, John C.; Vaglio, Augusto; Eyre, Stephen; Bossini-Castillo, Lara; Castañeda, Santos; Cid, Maria C.; Hernández-Rodríguez, José; Prieto-González, Sergio; Solans, Roser; Ramentol-Sintas, Marc; González-Escribano, M. Francisca; Ortiz-Fernández, Lourdes; Morado, Inmaculada C.; Narváez, Javier; Miranda-Filloy, José A.; Martínez-Berriochoa, Agustín; Unzurrunzaga, Ainhoa; Hidalgo-Conde, Ana; Madroñero-Vuelta, Ana B.; Fernández-Nebro, Antonio; Ordóñez-Cañizares, M. Carmen; Escalante, Begoña; Marí-Alfonso, Begoña; Sopeña, Bernardo; Magro, César; Raya, Enrique; Grau, Elena; Román, José A.; de Miguel, Eugenio; López-Longo, F. Javier; Martínez, Lina; Gómez-Vaquero, Carmen; Fernández-Gutiérrez, Benjamín; Rodríguez-Rodríguez, Luis; Díaz-López, J. Bernardino; Caminal-Montero, Luis; Martínez-Zapico, Aleida; Monfort, Jordi; Tío, Laura; Sánchez-Martín, Julio; Alegre-Sancho, Juan J.; Sáez-Comet, Luis; Pérez-Conesa, Mercedes; Corbera-Bellalta, Marc; García-Villanueva, M. Jesús; Fernández-Contreras, M. 
Encarnación; Sanchez-Pernaute, Olga; Blanco, Ricardo; Ortego-Centeno, Norberto; Ríos-Fernández, Raquel; Callejas, José L.; Fanlo-Mateo, Patricia; Martínez-Taboada, Víctor M.; Beretta, Lorenzo; Lunardi, Claudio; Cimmino, Marco A.; Gianfreda, Davide; Santilli, Daniele; Ramirez, Giuseppe A.; Soriano, Alessandra; Muratore, Francesco; Pazzola, Giulia; Addimanda, Olga; Wijmenga, Cisca; Witte, Torsten; Schirmer, Jan H.; Moosig, Frank; Schönau, Verena; Franke, Andre; Palm, Øyvind; Molberg, Øyvind; Diamantopoulos, Andreas P.; Carette, Simon; Cuthbertson, David; Forbess, Lindsy J.; Hoffman, Gary S.; Khalidi, Nader A.; Koening, Curry L.; Langford, Carol A.; McAlear, Carol A.; Moreland, Larry; Monach, Paul A.; Pagnoux, Christian; Seo, Philip; Spiera, Robert; Sreih, Antoine G.; Warrington, Kenneth J.; Ytterberg, Steven R.; Gregersen, Peter K.; Pease, Colin T.; Gough, Andrew; Green, Michael; Hordon, Lesley; Jarrett, Stephen; Watts, Richard; Levy, Sarah; Patel, Yusuf; Kamath, Sanjeet; Dasgupta, Bhaskar; Worthington, Jane; Koeleman, Bobby P.C.; de Bakker, Paul I.W.; Barrett, Jennifer H.; Salvarani, Carlo; Merkel, Peter A.; González-Gay, Miguel A.; Morgan, Ann W.; Martín, Javier

    2015-01-01

    We conducted a large-scale genetic analysis on giant cell arteritis (GCA), a polygenic immune-mediated vasculitis. A case-control cohort, comprising 1,651 case subjects with GCA and 15,306 unrelated control subjects from six different countries of European ancestry, was genotyped by the Immunochip array. We also imputed HLA data with a previously validated imputation method to perform a more comprehensive analysis of this genomic region. The strongest association signals were observed in the HLA region, with rs477515 representing the highest peak (p = 4.05 × 10^−40, OR = 1.73). A multivariate model including class II amino acids of HLA-DRβ1 and HLA-DQα1 and one class I amino acid of HLA-B explained most of the HLA association with GCA, consistent with previously reported associations of classical HLA alleles like HLA-DRB1∗04. An omnibus test on polymorphic amino acid positions highlighted DRβ1 13 (p = 4.08 × 10^−43) and HLA-DQα1 47 (p = 4.02 × 10^−46), 56, and 76 (both p = 1.84 × 10^−45) as relevant positions for disease susceptibility. Outside the HLA region, the most significant loci included PTPN22 (rs2476601, p = 1.73 × 10^−6, OR = 1.38), LRRC32 (rs10160518, p = 4.39 × 10^−6, OR = 1.20), and REL (rs115674477, p = 1.10 × 10^−5, OR = 1.63). Our study provides evidence of a strong contribution of HLA class I and II molecules to susceptibility to GCA. In the non-HLA region, we confirmed a key role for the functional PTPN22 rs2476601 variant and proposed other putative risk loci for GCA involved in Th1, Th17, and Treg cell function. PMID:25817017

  1. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  2. Large scale fusion of gray matter and resting-state functional MRI reveals common and shared biological markers across the psychosis spectrum in the B-SNIP cohort

    Directory of Open Access Journals (Sweden)

    Zheng eWang

    2015-12-01

    We investigated whether aberrant interactions between brain structure and function present similarly or differently across probands with psychotic illnesses (schizophrenia (SZ), schizoaffective disorder (SAD), and bipolar I disorder with psychosis (BP)) and whether these deficits are shared with their first-degree non-psychotic relatives. A total of 1,199 subjects were assessed, including 220 SZ, 147 SAD, 180 psychotic BP, 150 first-degree relatives of SZ, 126 SAD relatives, 134 BP relatives, and 242 healthy controls. All subjects underwent structural MRI (sMRI) and resting-state functional MRI (rs-fMRI) scanning. Joint independent component analysis (jICA) was used to fuse sMRI gray matter (GM) and rs-fMRI amplitude of low frequency fluctuations (ALFF) data to identify the relationship between the two modalities. Joint ICA revealed two significantly fused components. The association between functional brain alteration in a prefrontal-striatal-thalamic-cerebellar network and structural abnormalities in the default mode network (DMN) was found to be common across psychotic diagnoses and correlated with cognitive function, social function, and Schizo-Bipolar Scale (SBS) scores. The fused alteration in the temporal lobe was unique to SZ and SAD. The above effects were not seen in any relative group (including those with cluster-A personality). Using a multivariate fused approach involving two widely used imaging markers, we demonstrate both shared and distinct biological traits across the psychosis spectrum. Further, our results suggest that the above traits are psychosis biomarkers rather than endophenotypes.

  3. Association analyses of large-scale glycan microarray data reveal novel host-specific substructures in influenza A virus binding glycans

    Science.gov (United States)

    Zhao, Nan; Martin, Brigitte E.; Yang, Chun-Kai; Luo, Feng; Wan, Xiu-Feng

    2015-10-01

    Influenza A viruses can infect a wide variety of animal species and, occasionally, humans. Infection occurs through binding between the viral surface glycoprotein hemagglutinin and certain types of glycan receptors on host cell membranes. Studies have shown that the α2,3-linked sialic acid motif (SA2,3Gal) in avian, equine, and canine species; the α2,6-linked sialic acid motif (SA2,6Gal) in humans; and SA2,3Gal and SA2,6Gal in swine are responsible for the corresponding host tropisms. However, more detailed and refined substructures that determine host tropisms are still not clear. Thus, in this study, we applied association mining on a set of glycan microarray data for 211 influenza viruses from five host groups: humans, swine, canine, migratory waterfowl, and terrestrial birds. The results suggest that besides Neu5Acα2-6Galβ, human-origin viruses could bind glycans with Neu5Acα2-8Neu5Acα2-8Neu5Ac and Neu5Gcα2-6Galβ1-4GlcNAc substructures; Galβ and GlcNAcβ terminal substructures, without sialic acid branches, were associated with the binding of human-, swine-, and avian-origin viruses; sulfated Neu5Acα2-3 substructures were associated with the binding of human- and swine-origin viruses. Finally, through three-dimensional structure characterization, we revealed that the role of glycan chain shapes is more important than that of torsion angles or of overall structural similarities in virus host tropisms.
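
Association mining of this kind reduces to computing support and confidence for rules linking a host group to a glycan substructure over binarized binding records. Below is a minimal sketch of those two measures on a toy binding table; every record, host label, and substructure value is hypothetical, not taken from the study's data.

```python
# Toy binding records: each set holds a virus's host group plus the glycan
# substructures it bound on the array (all entries are hypothetical).
records = [
    {"host:human", "Neu5Aca2-6Gal", "GlcNAcb-terminal"},
    {"host:human", "Neu5Aca2-6Gal", "sulfated-Neu5Aca2-3"},
    {"host:swine", "Neu5Aca2-6Gal", "sulfated-Neu5Aca2-3"},
    {"host:swine", "Neu5Aca2-3Gal", "GlcNAcb-terminal"},
    {"host:avian", "Neu5Aca2-3Gal", "GlcNAcb-terminal"},
    {"host:avian", "Neu5Aca2-3Gal"},
]

def support(itemset):
    """Fraction of records that contain every item in the set."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent):
    """P(consequent | antecedent): the usual association-rule confidence."""
    return support(antecedent | consequent) / support(antecedent)

# Rule "host:human -> Neu5Aca2-6Gal": every human-origin record bound it.
print(confidence({"host:human"}, {"Neu5Aca2-6Gal"}))  # 1.0
```

An Apriori-style miner simply enumerates itemsets whose support clears a threshold and keeps rules whose confidence does too; the calculation per rule is exactly the one above.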

  4. Emerging large-scale solar heating applications

    Energy Technology Data Exchange (ETDEWEB)

    Wong, W.P.; McClung, J.L. [Science Applications International Corporation (SAIC Canada), Ottawa, Ontario (Canada)

    2009-07-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  5. Testing gravity on Large Scales

    OpenAIRE

    Raccanelli Alvise

    2013-01-01

    We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep...

  6. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  7. Japanese large-scale interferometers

    International Nuclear Information System (INIS)

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R&D.

  8. Large-scale river regulation

    International Nuclear Information System (INIS)

    Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)

  9. Testing gravity on Large Scales

    Directory of Open Access Journals (Sweden)

    Raccanelli Alvise

    2013-09-01

    Full Text Available We show how it is possible to test general relativity and different models of gravity via Redshift-Space Distortions using forthcoming cosmological galaxy surveys. However, the theoretical models currently used to interpret the data often rely on simplifications that make them not accurate enough for precise measurements. We will discuss improvements to the theoretical modeling at very large scales, including wide-angle and general relativistic corrections; we then show that for wide and deep surveys those corrections need to be taken into account if we want to measure the growth of structures at a few percent level, and so perform tests on gravity, without introducing systematic errors. Finally, we report the results of some recent cosmological model tests carried out using those precise models.

  10. Polymer Physics of the Large-Scale Structure of Chromatin.

    Science.gov (United States)

    Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2016-01-01

    We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments. PMID:27659986

  11. Large Scale Magnetostrictive Valve Actuator

    Science.gov (United States)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing is complete. This paper will discuss the potential applications of the technology; give an overview of the as-built actuator design; describe problems that were uncovered during development testing; review test data and evaluate weaknesses of the design; and discuss areas for improvement for future work. This actuator holds promise as a low power, high load, proportionally controlled actuator for valves requiring 440 to 1500 newtons of load.

  12. Large scale cluster computing workshop

    Energy Technology Data Exchange (ETDEWEB)

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near term projects within High Energy Physics and other computing communities will deploy clusters of 1000s of processors, used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and by implication to identify areas where some investment of money or effort is likely to be needed. (2) To compare and record experiences gained with such tools. (3) To produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP. (4) To identify and connect groups with similar interests within HENP and the larger clustering community.

  13. Handbook of Large-Scale Random Networks

    CERN Document Server

    Bollobas, Bela; Miklos, Dezso

    2008-01-01

    Covers various aspects of large-scale networks, including mathematical foundations and rigorous results of random graph theory, modeling and computational aspects of large-scale networks, as well as areas in physics, biology, neuroscience, sociology and technical areas

  14. Large-Scale Information Systems

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  15. Architecture of Large-Scale Systems

    OpenAIRE

    Koschel, Arne; Astrova, Irina; Deutschkämer, Elena; Ester, Jacob; Feldmann, Johannes

    2013-01-01

    In this paper various techniques in relation to large-scale systems are presented. At first, explanation of large-scale systems and differences from traditional systems are given. Next, possible specifications and requirements on hardware and software are listed. Finally, examples of large-scale systems are presented.

  16. Conundrum of the Large Scale Streaming

    CERN Document Server

    Malm, T M

    1999-01-01

    The etiology of the large scale peculiar velocity (large scale streaming motion) of clusters seems increasingly tenuous within the context of the gravitational instability hypothesis. Are there any alternative testable models that might account for such large scale streaming of clusters?

  17. Establishing a low-NOx and high-burnout performance in a large-scale, deep-air-staging laboratory furnace fired by a heavy-oil swirl burner

    International Nuclear Information System (INIS)

    A combustion configuration consisting of a low-NOx heavy-oil swirl burner along with overfire air (OFA) and flue gas recirculation (FGR), was developed for the low-NOx heavy oil combustion in a lab-scale furnace. Combustion experiments were performed with various heavy-oil supply temperatures, different oil spray nozzle types, and with or without feeding FGR. The combustion configuration was found to achieve low NOx and acceptable CO emissions (levels of 240–286 mg/m3 and 45–175 mg/m3 at 3% O2, respectively), even under the conditions without FGR. Increasing the FGR ratio from 0 to 10% attained a NOx reduction of 9% without an obvious increase in CO emission. In the oil atomizing nozzle type aspect, a radial bias pattern, which was designed to lower NOx emissions and improve ignition by regulating fuel bias combustion, actually resulted in higher NOx and CO emissions than those using a uniformly atomizing pattern. Decreasing the heavy-oil supply temperature (from 154 °C to 132 °C) prolonged the fuel combustion process and reduced NOx emissions. Finally, the optimized operation with low NOx and CO emissions (240 mg/m3 and 45 mg/m3 at 3% O2, respectively) was established. - Highlights: • Developing a low-NOx heavy-oil combustion configuration. • Trialing the combustion configuration in a large-scale laboratory furnace. • Evaluating combustion and NOx emission characteristics under various conditions. • Establishing an optimized low-NOx and high-burnout performance
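
The emission levels above are quoted "at 3% O2", i.e. normalized to a reference oxygen content so that dilution with excess air cannot mask high emissions. A sketch of that standard correction, assuming dry flue gas and 20.9 vol-% O2 in ambient air (the 200 mg/m3 example value is illustrative):

```python
def to_reference_o2(c_measured, o2_measured, o2_ref=3.0):
    """Normalize a flue-gas concentration (e.g. NOx or CO in mg/m3)
    measured at o2_measured vol-% O2 to the reference O2 level."""
    return c_measured * (20.9 - o2_ref) / (20.9 - o2_measured)

# A reading of 200 mg/m3 NOx at 6% O2 corresponds to ~240 mg/m3 at 3% O2.
print(round(to_reference_o2(200.0, 6.0), 1))  # 240.3
```

The correction grows steeply as the measured O2 approaches 20.9%, which is why reporting raw concentrations at high excess air would understate emissions.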

  18. Water Implications of Large-Scale Land Acquisitions in Ghana

    OpenAIRE

    Timothy Olalekan Williams; Benjamin Gyampoh; Fred Kizito; Regassa Namara

    2012-01-01

    This paper examines the water dimensions of recent large-scale land acquisitions for biofuel production in the Ashanti, Brong-Ahafo and Northern regions of Ghana. Using secondary sources of data complemented by individual and group interviews, the paper reveals an almost universal lack of consideration of the implications of large-scale land deals for crop water requirements, the ecological functions of freshwater ecosystems and water rights of local smallholder farmers and other users. It do...

  19. An experimental system for thermal performance test of large-scale air-cooling towers%大型间接空冷机组空冷塔热力性能实验系统

    Institute of Scientific and Technical Information of China (English)

    席新铭; 郭永红; 杜小泽; 杨立军; 杨勇平

    2014-01-01

    On the basis of a 600 MW indirect air cooling unit, an experimental system for a large-scale air-cooling tower was designed by means of similarity principles, at a scale of 1:30. The system consists of a model air cooling tower, air cooling model radiators, a heat-load power control system and a data acquisition system. During the experiments, the heat load can be regulated by the control system according to the measured parameter values and the experimental purpose. Calculations show that the measurement error of the heat transfer coefficient of this experimental system is less than 20%. With this system, the flow and heat transfer characteristics inside and outside the dry-cooling tower under various environmental conditions can be measured. Moreover, the flow and heat transfer correlations of the air-cooled heat exchanger and the off-design performance of the indirect dry cooling system can be obtained. Hot plume recirculation, anti-freezing in winter, and the thermo-hydraulic performance of a stack and tower integration design can also be investigated using this experimental system. The design principles, basic parameters and measurement errors were analyzed.
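
The quoted sub-20% uncertainty on the heat transfer coefficient can be cross-checked with standard error propagation: for h = Q/(A·ΔT), independent relative uncertainties add in quadrature. A sketch (the 10%, 5% and 15% component uncertainties below are assumed for illustration, not taken from the paper):

```python
import math

def rel_err_h(rel_q, rel_a, rel_dt):
    """Relative uncertainty of h = Q / (A * dT) from the relative
    uncertainties of heat load Q, area A, and temperature difference dT."""
    return math.sqrt(rel_q**2 + rel_a**2 + rel_dt**2)

# With e.g. 10% on Q, 5% on A and 15% on dT, h is known to ~18.7% (< 20%).
print(round(rel_err_h(0.10, 0.05, 0.15), 3))  # 0.187
```

In a scaled tower rig the dominant term is usually the temperature difference, since ΔT across the model radiators is small and sensor offsets matter proportionally more.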

  20. Combustion of biodiesel in a large-scale laboratory furnace

    International Nuclear Information System (INIS)

    Combustion tests in a large-scale laboratory furnace were carried out to assess the feasibility of using biodiesel as a fuel in industrial furnaces. For comparison purposes, petroleum-based diesel was also used as a fuel. Initially, the performance of the commercial air-assisted atomizer used in the combustion tests was scrutinized under non-reacting conditions. Subsequently, flue gas data, including PM (particulate matter), were obtained for various flame conditions to quantify the effects of the atomization quality and excess air on combustion performance. The combustion data was complemented with in-flame temperature measurements for two representative furnace operating conditions. The results reveal that (i) CO emissions from biodiesel and diesel combustion are rather similar and not affected by the atomization quality; (ii) NOx emissions increase slightly as spray quality improves for both liquid fuels, but NOx emissions from biodiesel combustion are always lower than those from diesel combustion; (iii) CO emissions decrease rapidly for both liquid fuels as the excess air level increases up to an O2 concentration in the flue gas of 2%, beyond which they remain unchanged; (iv) NOx emissions increase with an increase in the excess air level for both liquid fuels; (v) the quality of the atomization has a significant impact on PM emissions, with the diesel combustion yielding significantly higher PM emissions than biodiesel combustion; and (vi) diesel combustion produces PM containing elements such as Cr, Na, Ni and Pb, while biodiesel combustion produces PM containing elements such as Ca, Mg and Fe. - Highlights: • CO emissions from biodiesel and diesel tested are similar. • NOx emissions from biodiesel tested are lower than those from diesel tested. • Diesel tested yields significantly higher PM (particulate matter) emissions than biodiesel tested. • Diesel tested produces PM containing Cr, Na, Ni and Pb, while biodiesel tested produces PM containing Ca, Mg and Fe

  1. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  2. Synthesis of Small and Large scale Dynamos

    CERN Document Server

    Subramanian, K

    2000-01-01

    Using a closure model for the evolution of magnetic correlations, we uncover an interesting plausible saturated state of the small-scale fluctuation dynamo (SSD) and a novel analogy between quantum mechanical tunneling and the generation of large-scale fields. Large scale fields develop via the $\alpha$-effect, but as magnetic helicity can only change on a resistive timescale, the time it takes to organize the field into large scales increases with magnetic Reynolds number. This is very similar to the results obtained from simulations of full MHD.

  3. Large scale dynamos with ambipolar diffusion nonlinearity

    CERN Document Server

    Brandenburg, A; Brandenburg, Axel; Subramanian, Kandaswamy

    2000-01-01

    It is shown that ambipolar diffusion as a toy nonlinearity leads to behaviour of large-scale turbulent dynamos very similar to that of full MHD. This is demonstrated using both direct simulations in a periodic box and a closure model for the magnetic correlation functions applicable to infinite space. Large scale fields develop via a nonlocal inverse cascade, as described by the alpha-effect. However, because magnetic helicity can only change on a resistive timescale, the time it takes to organize the field into large scales increases with magnetic Reynolds number.
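
The resistive constraint invoked in both dynamo abstracts follows from the exact evolution equation for magnetic helicity $H=\int \mathbf{A}\cdot\mathbf{B}\,\mathrm{d}V$ in a closed volume, a standard MHD result (written here with microscopic resistivity $\eta$; unit conventions vary):

```latex
\frac{\mathrm{d}H}{\mathrm{d}t}
  = \frac{\mathrm{d}}{\mathrm{d}t}\int \mathbf{A}\cdot\mathbf{B}\,\mathrm{d}V
  = -2\eta \int \mathbf{J}\cdot\mathbf{B}\,\mathrm{d}V ,
```

so as $\eta \to 0$ (large magnetic Reynolds number $R_m$) helicity is nearly conserved, and the helical mean field can only grow as fast as resistivity destroys the compensating small-scale helicity, which is why the organization time increases with $R_m$.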

  4. Large scale mechanical metamaterials as seismic shields

    Science.gov (United States)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

    Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including, for the first time, numerical analysis of both surface and guided waves, soil dissipation effects, and full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided, exploring different metamaterial configurations, combining phononic crystals and locally resonant structures and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.

  5. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Morteza Hajizadeh-Oghaz; Reza Shoja Razavi; Mohammadreza Loghman Estarki

    2014-08-01

    Yttria-stabilized zirconia nanopowders were synthesized on a relatively large scale using the Pechini method. In the present paper, nearly spherical yttria-stabilized zirconia nanopowders with a tetragonal structure were synthesized by the Pechini process from zirconium oxynitrate hexahydrate, yttrium nitrate, citric acid and ethylene glycol. The phase and structural analyses were accomplished by X-ray diffraction; morphological analysis was carried out by field emission scanning electron microscopy and transmission electron microscopy. The results revealed nearly spherical yttria-stabilized zirconia powder, produced on a large scale, with a tetragonal crystal structure and a chemical purity of 99.1% as determined by inductively coupled plasma optical emission spectroscopy.

  6. Privacy Preserving Large-Scale Rating Data Publishing

    Directory of Open Access Journals (Sweden)

    Xiaoxun Sun

    2013-02-01

    Full Text Available Large scale rating data usually contains both ratings of sensitive and non-sensitive issues, and the ratings of sensitive issues belong to personal privacy. Even when survey participants do not reveal any of their ratings, their survey records are potentially identifiable by using information from other public sources. In order to protect the privacy in large-scale rating data, it is important to propose new privacy principles which consider the properties of the rating data. Moreover, given the privacy principle, how to efficiently determine whether the rating data satisfies the required privacy principle is crucial as well. Furthermore, if the privacy principle is not satisfied, an efficient method is needed to securely publish the large-scale rating data. In this paper, all of these problems are addressed.
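
As one concrete example of "efficiently determining whether the data satisfies a required privacy principle", the sketch below checks a k-anonymity-style condition over the non-sensitive ratings: every combination of quasi-identifying ratings must be shared by at least k records. The principle actually proposed for rating data may differ, and all record values here are toy assumptions.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every combination of non-sensitive (quasi-identifier)
    ratings is shared by at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

# Toy rating records: q1/q2 are non-sensitive ratings, s is a sensitive one.
data = [
    {"q1": 4, "q2": 2, "s": 1},
    {"q1": 4, "q2": 2, "s": 5},
    {"q1": 3, "q2": 5, "s": 2},
]
print(is_k_anonymous(data, ["q1", "q2"], 2))  # False: (3, 5) is a singleton
```

The check is a single pass over the records, so it scales linearly with the size of the rating data; publishing then requires generalizing or suppressing the offending groups until the check passes.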

  7. Large-scale vapor transport of remotely evaporated seawater by a Rossby wave response to typhoon forcing during the Baiu/Meiyu season as revealed by the JRA-55 reanalysis

    Science.gov (United States)

    Kudo, Tadasuke; Kawamura, Ryuichi; Hirata, Hidetaka; Ichiyanagi, Kimpei; Tanoue, Masahiro; Yoshimura, Kei

    2014-07-01

    The modulation of large-scale moisture transport from the tropics into East Asia in response to typhoon-induced heating during the mature stage of the Baiu/Meiyu season is investigated using the Japanese 55-year reanalysis (JRA-55), aided by a Rayleigh-type global isotope circulation model (ICM). We highlighted the typhoons that migrate northward along the western periphery of the North Pacific subtropical high and approach the vicinity of Japan. Anomalous anticyclonic circulations to the northeast and southeast of typhoons and cyclonic circulation to their west become evident as they migrate toward Japan, which could be interpreted as a Rossby wave response to typhoon heating. These resultant anomalous circulation patterns form a moisture conveyor belt (MCB) stretching from the South Asian monsoon region to East Asia via the confluence region between the monsoon westerlies and central Pacific easterlies. The ICM results confirm that the well-defined nature of the MCB leads to penetration of the Indian Ocean, South China Sea, Philippine Sea, and Pacific Ocean water vapors into western Japan. The typhoons have the potential to accumulate large amounts of moisture from distant tropical oceans through the interaction of their Rossby wave response with the background flow. In the case of a typical typhoon, the total precipitable water around the typhoon center as it approaches Japan is maintained by the moisture supply from distant oceans rather than from the underlying ocean, which indirectly leads to the occurrence of heavy rainfall over western Japan.

  8. Global Wildfire Forecasts Using Large Scale Climate Indices

    Science.gov (United States)

    Shen, Huizhong; Tao, Shu

    2016-04-01

    Using weather readings, fire early warnings can be issued 4-6 hours in advance to minimize fire losses. The benefit would be dramatically enhanced if relatively accurate long-term projections could also be provided. Here we present a novel method for predicting global fire season severity (FSS) at least three months in advance using multiple large-scale climate indices (CIs). The predictive ability is proven effective for various geographic locations and resolutions. Globally, as well as in most continents, the El Niño Southern Oscillation (ENSO) is the dominant driving force controlling interannual FSS variability, whereas other CIs also play indispensable roles. We found that a moderate El Niño event is responsible for 465 (272-658, interquartile range) Tg of carbon release and an annual increase of 29,500 (24,500-34,800) deaths from inhalation exposure to air pollutants. Southeast Asia accounts for half of these deaths. Both the intercorrelation and the interaction of WPs and CIs are revealed, suggesting possible climate-induced modification of fire responses to weather conditions. Our models can benefit fire management in response to climate change.
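
A minimal version of "predicting FSS from multiple large-scale climate indices" is an ordinary least-squares regression of season severity on lagged, standardized indices. The sketch below uses synthetic data with an ENSO-dominated weighting, mirroring the text; the paper's actual statistical model is likely more elaborate, and all dimensions and coefficients are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 40 years of two standardized climate indices (e.g. an ENSO
# index plus one other CI), observed three months before the fire season.
n_years = 40
ci = rng.normal(size=(n_years, 2))
true_beta = np.array([0.8, 0.3])             # ENSO dominating, as in the text
fss = ci @ true_beta + 0.1 * rng.normal(size=n_years)

# Ordinary least squares: regress fire season severity on the lagged indices.
X = np.column_stack([np.ones(n_years), ci])  # intercept + indices
beta = np.linalg.lstsq(X, fss, rcond=None)[0]
print(np.round(beta[1:], 1))                 # recovered index weights
```

Because the indices lead the fire season by months, the fitted weights can be applied to currently observed index values to project the coming season's severity, which is exactly the long-lead forecast the abstract describes.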

  9. Large-scale integration of small molecule-induced genome-wide transcriptional responses, Kinome-wide binding affinities and cell-growth inhibition profiles reveal global trends characterizing systems-level drug action

    Directory of Open Access Journals (Sweden)

    Dusica Vidovic

    2014-09-01

    Full Text Available The Library of Integrated Network-based Cellular Signatures (LINCS project is a large-scale coordinated effort to build a comprehensive systems biology reference resource. The goals of the program include the generation of a very large multidimensional data matrix and informatics and computational tools to integrate, analyze, and make the data readily accessible. LINCS data include genome-wide transcriptional signatures, biochemical protein binding profiles, cellular phenotypic response profiles and various other datasets for a wide range of cell model systems and molecular and genetic perturbations. Here we present a partial survey of this data facilitated by data standards and in particular a robust compound standardization workflow; we integrated several types of LINCS signatures and analyzed the results with a focus on mechanism of action and chemical compounds. We illustrate how kinase targets can be related to disease models and relevant drugs. We identified some fundamental trends that appear to link Kinome binding profiles and transcriptional signatures to chemical information and biochemical binding profiles to transcriptional responses independent of chemical similarity. To fill gaps in the datasets we developed and applied predictive models. The results can be interpreted at the systems level as demonstrated based on a large number of signaling pathways. We can identify clear global relationships, suggesting robustness of cellular responses to chemical perturbation. Overall, the results suggest that chemical similarity is a useful measure at the systems level, which would support phenotypic drug optimization efforts. With this study we demonstrate the potential of such integrated analysis approaches and suggest prioritizing further experiments to fill the gaps in the current data.

  10. Large scale particle image velocimetry with helium filled soap bubbles

    Energy Technology Data Exchange (ETDEWEB)

    Bosbach, Johannes; Kuehn, Matthias; Wagner, Claus [German Aerospace Center (DLR), Institute of Aerodynamics and Flow Technology, Goettingen (Germany)

    2009-03-15

    The application of particle image velocimetry (PIV) to measurement of flows on large scales is a challenging necessity especially for the investigation of convective air flows. Combining helium filled soap bubbles as tracer particles with high power quality switched solid state lasers as light sources allows conducting PIV on scales of the order of several square meters. The technique was applied to mixed convection in a full scale double aisle aircraft cabin mock-up for validation of computational fluid dynamics simulations. (orig.)
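
At the core of any PIV processing, large scale or not, is locating the cross-correlation peak between two interrogation windows: the peak offset is the displacement of the tracer pattern between the two light pulses. A sketch with a synthetic pattern and a known shift (window size and shift values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "interrogation window": a random particle-image pattern and a copy
# shifted by a known displacement, as between two PIV exposures.
frame_a = rng.random((32, 32))
dy, dx = 5, 3
frame_b = np.roll(frame_a, shift=(dy, dx), axis=(0, 1))

# Standard FFT-based circular cross-correlation; the location of the
# correlation peak gives the in-plane displacement of the pattern.
corr = np.fft.ifft2(np.conj(np.fft.fft2(frame_a)) * np.fft.fft2(frame_b)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(tuple(int(p) for p in peak))  # (5, 3): the imposed (dy, dx) shift
```

Dividing the displacement by the pulse separation yields velocity; scaling PIV to square-meter fields mainly changes the tracer (helium-filled soap bubbles instead of micron droplets) and the illumination, not this correlation step.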

  11. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified; then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases

  12. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need to address energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  13. Large-scale Complex IT Systems

    CERN Document Server

    Sommerville, Ian; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challenges and issues in the development of large-scale complex, software-intensive systems. Central to this is the notion that we cannot separate software from the socio-technical environment in which it is used.

  14. Large-scale assembly of colloidal particles

    Science.gov (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes an invention in large-area, low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals.
Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  15. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that cont

  16. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data dissem

  17. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
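    The record above gives no formulas; as a minimal, hypothetical illustration of a sensitivity derivative (not the reanalysis procedure of the paper), a forward finite-difference approximation for a single design variable can be sketched as follows. The cantilever deflection formula is an invented stand-in response function:

    ```python
    def sensitivity(f, x, h=1e-6):
        """Forward finite-difference approximation of df/dx,
        i.e. the sensitivity of response f to design variable x."""
        return (f(x + h) - f(x)) / h

    # Hypothetical structural response: tip deflection of a cantilever beam,
    # delta = P*L^3 / (3*E*I), with the second moment of area I as the
    # design variable (analytical sensitivity: -P*L^3 / (3*E*I^2)).
    def deflection(I, P=1.0, L=1.0, E=1.0):
        return P * L**3 / (3.0 * E * I)

    # Sensitivity of the deflection with respect to I at I = 2.0
    ddelta_dI = sensitivity(deflection, 2.0)
    ```

    In a real framed-structure reanalysis the response would come from a finite element solve rather than a closed-form expression, but the derivative approximation has the same shape.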

  18. Hierarchical Control for Large-Scale Systems

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A class of large-scale systems, where the overall objective function is a nonlinear function of the performance index of each subsystem, is investigated in this paper. This type of large-scale control problem is non-separable in the sense of conventional hierarchical control. Hierarchical control is extended in this paper to large-scale non-separable control problems, where multiobjective optimization is used as the separation strategy. The large-scale non-separable control problem is embedded, under certain conditions, into a family of weighted Lagrangian formulations. The weighted Lagrangian formulation is separable with respect to subsystems and can be effectively solved using the interaction balance approach at the two lower levels of the proposed three-level solution structure. At the third level, the weighting vector for the weighted Lagrangian formulation is adjusted iteratively to search for the optimal weighting vector with which the optimum of the original large-scale non-separable control problem is obtained. The theoretical basis of the algorithm is established. Simulation shows that the algorithm is effective.
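    The abstract gives no formulas; in the notation customary for interaction balance (goal coordination) methods, the weighted Lagrangian it refers to can be sketched roughly as below. The symbols $J_i$, $x_i$, $u_i$, $w_i$ and the interconnection constraints are assumptions for illustration, not taken from the paper:

    ```latex
    % Overall non-separable objective: a nonlinear function of the
    % performance indices of the N subsystems
    J = \Phi\bigl(J_1(x_1,u_1), \ldots, J_N(x_N,u_N)\bigr)

    % Weighted Lagrangian for a fixed weighting vector w = (w_1,\ldots,w_N),
    % with multipliers \lambda_i attached to interconnection constraints
    % of the form z_i = H_i x:
    L_w(x,u,\lambda) = \sum_{i=1}^{N} \Bigl[\, w_i\, J_i(x_i,u_i)
                       + \lambda_i^{\top}\bigl(z_i - H_i x\bigr) \Bigr]
    ```

    For fixed $w$ and $\lambda$ the sum decomposes into $N$ independent subproblems, which is the separability exploited at the two lower levels; the third level then iterates on $w$.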

  19. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While the literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo...

  20. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    S F King

    2004-02-01

    We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large scale structure, and show how such a theory is completely natural in the framework of extra dimensions with an intermediate string scale.

  1. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    , which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  2. Likelihood analysis of large-scale flows

    CERN Document Server

    Jaffe, A; Jaffe, Andrew; Kaiser, Nick

    1994-01-01

    We apply a likelihood analysis to the data of Lauer & Postman 1994. With P(k) parametrized by (\\sigma_8, \\Gamma), the likelihood function peaks at \\sigma_8\\simeq0.9, \\Gamma\\simeq0.05, indicating at face value very strong large-scale power, though at a level incompatible with COBE. There is, however, a ridge of likelihood such that more conventional power spectra do not seem strongly disfavored. The likelihood calculated using as data only the components of the bulk flow solution peaks at higher \\sigma_8, as suggested by other analyses, but is rather broad. The likelihood incorporating both bulk flow and shear gives a different picture. The components of the shear are all low, and this pulls the peak to lower amplitudes as a compromise. The velocity data alone are therefore {\\em consistent} with models with very strong large scale power which generates a large bulk flow, but the small shear (which also probes fairly large scales) requires that the power would have to be at {\\em very} large scales, which is...

  3. Large scale topic modeling made practical

    DEFF Research Database (Denmark)

    Wahlgreen, Bjarne Ørum; Hansen, Lars Kai

    2011-01-01

    Topic models are of broad interest. They can be used for query expansion and result structuring in information retrieval and as an important component in services such as recommender systems and user adaptive advertising. In large scale applications both the size of the database (number of docume...) ... problem is reduced by use of a large and carefully curated term set. We demonstrate the performance of the proposed system and in the process break a previously claimed ’world record’ announced April 2010 both by speed and size of problem. We show that the use of a WordNet derived vocabulary can identify...

  4. Quantum Signature of Cosmological Large Scale Structures

    CERN Document Server

    Capozziello, S; De Siena, S; Illuminati, F; Capozziello, Salvatore; Martino, Salvatore De; Siena, Silvio De; Illuminati, Fabrizio

    1998-01-01

    We demonstrate that to all large scale cosmological structures where gravitation is the only overall relevant interaction assembling the system (e.g. galaxies), there is associated a characteristic unit of action per particle whose order of magnitude coincides with the Planck action constant $h$. This result extends the class of physical systems for which quantum coherence can act on macroscopic scales (as e.g. in superconductivity) and agrees with the absence of screening mechanisms for the gravitational forces, as predicted by some renormalizable quantum field theories of gravity. It also seems to support those lines of thought invoking that large scale structures in the Universe should be connected to quantum primordial perturbations as requested by inflation, that the Newton constant should vary with time and distance and, finally, that gravity should be considered as an effective interaction induced by quantization.

  5. Wireless Secrecy in Large-Scale Networks

    CERN Document Server

    Pinto, Pedro C; Win, Moe Z

    2011-01-01

    The ability to exchange secret information is critical to many commercial, governmental, and military networks. The intrinsically secure communications graph (iS-graph) is a random graph which describes the connections that can be securely established over a large-scale network, by exploiting the physical properties of the wireless medium. This paper provides an overview of the main properties of this new class of random graphs. We first analyze the local properties of the iS-graph, namely the degree distributions and their dependence on fading, target secrecy rate, and eavesdropper collusion. To mitigate the effect of the eavesdroppers, we propose two techniques that improve secure connectivity. Then, we analyze the global properties of the iS-graph, namely percolation on the infinite plane, and full connectivity on a finite region. These results help clarify how the presence of eavesdroppers can compromise secure communication in a large-scale network.

  6. ELASTIC: A Large Scale Dynamic Tuning Environment

    Directory of Open Access Journals (Sweden)

    Andrea Martínez

    2014-01-01

    Full Text Available The spectacular growth in the number of cores in current supercomputers poses design challenges for the development of performance analysis and tuning tools. To be effective, such analysis and tuning tools must be scalable and be able to manage the dynamic behaviour of parallel applications. In this work, we present ELASTIC, an environment for dynamic tuning of large-scale parallel applications. To be scalable, the architecture of ELASTIC takes the form of a hierarchical tuning network of nodes that perform a distributed analysis and tuning process. Moreover, the tuning network topology can be configured to adapt itself to the size of the parallel application. To guide the dynamic tuning process, ELASTIC supports a plugin architecture. These plugins, called ELASTIC packages, allow the integration of different tuning strategies into ELASTIC. We also present experimental tests conducted using ELASTIC, showing its effectiveness to improve the performance of large-scale parallel applications.
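    The plugin architecture described above can be illustrated with a toy sketch; the class names and the metric/action interface below are invented for illustration and are not ELASTIC's actual API:

    ```python
    class TuningPlugin:
        """Illustrative base class for a tuning strategy (an 'ELASTIC package')."""
        def analyze(self, metrics):
            """Inspect per-worker performance metrics; return an action or None."""
            raise NotImplementedError

    class LoadBalancePlugin(TuningPlugin):
        """Toy strategy: ask the slowest worker to shed work to the fastest."""
        def analyze(self, metrics):
            slowest = max(metrics, key=metrics.get)
            fastest = min(metrics, key=metrics.get)
            if metrics[slowest] > 1.5 * metrics[fastest]:
                return ("move_work", slowest, fastest)
            return None

    class TuningNode:
        """One node of a hierarchical tuning network: it aggregates metrics
        from its children and lets each registered plugin propose actions."""
        def __init__(self, plugins):
            self.plugins = plugins

        def step(self, metrics):
            return [a for p in self.plugins
                    if (a := p.analyze(metrics)) is not None]

    node = TuningNode([LoadBalancePlugin()])
    actions = node.step({"w1": 10.0, "w2": 4.0, "w3": 5.0})
    ```

    In the real system such nodes form a configurable hierarchy over thousands of processes; the sketch only shows the plugin dispatch at a single node.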

  7. Gravitational Wilson Loop and Large Scale Curvature

    OpenAIRE

    Hamber, H.; Williams, R.

    2007-01-01

    In a quantum theory of gravity the gravitational Wilson loop, defined as a suitable quantum average of a parallel transport operator around a large near-planar loop, provides important information about the large-scale curvature properties of the geometry. Here we show that such properties can be systematically computed in the strong coupling limit of lattice regularized quantum gravity, by performing local averages over loop bivectors, and over lattice rotations, using an assumed near-unifo...

  8. Large-scale instabilities of helical flows

    OpenAIRE

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2016-01-01

    Large-scale hydrodynamic instabilities of periodic helical flows are investigated using $3$D Floquet numerical computations. A minimal three-mode analytical model that reproduces and explains some of the full Floquet results is derived. The growth-rate $\sigma$ of the most unstable modes (at small scale, low Reynolds number $Re$ and small wavenumber $q$) is found to scale differently in the presence or absence of anisotropic kinetic alpha (\AKA{}) effect. When an $AKA$ effect is present the s...

  9. Cedar-a large scale multiprocessor

    Energy Technology Data Exchange (ETDEWEB)

    Gajski, D.; Kuck, D.; Lawrie, D.; Sameh, A.

    1983-01-01

    This paper presents an overview of Cedar, a large scale multiprocessor being designed at the University of Illinois. This machine is designed to accommodate several thousand high performance processors which are capable of working together on a single job, or they can be partitioned into groups of processors where each group of one or more processors can work on separate jobs. Various aspects of the machine are described including the control methodology, communication network, optimizing compiler and plans for construction. 13 references.

  10. Large Scale Research Project, Daidalos Evaluation Framework

    OpenAIRE

    Cleary, Frances; Ponce de Leon, Miguel; GARCÍA MORENO, Marta; ROMERO VICENTE, Antonio; Roddy, Mark

    2007-01-01

    For large scale research projects operational over a phased timeframe of 2 years or more, the need to take a step back and evaluate their stance and direction is an important activity in providing relevant feedback and recommendations to guide the project towards success in its consecutive phase. The identification of measurable goals and evaluation profile procedures to effectively work towards a useful evaluation of the project was one of the main aims of the Evaluation taskforce. As part o...

  11. Relationships in Large-Scale Graph Computing

    OpenAIRE

    Petrovic, Dan

    2012-01-01

    In 2009, Grzegorz Czajkowski from Google's system infrastructure team published an article which didn't get much attention in the SEO community at the time. It was titled "Large-scale graph computing at Google" and gave an excellent insight into the future of Google's search. This article highlights some of the little known facts which led to the transformation of Google's algorithm in the last two years.

  12. Coordination in Large-Scale Agile Development

    OpenAIRE

    Morken, Ragnar Alexander T

    2014-01-01

    In the last decade agile software development methods have become one of the most popular topics within software engineering. Agile software development is well accepted in small projects among the practitioner community and in recent years there have also been several large-scale projects adopting agile methodologies, but there is little understanding of how such projects achieve effective coordination, which is known to be a critical factor in software engineering. This thesis describes an explorator...

  13. Increasing Quality in large scale University Courses

    Directory of Open Access Journals (Sweden)

    Inga Saatz

    2013-07-01

    Full Text Available Quality of education should be stable or permanently increased – even if the number of students rises. Quality of education is often related to possibilities for active learning and individual facilitation. This paper deals with the question of how high-quality learning within oversized courses can be enabled, and it presents the approach of e-flashcards that enables active learning and individual facilitation within large scale university courses.

  14. Large scale inhomogeneities and the cosmological principle

    International Nuclear Information System (INIS)

    The compatibility of cosmological principles and possible large scale inhomogeneities of the Universe is discussed. It seems that the strongest symmetry principle which is still compatible with reasonable inhomogeneities is a full conformal symmetry in the 3-space defined by the cosmological velocity field, but even in such a case, the standard model is isolated from the inhomogeneous ones when the whole evolution is considered. (author)

  15. Large-scale neuromorphic computing systems

    Science.gov (United States)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  18. Large-Scale Visual Data Analysis

    Science.gov (United States)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  19. Large-scale instabilities of helical flows

    CERN Document Server

    Cameron, Alexandre; Brachet, Marc-Étienne

    2016-01-01

    Large-scale hydrodynamic instabilities of periodic helical flows are investigated using $3$D Floquet numerical computations. A minimal three-mode analytical model that reproduces and explains some of the full Floquet results is derived. The growth-rate $\sigma$ of the most unstable modes (at small scale, low Reynolds number $Re$ and small wavenumber $q$) is found to scale differently in the presence or absence of anisotropic kinetic alpha (\AKA{}) effect. When an $AKA$ effect is present the scaling $\sigma \propto q\; Re\,$ predicted by the $AKA$ effect theory [U. Frisch, Z. S. She, and P. L. Sulem, Physica D: Nonlinear Phenomena 28, 382 (1987)] is recovered for $Re\ll 1$ as expected (with most of the energy of the unstable mode concentrated in the large scales). However, as $Re$ increases, the growth-rate is found to saturate and most of the energy is found at small scales. In the absence of \AKA{} effect, it is found that flows can still have large-scale instabilities, but with a negative eddy-viscosity sca...

  20. Monofractal nature of air temperature signals reveals their climate variability

    CERN Document Server

    Deliège, Adrien

    2014-01-01

    We use the discrete "wavelet transform microscope" to show that the surface air temperature signals of weather stations selected in Europe are monofractal. This study reveals that the information obtained in this way is richer than in previous works studying long range correlations at meteorological stations. The approach presented here allows one to link the H\"older exponents with climate variability. We also establish that such a link does not exist with the methods previously carried out.
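    As a self-contained sketch of the discrete-wavelet idea (a plain orthonormal Haar transform, not the authors' actual "wavelet transform microscope"), the detail coefficients whose decay across scales encodes Hölder regularity can be computed like this:

    ```python
    import math

    def haar_step(x):
        """One level of the orthonormal Haar transform: returns
        (approximation, detail) coefficients for an even-length sequence."""
        a = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
        d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
        return a, d

    def haar_details(x, levels):
        """Detail coefficients of x at scales 2^1 .. 2^levels; studying how
        their magnitudes decay with scale is the basis of Hoelder-exponent
        (multifractal vs monofractal) analysis."""
        out = []
        for _ in range(levels):
            x, d = haar_step(x)
            out.append(d)
        return out

    signal = [4.0, 2.0, 6.0, 8.0, 5.0, 5.0, 1.0, 3.0]
    details = haar_details(signal, 3)
    ```

    A smooth (here constant) signal produces vanishing detail coefficients at every scale, while rough signals leave large coefficients at fine scales; fitting the decay of the coefficient magnitudes against scale is what yields the exponents.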

  1. The complexity nature of large-scale software systems

    Institute of Scientific and Technical Information of China (English)

    Yan Dong; Qi Guo-Ning; Gu Xin-Jian

    2006-01-01

    In software engineering, class diagrams are often used to describe the system's class structures in the Unified Modelling Language (UML). A class diagram, as a graph, is a collection of static declarative model elements, such as classes, interfaces, and the relationships of their connections with each other. In this paper, class graphs are examined within several Java software systems provided by Sun and IBM, and some new features are found. For a large-scale Java software system, its in-degree distribution tends to an exponential distribution, while its out-degree and degree distributions reveal power-law behaviour. A directed preferential-random model is then established to describe the corresponding degree distribution features and evolve large-scale Java software systems.
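    The measurement described above can be sketched with a toy dependency graph; the edges below are invented for illustration (the paper analyzes real Sun and IBM Java systems):

    ```python
    from collections import Counter

    # Hypothetical class-dependency edges (source class depends on target class)
    edges = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "C"), ("E", "C"), ("C", "B")]

    nodes = {n for e in edges for n in e}
    out_deg = Counter(src for src, _ in edges)  # how many classes each class uses
    in_deg = Counter(dst for _, dst in edges)   # how many classes use each class

    # Degree distributions: number of classes having each in-/out-degree.
    # Plotting these (e.g. on log-log axes for the out-degree) is how an
    # exponential in-degree is distinguished from a power-law out-degree.
    in_dist = Counter(in_deg[n] for n in nodes)
    out_dist = Counter(out_deg[n] for n in nodes)
    ```

    Heavily depended-upon classes (here "C") dominate the in-degree tail, which is the kind of asymmetry the paper's model reproduces.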

  2. Large-scale planar lightwave circuits

    Science.gov (United States)

    Bidnyk, Serge; Zhang, Hua; Pearson, Matt; Balakrishnan, Ashok

    2011-01-01

    By leveraging advanced wafer processing and flip-chip bonding techniques, we have succeeded in hybrid integrating a myriad of active optical components, including photodetectors and laser diodes, with our planar lightwave circuit (PLC) platform. We have combined hybrid integration of active components with monolithic integration of other critical functions, such as diffraction gratings, on-chip mirrors, mode-converters, and thermo-optic elements. Further process development has led to the integration of polarization controlling functionality. Most recently, all these technological advancements have been combined to create large-scale planar lightwave circuits that comprise hundreds of optical elements integrated on chips less than a square inch in size.

  3. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  4. Large Scale Quantum Simulations of Nuclear Pasta

    Science.gov (United States)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

    Complex and exotic nuclear geometries collectively referred to as ``nuclear pasta'' are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 [...] pasta configurations. This work is supported in part by DOE Grants DE-FG02-87ER40365 (Indiana University) and DE-SC0008808 (NUCLEI SciDAC Collaboration).

  5. Large scale phononic metamaterials for seismic isolation

    Energy Technology Data Exchange (ETDEWEB)

    Aravantinos-Zafiris, N. [Department of Sound and Musical Instruments Technology, Ionian Islands Technological Educational Institute, Stylianou Typaldou ave., Lixouri 28200 (Greece); Sigalas, M. M. [Department of Materials Science, University of Patras, Patras 26504 (Greece)

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, thus raising the belief that they could be serious candidates for seismic isolation structures. Different and easy to fabricate structures were examined made from construction materials such as concrete and steel. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.
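    The record names the finite difference time domain method without detail; a minimal 1-D scalar-wave leapfrog update (a generic FDTD sketch, far simpler than the 3-D band-structure computation in the paper) looks like:

    ```python
    def fdtd_1d(u_prev, u_curr, c, dx, dt, steps):
        """Leapfrog update for the 1-D wave equation u_tt = c^2 u_xx with
        fixed (u = 0) boundaries. Stable for CFL number c*dt/dx <= 1."""
        r2 = (c * dt / dx) ** 2
        for _ in range(steps):
            u_next = [0.0] * len(u_curr)
            for i in range(1, len(u_curr) - 1):
                u_next[i] = (2 * u_curr[i] - u_prev[i]
                             + r2 * (u_curr[i + 1] - 2 * u_curr[i] + u_curr[i - 1]))
            u_prev, u_curr = u_curr, u_next
        return u_curr

    # Initial displacement pulse mid-domain; two identical time levels
    # approximate zero initial velocity.
    n = 64
    u0 = [0.0] * n
    u0[n // 2] = 1.0
    u = fdtd_1d(u0[:], u0[:], c=1.0, dx=1.0, dt=1.0, steps=10)
    ```

    Band-structure calculations of the kind cited apply the same stencil in 3-D with elastic material parameters and Bloch-periodic boundaries, then Fourier-analyze the time signal to locate band gaps.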

  6. Neutrinos and large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Eisenstein, Daniel J. [Daniel J. Eisenstein, Harvard-Smithsonian Center for Astrophysics, 60 Garden St., MS #20, Cambridge, MA 02138 (United States)

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  7. Large scale breeder reactor pump dynamic analyses

    International Nuclear Information System (INIS)

    The lateral natural frequency and vibration response analyses of the Large Scale Breeder Reactor (LSBR) primary pump were performed as part of the total dynamic analysis effort to obtain the fabrication release. The special features of pump modeling are outlined in this paper. The analysis clearly demonstrates the method of increasing the system natural frequency by reducing the generalized mass without significantly changing the generalized stiffness of the structure. Also, a method of computing the maximum relative and absolute steady state responses and associated phase angles at given locations is provided. This type of information is very helpful in generating response versus frequency and phase angle versus frequency plots

  8. Large-Scale PV Integration Study

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  9. Large-Scale Clustering in Bubble Models

    CERN Document Server

    Borgani, S

    1993-01-01

    We analyze the statistical properties of bubble models for the large-scale distribution of galaxies. To this aim, we realize static simulations, in which galaxies are mostly randomly arranged in the regions surrounding bubbles. As a first test, we realize simulations of the Lick map, by suitably projecting the three-dimensional simulations. In this way, we are able to safely compare the angular correlation function implied by a bubbly geometry to that of the APM sample. We find that several bubble models provide an adequate amount of large-scale correlation, which nicely fits that of APM galaxies. Further, we apply the statistics of the count-in-cell moments to the three-dimensional distribution and compare them with available observational data on variance, skewness and kurtosis. Based on our purely geometrical constructions, we find a well defined hierarchical scaling of higher order moments up to scales of $\sim 70\,h^{-1}$ Mpc. The overall emerging picture is that the bubbly geometry is well suited to reproduce ...
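    Count-in-cell moments of the kind applied here are simple to compute from any point catalogue; a minimal sketch, using a random (unclustered) point set in place of a simulated galaxy distribution:

```python
import random

def count_in_cells_moments(points, n_side):
    """Bin points into an n_side^3 cubic grid and return the variance,
    skewness and kurtosis of the resulting count distribution."""
    counts = [0] * n_side ** 3
    for x, y, z in points:
        i = min(int(x * n_side), n_side - 1)
        j = min(int(y * n_side), n_side - 1)
        k = min(int(z * n_side), n_side - 1)
        counts[(i * n_side + j) * n_side + k] += 1
    n = len(counts)
    mean = sum(counts) / n
    mu = lambda p: sum((c - mean) ** p for c in counts) / n   # central moments
    var = mu(2)
    skew = mu(3) / var ** 1.5
    kurt = mu(4) / var ** 2 - 3.0
    return var, skew, kurt

random.seed(0)
pts = [(random.random(), random.random(), random.random()) for _ in range(50000)]
var, skew, kurt = count_in_cells_moments(pts, 8)
```

    For the Poisson process generated here the variance is close to the mean count and the higher moments are small; clustered bubble-model catalogues would show excess variance and positive skewness.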

  10. Study on the large scale dynamo transition

    CERN Document Server

    Nigro, Giuseppina

    2010-01-01

    Using the magnetohydrodynamic (MHD) description, we develop a nonlinear dynamo model that couples the evolution of the large scale magnetic field with the turbulent dynamics of the plasma at small scales through the electromotive force (e.m.f.) in the large scale induction equation. The nonlinear behavior of the plasma at small scales is described by an MHD shell model for the velocity and magnetic field fluctuations. The shell model allows us to study this problem in a large parameter regime, which characterizes the dynamo phenomenon in many natural systems and which is still beyond the power of today's supercomputers. Under specific conditions of the plasma turbulent state, the field fluctuations at small scales are able to trigger the dynamo instability. We study this transition considering the stability curve, which shows a strong decrease in the critical magnetic Reynolds number for an increasing inverse magnetic Prandtl number $\textrm{Pm}^{-1}$ in the range $[10^{-6},1]$ and an increase in the range $[1,10^{8}]$. We...

  11. Quasars and the large-scale structure of the Universe

    International Nuclear Information System (INIS)

    The problem of studying the large-scale structure of the Universe is discussed. In recent years the Zeldovich hypothesis has proved the most fruitful in this area. According to this hypothesis, flat large-scale inhomogeneities, so-called pancakes, form under the action of gravitation and of the shock waves arising in the process. Numerical computer simulation of the development of such long-wave gravitational instability has confirmed the picture of pancakes as elongated large-scale formations which can create the cellular structure seen in the distribution of galaxies. However, the investigation of the structure of the Universe encounters a number of difficulties, the main one being the absence of statistically reliable data on distances to galaxies. To overcome these difficulties, it has been suggested to use quasars, which, owing to their extreme luminosity, are visible almost from the boundary of the observable Universe. Quasars offer a possibility of revealing inhomogeneities in the distribution of galaxies and of probing galactic structures that intercept their powerful radiation along the line of sight.

  12. Large scale cosmic-ray anisotropy with KASCADE

    CERN Document Server

    Antoni, T; Badea, A F; Bekk, K; Bercuci, A; Blümer, H; Bozdog, H; Brancus, I M; Büttner, C; Daumiller, K; Doll, P; Engel, R; Engler, J; Fessler, F; Gils, H J; Glasstetter, R; Haungs, A; Heck, D; Hörandel, J R; Kampert, K H; Klages, H O; Maier, G; Mathes, H J; Mayer, H J; Milke, J; Müller, M; Obenland, R; Oehlschläger, J; Ostapchenko, S; Petcu, M; Rebel, H; Risse, A; Risse, M; Roth, M; Schatz, G; Schieler, H; Scholz, J; Thouw, T; Ulrich, H; Van, J; Buren; Vardanyan, A S; Weindl, A; Wochele, J; Zabierowski, J

    2004-01-01

    The results of an analysis of the large scale anisotropy of cosmic rays in the PeV range are presented. The Rayleigh formalism is applied to the right ascension distribution of extensive air showers measured by the KASCADE experiment. The data set contains about 10^8 extensive air showers in the energy range from 0.7 to 6 PeV. No hints for anisotropy are visible in the right ascension distributions in this energy range. This accounts for all showers, as well as for subsets containing showers induced predominantly by light or heavy primary particles, respectively. Upper flux limits for Rayleigh amplitudes are determined to be between 10^-3 at 0.7 PeV and 10^-2 at 6 PeV primary energy.
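    The Rayleigh formalism used in this analysis is a first-harmonic test on the right ascension distribution; a minimal sketch (the isotropic toy sample below stands in for the measured shower directions):

```python
import math
import random

def rayleigh_analysis(alphas):
    """First-harmonic Rayleigh analysis of right ascension angles (radians).

    Returns (amplitude A, phase phi, chance probability P) under the
    null hypothesis of an isotropic (uniform) distribution.
    """
    n = len(alphas)
    s = sum(math.sin(a) for a in alphas)
    c = sum(math.cos(a) for a in alphas)
    amp = 2.0 / n * math.hypot(c, s)         # Rayleigh amplitude
    phi = math.atan2(s, c)                   # phase of the first harmonic
    prob = math.exp(-n * amp * amp / 4.0)    # probability to exceed amp by chance
    return amp, phi, prob

random.seed(1)
iso = [random.uniform(0.0, 2.0 * math.pi) for _ in range(100000)]
A, phi, p = rayleigh_analysis(iso)
# For isotropic data the amplitude is small (~2/sqrt(n)) and P is of order unity.
```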

  13. Large-Scale Astrophysical Visualization on Smartphones

    Science.gov (United States)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets on the order of several petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  14. Accelerated large-scale multiple sequence alignment

    Directory of Open Access Journals (Sweden)

    Lloyd Scott

    2011-12-01

    Full Text Available Abstract Background Multiple sequence alignment (MSA is a fundamental analysis method used in bioinformatics and many comparative genomic applications. Prior MSA acceleration attempts with reconfigurable computing have only addressed the first stage of progressive alignment and consequently exhibit performance limitations according to Amdahl's Law. This work is the first known to accelerate the third stage of progressive alignment on reconfigurable hardware. Results We reduce subgroups of aligned sequences into discrete profiles before they are pairwise aligned on the accelerator. Using an FPGA accelerator, an overall speedup of up to 150 has been demonstrated on a large data set when compared to a 2.4 GHz Core2 processor. Conclusions Our parallel algorithm and architecture accelerates large-scale MSA with reconfigurable computing and allows researchers to solve the larger problems that confront biologists today. Program source is available from http://dna.cs.byu.edu/msa/.
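    The third-stage profile-profile alignments use the same dynamic-programming recurrence as pairwise sequence alignment; a character-level Needleman-Wunsch sketch of that recurrence (the scoring values are illustrative assumptions):

```python
def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    """Global alignment by dynamic programming; returns (score, aligned_a, aligned_b)."""
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    # traceback from the bottom-right corner
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (
                match if a[i - 1] == b[j - 1] else mismatch):
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return score[n][m], ''.join(reversed(out_a)), ''.join(reversed(out_b))

s, x, y = needleman_wunsch("GATTACA", "GATCA")
```

    In the accelerated third stage this recurrence operates on profile columns rather than single characters, which is what allows whole subgroups to be aligned in one pass on the FPGA.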

  15. Curvature constraints from Large Scale Structure

    CERN Document Server

    Di Dio, Enea; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-01-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter $\\Omega_K$ with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on the spatial curvature parameter estimation. We show that constraints on the curvature para...

  16. Radiations: large scale monitoring in Japan

    International Nuclear Information System (INIS)

    As the consequences of radioactive leaks on their health are a matter of concern for Japanese people, a large scale epidemiological study has been launched by Fukushima Medical University. It concerns the two million inhabitants of the Fukushima Prefecture. On the national level and with the support of public funds, medical care and follow-up, as well as systematic controls, are foreseen, notably to check the thyroids of 360,000 young people under 18 years of age and of 20,000 pregnant women in the Fukushima Prefecture. Some measurements have already been performed on young children. Despite the sometimes rather low measured values, and because they know that some parts of the area are at least as contaminated as the surroundings of Chernobyl were, some people are reluctant to go back home.

  17. Large scale water lens for solar concentration.

    Science.gov (United States)

    Mondol, A S; Vogel, B; Bastian, G

    2015-06-01

    Properties of large scale water lenses for solar concentration were investigated. These lenses were built from readily available materials, normal tap water and hyper-elastic linear low density polyethylene foil. Exposed to sunlight, the focal lengths and light intensities in the focal spot were measured and calculated. Their optical properties were modeled with a raytracing software based on the lens shape. We have achieved a good match of experimental and theoretical data by considering wavelength dependent concentration factor, absorption and focal length. The change in light concentration as a function of water volume was examined via the resulting load on the foil and the corresponding change of shape. The latter was extracted from images and modeled by a finite element simulation.
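    As a rough cross-check on such measurements, the thin-lens (lensmaker's) approximation relates focal length to surface curvature and the refractive index of water (n ≈ 1.33); the radius below is an assumed example, not a value from the paper:

```python
def thin_lens_focal_length(r1, r2, n):
    """Lensmaker's equation for a thin lens in air: 1/f = (n - 1) * (1/R1 - 1/R2)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

# plano-convex water lens: flat lower foil (R2 -> infinity), bulged upper surface
f = thin_lens_focal_length(0.5, float("inf"), 1.33)   # f = R1 / (n - 1), about 1.5 m
```

    The real lens is thick and its shape is load-dependent, which is why the paper resorts to raytracing plus a finite element model of the foil; the thin-lens value is only an order-of-magnitude check.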

  18. Large-Scale Tides in General Relativity

    CERN Document Server

    Ip, Hiu Yan

    2016-01-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lema\\^itre-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation ...

  19. Introducing Large-Scale Innovation in Schools

    Science.gov (United States)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

    Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  1. Numerical Simulation of the Mixing Process of Methane and Air in Large Scale Confined Spaces

    Institute of Scientific and Technical Information of China (English)

    王博; 李斌; 樊保龙; 白春华

    2014-01-01

    An axisymmetric straight-tube gas intake-mixing system was designed according to the structural characteristics of a 10 m3 versatile stepping explosion vessel and an integrated intake-and-mixing design philosophy. The computational fluid dynamics software FLUENT was used to optimize the specific design parameters of the system. The simulations yield the variation of the volumetric flow rate through the holes of the gas pipe for different hole diameters and hole spacings, and show that reducing the overall hole diameter and hole spacing, and adjusting the sizes of individual holes, improves the uniformity of the per-hole volumetric flow rate. By these means, rapid, efficient and uniform gas mixing can be achieved.

  2. Investigation of the Contamination Control in a Cleaning Room with a Moving AGV by 3D Large-Scale Simulation

    Directory of Open Access Journals (Sweden)

    Qing-He Yao

    2013-01-01

    Full Text Available The motions of the airflow induced by the movement of an automatic guided vehicle (AGV) in a cleanroom are numerically studied by large-scale simulation. For this purpose, a numerical experiment scheme based on the domain decomposition method is designed. In contrast to related past research, the high Reynolds number is treated here by large-scale computation. A domain decomposition Lagrange-Galerkin method is employed to approximate the Navier-Stokes equations and the convection-diffusion equation; the stiffness matrix is symmetric, and an incomplete balancing preconditioned conjugate gradient (PCG) method is employed to solve the linear algebraic system iteratively. The end-wall effects are readily viewed, and the necessity of the extension to 3 dimensions is confirmed. The effect of the high efficiency particulate air (HEPA) filter on contamination control is studied, and the proper setting of the speed of the clean airflow is also investigated. More details of the recirculation zones are revealed by the 3D large-scale simulation.
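    The incomplete balancing preconditioner of the paper is beyond a short sketch, but the structure of a preconditioned conjugate gradient iteration can be illustrated with a simple Jacobi (diagonal) preconditioner on a small symmetric positive-definite system:

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradient for a dense SPD system A x = b."""
    n = len(b)
    x = [0.0] * n
    mat = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = [b[i] - y for i, y in enumerate(mat(A, x))]       # initial residual
    z = [r[i] / A[i][i] for i in range(n)]                # Jacobi preconditioner step
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = mat(A, p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
```

    The preconditioner only changes the search directions, not the solution; in large-scale runs the diagonal is replaced by a much stronger (e.g. incomplete-factorization) approximation of A.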

  3. Large scale in vitro experiment system for 2 GHz exposure.

    Science.gov (United States)

    Iyama, Takahiro; Ebara, Hidetoshi; Tarusawa, Yoshiaki; Uebayashi, Shinji; Sekijima, Masaru; Nojima, Toshio; Miyakoshi, Junji

    2004-12-01

    A beam formed radiofrequency (RF) exposure-incubator employing a horn antenna, a dielectric lens, and a culture case in an anechoic chamber is developed for large scale in vitro studies. The combination of an open type RF exposure source and a culture case through which RF is transmitted realizes a uniform electric field (+/-1.5 dB) in a 300 x 300 mm area that accommodates 49 culture dishes of 35 mm diameter. This large culture dish area enables simultaneous RF exposure of a large number of cells or various cell lines. The RF exposure source operates at 2142.5 MHz, corresponding to the middle frequency of the downlink band of the International Mobile Telecommunication 2000 (IMT-2000) cellular system. The dielectric lens, which has a gain of 7 dB, focuses RF energy in the direction of the culture case and provides a uniform electric field. The culture case is sealed and connected to the main unit for environmental control, located outside the anechoic chamber, via ducts. The temperature at the center of the tray, which contains the culture dishes in the culture room, is maintained at 37.0 +/- 0.2 degrees C by air circulation. In addition, the appropriate CO2 density and humidity supplied to the culture case realize stable long-term culture conditions. Specific absorption rate (SAR) dosimetry is performed using an electric field measurement technique and the Finite Difference Time Domain (FDTD) calculation method. The results indicate that the mean SAR of the culture fluid at the bottom of the 49 (7 x 7 array) culture dishes used in the in vitro experiments is 0.175 W/kg for an antenna input power of 1 W, and the standard deviation of the SAR distribution is 59%. When only 25 culture dishes (5 x 5 array) are evaluated, the mean SAR is 0.139 W/kg for the same antenna input power and the standard deviation of the SAR distribution is 47%.
The proliferation of the H4 cell line in 72 h in a pair of RF exposure-incubators reveals that the culture conditions are equivalent to
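    SAR dosimetry of this kind combines the measured internal field with medium properties through SAR = sigma * E_rms^2 / rho; a minimal sketch with illustrative (assumed) culture-medium values, not the paper's measured data:

```python
def sar_w_per_kg(sigma, e_rms, rho):
    """Specific absorption rate: SAR = sigma * E_rms^2 / rho  [W/kg].

    sigma: medium conductivity [S/m], e_rms: internal RMS E-field [V/m],
    rho: mass density [kg/m^3]. All example values below are assumptions.
    """
    return sigma * e_rms ** 2 / rho

print(sar_w_per_kg(sigma=2.0, e_rms=10.0, rho=1000.0))  # prints 0.2
```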

  4. Climate variability rather than overstocking causes recent large scale cover changes of Tibetan pastures

    Science.gov (United States)

    Lehnert, L. W.; Wesche, K.; Trachte, K.; Reudenbach, C.; Bendix, J.

    2016-04-01

    The Tibetan Plateau (TP) is a globally important “water tower” that provides water for nearly 40% of the world’s population. This supply function is claimed to be threatened by pasture degradation on the TP and the associated loss of water regulation functions. However, neither potential large scale degradation changes nor their drivers are known. Here, we analyse trends in a high-resolution dataset of grassland cover to determine the interactions among vegetation dynamics, climate change and human impacts on the TP. The results reveal that vegetation changes have regionally different triggers: While the vegetation cover has increased since the year 2000 in the north-eastern part of the TP due to an increase in precipitation, it has declined in the central and western parts of the TP due to rising air temperature and declining precipitation. Increasing livestock numbers as a result of land use changes exacerbated the negative trends but were not their exclusive driver. Thus, we conclude that climate variability instead of overgrazing has been the primary cause for large scale vegetation cover changes on the TP since the new millennium. Since areas of positive and negative changes are almost equal in extent, pasture degradation is not generally proceeding.

  5. Large scale probabilistic available bandwidth estimation

    CERN Document Server

    Thouin, Frederic; Rabbat, Michael

    2010-01-01

    The common utilization-based definition of available bandwidth and many of the existing tools to estimate it suffer from several important weaknesses: i) most tools report a point estimate of average available bandwidth over a measurement interval and do not provide a confidence interval; ii) the commonly adopted models used to relate the available bandwidth metric to the measured data are invalid in almost all practical scenarios; iii) existing tools do not scale well and are not suited to the task of multi-path estimation in large-scale networks; iv) almost all tools use ad-hoc techniques to address measurement noise; and v) tools do not provide enough flexibility in terms of accuracy, overhead, latency and reliability to adapt to the requirements of various applications. In this paper we propose a new definition for available bandwidth and a novel framework that addresses these issues. We define probabilistic available bandwidth (PAB) as the largest input rate at which we can send a traffic flow along a pa...

  6. Large Scale Computer Simulation of Erythrocyte Membranes

    Science.gov (United States)

    Harvey, Cameron; Revalee, Joel; Laradji, Mohamed

    2007-11-01

    The cell membrane is crucial to the life of the cell. Apart from partitioning the inner and outer environments of the cell, it also acts as a support for complex and specialized molecular machinery, important both for the mechanical integrity of the cell and for its multitude of physiological functions. Due to its relative simplicity, the red blood cell has been a favorite experimental prototype for investigations of the structural and functional properties of the cell membrane. The erythrocyte membrane is a composite quasi two-dimensional structure composed essentially of a self-assembled fluid lipid bilayer and a polymerized protein meshwork, referred to as the cytoskeleton or membrane skeleton. In the case of the erythrocyte, the polymer meshwork is mainly composed of spectrin, anchored to the bilayer through specialized proteins. Using a coarse-grained model, recently developed by us, of self-assembled lipid membranes with implicit solvent and using soft-core potentials, we simulated large scale red-blood-cell bilayers with dimensions ˜ 10-1 μm^2, with explicit cytoskeleton. Our aim is to investigate the renormalization of the elastic properties of the bilayer due to the underlying spectrin meshwork.

  7. Curvature constraints from large scale structure

    Science.gov (United States)

    Di Dio, Enea; Montanari, Francesco; Raccanelli, Alvise; Durrer, Ruth; Kamionkowski, Marc; Lesgourgues, Julien

    2016-06-01

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter ΩK with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle and redshift dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.
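    For Gaussian errors a Fisher forecast reduces to F_ij = sum over data points of sigma^-2 (dmu/dtheta_i)(dmu/dtheta_j), with marginalized 1-sigma errors read off the inverse matrix; a toy two-parameter sketch (a straight-line model, not the survey configuration of the paper):

```python
import math

def fisher_line(xs, sigma):
    """Fisher matrix for the model mu(x) = a + b*x with uniform Gaussian errors.

    Derivatives are dmu/da = 1 and dmu/db = x, so the matrix entries are
    weighted sums of 1, x and x^2.
    """
    w = 1.0 / sigma ** 2
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    return [[w * n, w * sx], [w * sx, w * sxx]]

def marginalized_errors(F):
    """1-sigma marginalized errors: square roots of the diagonal of F^-1 (2x2)."""
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return math.sqrt(F[1][1] / det), math.sqrt(F[0][0] / det)

xs = list(range(10))
sig_a, sig_b = marginalized_errors(fisher_line(xs, 1.0))
```

    Neglecting a relevant term in mu, such as the magnification contribution discussed above, changes the derivatives and hence biases the recovered parameters, which is exactly the effect the paper quantifies for the curvature parameter.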

  8. Decomposition Methods for Large Scale LP Decoding

    CERN Document Server

    Barman, Siddharth; Draper, Stark C; Recht, Benjamin

    2012-01-01

    When binary linear error-correcting codes are used over symmetric channels, a relaxed version of the maximum likelihood decoding problem can be stated as a linear program (LP). This LP decoder can be used to decode at bit-error-rates comparable to state-of-the-art belief propagation (BP) decoders, but with significantly stronger theoretical guarantees. However, LP decoding when implemented with standard LP solvers does not easily scale to the block lengths of modern error correcting codes. In this paper we draw on decomposition methods from optimization theory, specifically the Alternating Direction Method of Multipliers (ADMM), to develop efficient distributed algorithms for LP decoding. The key enabling technical result is a nearly linear time algorithm for two-norm projection onto the parity polytope. This allows us to use LP decoding, with all its theoretical guarantees, to decode large-scale error correcting codes efficiently. We present numerical results for two LDPC codes. The first is the rate-0.5 [2...
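    The parity-polytope projection is the paper's technical contribution; the surrounding ADMM loop is generic, and its x-update / projection / dual-update structure can be illustrated on a toy constrained quadratic (this is not the LP-decoding formulation itself):

```python
# ADMM for: minimize (x - 3)^2  subject to x in [0, 1]
# split as f(x) = (x - 3)^2, g(z) = indicator of [0, 1], with constraint x = z
rho = 1.0
x = z = u = 0.0                                      # primal, consensus, scaled dual
for _ in range(100):
    # x-update: argmin of (x - 3)^2 + (rho/2)(x - z + u)^2
    x = (2.0 * 3.0 + rho * (z - u)) / (2.0 + rho)
    # z-update: Euclidean projection of x + u onto the feasible set [0, 1]
    z = min(1.0, max(0.0, x + u))
    # dual update accumulates the constraint violation x - z
    u += x - z
# x and z converge to the constrained optimum, z = 1
```

    In the paper's decoder the quadratic comes from the augmented Lagrangian of the decoding LP, and the interval projection is replaced by the fast projection onto the parity polytope of each check.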

  9. Food appropriation through large scale land acquisitions

    International Nuclear Information System (INIS)

    The increasing demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300–550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190–370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations. (letter)

  10. Large-Scale Clustering of Cosmic Voids

    CERN Document Server

    Chan, Kwan Chuen; Desjacques, Vincent

    2014-01-01

    We study the clustering of voids using $N$-body simulations and simple theoretical models. The excursion-set formalism describes fairly well the abundance of voids identified with the watershed algorithm, although the void formation threshold required is quite different from the spherical collapse value. The void cross bias $b_{\\rm c} $ is measured and its large-scale value is found to be consistent with the peak background split results. A simple fitting formula for $b_{\\rm c} $ is found. We model the void auto-power spectrum taking into account the void biasing and exclusion effect. A good fit to the simulation data is obtained for voids with radii $\\gtrsim$ 30 Mpc/$h$, especially when the void biasing model is extended to 1-loop order. However, the best fit bias parameters do not agree well with the peak-background split results. Being able to fit the void auto-power spectrum is particularly important not only because it is the direct observable in galaxy surveys; our method enables us to treat the bias pa...

  11. Large scale production of tungsten-188

    International Nuclear Information System (INIS)

    Tungsten-188 is produced in a fission nuclear reactor by double neutron capture on 186W. The authors have explored the large scale production yield (100-200 mCi) of 188W from the ORNL High Flux Isotope Reactor (HFIR) and compared these data with the experimental data available from other reactors and with theoretical calculations. The experimental yield of 188W at EOB from the HFIR, operating at 85 MWt power and for one cycle of irradiation (∼21 days) at a thermal neutron flux of 2×10^15 n s^-1 cm^-2, is 4 mCi/mg of 186W. This value is lower than the theoretical value by almost a factor of five. However, for one day of irradiation at the Brookhaven High Flux Beam Reactor, the yield of 188W is lower than the theoretical value by a factor of two. Factors responsible for these low production yields, and the yields of the 187W intermediate radionuclide from several targets, are discussed.

  12. Large-scale wind turbine structures

    Science.gov (United States)

    Spera, David A.

    1988-05-01

    The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural response. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  13. Development of large-scale functional brain networks in children.

    Directory of Open Access Journals (Sweden)

    Kaustubh Supekar

    2009-07-01

    Full Text Available The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7-9 y) and 22 young-adults (ages 19-22 y). Comparison of network properties, including path-length, clustering-coefficient, hierarchy, and regional connectivity, revealed that although children and young-adults' brains have similar "small-world" organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young-adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity to elucidate key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism.

  14. Development of large-scale functional brain networks in children.

    Science.gov (United States)

    Supekar, Kaustubh; Musen, Mark; Menon, Vinod

    2009-07-01

    The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7-9 y) and 22 young-adults (ages 19-22 y). Comparison of network properties, including path-length, clustering-coefficient, hierarchy, and regional connectivity, revealed that although children and young-adults' brains have similar "small-world" organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young-adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity to elucidate key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism. PMID:19621066
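The network properties compared above (clustering coefficient, path length, "small-world" organization) are straightforward to compute. A minimal pure-Python sketch on a toy ring-with-shortcut graph, standing in for the paper's fMRI-derived networks:

```python
import itertools
from collections import deque

def clustering_coefficient(adj):
    """Mean local clustering over nodes with degree >= 2."""
    vals = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a, b in itertools.combinations(nbrs, 2) if b in adj[a])
        vals.append(2 * links / (k * (k - 1)))
    return sum(vals) / len(vals)

def avg_path_length(adj):
    """Mean shortest-path length over connected ordered pairs (BFS per node)."""
    total = pairs = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# Toy "small-world" graph: a ring lattice (high clustering) plus one shortcut.
n = 12
adj = {i: {(i - 1) % n, (i + 1) % n, (i - 2) % n, (i + 2) % n} for i in range(n)}
adj[0].add(6)
adj[6].add(0)
print(clustering_coefficient(adj), avg_path_length(adj))
```

A small-world graph combines relatively high clustering with short average paths; the paper's comparison asks how these global measures stay similar between age groups while hierarchy and regional connectivity change.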

  15. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large scale dimensional metrology Enables practitioners to study distributed large scale dimensional metrology independently Includes specific examples of the development of new system prototypes

  16. Developing Large-Scale Bayesian Networks by Composition

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale...

  17. Sensitivity technologies for large scale simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias (Rice University, Houston, TX); Wilcox, Lucas C. (Brown University, Providence, RI); Hill, Judith C. (Carnegie Mellon University, Pittsburgh, PA); Ghattas, Omar (Carnegie Mellon University, Pittsburgh, PA); Berggren, Martin Olof (University of Uppsala, Sweden); Akcelik, Volkan (Carnegie Mellon University, Pittsburgh, PA); Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced-order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing codes; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real-time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. 
The hybrid automatic differentiation method was applied to a first
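The direct-versus-adjoint distinction the abstract draws can be seen on a toy parametrized linear system A(p) u = b with scalar objective J = c . u: a single transposed "adjoint" solve yields dJ/dp regardless of the number of parameters, and a finite difference checks it. This is a hand-rolled 2x2 sketch, not code from the report:

```python
# Adjoint sensitivity for A(p) u = b, J(u) = c . u:
#   solve A^T lam = c, then dJ/dp = -lam . (dA/dp) u
# (one extra solve total, independent of the parameter count).

def solve2(m, rhs):
    """Cramer's rule for a 2x2 system."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(rhs[0] * m[1][1] - rhs[1] * m[0][1]) / det,
            (m[0][0] * rhs[1] - m[1][0] * rhs[0]) / det]

def A(p):
    return [[2.0 + p, 1.0], [1.0, 3.0]]

def dA_dp():
    return [[1.0, 0.0], [0.0, 0.0]]

b, c = [1.0, 2.0], [1.0, 1.0]

def J(p):
    u = solve2(A(p), b)
    return c[0] * u[0] + c[1] * u[1]

def dJ_dp_adjoint(p):
    m = A(p)
    u = solve2(m, b)
    mt = [[m[0][0], m[1][0]], [m[0][1], m[1][1]]]   # transpose of A
    lam = solve2(mt, c)                              # adjoint solve
    dA = dA_dp()
    dAu = [dA[0][0] * u[0] + dA[0][1] * u[1],
           dA[1][0] * u[0] + dA[1][1] * u[1]]
    return -(lam[0] * dAu[0] + lam[1] * dAu[1])

p, h = 0.5, 1e-6
fd = (J(p + h) - J(p - h)) / (2 * h)     # central-difference check
print(dJ_dp_adjoint(p), fd)              # the two should agree
```

A direct (forward) sensitivity would instead solve one extra system per parameter, which is why adjoints win when parameters are many and objectives few.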


  19. Very Large-Scale Integrated Processor

    Directory of Open Access Journals (Sweden)

    Shigeyuki Takano

    2013-01-01

    Full Text Available In the near future, improvements in semiconductor technology will allow thousands of resources to be implementable on chip. However, a limitation remains for both single large-scale processors and many-core processors. For single processors, this limitation arises from their design complexity; for many-core processors, an application must be partitioned into several tasks and these partitioned tasks mapped onto the cores. In this article, we propose a dynamic chip multiprocessor (CMP) model that consists of simple modules (realizing a low design complexity) and does not require application partitioning, since the scale of the processor is dynamically variable, scaling up or down on demand. This model is based on prior work on adaptive processors that can gather and release resources on chip to dynamically form a processor. The adaptive processor takes a linear topology that realizes locality-based placement and replacement using the processing elements themselves, through a stack shift of information on the linear topology of the processing-element array. For the scaling of the processor, the linear topology of the interconnection network therefore has to support the stack shift before and after the up- or down-scaling. To this end, we propose an interconnection network architecture called a dynamic channel segmentation distribution (dynamic CSD) network. In addition, the linear topology must be folded on-chip into a two-dimensional plane. We also propose a new conceptual topology and its cluster, which is a unit of the new topology and is replicated across the chip. We analyzed the cost in terms of the available number of clusters (adaptive processors with a minimum scale), the delay in Manhattan distance across the chip, and the peak Giga-Operations per Second (GOPS) as the process technology scales.

  20. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous
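Two of the ideas in the course outline, display-aware level selection and a fixed-budget out-of-core working set, can be sketched in a few lines. The tile keys, sizes, and loader below are invented stand-ins for a GPU virtual-texturing page table:

```python
import math
from collections import OrderedDict

def mip_level(texels_covered, pixels_on_screen, max_level):
    """Pick the coarsest resolution level that still gives ~1 texel per pixel."""
    ratio = max(texels_covered / max(pixels_on_screen, 1), 1.0)
    return min(int(math.log2(ratio)), max_level)

class TileCache:
    """Fixed-budget LRU cache: only the visible working set stays resident."""
    def __init__(self, capacity, loader):
        self.capacity, self.loader = capacity, loader
        self.tiles = OrderedDict()
        self.misses = 0

    def fetch(self, key):
        if key in self.tiles:
            self.tiles.move_to_end(key)        # mark most-recently-used
            return self.tiles[key]
        self.misses += 1
        if len(self.tiles) >= self.capacity:
            self.tiles.popitem(last=False)     # evict least-recently-used tile
        self.tiles[key] = self.loader(key)     # "out-of-core" load on miss
        return self.tiles[key]

cache = TileCache(capacity=2, loader=lambda k: f"tile-{k}")
for k in [0, 1, 0, 2, 0, 1]:
    cache.fetch(k)
print(cache.misses, mip_level(4096, 512, max_level=10))   # -> 4 3
```

The cache budget, not the dataset size, bounds memory: repeated fetches touch only the resident working set and reload evicted tiles on demand, which is the CPU-side analogue of the GPU strategies the course describes.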

  1. MUST code verification on the large scale benchmark problem

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Woon; Lee, Young Ouk [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Hong, Ser Gi [Kyung Hee Univ., Yongin (Korea, Republic of)

    2012-10-15

    The MUST (Multi-group Unstructured geometry SN Transport) code has been developed to deal with a complex geometry using unstructured tetrahedral elements as a computational mesh, and has been tested on several test problems. In this paper, we applied the MUST code to the large scale (a few km size) benchmark problem, which is the DS02 Fat Man problem. Compared to the other test problems, the geometry is rather simple. However, we should consider the ray effects because the epicenter (burst point of the bomb) is modeled as a point source in the air. The source spectra, geometry data, and material compositions for the calculations are available in the DS02 report. The calculated neutron, secondary gamma, and primary gamma doses are compared with the reference results in the DS02 report.

  2. Large Scale Demand Response of Thermostatic Loads

    DEFF Research Database (Denmark)

    Totu, Luminita Cristiana

    This study is concerned with large populations of residential thermostatic loads (e.g. refrigerators, air conditioning or heat pumps). The purpose is to gain control over the aggregate power consumption in order to provide balancing services for the electrical grid. Without affecting the temperat...

  3. Very large-scale motions in a turbulent pipe flow

    Science.gov (United States)

    Lee, Jae Hwa; Jang, Seong Jae; Sung, Hyung Jin

    2011-11-01

    Direct numerical simulation of a turbulent pipe flow with ReD=35000 was performed to investigate the spatially coherent structures associated with very large-scale motions. The corresponding friction Reynolds number, based on pipe radius R, is R+=934, and the computational domain length is 30 R. The computed mean flow statistics agree well with previous DNS data at ReD=44000 and 24000. Inspection of the instantaneous fields and two-point correlation of the streamwise velocity fluctuations showed that the very long meandering motions exceeding 25R exist in logarithmic and wake regions, and the streamwise length scale is almost linearly increased up to y/R ~0.3, while the structures in the turbulent boundary layer only reach up to the edge of the log-layer. Time-resolved instantaneous fields revealed that the hairpin packet-like structures grow with continuous stretching along the streamwise direction and create the very large-scale structures with meandering in the spanwise direction, consistent with the previous conceptual model of Kim & Adrian (1999). This work was supported by the Creative Research Initiatives of NRF/MEST of Korea (No. 2011-0000423).
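The two-point correlation used above to measure streamwise length scales is just a normalized autocovariance. A sketch on a synthetic periodic "velocity" signal rather than DNS data:

```python
import math

def two_point_corr(u, shift):
    """R(dx) = <u'(x) u'(x+dx)> / <u'^2> on a periodic 1D signal."""
    n = len(u)
    mean = sum(u) / n
    f = [x - mean for x in u]                 # fluctuations about the mean
    var = sum(x * x for x in f) / n
    return sum(f[i] * f[(i + shift) % n] for i in range(n)) / (n * var)

# Synthetic signal: one long-wavelength mode plus a weak short-wavelength one.
n = 256
u = [math.sin(2 * math.pi * i / n) + 0.1 * math.sin(12 * math.pi * i / n)
     for i in range(n)]
print(two_point_corr(u, 0), two_point_corr(u, n // 2))
```

R(0) is 1 by construction, and the separation at which R(dx) decays (here it goes strongly negative at half the domain) marks the length of the energetic motions; applied along the streamwise direction of the DNS fields, this is how the >25R coherence is detected.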

  4. Large-Scale Spacecraft Fire Safety Tests

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; Toth, Balazs; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Jomaas, Grunde

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASAs material flammability screening tests

  5. Safety aspects of large-scale combustion of hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Edeskuty, F.J.; Haugh, J.J.; Thompson, R.T.

    1986-01-01

    Recent hydrogen-safety investigations have studied the possible large-scale effects of phenomena such as the accumulation of combustible hydrogen-air mixtures in large, confined volumes. Of particular interest are safe methods for the disposal of the hydrogen and the pressures which can arise from its confined combustion. Consequently, tests of the confined combustion of hydrogen-air mixtures were conducted in a 2100 m³ volume. These tests show that continuous combustion, as the hydrogen is generated, is a safe method for its disposal. It has also been seen that, for hydrogen concentrations up to 13 vol %, it is possible to predict the maximum pressures that can occur upon ignition of premixed hydrogen-air atmospheres. In addition, information has been obtained concerning the survivability of the equipment that is needed to recover from an accident involving hydrogen combustion. An accident that involved the inadvertent mixing of hydrogen and oxygen gases in a tube trailer gave evidence that under the proper conditions hydrogen combustion can transition to a detonation. If detonation occurs, the pressures which can be experienced are much higher, although short in duration.
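The "predictable maximum pressure" for lean mixtures can be bounded with a constant-volume (AICC: adiabatic isochoric complete combustion) estimate. The heating value and average heat capacity below are rough assumed properties, so this is an order-of-magnitude sketch only, not the correlation used in the tests:

```python
# Constant-volume "AICC" bound for lean H2-air ignition: ideal gas, complete
# combustion, constant average heat capacity. LHV_H2 and CV_MIX are rough
# assumed properties; treat the result as an order-of-magnitude estimate.
LHV_H2 = 242e3     # J per mol H2 burned (lower heating value, approx.)
CV_MIX = 27.0      # J/(mol K), average product heat capacity at high T (assumed)

def aicc_pressure_ratio(x_h2, t0=300.0):
    """Peak/initial pressure ratio for mole fraction x_h2 of H2 in air (lean side)."""
    o2 = (1 - x_h2) * 0.21                 # O2 mole fraction from the air part
    burned = min(x_h2, 2 * o2)             # lean: all the H2 burns
    n_final = 1 - burned / 2               # 2 H2 + O2 -> 2 H2O loses 0.5 mol per mol H2
    t_final = t0 + burned * LHV_H2 / (n_final * CV_MIX)
    return n_final * t_final / t0          # ideal gas at fixed volume: p ~ n T

print(round(aicc_pressure_ratio(0.13), 2))
```

For 13 vol% H2 this gives a peak-to-initial pressure ratio near 5, the deflagration regime the tests probed; a transition to detonation, as in the tube-trailer accident, produces much higher transient pressures than this bound.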

  6. Large Scale Weather Control Using Nuclear Reactors

    CERN Document Server

    Singh-Modgil, M

    2002-01-01

    It is pointed out that controlled release of thermal energy from fission-type nuclear reactors can be used to alter weather patterns over significantly large geographical regions. (1) Nuclear heat creates a low-pressure region, which can be used to draw moist air from oceans onto deserts. (2) Creation of low-pressure zones over oceans using nuclear heat can lead to Controlled Cyclone Creation (CCC). (3) Nuclear heat can also be used to melt glaciers and control water flow in rivers.

  7. Population generation for large-scale simulation

    Science.gov (United States)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

    Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat.1-3 Multi-agent simulations in particular are now commonplace in many fields.4, 5 By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive.6 To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application.7, 8 Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations.9-11 One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more
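The attribute-plus-relations structure described above can be sketched directly. The distributions, the height-weight coupling, and the Erdos-Renyi tie probability below are invented for illustration; Hats and similar simulators use richer social-network models:

```python
import random

random.seed(7)

def make_agents(n):
    """Agents as attribute dicts; weight is deliberately correlated with height."""
    agents = []
    for i in range(n):
        height = random.gauss(170, 10)                     # cm
        weight = 0.9 * height - 90 + random.gauss(0, 5)    # kg, height-dependent
        agents.append({"id": i, "height": height, "weight": weight,
                       "morale": random.uniform(0, 1)})
    return agents

def make_relations(agents, p=0.1):
    """Independent pairwise ties (Erdos-Renyi) as the relations layer."""
    return [(a["id"], b["id"])
            for i, a in enumerate(agents) for b in agents[i + 1:]
            if random.random() < p]

agents = make_agents(50)
edges = make_relations(agents)
print(len(agents), len(edges))
```

Attributes alone are the easy part; the point of the paper is that the relations layer, here a single probability `p`, is where structural realism (and analytical difficulty) enters.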

  8. Using Large Scale Test Results for Pedagogical Purposes

    DEFF Research Database (Denmark)

    Dolin, Jens

    2012-01-01

    The use and influence of large scale tests (LST), both national and international, have increased dramatically within the last decade. This process has revealed a tension between the legitimate need for information about the performance of the educational system and teachers to inform policy, and the teachers’ and students’ use of this information for pedagogical purposes in the classroom. We know well how the policy makers interpret and use the outcomes of such tests, but we know less about how teachers make use of LSTs to inform their pedagogical practice. An important question is whether … educational system and the different theoretical foundations of PISA and most teachers’ pedagogically oriented, formative assessment, thus explaining the teacher resentment towards LSTs. Finally, some principles for linking LSTs to teachers’ pedagogical practice will be presented.

  9. Morphological fluctuations of large-scale structure the PSCz survey

    CERN Document Server

    Kerscher, M; Schmalzing, J; Beisbart, C; Buchert, T; Wagner, H

    2001-01-01

    In a follow-up study to a previous analysis of the IRAS 1.2Jy catalogue, we quantify the morphological fluctuations in the PSCz survey. We use a variety of measures, among them the family of scalar Minkowski functionals. We confirm the existence of significant fluctuations that are discernible in volume-limited samples out to 200Mpc/h. In contrast to earlier findings, comparisons with cosmological N-body simulations reveal that the observed fluctuations roughly agree with the cosmic variance found in corresponding mock samples. While two-point measures, e.g. the variance of count-in-cells, fluctuate only mildly, the fluctuations in the morphology on large scales indicate the presence of coherent structures that are at least as large as the sample.
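The contrast drawn above between mild two-point fluctuations and larger morphological ones rests on estimators like counts-in-cells. A minimal 1D version on an unclustered synthetic sample, where the variance-to-mean ratio of the counts should sit near the Poisson value of 1:

```python
import random

random.seed(3)

def counts_in_cells(points, n_cells, length=1.0):
    """Histogram point positions into equal cells over [0, length)."""
    counts = [0] * n_cells
    for x in points:
        counts[min(int(x / length * n_cells), n_cells - 1)] += 1
    return counts

points = [random.random() for _ in range(20000)]   # unclustered (Poisson-like)
c = counts_in_cells(points, 400)
mean = sum(c) / len(c)
var = sum((x - mean) ** 2 for x in c) / len(c)
print(mean, var / mean)   # variance/mean ~ 1 for an unclustered sample
```

Clustered samples push var/mean above 1; the paper's point is that Minkowski functionals can detect coherent large-scale structure even when such two-point measures fluctuate only mildly.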

  10. Unfolding large-scale online collaborative human dynamics

    CERN Document Server

    Zha, Yilong; Zhou, Changsong

    2015-01-01

    Large-scale interacting human activities underlie all social and economic phenomena, but quantitative understanding of their regular patterns and mechanisms is very challenging and still rare. Self-organized online collaborative activities with precise records of event timing provide an unprecedented opportunity. Our empirical analysis of the history of millions of updates in Wikipedia shows a universal double power-law distribution of time intervals between consecutive updates of an article. We then propose a generic model to unfold collaborative human activities into three modules: (i) individual behavior characterized by Poissonian initiation of an action, (ii) human interaction captured by a cascading response to others with a power-law waiting time, and (iii) population growth due to an increasing number of interacting individuals. This unfolding allows us to obtain an analytical formula that is fully supported by the universal patterns in empirical data. Our modeling approaches reveal "simplicity" beyond complex interac...
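The model's first two modules rest on two waiting-time distributions, both easy to sample by inverse transform. The rate and exponent below are illustrative, not values fitted to the Wikipedia data:

```python
import math
import random

random.seed(1)

def poisson_gap(rate):
    """Exponential inter-arrival time of a Poisson process (inverse transform)."""
    return -math.log(1 - random.random()) / rate

def powerlaw_wait(alpha, w_min=1.0):
    """Sample W with pdf ~ w^-alpha, w >= w_min: P(W > w) = (w/w_min)^(1-alpha)."""
    return w_min * (1 - random.random()) ** (-1 / (alpha - 1))

gaps = [poisson_gap(0.5) for _ in range(20000)]      # module (i): initiation
waits = [powerlaw_wait(2.5) for _ in range(20000)]   # module (ii): response
print(sum(gaps) / len(gaps))                          # ~ 1 / rate = 2
print(sum(1 for w in waits if w > 10) / len(waits))   # tail ~ 10^(1-2.5) ~ 3%
```

Superposing exponential initiation gaps with heavy-tailed response delays is the mechanism by which such models produce a double power-law in the observed inter-update intervals.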

  11. Large-scale structure of time evolving citation networks

    Science.gov (United States)

    Leicht, E. A.; Clarkson, G.; Shedden, K.; Newman, M. E. J.

    2007-09-01

    In this paper we examine a number of methods for probing and understanding the large-scale structure of networks that evolve over time. We focus in particular on citation networks, networks of references between documents such as papers, patents, or court cases. We describe three different methods of analysis, one based on an expectation-maximization algorithm, one based on modularity optimization, and one based on eigenvector centrality. Using the network of citations between opinions of the United States Supreme Court as an example, we demonstrate how each of these methods can reveal significant structural divisions in the network and how, ultimately, the combination of all three can help us develop a coherent overall picture of the network's shape.
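Of the three methods, eigenvector centrality is the most compact to sketch: repeated multiplication by the adjacency matrix converges to the leading eigenvector, whose entries rank the nodes. A tiny hand-made graph (undirected here for simplicity, while citation networks are directed):

```python
def eigenvector_centrality(adj, iters=200):
    """Power iteration on an adjacency matrix (list of rows)."""
    n = len(adj)
    x = [1.0 / n] * n
    for _ in range(iters):
        y = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]          # renormalize each step
    return x

# Node 0 is linked by everyone; nodes 3 and 4 are peripheral.
adj = [[0, 1, 1, 1, 1],
       [1, 0, 1, 0, 0],
       [1, 1, 0, 0, 0],
       [1, 0, 0, 0, 0],
       [1, 0, 0, 0, 0]]
c = eigenvector_centrality(adj)
print(max(range(5), key=lambda i: c[i]))   # node 0 (linked by all) ranks highest
```

On a citation network the same idea, applied to the directed adjacency matrix, surfaces the opinions that are cited by other highly cited opinions, which is how the paper uses it alongside modularity and expectation-maximization.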

  12. In situ vitrification large-scale operational acceptance test analysis

    International Nuclear Information System (INIS)

    A thermal treatment process is currently under study to provide possible enhancement of in-place stabilization of transuranic and chemically contaminated soil sites. The process is known as in situ vitrification (ISV). In situ vitrification is a remedial action process that destroys solid and liquid organic contaminants and incorporates radionuclides into a glass-like material that renders contaminants substantially less mobile and less likely to impact the environment. A large-scale operational acceptance test (LSOAT) was recently completed in which more than 180 t of vitrified soil were produced in each of three adjacent settings. The LSOAT demonstrated that the process conforms to the functional design criteria necessary for the large-scale radioactive test (LSRT) to be conducted following verification of the performance capabilities of the process. The energy requirements and vitrified block size, shape, and mass are sufficiently equivalent to those predicted by the ISV mathematical model to confirm its usefulness as a predictive tool. The LSOAT demonstrated an electrode replacement technique, which can be used if an electrode fails, and techniques have been identified to minimize air oxidation, thereby extending electrode life. A statistical analysis was employed during the LSOAT to identify graphite collars and an insulative surface as successful cold cap subsidence techniques. The LSOAT also showed that even under worst-case conditions, the off-gas system exceeds the flow requirements necessary to maintain a negative pressure on the hood covering the area being vitrified. The retention of simulated radionuclides and chemicals in the soil and off-gas system exceeds requirements so that projected emissions are one to two orders of magnitude below the maximum permissible concentrations of contaminants at the stack

  13. Thermal power generation projects "Large Scale Solar Heating" (EU THERMIE projects)

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the "Large Scale Solar Heating" programme for a Europe-wide development of this technology. The demonstration programme derived from it was judged favourably by the expert reviewers but was not immediately (1996) accepted for funding. In November 1997 the EU Commission provided 1.5 million ECU, which allowed an updated project proposal to be realised. A smaller project had already been approved by mid-1997; it was proposed under the lead of Chalmers Industriteknik (CIT) in Sweden and mainly serves technology transfer.

  14. Multitree Algorithms for Large-Scale Astrostatistics

    Science.gov (United States)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

this number every week, resulting in billions of objects. At such scales, even linear-time analysis operations present challenges, particularly since statistical analyses are inherently interactive processes, requiring that computations complete within some reasonable human attention span. The quadratic (or worse) runtimes of straightforward implementations quickly become unbearable. Examples of applications. These analysis subroutines occur ubiquitously in astrostatistical work. We list just a few examples. The need to cross-match objects across different catalogs has led to various algorithms, which at some point perform an AllNN computation. 2-point and higher-order spatial correlations form the basis of spatial statistics, and are utilized in astronomy to compare the spatial structures of two datasets, such as an observed sample and a theoretical sample, forming the basis for two-sample hypothesis testing. Friends-of-friends clustering is often used to identify halos in data from astrophysical simulations. Minimum spanning tree properties have also been proposed as statistics of large-scale structure. Comparison of the distributions of different kinds of objects requires accurate density estimation, for which KDE is the overall statistical method of choice. The prediction of redshifts from optical data requires accurate regression, for which kernel regression is a powerful method. The identification of objects of various types in astronomy, such as stars versus galaxies, requires accurate classification, for which KDA is a powerful method. Overview. In this chapter, we will briefly sketch the main ideas behind recent fast algorithms which achieve, for example, linear runtimes for pairwise-distance problems, or similarly dramatic reductions in computational growth. 
In some cases, the runtime orders for these algorithms are mathematically provable statements, while in others we have only conjectures backed by experimental observations for the time being
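The pairwise kernel summations that these multitree algorithms accelerate are simple to state. As an illustrative baseline (our own sketch, not code from the chapter), a brute-force Gaussian KDE evaluates every query point against every data point in O(N·M) time, which is exactly the cost the tree-based methods reduce:

```python
import math

def gaussian_kde(queries, data, bandwidth):
    """Brute-force Gaussian kernel density estimate, O(len(queries) * len(data)).
    Multitree algorithms approximate these same sums in near-linear time by
    pruning pairs of tree nodes whose contribution is provably negligible."""
    norm = 1.0 / (len(data) * bandwidth * math.sqrt(2.0 * math.pi))
    out = []
    for q in queries:
        s = sum(math.exp(-0.5 * ((q - x) / bandwidth) ** 2) for x in data)
        out.append(norm * s)
    return out

# Density of a small symmetric sample, evaluated at its center of mass.
data = [-1.0, 0.0, 1.0]
density = gaussian_kde([0.0], data, bandwidth=1.0)
```

The dual-tree versions return the same estimates to a user-set tolerance; the brute-force form above is only the reference computation.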

  15. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi-site data by using within-year recaptures to provide a covariate of between-year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper, by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004), illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands, where a re-introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and “trap-happiness” effects. As the data are based on resightings, such trap-happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate

  16. Networking in a Large-Scale Distributed Agile Project

    OpenAIRE

    Moe, Nils Brede; Šmite, Darja; Šāblis, Aivars; Börjesson, Anne-Lie; Andréasson, Pia

    2014-01-01

Context: In large-scale distributed software projects the expertise may be scattered across multiple locations. Goal: We describe and discuss a large-scale distributed agile project at Ericsson, a multinational telecommunications company headquartered in Sweden. The project is distributed across four development locations (one in Sweden, one in Korea and two in China) and employs 17 teams. In such a large-scale environment the challenge is to have as few dependencies between teams as possible,...

  17. The role of large-scale, extratropical dynamics in climate change

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, T.G. [ed.

    1994-02-01

The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  18. The role of large-scale, extratropical dynamics in climate change

    International Nuclear Information System (INIS)

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database

  19. Penetration of Large Scale Electric Field to Inner Magnetosphere

    Science.gov (United States)

    Chen, S. H.; Fok, M. C. H.; Sibeck, D. G.; Wygant, J. R.; Spence, H. E.; Larsen, B.; Reeves, G. D.; Funsten, H. O.

    2015-12-01

simulations reveal alternating penetration and shielding electric fields during the main phase of the geomagnetic storm, indicating an impulsive nature of the large-scale penetrating electric field in regulating the gain and loss of radiation belt particles. We will present the statistical analysis and simulation results.

  20. The Large-scale Component of Mantle Convection

    Science.gov (United States)

    Cserepes, L.

Circulation in the Earth's mantle occurs on multiple spatial scales: this review discusses the character of its large-scale or global components. Direct and strong evidence concerning the global flow comes, first of all, from the pattern of plate motion. Further indirect observational data which can be transformed into flow velocities by the equation of motion are the internal density heterogeneities revealed by seismic tomography, and the geoid can also be used as an observational constraint. Due to their limited spatial resolution, global tomographic data automatically filter out the small-scale features and are therefore relevant to the global flow pattern. Flow solutions obtained from tomographic models, using the plate motion as boundary condition, reveal that subduction is the downwelling of the global mantle circulation and that the deep-rooted upwellings are concentrated in 2-3 superplumes. Spectral analysis of the tomographic heterogeneities shows that the power of global flow appears dominantly in the lowest spherical harmonic orders 2-5. Theoretical convection calculations contribute substantially to the understanding of global flow. If basal heating of the mantle is significant, numerical models can reproduce the basic 2 to 5 cell pattern of convection even without the inclusion of surface plates. If plates are superimposed on the solution with their present arrangement and motion, the dominance of these low spherical harmonic orders is more pronounced. The cells are not necessarily closed, rather they show chaotic time-dependence, but they are normally bordered by long downwelling features, and they usually have a single superplume in the cell interior. Swarms of small plumes can develop in the large cells, especially when convection is partially layered due to an internal boundary such as the 670 km discontinuity (source of small plumes). These small plumes are usually tilted by the background large-scale flow which shows that they are

  1. Stochastic pattern transitions in large scale swarms

    Science.gov (United States)

    Schwartz, Ira; Lindley, Brandon; Mier-Y-Teran, Luis

    2013-03-01

    We study the effects of time dependent noise and discrete, randomly distributed time delays on the dynamics of a large coupled system of self-propelling particles. Bifurcation analysis on a mean field approximation of the system reveals that the system possesses patterns with certain universal characteristics that depend on distinguished moments of the time delay distribution. We show both theoretically and numerically that although bifurcations of simple patterns, such as translations, change stability only as a function of the first moment of the time delay distribution, more complex bifurcating patterns depend on all of the moments of the delay distribution. In addition, we show that for sufficiently large values of the coupling strength and/or the mean time delay, there is a noise intensity threshold, dependent on the delay distribution width, that forces a transition of the swarm from a misaligned state into an aligned state. We show that this alignment transition exhibits hysteresis when the noise intensity is taken to be time dependent. Research supported by the Office of Naval Research
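The misaligned-to-aligned transition described above is conventionally measured by a polarization order parameter. The following minimal sketch (a Vicsek-style global-alignment update with uniform angular noise; it omits the time delays that are central to the study, and all names are ours) shows the order parameter distinguishing the two states:

```python
import math
import random

def polarization(angles):
    """Order parameter: magnitude of the mean heading vector.
    ~1 for an aligned swarm, ~0 for a misaligned (disordered) one."""
    n = len(angles)
    cx = sum(math.cos(a) for a in angles) / n
    cy = sum(math.sin(a) for a in angles) / n
    return math.hypot(cx, cy)

def step(angles, noise, rng):
    """Vicsek-style update with global coupling: every heading relaxes to the
    swarm's mean direction, perturbed by uniform angular noise of width `noise`."""
    n = len(angles)
    mx = sum(math.cos(a) for a in angles) / n
    my = sum(math.sin(a) for a in angles) / n
    mean_dir = math.atan2(my, mx)
    return [mean_dir + noise * (rng.random() - 0.5) for _ in angles]

rng = random.Random(2)
angles = [rng.uniform(-math.pi, math.pi) for _ in range(200)]
for _ in range(20):
    angles = step(angles, 0.5, rng)  # weak noise: swarm aligns
```

Sweeping `noise` from small to large reproduces, in caricature, the noise-driven transition between aligned and misaligned states that the abstract analyzes (there with delays and a hysteretic, time-dependent noise intensity).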

  2. Large-scale Changes in Marine Fog in a Warmer Climate

    Science.gov (United States)

    Kawai, H.; Koshiro, T.; Endo, H.

    2015-12-01

Marine fog, especially over the mid-latitude oceans, is an important target in climate simulation: it affects maritime traffic, and because of its significant coverage the sky-obscuring fog contributes to the Earth's radiation budget. The purpose of the present study is to reveal global-scale changes in marine fog in a warmer climate. The changes in marine fog with climate change are investigated using AMIP, AMIP4K (where a uniform +4 K SST is added to the AMIP SSTs), and AMIPfuture (where a patterned SST perturbation is added to the AMIP SSTs) experiment data simulated by the MRI-CGCM3 (Yukimoto et al. 2012), which was used for CMIP5 runs. First, the representation of fog in the model was examined using ship observation data and cloud mask data retrieved from CALIPSO satellite data (Kawai et al. 2015). The comparison showed that the MRI-CGCM3 can represent the climatological global distribution of marine fog relatively well. Marine fog represented by the model is essentially warm-air advection fog, and it was found that the change in the horizontal temperature advection near the surface mostly determines the changes in marine fog in a warmer climate. Therefore, the changes in marine fog can be largely explained by the large-scale circulation changes. On the other hand, the in-cloud LWC (liquid water content) of the fog consistently increases in a warmer climate for the same horizontal surface temperature advection. The changes in mid-latitude marine fog in both the northern and southern hemispheres and for both summer and winter seasons are discussed in connection with the large-scale circulation changes.

  3. Cash for Coolers : Evaluating a Large-Scale Appliance Replacement Program in Mexico

    OpenAIRE

    Lucas W. Davis; Fuchs, Alan; Gertler, Paul

    2014-01-01

    This paper evaluates a large-scale appliance replacement program in Mexico that from 2009 to 2012 helped 1.9 million households replace their old refrigerators and air conditioners with energy-efficient models. Using household-level billing records from the universe of Mexican residential customers, we find that refrigerator replacement reduces electricity consumption by 8 percent, about one-quarter of what was predicted by ex ante analyses. Moreover, we find that air conditioning replacement...

  4. Using Large-Scale Assessment Scores to Determine Student Grades

    Science.gov (United States)

    Miller, Tess

    2013-01-01

    Many Canadian provinces provide guidelines for teachers to determine students' final grades by combining a percentage of students' scores from provincial large-scale assessments with their term scores. This practice is thought to hold students accountable by motivating them to put effort into completing the large-scale assessment, thereby…

  5. Large-scale turbulence structures in shallow separating flows

    NARCIS (Netherlands)

    Talstra, H.

    2011-01-01

    The Ph.D. thesis “Large-scale turbulence structures in shallow separating flows” by Harmen Talstra is the result of a Ph.D. research project on large-scale shallow-flow turbulence, which has been performed in the Environmental Fluid Mechanics Laboratory at Delft University of Technology. The dynamic

  6. Safeguards instruments for Large-Scale Reprocessing Plants

    Energy Technology Data Exchange (ETDEWEB)

    Hakkila, E.A. [Los Alamos National Lab., NM (United States); Case, R.S.; Sonnier, C. [Sandia National Labs., Albuquerque, NM (United States)

    1993-06-01

    Between 1987 and 1992 a multi-national forum known as LASCAR (Large Scale Reprocessing Plant Safeguards) met to assist the IAEA in development of effective and efficient safeguards for large-scale reprocessing plants. The US provided considerable input for safeguards approaches and instrumentation. This paper reviews and updates instrumentation of importance in measuring plutonium and uranium in these facilities.

  7. ACTIVE DIMENSIONAL CONTROL OF LARGE-SCALED STEEL STRUCTURES

    OpenAIRE

    Radosław Rutkowski

    2013-01-01

    The article discusses the issues of dimensional control in the construction process of large-scaled steel structures. The main focus is on the analysis of manufacturing tolerances. The article presents the procedure of tolerance analysis usage in process of design and manufacturing of large-scaled steel structures. The proposed solution could significantly improve the manufacturing process.

  8. INTERNATIONAL WORKSHOP ON LARGE-SCALE REFORESTATION: PROCEEDINGS

    Science.gov (United States)

The purpose of the workshop was to identify major operational and ecological considerations needed to successfully conduct large-scale reforestation projects throughout the forested regions of the world. "Large-scale" for this workshop means projects where, by human effort, approx...

  9. SALSA ─ a Sectional Aerosol module for Large Scale Applications

    Directory of Open Access Journals (Sweden)

    A. Laaksonen

    2008-05-01

Full Text Available The sectional aerosol module SALSA is introduced. The model has been designed to be implemented in large-scale climate models, which require both accuracy and computational efficiency. We have used multiple methods to reduce the computational burden of the different aerosol processes and to optimize the model performance without losing the physical features relevant to climate. The optimizations include limiting the chemical compounds and physical processes available in different size sections of aerosol particles; dividing the size distribution into size sections of variable width, depending on the sensitivity of microphysical processing to particle size; keeping the total number of size sections used to describe the size distribution to a minimum; and calculating only the relevant microphysical processes affecting each size section. The ability of the module to describe different microphysical processes was evaluated against explicit microphysical models and several microphysical models used in air quality models. The results from the current module show good consistency when compared to more explicit models. Also, the module was used to simulate a new particle formation event typical of highly polluted conditions, with results comparable to a more explicit model setup.

  10. SALSA – a Sectional Aerosol module for Large Scale Applications

    Directory of Open Access Journals (Sweden)

    A. Laaksonen

    2007-12-01

Full Text Available The sectional aerosol module SALSA is introduced. The model has been designed to be implemented in large-scale climate models, which require both accuracy and computational efficiency. We have used multiple methods to reduce the computational burden of the different aerosol processes and to optimize the model performance without losing the physical features relevant to climate. The optimizations include limiting the chemical compounds and physical processes available in different size sections of aerosol particles; dividing the size distribution into size sections of variable width, depending on the sensitivity of microphysical processing to particle size; keeping the total number of size sections used to describe the size distribution to a minimum; and calculating only the relevant microphysical processes affecting each size section. The ability of the module to describe different microphysical processes was evaluated against explicit microphysical models and several microphysical models used in air quality models. The results from the current module show good consistency when compared to more explicit models. Also, the module was used to simulate a new particle formation event typical of highly polluted conditions, with results comparable to a more explicit model setup.
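The variable-width size sections that sectional modules like SALSA use are typically laid out geometrically in particle diameter, concentrating resolution at the small-particle end where microphysics is most size-sensitive. A sketch of such a discretization (the function name, bounds and section count are illustrative, not SALSA's actual configuration):

```python
def geometric_sections(d_min, d_max, n_sections):
    """Split the diameter range [d_min, d_max] into n_sections bins whose
    edges grow by a constant ratio, so bin widths increase with size and
    small particles get the finest resolution, as in sectional aerosol models."""
    ratio = (d_max / d_min) ** (1.0 / n_sections)
    return [d_min * ratio ** i for i in range(n_sections + 1)]

# Hypothetical layout: 3 nm to 10 um split into 10 geometric sections.
edges = geometric_sections(3e-9, 10e-6, 10)
```

Each section then carries only the compounds and processes relevant at that size, which is the computational-burden optimization the abstract describes.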

  11. Economic modeling for large-scale urban tree planting

    International Nuclear Information System (INIS)

Large-scale urban tree planting is advocated to conserve energy and improve environmental quality, yet little data exist to evaluate its economic and ecologic implications. This paper describes an economic-ecologic model applied to the Trees for Tucson/Global ReLeaf reforestation program. The program proposes planting 500,000 desert-adapted trees before 1996. The computer simulation accounts for planting locations, planting rates, growth rates, and mortality rates when projecting average annual benefits and costs. Projected net benefits are $236.5 million for the 40-year planning horizon. The benefit-cost ratio and internal rate of return for all trees are 2.6 and 7.11, respectively. Trees planted in parks are projected to provide the highest benefit-cost ratio (2.7) and trees along residential streets the lowest (2.2). Tree removal costs are the most important management expense and air conditioning energy savings provide the greatest benefits. Average annual cooling energy benefits per tree are projected to be 227 kWh ($16.34) for evapotranspirational cooling and 61 kWh ($4.39) for direct shade. Ninety-seven percent (464 lb) of the total carbon conserved annually per mature tree is attributed to reduced power plant emissions
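The benefit-cost ratio reported above is, in the standard definition, the ratio of discounted benefit and cost streams over the planning horizon. A small sketch with hypothetical cash flows (not the Tucson program's actual figures) illustrates the accounting:

```python
def npv(cashflows, rate):
    """Net present value of a yearly cash-flow stream at a given discount rate."""
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(cashflows))

def benefit_cost_ratio(benefits, costs, rate):
    """Discounted benefits divided by discounted costs; > 1 means net gain."""
    return npv(benefits, rate) / npv(costs, rate)

# Hypothetical 5-year stream: up-front planting cost, growing energy savings.
benefits = [0.0, 5.0, 10.0, 15.0, 20.0]
costs = [30.0, 2.0, 2.0, 2.0, 2.0]
bcr = benefit_cost_ratio(benefits, costs, 0.05)  # ≈ 1.17 at a 5% discount rate
```

The internal rate of return is then the discount rate at which the net present value of (benefits − costs) crosses zero.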

  12. Large-scale epitaxial growth kinetics of graphene: A kinetic Monte Carlo study

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huijun; Hou, Zhonghuai, E-mail: hzhlj@ustc.edu.cn [Department of Chemical Physics and Hefei National Laboratory for Physical Sciences at Microscales, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2015-08-28

    Epitaxial growth via chemical vapor deposition is considered to be the most promising way towards synthesizing large area graphene with high quality. However, it remains a big theoretical challenge to reveal growth kinetics with atomically energetic and large-scale spatial information included. Here, we propose a minimal kinetic Monte Carlo model to address such an issue on an active catalyst surface with graphene/substrate lattice mismatch, which facilitates us to perform large scale simulations of the growth kinetics over two dimensional surface with growth fronts of complex shapes. A geometry-determined large-scale growth mechanism is revealed, where the rate-dominating event is found to be C{sub 1}-attachment for concave growth-front segments and C{sub 5}-attachment for others. This growth mechanism leads to an interesting time-resolved growth behavior which is well consistent with that observed in a recent scanning tunneling microscopy experiment.
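The kinetic Monte Carlo framework underlying such growth models selects each event (e.g., a C{sub 1}- or C{sub 5}-attachment) with probability proportional to its rate and draws an exponentially distributed waiting time. A generic Gillespie-style step, sketched here with hypothetical rates rather than the authors' graphene energetics:

```python
import math
import random

def kmc_step(rates, rng):
    """One kinetic Monte Carlo (Gillespie) step: pick event i with probability
    rates[i] / sum(rates), and draw the waiting time from Exp(sum(rates))."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1  # fallback guards against float round-off
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total  # 1 - U avoids log(0)
    return chosen, dt

# Two competing attachment channels, one fast and one slow (hypothetical rates).
rng = random.Random(0)
counts = [0, 0]
for _ in range(10000):
    i, dt = kmc_step([9.0, 1.0], rng)
    counts[i] += 1
```

Over many steps the fast channel is chosen in proportion to its rate (here about 90% of the time), which is how the rate-dominating event of a growth-front geometry emerges from the simulation.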

  13. A Minimal Model for Large-scale Epitaxial Growth Kinetics of Graphene

    CERN Document Server

    Jiang, Huijun

    2015-01-01

    Epitaxial growth via chemical vapor deposition is considered to be the most promising way towards synthesizing large area graphene with high quality. However, it remains a big theoretical challenge to reveal growth kinetics with atomically energetic and large-scale spatial information included. Here, we propose a minimal kinetic Monte Carlo model to address such an issue on an active catalyst surface with graphene/substrate lattice mismatch, which facilitates us to perform large scale simulations of the growth kinetics over two dimensional surface with growth fronts of complex shapes. A geometry-determined large-scale growth mechanism is revealed, where the rate-dominating event is found to be $C_{1}$-attachment for concave growth front segments and $C_{5}$-attachment for others. This growth mechanism leads to an interesting time-resolved growth behavior which is well consistent with that observed in a recent scanning tunneling microscopy experiment.

  14. Probabilistic cartography of the large-scale structure

    CERN Document Server

    Leclercq, Florent; Lavaux, Guilhem; Wandelt, Benjamin

    2015-01-01

    The BORG algorithm is an inference engine that derives the initial conditions given a cosmological model and galaxy survey data, and produces physical reconstructions of the underlying large-scale structure by assimilating the data into the model. We present the application of BORG to real galaxy catalogs and describe the primordial and late-time large-scale structure in the considered volumes. We then show how these results can be used for building various probabilistic maps of the large-scale structure, with rigorous propagation of uncertainties. In particular, we study dynamic cosmic web elements and secondary effects in the cosmic microwave background.

  15. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments. The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  16. Large scale multiplex PCR improves pathogen detection by DNA microarrays

    Directory of Open Access Journals (Sweden)

    Krönke Martin

    2009-01-01

Full Text Available Abstract Background Medium density DNA microchips that carry a collection of probes for a broad spectrum of pathogens have the potential to be powerful tools for simultaneous species identification, detection of virulence factors and antimicrobial resistance determinants. However, their widespread use in microbiological diagnostics is limited by the problem of low pathogen numbers in clinical specimens, which yield relatively low amounts of pathogen DNA. Results To increase the detection power of a fluorescence-based prototype-microarray designed to identify pathogenic microorganisms involved in sepsis, we propose a large scale multiplex PCR (LSplex PCR) for amplification of several dozens of gene-segments of 9 pathogenic species. This protocol employs a large set of primer pairs, potentially able to amplify 800 different gene segments that correspond to the capture probes spotted on the microarray. The LSplex protocol is shown to selectively amplify only the gene segments corresponding to the specific pathogen present in the analyte. Application of LSplex increases the microarray detection of target templates by a factor of 100 to 1000. Conclusion Our data provide a proof of principle for the improvement of detection of pathogen DNA by microarray hybridization by using LSplex PCR.

  17. Silver nanoparticles: Large scale solvothermal synthesis and optical properties

    Energy Technology Data Exchange (ETDEWEB)

    Wani, Irshad A.; Khatoon, Sarvari [Nanochemistry Laboratory, Department of Chemistry, Jamia Millia Islamia, New Delhi 110025 (India); Ganguly, Aparna [Nanochemistry Laboratory, Department of Chemistry, Jamia Millia Islamia, New Delhi 110025 (India); Department of Chemistry, Indian Institute of Technology, Hauz Khas, New Delhi 110016 (India); Ahmed, Jahangeer; Ganguli, Ashok K. [Department of Chemistry, Indian Institute of Technology, Hauz Khas, New Delhi 110016 (India); Ahmad, Tokeer, E-mail: tokeer.ch@jmi.ac.in [Nanochemistry Laboratory, Department of Chemistry, Jamia Millia Islamia, New Delhi 110025 (India)

    2010-08-15

Silver nanoparticles have been successfully synthesized by a simple and modified solvothermal method at large scale using ethanol as the refluxing solvent and NaBH{sub 4} as reducing agent. The nanopowder was investigated by means of X-ray diffraction (XRD), transmission electron microscopy (TEM), dynamic light scattering (DLS), UV-visible and BET surface area studies. XRD studies reveal the monophasic nature of these highly crystalline silver nanoparticles. Transmission electron microscopic studies show monodisperse and highly uniform silver nanoparticles with a particle size of 5 nm; however, the size is found to be 7 nm using dynamic light scattering, which is in good agreement with the TEM and X-ray line broadening studies. The surface area was found to be 34.5 m{sup 2}/g. UV-visible studies show the absorption band at {approx}425 nm due to surface plasmon resonance. The percentage yield of silver nanoparticles was found to be as high as 98.5%.

  18. Vorticity of IGM Velocity Field on Large Scales

    CERN Document Server

    Zhu, Weishan; Fang, Li-Zhi

    2010-01-01

We investigate the vorticity of the IGM velocity field on large scales with cosmological hydrodynamic simulation of the concordance model of LCDM. We show that the vorticity field is significantly increasing with time as it can effectively be generated by shocks and complex structures in the IGM. Therefore, the vorticity field is an effective tool to reveal the nonlinear behavior of the IGM, especially the formation and evolution of turbulence in the IGM. We find that the vorticity field does not follow the filaments and sheets structures of underlying dark matter density field and shows highly non-Gaussian and intermittent features. The power spectrum of the vorticity field is used to measure the development of turbulence in Fourier space. We show that the relation between the power spectra of vorticity and velocity fields is perfectly in agreement with the prediction of a fully developed homogeneous and isotropic turbulence from 0.2 to 3 h^{-1} Mpc at z~0. This indicates that cosmic baryonic field is in th...
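Vorticity is the curl of the velocity field; on gridded simulation output its z-component can be estimated by central differences. A minimal 2-D sketch (illustrative only; the paper's analysis is fully three-dimensional and spectral):

```python
def vorticity_z(u, v, dx):
    """z-component of curl on a uniform 2-D grid: w = dv/dx - du/dy,
    via central differences at interior points. u[i][j] and v[i][j] are the
    x- and y-velocity sampled at x = j*dx, y = i*dx; boundaries are left 0."""
    ny, nx = len(u), len(u[0])
    w = [[0.0] * nx for _ in range(ny)]
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            dvdx = (v[i][j + 1] - v[i][j - 1]) / (2.0 * dx)
            dudy = (u[i + 1][j] - u[i - 1][j]) / (2.0 * dx)
            w[i][j] = dvdx - dudy
    return w

# Sanity check: solid-body rotation (u, v) = (-y, x) has uniform vorticity 2.
n, dx = 8, 0.5
u = [[-(i * dx) for j in range(n)] for i in range(n)]
v = [[j * dx for j in range(n)] for i in range(n)]
w = vorticity_z(u, v, dx)
```

The vorticity power spectrum the abstract refers to is then the Fourier-space second moment of exactly this field.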

  19. Comparing statistical methods for constructing large scale gene networks.

    Directory of Open Access Journals (Sweden)

    Jeffrey D Allen

Full Text Available The gene regulatory network (GRN) reveals the regulatory relationships among genes and can provide a systematic understanding of molecular mechanisms underlying biological processes. The importance of computer simulations in understanding cellular processes is now widely accepted; a variety of algorithms have been developed to study these biological networks. The goal of this study is to provide a comprehensive evaluation and a practical guide to aid in choosing statistical methods for constructing large scale GRNs. Using both simulation studies and a real application in E. coli data, we compare different methods in terms of sensitivity and specificity in identifying the true connections and the hub genes, the ease of use, and computational speed. Our results show that these algorithms performed reasonably well, and each method has its own advantages: (1) GeneNet, WGCNA (Weighted Correlation Network Analysis), and ARACNE (Algorithm for the Reconstruction of Accurate Cellular Networks) performed well in constructing the global network structure; (2) GeneNet and SPACE (Sparse PArtial Correlation Estimation) performed well in identifying a few connections with high specificity.
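The sensitivity and specificity used to compare these algorithms reduce to set operations on predicted versus true edges. A sketch with a hypothetical four-gene network (gene names and edges are ours, not from the paper):

```python
from itertools import combinations

def edge_metrics(true_edges, pred_edges, all_possible):
    """Sensitivity = TP / (TP + FN) over true edges;
    specificity = TN / (TN + FP) over true non-edges."""
    tp = len(true_edges & pred_edges)
    fn = len(true_edges - pred_edges)
    fp = len(pred_edges - true_edges)
    tn = len(all_possible - true_edges - pred_edges)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 4-gene network: 6 possible undirected edges.
genes = ["A", "B", "C", "D"]
all_possible = {frozenset(p) for p in combinations(genes, 2)}
true_edges = {frozenset(("A", "B")), frozenset(("B", "C"))}
pred_edges = {frozenset(("A", "B")), frozenset(("A", "C"))}
sens, spec = edge_metrics(true_edges, pred_edges, all_possible)
```

For genome-scale GRNs the non-edges vastly outnumber the edges, which is why a method can score high specificity while recovering few true connections.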

  20. Alignment between galaxies and large-scale structure

    International Nuclear Information System (INIS)

    Based on the Sloan Digital Sky Survey DR6 (SDSS) and the Millennium Simulation (MS), we investigate the alignment between galaxies and large-scale structure. For this purpose, we develop two new statistical tools, namely the alignment correlation function and the cos(2θ)-statistic. The former is a two-dimensional extension of the traditional two-point correlation function and the latter is related to the ellipticity correlation function used for cosmic shear measurements. Both are based on the cross correlation between a sample of galaxies with orientations and a reference sample which represents the large-scale structure. We apply the new statistics to the SDSS galaxy catalog. The alignment correlation function reveals an overabundance of reference galaxies along the major axes of red, luminous (L ∼ L*) galaxies out to projected separations of 60 h^{-1} Mpc. The signal increases with central galaxy luminosity. No alignment signal is detected for blue galaxies. The cos(2θ)-statistic yields very similar results. Starting from a MS semi-analytic galaxy catalog, we assign an orientation to each red, luminous and central galaxy, based on that of the central region of the host halo (with size similar to that of the stellar galaxy). As an alternative, we use the orientation of the host halo itself. We find a mean projected misalignment between a halo and its central region of ∼ 25 deg. The misalignment decreases slightly with increasing luminosity of the central galaxy. Using the orientations and luminosities of the semi-analytic galaxies, we repeat our alignment analysis on mock surveys of the MS. Agreement with the SDSS results is good if the central orientations are used. Predictions using the halo orientations as proxies for central galaxy orientations overestimate the observed alignment by more than a factor of 2. Finally, the large volume of the MS allows us to generate a two-dimensional map of the alignment correlation function, which shows the reference galaxy
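The cos(2θ)-statistic can be sketched in a few lines: θ is the angle between a galaxy's projected major axis and the direction to a reference galaxy, and an excess of references along major axes yields a positive mean cos(2θ). A toy 2-D implementation (hypothetical function and synthetic data, not the authors' code):

```python
import numpy as np

def cos2theta_statistic(pos, phi, ref_pos, r_max=60.0):
    """Mean cos(2*theta) over galaxy/reference pairs within r_max,
    where theta is the angle between a galaxy's major axis (position
    angle phi, radians) and the direction to a reference galaxy.
    Positive values mean references pile up along major axes."""
    total, npairs = 0.0, 0
    for (x, y), p in zip(pos, phi):
        dx = ref_pos[:, 0] - x
        dy = ref_pos[:, 1] - y
        r = np.hypot(dx, dy)
        sel = (r > 0) & (r < r_max)
        theta = np.arctan2(dy[sel], dx[sel]) - p
        total += np.cos(2 * theta).sum()
        npairs += sel.sum()
    return total / npairs

# Toy data: references placed preferentially along the major axis of a
# single galaxy oriented along +x -> a clearly positive signal.
rng = np.random.default_rng(1)
gal = np.zeros((1, 2))
phi = np.zeros(1)                      # major axis along +x
ang = rng.normal(0.0, 0.3, size=500)   # references clustered near the axis
ref = 30.0 * np.column_stack([np.cos(ang), np.sin(ang)])
assert cos2theta_statistic(gal, phi, ref) > 0.5
```

An isotropic reference distribution gives a mean cos(2θ) of zero, which is what the statistic reports for the blue galaxies in the abstract.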

  1. Alignment between galaxies and large-scale structure

    Institute of Scientific and Technical Information of China (English)

    A. Faltenbacher; Cheng Li; Simon D. M. White; Yi-Peng Jing; Shu-De Mao; Jie Wang

    2009-01-01

    Based on the Sloan Digital Sky Survey DR6 (SDSS) and the Millennium Simulation (MS), we investigate the alignment between galaxies and large-scale structure. For this purpose, we develop two new statistical tools, namely the alignment correlation function and the cos(2θ)-statistic. The former is a two-dimensional extension of the traditional two-point correlation function and the latter is related to the ellipticity correlation function used for cosmic shear measurements. Both are based on the cross correlation between a sample of galaxies with orientations and a reference sample which represents the large-scale structure. We apply the new statistics to the SDSS galaxy catalog. The alignment correlation function reveals an overabundance of reference galaxies along the major axes of red, luminous (L ∼ L*) galaxies out to projected separations of 60 h^{-1} Mpc. The signal increases with central galaxy luminosity. No alignment signal is detected for blue galaxies. The cos(2θ)-statistic yields very similar results. Starting from a MS semi-analytic galaxy catalog, we assign an orientation to each red, luminous and central galaxy, based on that of the central region of the host halo (with size similar to that of the stellar galaxy). As an alternative, we use the orientation of the host halo itself. We find a mean projected misalignment between a halo and its central region of ~ 25°. The misalignment decreases slightly with increasing luminosity of the central galaxy. Using the orientations and luminosities of the semi-analytic galaxies, we repeat our alignment analysis on mock surveys of the MS. Agreement with the SDSS results is good if the central orientations are used. Predictions using the halo orientations as proxies for central galaxy orientations overestimate the observed alignment by more than a factor of 2. Finally, the large volume of the MS allows us to generate a two-dimensional map of the alignment correlation function, which shows the reference galaxy

  2. Polar night vortex breakdown and large-scale stirring in the southern stratosphere

    Energy Technology Data Exchange (ETDEWEB)

    Camara, Alvaro de la [Universidad Complutense de Madrid, Departamento de Geofisica y Meteorologia, Madrid (Spain); University of California, Department of Atmospheric and Oceanic Sciences, Los Angeles, CA (United States); Mechoso, C.R. [University of California, Department of Atmospheric and Oceanic Sciences, Los Angeles, CA (United States); Ide, K. [University of California, Department of Atmospheric and Oceanic Sciences, Los Angeles, CA (United States); University of Maryland, Department of Atmospheric and Oceanic Science, College Park, MD (United States); Walterscheid, R. [The Aerospace Corporation, Space Sciences Department, Los Angeles, CA (United States); Schubert, G. [University of California, Department of Earth and Space Sciences, Institute of Geophysics and Planetary Physics, Los Angeles, CA (United States)

    2010-11-15

    The present paper examines the vortex breakdown and large-scale stirring during the final warming of the Southern Hemisphere stratosphere during the spring of 2005. A unique set of in situ observations collected by 27 superpressure balloons (SPBs) is used. The balloons, which were launched from McMurdo, Antarctica, by the Strateole/VORCORE project, drifted for several weeks on two different isopycnic levels in the lower stratosphere. We describe balloon trajectories and compare them with simulations obtained on the basis of the velocity field from the GEOS-5 and NCEP/NCAR reanalyses performed with and without VORCORE data. To gain insight into the mechanisms responsible for the horizontal transport of air inside and outside the well-isolated vortex, we examine the balloon trajectories in the framework of the Lagrangian properties of the stratospheric flow. Coherent structures of the flow are visualized by computing finite-time Lyapunov exponents (FTLE). A combination of isentropic analysis and FTLE distributions reveals that air is stripped away from the vortex's interior as stable manifolds eventually cross the vortex's edge. It is shown that two SPBs escaped from the vortex within high potential vorticity tongues that developed in association with wave breaking at locations along the vortex's edge where forward and backward FTLE maxima approximately intersect. The trajectories of three SPBs flying as a group at the same isopycnic level are examined and their behavior is interpreted in reference to the FTLE field. These results support the concept of stable and unstable manifolds governing transport of air masses across the periphery of the stratospheric polar vortex. (orig.)
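The FTLE diagnostic used above measures the largest exponential stretching rate of the flow map over a finite time T: with Cauchy-Green tensor C = (∇F)ᵀ∇F, the FTLE is ln(λ_max(C)) / (2T). A minimal sketch on an analytic saddle flow with known answer (a hypothetical `ftle` helper, not the paper's reanalysis-wind computation):

```python
import numpy as np

def ftle(flow_map, grid_x, grid_y, T):
    """Finite-time Lyapunov exponent on a grid: advect each grid point
    with the flow map over time T, then take the largest stretching
    rate of the Cauchy-Green tensor C = (dF)^T dF."""
    X, Y = np.meshgrid(grid_x, grid_y, indexing="ij")
    FX, FY = flow_map(X, Y, T)
    # Flow-map gradient by finite differences on the grid.
    dFXdx, dFXdy = np.gradient(FX, grid_x, grid_y)
    dFYdx, dFYdy = np.gradient(FY, grid_x, grid_y)
    out = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            J = np.array([[dFXdx[i, j], dFXdy[i, j]],
                          [dFYdx[i, j], dFYdy[i, j]]])
            C = J.T @ J
            lam = np.linalg.eigvalsh(C)[-1]  # largest eigenvalue
            out[i, j] = np.log(lam) / (2 * T)
    return out

# Toy saddle flow v = (x, -y): trajectories separate along x at rate 1,
# so the FTLE field should be ~1 everywhere.
saddle = lambda x, y, T: (x * np.exp(T), y * np.exp(-T))
g = np.linspace(-1, 1, 21)
field = ftle(saddle, g, g, T=2.0)
assert np.allclose(field, 1.0, atol=1e-6)
```

Ridges of the forward-time FTLE field approximate stable manifolds (repelling transport barriers), and backward-time ridges approximate unstable manifolds, which is how the intersections discussed above are located.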

  3. Some perspective on the Large Scale Scientific Computation Research

    Institute of Scientific and Technical Information of China (English)

    DU Qiang

    2004-01-01

    @@ The "Large Scale Scientific Computation (LSSC) Research"project is one of the State Major Basic Research projects funded by the Chinese Ministry of Science and Technology in the field ofinformation science and technology.

  4. Some perspective on the Large Scale Scientific Computation Research

    Institute of Scientific and Technical Information of China (English)

    DU; Qiang

    2004-01-01

    The "Large Scale Scientific Computation (LSSC) Research"project is one of the State Major Basic Research projects funded by the Chinese Ministry of Science and Technology in the field ofinformation science and technology.……

  5. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Document Server

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  6. PetroChina to Expand Dushanzi Refinery on Large Scale

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A large-scale expansion project for PetroChina Dushanzi Petrochemical Company has been given the green light, a move which will make it one of the largest refineries and petrochemical complexes in the country.

  7. New Visions for Large Scale Networks: Research and Applications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This paper documents the findings of the March 12-14, 2001 Workshop on New Visions for Large-Scale Networks: Research and Applications. The workshop's objectives...

  8. Balancing modern Power System with large scale of wind power

    OpenAIRE

    Basit, Abdul; Altin, Müfit; Anca Daniela HANSEN; Sørensen, Poul Ejnar

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact on the control of power system balance and thus deviations in the power system frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the s...

  9. On Large Scale Inductive Dimension of Asymptotic Resemblance Spaces

    OpenAIRE

    Kalantari, Sh.; Honari, B.

    2014-01-01

    We introduce the notion of large scale inductive dimension for asymptotic resemblance spaces. We prove that the large scale inductive dimension and the asymptotic dimensiongrad are equal in the class of r-convex metric spaces. This class contains the class of all geodesic metric spaces and all finitely generated groups. This leads to an answer for a question asked by E. Shchepin concerning the relation between the asymptotic inductive dimension and the asymptotic dimensiongrad, for r-convex m...

  10. Benefits of transactive memory systems in large-scale development

    OpenAIRE

    Aivars, Sablis

    2016-01-01

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise...

  11. Land consolidation for large-scale infrastructure projects in Germany

    OpenAIRE

    Hendricks, Andreas; Lisec, Anka

    2014-01-01

    Large-scale infrastructure projects require the acquisition of appropriate land for their construction and maintenance, while they often cause extensive fragmentation of the affected landscape and land plots as well as significant land loss for the immediately affected landowners. A good practice in this field comes from Germany. In Germany, the so-called “land consolidation for large-scale projects” is used to distribute the land loss among a larger group of land own...

  12. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost, in particular, limits its use. As computer models have grown in size, such as in the number of degrees of freedom, the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  13. Pool fires in a large scale ventilation system

    International Nuclear Information System (INIS)

    A series of pool fire experiments was carried out in the Large Scale Flow Facility of the Mechanical Engineering Department at New Mexico State University. The various experiments burned alcohol, hydraulic cutting oil, kerosene, and a mixture of kerosene and tributylphosphate. Gas temperature and wall temperature measurements as a function of time were made throughout the 23.3 m³ burn compartment and the ducts of the ventilation system. The mass of the smoke particulate deposited upon the ventilation system's 0.61 m × 0.61 m high efficiency particulate air filter for the hydraulic oil, kerosene, and kerosene-tributylphosphate mixture fires was measured using an in situ null balance. Significant increases in filter resistance were observed for all three fuels for burning time periods ranging from 10 to 30 minutes. This was found to be highly dependent upon initial ventilation system flow rate, fuel type, and flow configuration. The experimental results were compared to simulated results predicted by the Los Alamos National Laboratory FIRAC computer code. In general, the experimental and the computer results were in reasonable agreement, despite the fact that the fire compartment for the experiments was an insulated steel tank with 0.32 cm walls, while the compartment model FIRIN of FIRAC assumes 0.31 m thick concrete walls. This difference in configuration apparently caused FIRAC to consistently underpredict the measured temperatures in the fire compartment. The predicted deposition of soot proved to be insensitive to ventilation system flow rate, but the measured values showed flow rate dependence. However, predicted soot deposition was of the same order of magnitude as measured soot deposition

  14. Development of a Large-Scale Environmental Chamber for Investigating Soil Water Evaporation

    OpenAIRE

    Song, Weikang; Cui, Yu Jun; Tang, Anh Minh; DING, Wenqi

    2013-01-01

    A large-scale environmental chamber was developed to study soil water evaporation mechanisms. A large soil specimen (300 mm high, 800 mm wide, and 1000 mm long) was used, allowing sensors to be installed with minimal effect on the soil’s hydraulic properties. Sensors for measuring soil suction, temperature, and volumetric water content were either buried inside the soil specimen or installed on the chamber’s wall at various locations. Other sensors for monitoring air temperature, ...

  15. Large-scale-vortex dynamos in planar rotating convection

    CERN Document Server

    Guervilly, Céline; Jones, Chris A

    2016-01-01

    Several recent studies have demonstrated how large-scale vortices may arise spontaneously in rotating planar convection. Here we examine the dynamo properties of such flows in rotating Boussinesq convection. For moderate values of the magnetic Reynolds number ($100 \\lesssim Rm \\lesssim 550$, with $Rm$ based on the box depth and the convective velocity), a large-scale (i.e. system-size) magnetic field is generated. The amplitude of the magnetic energy oscillates in time, out of phase with the oscillating amplitude of the large-scale vortex. The dynamo mechanism relies on those components of the flow that have length scales lying between that of the large-scale vortex and the typical convective cell size; smaller-scale flows are not required. The large-scale vortex plays a crucial role in the magnetic induction despite being essentially two-dimensional. For larger magnetic Reynolds numbers, the dynamo is small scale, with a magnetic energy spectrum that peaks at the scale of the convective cells. In this case, ...

  16. Large-scale dynamos in rigidly rotating turbulent convection

    CERN Document Server

    Käpylä, P J; Brandenburg, A

    2008-01-01

    The existence of large-scale dynamos in rigidly rotating turbulent convection without shear is studied using three-dimensional numerical simulations of penetrative rotating compressible convection. We demonstrate that rotating convection in a Cartesian domain can drive a large-scale dynamo even in the absence of shear. The large-scale field contains a significant fraction of the total field in the saturated state. The simulation results are compared with one-dimensional mean-field dynamo models where turbulent transport coefficients, as determined using the test field method, are used. The absence of large-scale dynamo action in earlier studies is shown to be due to too slow rotation: whereas the alpha-effect can change sign, its magnitude stays approximately constant as a function of rotation, and the turbulent diffusivity decreases monotonically with increasing rotation. Only when rotation is rapid enough can a large-scale dynamo be excited. The one-dimensional mean-field model with dynamo co...

  17. Developments in large-scale coastal flood hazard mapping

    Science.gov (United States)

    Vousdoukas, Michalis I.; Voukouvalas, Evangelos; Mentaschi, Lorenzo; Dottori, Francesco; Giardino, Alessio; Bouziotas, Dimitrios; Bianchi, Alessandra; Salamon, Peter; Feyen, Luc

    2016-08-01

    Coastal flooding related to marine extreme events has severe socioeconomic impacts, and even though the latter are projected to increase under the changing climate, there is a clear deficit of information and predictive capacity related to coastal flood mapping. The present contribution reports on efforts towards a new methodology for mapping coastal flood hazard at European scale, combining (i) the contribution of waves to the total water level; (ii) improved inundation modeling; and (iii) an open, physics-based framework which can be constantly upgraded, whenever new and more accurate data become available. Four inundation approaches of gradually increasing complexity and computational costs were evaluated in terms of their applicability to large-scale coastal flood mapping: static inundation (SM); a semi-dynamic method, considering the water volume discharge over the dykes (VD); the flood intensity index approach (Iw); and the model LISFLOOD-FP (LFP). A validation test performed against observed flood extents during the Xynthia storm event showed that SM and VD can lead to an overestimation of flood extents by 232 and 209 %, while Iw and LFP showed satisfactory predictive skill. Application at pan-European scale for the present-day 100-year event confirmed that static approaches can overestimate flood extents by 56 % compared to LFP; however, Iw can deliver results of reasonable accuracy in cases when reduced computational costs are a priority. Moreover, omitting the wave contribution in the extreme total water level (TWL) can result in a ~60 % underestimation of the flooded area. The present findings have implications for impact assessment studies, since combination of the estimated inundation maps with population exposure maps revealed differences in the estimated number of people affected within the 20-70 % range.
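The static ("bathtub") inundation idea at the simple end of the comparison can be sketched in a few lines. This toy version adds a hydraulic-connectivity check (an assumption of this sketch, not necessarily part of the paper's SM method): a cell floods only if it is below the total water level and reachable from the sea, which illustrates one reason purely static flooding of all low-lying cells overestimates extents.

```python
from collections import deque

def bathtub_flood(dem, twl, sea_cells):
    """Static inundation with hydraulic connectivity: a cell floods if
    it lies below the total water level (twl) *and* is connected to the
    sea through other flooded cells.  dem is a 2-D list of elevations."""
    rows, cols = len(dem), len(dem[0])
    flooded = [[False] * cols for _ in range(rows)]
    q = deque(sea_cells)
    for r, c in sea_cells:
        flooded[r][c] = True
    while q:  # breadth-first flood fill from the sea
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and not flooded[nr][nc] and dem[nr][nc] < twl:
                flooded[nr][nc] = True
                q.append((nr, nc))
    return flooded

# A low basin (0.5 m) behind a 3 m dyke: below the 2 m water level,
# but sheltered, so it stays dry when connectivity is enforced.
dem = [[0.0, 1.0, 3.0, 0.5]]
wet = bathtub_flood(dem, twl=2.0, sea_cells=[(0, 0)])
assert wet[0][1] and not wet[0][3]
```

The more complex methods in the study (VD, Iw, LFP) go further by accounting for water volume, flow dynamics, or full hydrodynamics rather than elevation alone.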

  18. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  19. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  20. No Large Scale Curvature Perturbations during Waterfall of Hybrid Inflation

    CERN Document Server

    Abolhasani, Ali Akbar

    2010-01-01

    In this paper, the possibility of generating large scale curvature perturbations induced by the entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. We show that whether or not appreciable amounts of large scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical back-reactions to terminate inflation. If one considers only the classical evolution of the system, we show analytically as well as numerically that the highly blue-tilted entropy perturbations induce highly blue-tilted large scale curvature perturbations during the waterfall phase transition which completely dominate over the original adiabatic curvature perturbations. However, we show that the quantum back-reactions of the waterfall field inhomogeneities produced during the phase transition become important before the classical non-linear back-reactions become relevant. The cumulative qua...

  1. Large Scale Anomalies of the Cosmic Microwave Background with Planck

    DEFF Research Database (Denmark)

    Frejsel, Anne Mette

    This thesis focuses on the large scale anomalies of the Cosmic Microwave Background (CMB) and their possible origins. The investigations consist of two main parts. The first part is on statistical tests of the CMB, and the consistency of both maps and power spectrum. We find that the Planck data...... is very consistent, while the WMAP 9 year release appears more contaminated by non-CMB residuals than the 7 year release. The second part is concerned with the anomalies of the CMB from two approaches. One is based on an extended inflationary model as the origin of one specific large scale anomaly, namely....... Here we find evidence that the Planck CMB maps contain residual radiation in the loop areas, which can be linked to some of the large scale CMB anomalies: the point-parity asymmetry, the alignment of quadrupole and octupole and the dipolemodulation....

  2. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela;

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact on the control of power system balance and thus deviations in the power system...... frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the secure and stable grid operation. To ensure the stable power system operation, the evolving power system has...... to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with the large scale wind power integration, where power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...

  3. A relativistic signature in large-scale structure

    Science.gov (United States)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  4. Large Scale Magnetohydrodynamic Dynamos from Cylindrical Differentially Rotating Flows

    CERN Document Server

    Ebrahimi, F

    2015-01-01

    For cylindrical differentially rotating plasmas threaded with a uniform vertical magnetic field, we study large-scale magnetic field generation from finite amplitude perturbations using analytic theory and direct numerical simulations. Analytically, we impose helical fluctuations, a seed field, and a background flow and use quasi-linear theory for a single mode. The predicted large-scale field growth agrees with numerical simulations in which the magnetorotational instability (MRI) arises naturally. The vertically and azimuthally averaged toroidal field is generated by a fluctuation-induced EMF that depends on differential rotation. Given fluctuations, the method also predicts large-scale field growth for MRI-stable rotation profiles and flows with no rotation but shear.

  5. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, Swen [ORNL; Elwasif, Wael R [ORNL; Naughton, III, Thomas J [ORNL; Vallee, Geoffroy R [ORNL

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  6. Human pescadillo induces large-scale chromatin unfolding

    Institute of Scientific and Technical Information of China (English)

    ZHANG Hao; FANG Yan; HUANG Cuifen; YANG Xiao; YE Qinong

    2005-01-01

    The human pescadillo gene encodes a protein with a BRCT domain. Pescadillo plays an important role in DNA synthesis, cell proliferation and transformation. Since BRCT domains have been shown to induce large-scale chromatin unfolding, we tested the role of Pescadillo in regulation of large-scale chromatin unfolding. To this end, we isolated the coding region of Pescadillo from human mammary MCF10A cells. Compared with the reported sequence, the isolated Pescadillo contains an in-frame deletion from amino acid 580 to 582. Targeting the Pescadillo to an amplified, lac operator-containing chromosome region in the mammalian genome results in large-scale chromatin decondensation. This unfolding activity maps to the BRCT domain of Pescadillo. These data provide a new clue to understanding the vital role of Pescadillo.

  7. Coupling between convection and large-scale circulation

    Science.gov (United States)

    Becker, T.; Stevens, B. B.; Hohenegger, C.

    2014-12-01

    The ultimate drivers of convection - radiation, tropospheric humidity and surface fluxes - are altered both by the large-scale circulation and by convection itself. A quantity to which all drivers of convection contribute is moist static energy, or gross moist stability, respectively. Therefore, a variance analysis of the moist static energy budget in radiative-convective equilibrium helps in understanding the interaction of precipitating convection and the large-scale environment. In addition, this method provides insights concerning the impact of convective aggregation on this coupling. As a starting point, the interaction is analyzed with a general circulation model, but a model intercomparison study using a hierarchy of models is planned. Effective coupling parameters will be derived from cloud resolving models and these will in turn be related to assumptions used to parameterize convection in large-scale models.

  8. Comparative Analysis of Different Protocols to Manage Large Scale Networks

    Directory of Open Access Journals (Sweden)

    Anil Rao Pimplapure

    2013-06-01

    Full Text Available In recent years, the number, complexity and size of large-scale networks have increased; the best-known example is the Internet, and more recently data centers in cloud environments. Management tasks such as traffic monitoring, security and performance optimization therefore place a heavy burden on network administrators. This study examines different protocols, i.e. conventional protocols such as the Simple Network Management Protocol and newer gossip-based protocols, for distributed monitoring and resource management in large-scale networked systems. Results of our simulation studies indicate that, regardless of the system size and failure rates in the monitored system, gossip protocols incur a significantly larger overhead than tree-based protocols for achieving the same monitoring quality, i.e. estimation accuracy or detection delay.
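As a rough illustration of the gossip-based monitoring idea compared above (a hypothetical push-pull averaging protocol, not the specific protocols studied in the paper), each node repeatedly averages its local metric with a randomly chosen peer, so every node converges to the network-wide mean without any tree or central coordinator:

```python
import random

def gossip_average(values, rounds=50, seed=0):
    """Push-pull gossip averaging: in each round every node pairs with a
    random peer and both adopt the pair's mean. The global sum is conserved,
    so all local estimates converge to the network-wide average."""
    rng = random.Random(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)
            pair_mean = (vals[i] + vals[j]) / 2.0
            vals[i] = vals[j] = pair_mean
    return vals

# hypothetical per-node load metrics; the true mean is 25.0
estimates = gossip_average([10.0, 20.0, 30.0, 40.0])
spread = max(abs(v - 25.0) for v in estimates)
```

Each exchange costs one message pair per node per round; this per-round cost, repeated regardless of whether anything changed, is the overhead the paper weighs against tree-based aggregation.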

  9. Advances in methods and instruments for determining concentration of gaseous air pollutants in large-scaled livestock farms%规模化畜禽养殖污染气体现场检测方法与仪器研究进展

    Institute of Scientific and Technical Information of China (English)

    介邓飞; 泮进明; 应义斌

    2015-01-01

    With the development of Chinese large-scale and intensive livestock production, the animal stocking density increases, resulting in a lot of pollutant gas emissions. They are not only important sources of greenhouse gas (GHG) emissions such as carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O), but also sources of nitrogen- or sulfur-bearing organic odors such as hydrogen sulfide (H2S), ammonia (NH3), mercaptan, phenol, paracresol, indole and skatole, which are produced by manure fermentation and decomposition. These gases contain large amounts of toxic and hazardous ingredients. If the concentration of a pollutant gas is low, the gas can be diluted by diffusion after being emitted into the air. If the gas is not cleaned up, or is improperly handled, the odors keep increasing and accumulating. These gases affect the growth of livestock and poultry and cause environmental pollution, seriously affecting air quality in urban and rural areas when they diffuse into the atmosphere in large amounts. The environment could deteriorate, and the health of feeders and nearby residents could be affected. This paper reviewed the characteristics of several typical pollutant gases, mainly NH3, H2S, volatile organic compounds (VOCs) and other toxic and harmful gases. We also reviewed the research status, at home and abroad, of detection methods and analytical instruments for determining pollutant gases discharged from large-scale livestock and poultry breeding. Along with the development of computer technology, spectral analysis technology, sensor technology and wireless communication technology, more researchers at home and abroad are applying novel techniques to pollutant gas emissions from livestock farms. This review synthesized the development of methods and analyzers for detecting the composition and concentration of gases emitted from livestock farms. According to the principle of the

  11. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    Science.gov (United States)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.
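For orientation, the magnetic helicity discussed in this primer has the standard definition

```latex
H_M = \int_V \mathbf{A} \cdot \mathbf{B} \,\mathrm{d}V ,
\qquad \mathbf{B} = \nabla \times \mathbf{A} ,
```

where $\mathbf{A}$ is the vector potential. $H_M$ is gauge invariant when no field lines cross the boundary of $V$, and in a highly conducting plasma it decays much more slowly than the magnetic energy, which is property (1) cited in the abstract.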

  12. Large-Scale Inverse Problems and Quantification of Uncertainty

    CERN Document Server

    Biegler, Lorenz; Ghattas, Omar

    2010-01-01

    Large-scale inverse problems and the associated quantification of uncertainty have become an important area of research, central to a wide range of science and engineering applications. Written by leading experts in the field, Large-scale Inverse Problems and Quantification of Uncertainty focuses on the computational methods used to analyze and simulate inverse problems. The text provides PhD students, researchers, advanced undergraduate students, and engineering practitioners with the perspectives of researchers in areas of inverse problems and data assimilation, ranging from statistics and large-sca

  13. Large-scale ER-damper for seismic protection

    Science.gov (United States)

    McMahon, Scott; Makris, Nicos

    1997-05-01

    A large scale electrorheological (ER) damper has been designed, constructed, and tested. The damper consists of a main cylinder and a piston rod that pushes an ER-fluid through a number of stationary annular ducts. This damper is a scaled- up version of a prototype ER-damper which has been developed and extensively studied in the past. In this paper, results from comprehensive testing of the large-scale damper are presented, and the proposed theory developed for predicting the damper response is validated.

  14. The CLASSgal code for Relativistic Cosmological Large Scale Structure

    CERN Document Server

    Di Dio, Enea; Lesgourgues, Julien; Durrer, Ruth

    2013-01-01

    We present some accurate and efficient computations of large scale structure observables, obtained with a modified version of the CLASS code which is made publicly available. This code includes all relativistic corrections and computes both the power spectrum Cl(z1,z2) and the corresponding correlation function xi(theta,z1,z2) in linear perturbation theory. For Gaussian initial perturbations, these quantities contain the full information encoded in the large scale matter distribution at the level of linear perturbation theory. We illustrate the usefulness of our code for cosmological parameter estimation through a few simple examples.

  15. Optimal Dispatching of Large-scale Water Supply System

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper deals with the use of optimal control techniques in large-scale water distribution networks. According to the network characteristics and the actual state of water supply systems in China, an implicit model, which may be solved using a hierarchical optimization method, is established. In particular, based on analyses of water supply systems containing variable-speed pumps, a software tool has been developed. The application of this model to the city of Shenyang (China) is compared to an experience-based strategy. The results of this study show that the developed model is a very promising optimization method for controlling large-scale water supply systems.

  16. Reliability Evaluation considering Structures of a Large Scale Wind Farm

    OpenAIRE

    Shin, Je-Seok; Cha, Seung-Tae; Wu, Qiuwei; Kim, Jin-O

    2012-01-01

    Wind energy is one of the most widely used renewable energy resources. Wind power has been connected to the grid in large-scale wind farms made up of dozens of wind turbines, and the scale of wind farms has increased further in recent years. Due to the intermittent and variable wind source, reliability evaluation of wind farms is necessarily required. Also, because a large-scale offshore wind farm has a long repair time and a high repair cost as well as a high investment cost, it is essential to take into...

  17. Large-scale liquid scintillation detectors for solar neutrinos

    Energy Technology Data Exchange (ETDEWEB)

    Benziger, Jay B.; Calaprice, Frank P. [Princeton University Princeton, Princeton, NJ (United States)

    2016-04-15

    Large-scale liquid scintillation detectors are capable of providing spectral yields of the low energy solar neutrinos. These detectors require > 100 tons of liquid scintillator with high optical and radiopurity. In this paper requirements for low-energy neutrino detection by liquid scintillation are specified and the procedures to achieve low backgrounds in large-scale liquid scintillation detectors for solar neutrinos are reviewed. The designs, operations and achievements of Borexino, KamLAND and SNO+ in measuring the low-energy solar neutrino fluxes are reviewed. (orig.)

  18. Report of the LASCAR forum: Large scale reprocessing plant safeguards

    International Nuclear Information System (INIS)

    This report has been prepared to provide information on the studies which were carried out from 1988 to 1992 under the auspices of the multinational forum known as Large Scale Reprocessing Plant Safeguards (LASCAR) on safeguards for four large scale reprocessing plants operated or planned to be operated in the 1990s. The report summarizes all of the essential results of these studies. The participants in LASCAR were from France, Germany, Japan, the United Kingdom, the United States of America, the Commission of the European Communities - Euratom, and the International Atomic Energy Agency

  19. Clearing and Labeling Techniques for Large-Scale Biological Tissues.

    Science.gov (United States)

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-06-30

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  20. Reactor vessel integrity analysis based upon large scale test results

    International Nuclear Information System (INIS)

    The fracture mechanics analysis of a nuclear reactor pressure vessel is discussed to illustrate the impact of knowledge gained by large scale testing on the demonstration of the integrity of such a vessel. The analysis must be able to predict crack initiation, arrest and reinitiation. The basis for the capability to make each prediction, including the large scale test information which is judged appropriate, is identified and the confidence in the applicability of the experimental data to a vessel is discussed. Where there is inadequate data to make a prediction with confidence or where there are apparently conflicting data, recommendations for future testing are presented. 15 refs., 6 figs., 1 tab.

  1. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In a future power system with additional wind power capacity there will be an increased need for large-scale power management as well as reliable balancing and reserve capabilities. Different technologies for large-scale electricity storage provide solutions to the different challenges arising...... with high wind power penetration. This paper presents a review of the electricity storage technologies relevant for large power systems. The paper also presents an estimation of the economic feasibility of electricity storage using the west Danish power market area as a case....

  2. Practical Large Scale Syntheses of New Drug Candidates

    Institute of Scientific and Technical Information of China (English)

    Hui-Yin Li

    2001-01-01

    This presentation will focus on practical large-scale syntheses of lead compounds and drug candidates from three major therapeutic areas of the DuPont Pharmaceuticals Research Laboratory: 1) DMP777, a selective, non-toxic, orally active human elastase inhibitor; 2) DMP754, a potent glycoprotein IIb/IIIa antagonist; 3) R-warfarin, the pure enantiomeric form of warfarin. The key technology used for preparing these drug candidates is asymmetric hydrogenation under very mild reaction conditions, which produced very high quality final products at large scale (>99% de, >99 A%, and >99 wt%). Some practical and GMP aspects of process development will also be discussed.

  4. Cost Overruns in Large-scale Transportation Infrastructure Projects

    DEFF Research Database (Denmark)

    Cantarelli, Chantal C; Flyvbjerg, Bent; Molin, Eric J. E;

    2010-01-01

    Managing large-scale transportation infrastructure projects is difficult due to frequent misinformation about costs, which results in large cost overruns that often threaten overall project viability. This paper investigates the explanations for cost overruns that are given in the literature...

  5. The Large-Scale Structure of Scientific Method

    Science.gov (United States)

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  6. The large scale microwave background anisotropy in decaying particle cosmology

    International Nuclear Information System (INIS)

    We investigate the large-scale anisotropy of the microwave background radiation in cosmological models with decaying particles. The observed value of the quadrupole moment combined with other constraints gives an upper limit on the redshift of the decay z/sub d/ < 3-5. 12 refs., 2 figs

  7. How large-scale subsidence affects stratocumulus transitions (discussion paper)

    NARCIS (Netherlands)

    Van der Dussen, J.J.; De Roode, S.R.; Siebesma, A.P.

    2015-01-01

    Some climate modeling results suggest that the Hadley circulation might weaken in a future climate, causing a subsequent reduction in the large-scale subsidence velocity in the subtropics. In this study we analyze the cloud liquid water path (LWP) budget from large-eddy simulation (LES) results of t

  8. Main Achievements of Cotton Large-scale Transformation System

    Institute of Scientific and Technical Information of China (English)

    LI Fu-guang; LIU Chuan-liang; WU Zhi-xia; ZHANG Chao-jun; ZHANG Xue-yan

    2008-01-01

    A large-scale cotton transformation system was established based on innovations in cotton transformation methods. It obtains 8000 transgenic cotton plants per year by efficiently combining Agrobacterium tumefaciens-mediated, pollen-tube pathway and biolistic methods. More than 1000 transgenic lines are selected from the transgenic plants with molecular-assisted and conventional breeding methods.

  9. Temporal Variation of Large Scale Flows in the Solar Interior

    Indian Academy of Sciences (India)

    Sarbani Basu; H. M. Antia

    2000-09-01

    We attempt to detect short-term temporal variations in the rotation rate and other large scale velocity fields in the outer part of the solar convection zone using the ring diagram technique applied to Michelson Doppler Imager (MDI) data. The measured velocity field shows variations of about 10 m/s on the scale of a few days.

  10. Large-scale search for dark-matter axions

    Energy Technology Data Exchange (ETDEWEB)

    Kinion, D; van Bibber, K

    2000-08-30

    We review the status of two ongoing large-scale searches for axions which may constitute the dark matter of our Milky Way halo. The experiments are based on the microwave cavity technique proposed by Sikivie, and mark a "second generation" relative to the original experiments performed by the Rochester-Brookhaven-Fermilab collaboration and the University of Florida group.

  11. Breakdown of large-scale circulation in turbulent rotating convection

    NARCIS (Netherlands)

    Kunnen, R.P.J.; Clercx, H.J.H.; Geurts, B.J.

    2008-01-01

    Turbulent rotating convection in a cylinder is investigated both numerically and experimentally at Rayleigh number Ra=109 and Prandtl number σ=6.4. In this letter we discuss two topics: the breakdown under rotation of the domain-filling large-scale circulation (LSC) typical for confined convection,

  12. Chirping for large-scale maritime archaeological survey

    DEFF Research Database (Denmark)

    Grøn, Ole; Boldreel, Lars Ole

    2014-01-01

    -resolution subbottom profilers. This paper presents a strategy for cost-effective, large-scale mapping of previously undetected sediment-embedded sites and wrecks based on subbottom profiling with chirp systems. The mapping strategy described includes (a) definition of line spacing depending on the target; (b...

  13. Steel Enterprises to the Global Large-Scale

    Institute of Scientific and Technical Information of China (English)

    He Xin

    2009-01-01

    From the perspective of market structure, the steel industry was considered the backbone of a country's industries in the past. Within a single country there is generally a tendency toward monopoly; therefore, compared to other industries, the globalization and scaling-up of the steel industry have lagged behind.

  14. Large-scale search for dark-matter axions

    Energy Technology Data Exchange (ETDEWEB)

    Hagmann, C.A., LLNL; Kinion, D.; Stoeffl, W.; Van Bibber, K.; Daw, E.J. [Massachusetts Inst. of Tech., Cambridge, MA (United States); McBride, J. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Peng, H. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Rosenberg, L.J. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Xin, H. [Massachusetts Inst. of Tech., Cambridge, MA (United States); Laveigne, J. [Florida Univ., Gainesville, FL (United States); Sikivie, P. [Florida Univ., Gainesville, FL (United States); Sullivan, N.S. [Florida Univ., Gainesville, FL (United States); Tanner, D.B. [Florida Univ., Gainesville, FL (United States); Moltz, D.M. [Lawrence Berkeley Lab., CA (United States); Powell, J. [Lawrence Berkeley Lab., CA (United States); Clarke, J. [Lawrence Berkeley Lab., CA (United States); Nezrick, F.A. [Fermi National Accelerator Lab., Batavia, IL (United States); Turner, M.S. [Fermi National Accelerator Lab., Batavia, IL (United States); Golubev, N.A. [Russian Academy of Sciences, Moscow (Russia); Kravchuk, L.V. [Russian Academy of Sciences, Moscow (Russia)

    1998-01-01

    Early results from a large-scale search for dark matter axions are presented. In this experiment, axions constituting our dark-matter halo may be resonantly converted to monochromatic microwave photons in a high-Q microwave cavity permeated by a strong magnetic field. Sensitivity at the level of one important axion model (KSVZ) has been demonstrated.

  15. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven

    2013-01-01

    Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation...
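A minimal sketch of the dual decomposition idea described above, on a hypothetical quadratic toy problem (the names and the coordination rule are illustrative, not the paper's actual formulation): each unit minimizes its local cost plus a price term, and a coordinator adjusts the price by the residual power imbalance:

```python
def balance_by_dual_decomposition(prefs, target, step=0.2, iters=200):
    """Units want consumption x_i near prefs[i]; the coupling constraint
    sum(x) == target is relaxed with a price lam (the dual variable).
    Each unit solves min_x 0.5*(x - a)**2 + lam*x  =>  x = a - lam,
    and the coordinator takes a subgradient step on the imbalance."""
    lam = 0.0
    for _ in range(iters):
        x = [a - lam for a in prefs]      # local problems, solvable in parallel
        imbalance = sum(x) - target       # the negotiation signal
        lam += step * imbalance           # raise the price if demand is too high
    return x, lam

# three hypothetical units negotiating a total consumption of 6.0
x, lam = balance_by_dual_decomposition([3.0, 5.0, 2.0], target=6.0)
```

Only the scalar price and the aggregate imbalance are exchanged between coordinator and units, which is what makes such schemes attractive at large scale.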

  16. Water Implications of Large-Scale Land Acquisitions in Ghana

    Directory of Open Access Journals (Sweden)

    Timothy Olalekan Williams

    2012-06-01

    The paper offers recommendations which can help the government to achieve its stated objective of developing a "policy framework and guidelines for large-scale land acquisitions by both local and foreign investors for biofuels that will protect the interests of investors and the welfare of Ghanaian farmers and landowners".

  17. The Role of Plausible Values in Large-Scale Surveys

    Science.gov (United States)

    Wu, Margaret

    2005-01-01

    In large-scale assessment programs such as NAEP, TIMSS and PISA, students' achievement data sets provided for secondary analysts contain so-called "plausible values." Plausible values are multiple imputations of the unobservable latent achievement for each student. In this article it has been shown how plausible values are used to: (1) address…
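The standard way secondary analysts pool plausible values is Rubin's rules for multiple imputation. A generic sketch with made-up numbers (not NAEP/TIMSS/PISA-specific code): compute the statistic on each plausible value separately, then combine:

```python
import statistics

def combine_plausible_values(pv_estimates, pv_sampling_vars):
    """Rubin's rules for M plausible values: the point estimate is the mean
    of the per-imputation estimates; the total variance is the average
    within-imputation sampling variance plus (1 + 1/M) times the
    between-imputation variance."""
    m = len(pv_estimates)
    estimate = statistics.fmean(pv_estimates)
    within = statistics.fmean(pv_sampling_vars)
    between = statistics.variance(pv_estimates)   # sample variance, n - 1
    total_var = within + (1 + 1 / m) * between
    return estimate, total_var

# hypothetical group-mean estimates and their sampling variances
# from five plausible values
est, var = combine_plausible_values(
    [500.1, 502.3, 498.7, 501.0, 499.9],
    [4.0, 4.2, 3.9, 4.1, 4.0],
)
```

Computing the statistic once on the average of the plausible values would understate the variance; the between-imputation term is what captures the measurement uncertainty in the unobservable latent achievement.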

  18. Large Scale Survey Data in Career Development Research

    Science.gov (United States)

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  19. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small-scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark, and large-scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from sm...

  20. Large-scale Homogenization of Bulk Materials in Mammoth Silos

    NARCIS (Netherlands)

    Schott, D.L.

    2004-01-01

    This doctoral thesis concerns the large-scale homogenization of bulk materials in mammoth silos. The objective of this research was to determine the best stacking and reclaiming method for homogenization in mammoth silos. For this purpose a simulation program was developed to estimate the homogeniza

  1. Large-scale prediction of drug-target relationships

    DEFF Research Database (Denmark)

    Kuhn, Michael; Campillos, Mónica; González, Paula;

    2008-01-01

    also provides a more global view on drug-target relations. Here we review recent attempts to apply large-scale computational analyses to predict novel interactions of drugs and targets from molecular and cellular features. In this context, we quantify the family-dependent probability of two proteins to...

  2. Large-Scale Innovation and Change in UK Higher Education

    Science.gov (United States)

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  3. International Large-Scale Assessments: What Uses, What Consequences?

    Science.gov (United States)

    Johansson, Stefan

    2016-01-01

    Background: International large-scale assessments (ILSAs) are a much-debated phenomenon in education. Increasingly, their outcomes attract considerable media attention and influence educational policies in many jurisdictions worldwide. The relevance, uses and consequences of these assessments are often the focus of research scrutiny. Whilst some…

  4. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  5. CACHE Guidelines for Large-Scale Computer Programs.

    Science.gov (United States)

    National Academy of Engineering, Washington, DC. Commission on Education.

    The Computer Aids for Chemical Engineering Education (CACHE) guidelines identify desirable features of large-scale computer programs including running cost and running-time limit. Also discussed are programming standards, documentation, program installation, system requirements, program testing, and program distribution. Lists of types of…

  6. Over-driven control for large-scale MR dampers

    International Nuclear Information System (INIS)

    As semi-active electro-mechanical control devices increase in scale for use in real-world civil engineering applications, their dynamics become increasingly complicated. Control designs that are able to take these characteristics into account will be more effective in achieving good performance. Large-scale magnetorheological (MR) dampers exhibit a significant time lag in their force–response to voltage inputs, reducing the efficacy of typical controllers designed for smaller scale devices where the lag is negligible. A new control algorithm is presented for large-scale MR devices that uses over-driving and back-driving of the commands to overcome the challenges associated with the dynamics of these large-scale MR dampers. An illustrative numerical example is considered to demonstrate the controller performance. Via simulations of the structure using several seismic ground motions, the merits of the proposed control strategy to achieve reductions in various response parameters are examined and compared against several accepted control algorithms. Experimental evidence is provided to validate the improved capabilities of the proposed controller in achieving the desired control force levels. Through real-time hybrid simulation (RTHS), the proposed controllers are also examined and experimentally evaluated in terms of their efficacy and robust performance. The results demonstrate that the proposed control strategy has superior performance over typical control algorithms when paired with a large-scale MR damper, and is robust for structural control applications. (paper)

  7. Large-scale data analysis using the Wigner function

    Science.gov (United States)

    Earnshaw, R. A.; Lei, C.; Li, J.; Mugassabi, S.; Vourdas, A.

    2012-04-01

    Large-scale data are analysed using the Wigner function. It is shown that the 'frequency variable' provides important information, which is lost with other techniques. The method is applied to 'sentiment analysis' in data from social networks and also to financial data.

  8. Geophysical mapping of complex glaciogenic large-scale structures

    DEFF Research Database (Denmark)

    Høyer, Anne-Sophie

    2013-01-01

    This thesis presents the main results of a four year PhD study concerning the use of geophysical data in geological mapping. The study is related to the Geocenter project, “KOMPLEKS”, which focuses on the mapping of complex, large-scale geological structures. The study area is approximately 100 km2...

  9. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  10. Flexibility in design of large-scale methanol plants

    Institute of Scientific and Technical Information of China (English)

    Esben Lauge Sørensen; Helge Holm-Larsen; Haldor Topsøe A/S

    2006-01-01

    This paper presents a cost-effective design for large-scale methanol production. It is demonstrated how recent technological progress can be utilised to design a methanol plant that is inexpensive and easy to operate, while at the same time very robust towards variations in feedstock composition and product specifications.

  11. Resilience of Florida Keys coral communities following large scale disturbances

    Science.gov (United States)

    The decline of coral reefs in the Caribbean over the last 40 years has been attributed to multiple chronic stressors and episodic large-scale disturbances. This study assessed the resilience of coral communities in two different regions of the Florida Keys reef system between 199...

  12. AC loss in large-scale superconducting cables

    NARCIS (Netherlands)

    Mulder, G.B.J.

    1993-01-01

    A review is given of recent work on ac losses, carried out at our institute. The emphasis is on large-scale conductors for fusion applications, such as the 'cable-in-conduit' prototype conductors to be used for NET. Calculation methods for the ac losses are presented together with some experimental

  13. Main Achievements of Cotton Large-scale Transformation System

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A large-scale cotton transformation system was established based on innovations in cotton transformation methods. It yields 8,000 transgenic cotton plants per year by efficiently combining Agrobacterium tumefaciens-mediated, pollen-tube pathway, and biolistic methods. More than

  14. Global aridification in the second half of the 20th century and its relationship to large-scale climate background

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The variation in surface wetness index (SWI), which was derived from global gridded monthly precipitation and monthly mean surface air temperature datasets of Climatic Research Unit (CRU), from 1951–2002 over global land was analyzed in this paper. The characteristics of the SWI variation in global continents, such as North America, South America, Eurasia, Africa, and Australia, were compared. In addition, the correlation between the SWI variation of each continent (or across the globe) and the large-scale background closely related to SST variations, which affects climate change, was analyzed. The results indicate that the SWI variation shows distinct regional characteristics in the second half of the 20th century under global warming. A drying trend in the last 52 years occurred in Africa, Eurasia, Australia and South America, most obviously in Africa and Eurasia. North America shows a wetting trend after 1976. A 30-year period of dry-wet oscillation is found in South America and Australia; the latest is in a drying period in two regions. The results also revealed that global warming has changed the dry-wet pattern of the global land. South America and Australia have a drying trend despite increases in precipitation. This indicates that increases in surface air temperature cannot be ignored in aridification studies. Global dry-wet variation is closely related to large-scale SST variations: the drying trend in Africa and Eurasia and the wetting trend in North America are correlated with Pacific Decadal Oscillation (PDO); the interdecadal oscillation of SWI in South America and Australia is consistent with the interdecadal variation in Southern Oscillation Index (SOI).

  15. Global aridification in the second half of the 20th century and its relationship to large-scale climate background

    Institute of Scientific and Technical Information of China (English)

    MA ZhuGuo; FU CongBin

    2007-01-01

    The variation in surface wetness index (SWI), which was derived from global gridded monthly precipitation and monthly mean surface air temperature datasets of Climatic Research Unit (CRU), from 1951–2002 over global land was analyzed in this paper. The characteristics of the SWI variation in global continents, such as North America, South America, Eurasia, Africa, and Australia, were compared. In addition, the correlation between the SWI variation of each continent (or across the globe) and the large-scale background closely related to SST variations, which affects climate change, was analyzed. The results indicate that the SWI variation shows distinct regional characteristics in the second half of the 20th century under global warming. A drying trend in the last 52 years occurred in Africa, Eurasia, Australia and South America, most obviously in Africa and Eurasia. North America shows a wetting trend after 1976. A 30-year period of dry-wet oscillation is found in South America and Australia; the latest is in a drying period in two regions. The results also revealed that global warming has changed the dry-wet pattern of the global land. South America and Australia have a drying trend despite increases in precipitation. This indicates that increases in surface air temperature cannot be ignored in aridification studies. Global dry-wet variation is closely related to large-scale SST variations: the drying trend in Africa and Eurasia and the wetting trend in North America are correlated with Pacific Decadal Oscillation (PDO); the interdecadal oscillation of SWI in South America and Australia is consistent with the interdecadal variation in Southern Oscillation Index (SOI).

  16. Aerosols in large-scale atmospheric models: Future directions and needs

    Energy Technology Data Exchange (ETDEWEB)

    Kerminen, V.M.; Korhonen, H. [Finnish Meteorological Institute, Helsinki (Finland)

    2004-07-01

    Large-scale atmospheric models range from regional air quality models to global chemical transport and/or climate models. The treatment of aerosol particles in such models was very crude in the past, as most models included only the sulfate aerosol or some other major aerosol type such as sea salt or dust. The only predicted aerosol parameter in these models was the total mass concentration of each aerosol type. More recent models have aimed to predict the mass size distribution of relevant chemical components in the particulate phase. The application of large-scale atmospheric models has shifted gradually from acid deposition and visibility studies toward investigating climate change and the various health effects caused by air pollution. As a result, new requirements for these models and their structures have appeared. In the following we briefly discuss what this means for the treatment of aerosols in large-scale atmospheric models, and what implications it has for aerosol measurements.

  17. A novel computational approach towards the certification of large-scale boson sampling

    Science.gov (United States)

    Huh, Joonsuk

    Recent proposals of boson sampling and the corresponding experiments exhibit the possible disproof of the extended Church-Turing Thesis. Furthermore, the application of boson sampling to molecular computation has been suggested theoretically. Till now, however, only small-scale experiments with a few photons have been successfully performed. Boson sampling experiments with 20-30 photons are expected to reveal the computational superiority of the quantum device. A novel theoretical proposal for large-scale boson sampling using microwave photons is highly promising due to the deterministic photon sources and the scalability. Therefore, a certification protocol for large-scale boson sampling experiments should be presented to complete the exciting story. We propose, in this presentation, a computational protocol towards the certification of large-scale boson sampling. The correlations of paired photon modes and the time-dependent characteristic functional with its Fourier component can show the fingerprint of large-scale boson sampling. This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2015R1A6A3A04059773), the ICT R&D program of MSIP/IITP [2015-019, Fundamental Research Toward Secure Quantum Communication] and a Mueunjae Institute for Chemistry (MIC) postdoctoral fellowship.
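
    For context, the output probabilities in boson sampling are proportional to squared absolute values of permanents of submatrices of the interferometer unitary, and the permanent is what makes classical simulation hard. Below is a minimal sketch of Ryser's O(2^n·n²) formula for the permanent (illustrative only; practical certification protocols such as the one proposed here are designed to avoid computing full permanents):

```python
from itertools import combinations

def permanent(A):
    """Matrix permanent via Ryser's inclusion-exclusion formula."""
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for i in range(n):
                prod *= sum(A[i][j] for j in cols)  # row sum over chosen columns
            total += (-1) ** r * prod
    return (-1) ** n * total

assert permanent([[1, 2], [3, 4]]) == 10  # 1*4 + 2*3
```

    Unlike the determinant, no polynomial-time algorithm for the permanent is known; the 2^n cost above is essentially the state of the art up to polynomial factors.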

  18. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  19. Large Scale Computing and Storage Requirements for High Energy Physics

    International Nuclear Information System (INIS)

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. 
The report includes

  20. A first large-scale flood inundation forecasting model

    Energy Technology Data Exchange (ETDEWEB)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04

    At present continental to global scale flood forecasting focusses on predicting discharge at a point, with little attention to the detail and accuracy of local scale inundation predictions. Yet, inundation is actually the variable of interest and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) compared to an observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode

  1. Large-scale commercial combustion systems for producing energy from municipal solid waste

    Science.gov (United States)

    1985-02-01

    This report reviews the large-scale combustion systems available on the US market today that use MSW only as fuel. Its purpose is to provide waste-to-energy project participants with basic technical information to facilitate an understanding of the operation and performance of the technologies employed. General technical descriptions of the two types of large-scale systems, mass burning and refuse-derived fuel burning, are presented. Performance characteristics of each system, based on material and energy balances, are discussed. A description of the typical energy product options (steam only, cogeneration of steam and electricity, or electricity only) that may be considered for both types of systems depending on the available market(s) for energy is included. The sources and types of emissions from these systems (air, water, ash and other residue, noise, and odor) are discussed.

  2. Large-Scale, Highly Efficient, and Green Liquid-Exfoliation of Black Phosphorus in Ionic Liquids.

    Science.gov (United States)

    Zhao, Wancheng; Xue, Zhimin; Wang, Jinfang; Jiang, Jingyun; Zhao, Xinhui; Mu, Tiancheng

    2015-12-23

    We developed a facile, large-scale, and environmentally friendly liquid-exfoliation method to produce stable and high-concentration dispersions of mono- to few-layer black phosphorus (BP) nanosheets from bulk BP using nine ionic liquids. The prepared suspensions can stabilize without any obvious sedimentation and aggregation in ambient air for one month. In particular, the concentration (up to 0.95 mg mL(-1)) of BP nanoflakes obtained in 1-hydroxyethyl-3-methylimidazolium trifluoromethansulfonate ([HOEMIM][TfO]) is the highest reported for BP nanosheets dispersions. This work provides new opportunities for preparing atomically thin BP nanosheets in green, large-scale, and highly concentrated processes and achieving its in situ application. PMID:26642883

  3. Modeled large-scale warming impacts on summer California coastal-cooling trends

    Science.gov (United States)

    Lebassi-Habtezion, Bereket; GonzáLez, Jorge; Bornstein, Robert

    2011-10-01

    Regional Atmospheric Modeling System (RAMS) meso-meteorological model simulations with a horizontal grid resolution of 4 km on an inner grid over the South Coast Air Basin of California were used to investigate effects from long-term (i.e., past 35 years) large-scale warming impacts on coastal flows. Comparison of present- and past-climate simulations showed significant increases in summer daytime sea breeze activity by up to 1.5 m s-1 (in the onshore component) and a concurrent coastal cooling of average-daily peak temperatures of up to -1.6°C, both of which support observations that the latter is an indirect "reverse reaction" to the large-scale warming of inland areas.

  4. Large-scale urbanization effects on eastern Asian summer monsoon circulation and climate

    Science.gov (United States)

    Chen, Haishan; Zhang, Ye; Yu, Miao; Hua, Wenjian; Sun, Shanlei; Li, Xing; Gao, Chujie

    2016-07-01

    Impacts of large-scale urbanization over eastern China on East Asian summer monsoon circulation and climate are investigated by comparing three 25-year climate simulations with and without incorporating modified land cover maps reflecting two different idealized large-scale urbanization scenarios. The global atmospheric general circulation model CAM4.0, which includes an urban canopy parameterization scheme, is employed in this study. The large-scale urbanization over eastern China leads to a significant warming over most of the expanded urban areas, characterized by an increase of 3 K for surface skin temperature, 2.25 K for surface air temperature, significant warming of both daily minimum and daily maximum air temperatures, and 0.4 K for the averaged urban-rural temperature difference. The urbanization is also accompanied by an increase in surface sensible heat flux, a decrease of the net surface shortwave and long-wave radiation, and an enhanced surface thermal heating to the atmosphere in most Eastern Asia areas. It is noted that the responses of the East Asian summer monsoon circulation exhibit an evident month-to-month variation. Across eastern China, the summer monsoon in early summer is strengthened by the large-scale urbanization, but weakened (intensified) over the southern (northern) part of East Asia in late summer. Meanwhile, early summer precipitation is intensified in northern and northeastern China and suppressed south of ~35°N, but late summer precipitation is evidently suppressed over northeast China, the Korean Peninsula and Japan, with enhancements in southern China, the South China Sea, and the oceanic region south and southeast of the Taiwan Island. This study highlights the evidently distinct month-to-month responses of the monsoon system to the large-scale urbanization, which might be attributed to different basic states, internal feedbacks (cloud, rainfall) as well as a dynamic adjustment of the atmosphere. Further investigation is required

  5. Large-scale quantification of CVD graphene surface coverage

    Science.gov (United States)

    Ambrosi, Adriano; Bonanni, Alessandra; Sofer, Zdeněk; Pumera, Martin

    2013-02-01

    The extraordinary properties demonstrated for graphene and graphene-related materials can be fully exploited when a large-scale fabrication procedure is made available. Chemical vapor deposition (CVD) of graphene on Cu and Ni substrates is one of the most promising procedures to synthesize large-area and good quality graphene films. Parallel to the fabrication process, a large-scale quality monitoring technique is equally crucial. We demonstrate here a rapid and simple methodology that is able to probe the effectiveness of the growth process over a large substrate area for both Ni and Cu substrates. This method is based on inherent electrochemical signals generated by the underlying metal catalysts when fractures or discontinuities of the graphene film are present. The method can be applied immediately after the CVD growth process without the need for any graphene transfer step and represents a powerful quality monitoring technique for the assessment of large-scale fabrication of graphene by the CVD process.

  6. Supermassive black holes, large scale structure and holography

    CERN Document Server

    Mongan, T R

    2013-01-01

    A holographic analysis of large scale structure in the universe estimates the mass of supermassive black holes at the center of large scale structures with matter density varying inversely as the square of the distance from their center. The estimate is consistent with two important test cases involving observations of the supermassive black hole with mass 3.6×10⁻⁶ times the galactic mass in Sagittarius A* near the center of our Milky Way and the 2×10⁹ solar mass black hole in the quasar ULAS J112001.48+064124.3 at redshift z = 7.085. It is also consistent with upper bounds on central black hole masses in globular clusters M15, M19 and M22 developed using the Jansky Very Large Array in New Mexico.
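
    As a quick order-of-magnitude check of the first test case (using an assumed Milky Way mass of ~10¹² solar masses, which is not stated in the abstract):

```python
ratio = 3.6e-6         # quoted black-hole-to-galaxy mass ratio
m_galaxy = 1.0e12      # assumed Milky Way mass in solar masses (assumption)
m_bh = ratio * m_galaxy
# ~3.6 million solar masses, the right order of magnitude for Sagittarius A*
assert abs(m_bh - 3.6e6) < 1.0
```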

  7. Active power reserves evaluation in large scale PVPPs

    DEFF Research Database (Denmark)

    Crăciun, Bogdan-Ionut; Kerekes, Tamas; Sera, Dezso;

    2013-01-01

    The present trend on investing in renewable ways of producing electricity in the detriment of conventional fossil fuel-based plants will lead to a certain point where these plants have to provide ancillary services and contribute to overall grid stability. Photovoltaic (PV) power has the fastest...... growth among all renewable energies and managed to reach high penetration levels creating instabilities which at the moment are corrected by the conventional generation. This paradigm will change in the future scenarios where most of the power is supplied by large scale renewable plants and parts...... of the ancillary services have to be shared by the renewable plants. The main focus of the proposed paper is to technically and economically analyze the possibility of having active power reserves in large scale PV power plants (PVPPs) without any auxiliary storage equipment. The provided reserves should...

  8. Electron drift in a large scale solid xenon

    CERN Document Server

    Yoo, J

    2015-01-01

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. It is therefore demonstrated that the electron drift speed in large-scale solid xenon is a factor of two faster than that in the liquid.
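
    The quoted "factor of two" follows directly from the measured speeds; over the 8.0 cm drift region the corresponding drift times are roughly 41 μs (liquid) and 20 μs (solid):

```python
d = 8.0                  # drift length in cm (uniform-field region)
v_liquid = 0.193         # measured drift speed in cm/us at 163 K
v_solid = 0.397          # measured drift speed in cm/us at 157 K

t_liquid = d / v_liquid  # ~41.5 us
t_solid = d / v_solid    # ~20.2 us
ratio = v_solid / v_liquid
assert 2.0 < ratio < 2.1  # "factor of two" faster drift in the solid phase
```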

  9. Bayesian large-scale structure inference and cosmic web analysis

    CERN Document Server

    Leclercq, Florent

    2015-01-01

    Surveys of the cosmic large-scale structure carry opportunities for building and testing cosmological theories about the origin and evolution of the Universe. This endeavor requires appropriate data assimilation tools, for establishing the contact between survey catalogs and models of structure formation. In this thesis, we present an innovative statistical approach for the ab initio simultaneous analysis of the formation history and morphology of the cosmic web: the BORG algorithm infers the primordial density fluctuations and produces physical reconstructions of the dark matter distribution that underlies observed galaxies, by assimilating the survey data into a cosmological structure formation model. The method, based on Bayesian probability theory, provides accurate means of uncertainty quantification. We demonstrate the application of BORG to the Sloan Digital Sky Survey data and describe the primordial and late-time large-scale structure in the observed volume. We show how the approach has led to the fi...

  10. Ultra-large scale cosmology with next-generation experiments

    CERN Document Server

    Alonso, David; Ferreira, Pedro G; Maartens, Roy; Santos, Mario G

    2015-01-01

    Future surveys of large-scale structure will be able to measure perturbations on the scale of the cosmological horizon, and so could potentially probe a number of novel relativistic effects that are negligibly small on sub-horizon scales. These effects leave distinctive signatures in the power spectra of clustering observables and, if measurable, would open a new window on relativistic cosmology. We quantify the size and detectability of the effects for a range of future large-scale structure surveys: spectroscopic and photometric galaxy redshift surveys, intensity mapping surveys of neutral hydrogen, and continuum surveys of radio galaxies. Our forecasts show that next-generation experiments, reaching out to redshifts z ~ 4, will not be able to detect previously-undetected general-relativistic effects from the single-tracer power spectra alone, although they may be able to measure the lensing magnification in the auto-correlation. We also perform a rigorous joint forecast for the detection of primordial non-...

  11. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    -scale versus small-scale farming literature. Chapter 2 examines the underlying causes for the failure of large-scale jatropha plantations on ‘marginal’ land. Chapter 3 compares the productivity of a factory-operated plantation with outgrower-operated plots, while Chapter 4 analyses the effects of a public......-scale land acquisition which has mostly been framed as ‘land grabbing’ throughout developing countries particularly since the mid-2000s. Against this background, outgrower schemes and contract farming are increasingly being promoted to avoid the displacement of smallholder farmers from their land due...... to ‘land grabbing’ for large-scale farming (i.e. outgrower schemes and contract farming could modernise agricultural production while allowing smallholders to maintain their land ownership), to integrate them into global agro-food value chains and to increase their productivity and welfare. However...

  12. The Large Scale Synthesis of Aligned Plate Nanostructures

    Science.gov (United States)

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-07-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential.

  13. Series Design of Large-Scale NC Machine Tool

    Institute of Scientific and Technical Information of China (English)

    TANG Zhi

    2007-01-01

    Product system design is a mature concept in developed western countries and has been applied in the defence industry since the last century. Up until now, however, functional combination has remained the main method of product system design in China; in terms of product generation and product interaction, we are therefore in a weak position relative to the requirements of global markets. Today, the idea of serial product design has attracted much attention in the design field, and the definition of a product generation and its parameters has already become standard in serial product design. Although the design of a large-scale NC machine tool is complicated, it can be further optimized by precise object design that places the concept of platform establishment firmly within serial product design. The essence of serial product design is demonstrated through the design process of a large-scale NC machine tool.

  14. LARGE-SCALE MOTIONS IN THE PERSEUS GALAXY CLUSTER

    International Nuclear Information System (INIS)

    By combining large-scale mosaics of ROSAT PSPC, XMM-Newton, and Suzaku X-ray observations, we present evidence for large-scale motions in the intracluster medium of the nearby, X-ray bright Perseus Cluster. These motions are suggested by several alternating and interleaved X-ray bright, low-temperature, low-entropy arcs located along the east-west axis, at radii ranging from ∼10 kpc to over a Mpc. Thermodynamic features qualitatively similar to these have previously been observed in the centers of cool-core clusters, and were successfully modeled as a consequence of the gas sloshing/swirling motions induced by minor mergers. Our observations indicate that such sloshing/swirling can extend out to larger radii than previously thought, on scales approaching the virial radius.

  15. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    According to the complex real-time water situation, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposed a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability. An adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
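
    For illustration only, a minimal one-dimensional analogue of such a scheme (the paper's model is two-dimensional and unstructured) is a Godunov-type finite-volume update with a Rusanov flux; with reflective wall boundaries the update conserves total water volume to machine precision:

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def rusanov_step(h, hu, dx, dt):
    """One explicit finite-volume step for the 1-D shallow water equations
    using the Rusanov (local Lax-Friedrichs) flux and reflective walls."""
    # Ghost cells implementing reflective (wall) boundaries.
    H = np.concatenate([[h[0]], h, [h[-1]]])
    HU = np.concatenate([[-hu[0]], hu, [-hu[-1]]])
    u = HU / H
    c = np.sqrt(g * H)                       # gravity wave speed
    F = np.stack([HU, HU * u + 0.5 * g * H * H])  # physical flux of (h, hu)
    # Local maximum wave speed at each interface between cells i and i+1.
    a = np.maximum(np.abs(u[:-1]) + c[:-1], np.abs(u[1:]) + c[1:])
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) \
         - 0.5 * a * np.stack([H[1:] - H[:-1], HU[1:] - HU[:-1]])
    h_new = h - dt / dx * (Fi[0, 1:] - Fi[0, :-1])
    hu_new = hu - dt / dx * (Fi[1, 1:] - Fi[1, :-1])
    return h_new, hu_new

# Dam-break test: total water volume is conserved with wall boundaries.
n, dx, dt = 100, 1.0, 0.05
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)  # step in water depth
hu = np.zeros(n)
vol0 = h.sum() * dx
for _ in range(50):
    h, hu = rusanov_step(h, hu, dx, dt)
assert abs(h.sum() * dx - vol0) < 1e-9
```

    A production model of the kind described in the abstract adds, at minimum, a wet/dry front treatment and adaptive time stepping, both of which are omitted from this sketch.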

  16. Critical Analysis of Middleware Architectures for Large Scale Distributed Systems

    CERN Document Server

    Pop, Florin; Costan, Alexandru; Andreica, Mugurel Ionut; Tirsa, Eliana-Dina; Stratan, Corina; Cristea, Valentin

    2009-01-01

    Distributed computing is increasingly being viewed as the next phase of Large Scale Distributed Systems (LSDSs). However, the vision of large scale resource sharing is not yet a reality in many areas - Grid computing is an evolving area of computing, where standards and technology are still being developed to enable this new paradigm. Hence, in this paper we analyze the current development of middleware tools for LSDS, from multiple perspectives: architecture, applications and market research. For each perspective we are interested in relevant technologies used in undergoing projects, existing products or services and useful design issues. In the end, based on this approach, we draw some conclusions regarding the future research directions in this area.

  17. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    CERN Document Server

    Blackman, Eric G

    2014-01-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. H...

  18. Cluster Galaxy Dynamics and the Effects of Large Scale Environment

    CERN Document Server

    White, Martin; Smit, Renske

    2010-01-01

    We use a high-resolution N-body simulation to study how the influence of large-scale structure in and around clusters causes correlated signals in different physical probes and discuss some implications this has for multi-physics probes of clusters. We pay particular attention to velocity dispersions, matching galaxies to subhalos which are explicitly tracked in the simulation. We find that not only do halos persist as subhalos when they fall into a larger host, groups of subhalos retain their identity for long periods within larger host halos. The highly anisotropic nature of infall into massive clusters, and their triaxiality, translates into an anisotropic velocity ellipsoid: line-of-sight galaxy velocity dispersions for any individual halo show large variance depending on viewing angle. The orientation of the velocity ellipsoid is correlated with the large-scale structure, and thus velocity outliers correlate with outliers caused by projection in other probes. We quantify this orientation uncertainty and ...

  19. Distant galaxy clusters in the XMM Large Scale Structure survey

    CERN Document Server

    Willis, J P; Bremer, M N; Pierre, M; Adami, C; Ilbert, O; Maughan, B; Maurogordato, S; Pacaud, F; Valtchanov, I; Chiappetti, L; Thanjavur, K; Gwyn, S; Stanway, E R; Winkworth, C

    2012-01-01

    (Abridged) Distant galaxy clusters provide important tests of the growth of large scale structure in addition to highlighting the process of galaxy evolution in a consistently defined environment at large look back time. We present a sample of 22 distant (z>0.8) galaxy clusters and cluster candidates selected from the 9 deg2 footprint of the overlapping X-ray Multi Mirror (XMM) Large Scale Structure (LSS), CFHTLS Wide and Spitzer SWIRE surveys. Clusters are selected as extended X-ray sources with an accompanying overdensity of galaxies displaying optical to mid-infrared photometry consistent with z>0.8. Nine clusters have confirmed spectroscopic redshifts in the interval 0.80.8 clusters.

  20. Electric vehicles and large-scale integration of wind power

    DEFF Research Database (Denmark)

    Liu, Wen; Hu, Weihao; Lund, Henrik;

    2013-01-01

    Renewable energy is one of the possible solutions when addressing climate change. Today, large-scale renewable energy integration needs to include the experience to balance the discrepancy between electricity demand and supply. The electrification of transportation may have the potential to deal with this imbalance and to reduce its high dependence on oil production. For this reason, it is interesting to analyse the extent to which transport electrification can further the renewable energy integration. This paper quantifies this issue in Inner Mongolia, where the share of wind power in the electricity supply was 6.5% in 2009 and which has the plan to develop large-scale wind power. The results show that electric vehicles (EVs) have the ability to balance the electricity demand and supply and to further the wind power integration. In the best case, the energy system with EV can increase wind power...

  1. Algorithmic and Statistical Perspectives on Large-Scale Data Analysis

    CERN Document Server

    Mahoney, Michael W

    2010-01-01

    In recent years, ideas from statistics and scientific computing have begun to interact in increasingly sophisticated and fruitful ways with ideas from computer science and the theory of algorithms to aid in the development of improved worst-case algorithms that are useful for large-scale scientific and Internet data analysis problems. In this chapter, I will describe two recent examples---one having to do with selecting good columns or features from a (DNA Single Nucleotide Polymorphism) data matrix, and the other having to do with selecting good clusters or communities from a data graph (representing a social or information network)---that drew on ideas from both areas and that may serve as a model for exploiting complementary algorithmic and statistical perspectives in order to solve applied large-scale data analysis problems.
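
    Selecting good columns or features from a data matrix is commonly formulated in terms of leverage scores computed from a truncated SVD. The sketch below is a generic illustration of that idea, not the chapter's exact algorithm; the function name and parameters are hypothetical.

```python
import numpy as np

def leverage_score_columns(A, k, c, seed=0):
    """Pick c distinct columns of A at random, with probabilities given by
    rank-k leverage scores (squared column norms of the top-k right singular
    vectors). A generic sketch of leverage-score column selection."""
    rng = np.random.default_rng(seed)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    lev = np.sum(Vt[:k, :] ** 2, axis=0) / k  # nonnegative, sums to 1
    cols = rng.choice(A.shape[1], size=c, replace=False, p=lev)
    return cols, lev
```

    Columns with large leverage scores carry most of the information in the best rank-k subspace, which is why sampling by these scores tends to preserve the matrix's low-rank structure.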

  2. Constraining cosmological ultra-large scale structure using numerical relativity

    CERN Document Server

    Braden, Jonathan; Peiris, Hiranya V; Aguirre, Anthony

    2016-01-01

    Cosmic inflation, a period of accelerated expansion in the early universe, can give rise to large amplitude ultra-large scale inhomogeneities on distance scales comparable to or larger than the observable universe. The cosmic microwave background (CMB) anisotropy on the largest angular scales is sensitive to such inhomogeneities and can be used to constrain the presence of ultra-large scale structure (ULSS). We numerically evolve nonlinear inhomogeneities present at the beginning of inflation in full General Relativity to assess the CMB quadrupole constraint on the amplitude of the initial fluctuations and the size of the observable universe relative to a length scale characterizing the ULSS. To obtain a statistically significant number of simulations, we adopt a toy model in which inhomogeneities are injected along a preferred direction. We compute the likelihood function for the CMB quadrupole including both ULSS and the standard quantum fluctuations produced during inflation. We compute the posterior given...

  3. Individual skill differences and large-scale environmental learning.

    Science.gov (United States)

    Fields, Alexa W; Shelton, Amy L

    2006-05-01

    Spatial skills are known to vary widely among normal individuals. This project was designed to address whether these individual differences are differentially related to large-scale environmental learning from route (ground-level) and survey (aerial) perspectives. Participants learned two virtual environments (route and survey) with limited exposure and were tested on judgments about the relative locations of objects. They also performed a series of spatial and nonspatial component skill tests. With limited learning, performance after route encoding was worse than performance after survey encoding. Furthermore, performance after route and survey encoding appeared to be preferentially linked to perspective and object-based transformations, respectively. Together, the results provide clues to how different skills might be engaged by different individuals for the same goal of learning a large-scale environment. PMID:16719662

  4. Large-scale innovation and change in UK higher education

    Directory of Open Access Journals (Sweden)

    Stephen Brown

    2013-09-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ technology to deliver such changes. Key lessons that emerged from these experiences are reviewed covering themes of pervasiveness, unofficial systems, project creep, opposition, pressure to deliver, personnel changes and technology issues. The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion-led bottom-up methods. It also argues that while some diminution of control over project outcomes is inherent in this approach, this is outweighed by potential benefits of lasting and widespread adoption of agreed changes.

  5. Imprint of thawing scalar fields on large scale galaxy overdensity

    CERN Document Server

    Dinda, Bikash R

    2016-01-01

    We calculate the observed galaxy power spectrum for the thawing class of scalar field models taking into account various general relativistic corrections that occur on very large scales. As we need to consider the fluctuations in scalar field on these large scales, the general relativistic corrections in thawing scalar field models are distinctly different from $\Lambda$CDM and the difference can be up to $15-20\%$ at some scales. Also there is an interpolation between suppression and enhancement of power in scalar field models compared to the $\Lambda$CDM model on smaller scales and this happens in a specific redshift range that is quite robust to the form of the scalar field potentials or the choice of different cosmological parameters. This can be useful to distinguish scalar field models from $\Lambda$CDM with future optical/radio surveys.

  6. Angular averaged consistency relations of large-scale structures

    CERN Document Server

    Valageas, Patrick

    2013-01-01

    The cosmological dynamics of gravitational clustering satisfies an approximate invariance with respect to the cosmological parameters that is often used to simplify analytical computations. We describe how this approximate symmetry gives rise to angular averaged consistency relations for the matter density correlations. This allows one to write the $(\\ell+n)$ density correlation, with $\\ell$ large-scale linear wave numbers that are integrated over angles, and $n$ fixed small-scale nonlinear wave numbers, in terms of the small-scale $n$-point density correlation and $\\ell$ prefactors that involve the linear power spectra at the large-scale wave numbers. These relations, which do not vanish for equal-time statistics, go beyond the already known kinematic consistency relations. They could be used to detect primordial non-Gaussianities, modifications of gravity, limitations of galaxy biasing schemes, or to help designing analytical models of gravitational clustering.

  7. Large-scale conditions of Tibet Plateau vortex departure

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Based on the circumfluence situation of the out- and in-Tibet Plateau Vortex (TPV) from 1998-2004 and its weather-influencing system, multiple synthesized physical fields in the middle-upper troposphere of the out- and in-TPV are computationally analyzed by using re-analysis data from National Centers for Environmental Prediction and National Center for Atmospheric Research (NCEP/NCAR) of United States. Our research shows that the departure of TPV is caused by the mutual effects among the weather systems in Westerlies and in the subtropical area, within the middle and the upper troposphere. This paper describes the large-scale meteorological conditions and the physical picture of the departure of TPV, and the main differences among the large-scale conditions for all types of TPVs. This study could be used as the scientific basis for predicting the torrential rain and the floods caused by the TPV departure.

  8. Dark Energy from Large-Scale Structure Lensing Information

    CERN Document Server

    Lu, Tingting; Doré, Olivier

    2009-01-01

    Wide area Large-Scale Structure (LSS) surveys are planning to map a substantial fraction of the visible universe to quantify dark energy through Baryon Acoustic Oscillations (BAO). At increasing redshift, for example that probed by proposed 21-cm intensity mapping surveys, gravitational lensing potentially limits the fidelity (Hui et al., 2007) because it distorts the apparent matter distribution. In this paper we show that these distortions can be reconstructed, and actually used to map the distribution of intervening dark matter. The lensing information for sources at z=1-3 allows accurate reconstruction of the gravitational potential on large scales, l <~ 100, which is well matched for Integrated Sachs-Wolfe (ISW) effect measurements of dark energy and its sound speed, and a strong constraint for modified gravity models of dark energy. We built an optimal quadratic lensing estimator for non-Gaussian sources, which is necessary for LSS. The phenomenon of "information saturation" (Rimes & Hamilton, 20...

  9. Generation Expansion Planning Considering Integrating Large-scale Wind Generation

    DEFF Research Database (Denmark)

    Zhang, Chunyu; Ding, Yi; Østergaard, Jacob;

    2013-01-01

    Generation expansion planning (GEP) is the problem of finding the optimal strategy to plan the construction of new generation while satisfying technical and economical constraints. In the deregulated and competitive environment, large-scale integration of wind generation (WG) in power systems has necessitated the inclusion of more innovative and sophisticated approaches in power system investment planning. A bi-level generation expansion planning approach considering large-scale wind generation was proposed in this paper. The first phase is the investment decision, while the second phase is the production optimization decision. A multi-objective PSO (MOPSO) algorithm was introduced to solve this optimization problem, which can accelerate the convergence and guarantee the diversity of the Pareto-optimal front set as well. The feasibility and effectiveness of the proposed bi-level planning approach and the MOPSO...
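
    The single-objective core of particle swarm optimization, which MOPSO extends with a Pareto archive and diversity maintenance, can be sketched as follows. This is a generic textbook PSO, not the paper's MOPSO; all parameter values are illustrative.

```python
import random

def pso(f, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer. `f` maps a position vector to a
    scalar cost; `bounds` is a list of (lo, hi) pairs per dimension."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# sphere function: minimum 0 at the origin
best, val = pso(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```

    A multi-objective variant replaces the single global best with a leader drawn from an external archive of non-dominated solutions.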

  10. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge... allows usage of such classifiers in large scale problems. We demonstrate its application for segmenting tibial articular cartilage in knee MRI scans, with the number of training voxels being more than 2 million. In the next phase of the study we apply the cascaded classifier to a similar but even more... image, respectively, and this system is referred to as the triplanar convolutional neural network in the thesis. We applied the triplanar CNN for segmenting articular cartilage in knee MRI and compared its performance with the same state-of-the-art method which was used as a benchmark for the cascaded classifier...

  11. Large-scale flow generation by inhomogeneous helicity

    CERN Document Server

    Yokoi, Nobumitsu

    2015-01-01

    The effect of kinetic helicity (velocity--vorticity correlation) on turbulent momentum transport is investigated. The turbulent kinetic helicity (pseudoscalar) enters into the Reynolds stress (mirror-symmetric tensor) expression in the form of a helicity gradient as the coupling coefficient for the mean vorticity and/or the angular velocity (axial vector), which suggests the possibility of mean-flow generation in the presence of inhomogeneous helicity. This inhomogeneous helicity effect, which was previously confirmed at the level of a turbulence- or closure-model simulation, is examined with the aid of direct numerical simulations of rotating turbulence with non-uniform helicity sustained by an external forcing. The numerical simulations show that the spatial distribution of the Reynolds stress is in agreement with the helicity-related term coupled with the angular velocity, and that a large-scale flow is generated in the direction of angular velocity. Such a large-scale flow is not induced in the case of hom...

  12. Quantum noise in large-scale coherent nonlinear photonic circuits

    CERN Document Server

    Santori, Charles; Beausoleil, Raymond G; Tezak, Nikolas; Hamerly, Ryan; Mabuchi, Hideo

    2014-01-01

    A semiclassical simulation approach is presented for studying quantum noise in large-scale photonic circuits incorporating an ideal Kerr nonlinearity. A netlist-based circuit solver is used to generate matrices defining a set of stochastic differential equations, in which the resonator field variables represent random samplings of the Wigner quasi-probability distributions. Although the semiclassical approach involves making a large-photon-number approximation, tests on one- and two-resonator circuits indicate satisfactory agreement between the semiclassical and full-quantum simulation results in the parameter regime of interest. The semiclassical model is used to simulate random errors in a large-scale circuit that contains 88 resonators and hundreds of components in total, and functions as a 4-bit ripple counter. The error rate as a function of on-state photon number is examined, and it is observed that the quantum fluctuation amplitudes do not increase as signals propagate through the circuit, an important...
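
    As a toy illustration of the semiclassical approach, the sketch below integrates a single damped Kerr resonator's field amplitude with Euler-Maruyama, with a noise term sized to mimic vacuum fluctuations in a Wigner-function picture. The drift and noise terms are a generic textbook form, not the paper's netlist-based circuit model; all parameter values are illustrative.

```python
import math
import random

def euler_maruyama_kerr(alpha0, kappa=1.0, chi=0.05, dt=1e-3, steps=2000, seed=1):
    """Euler-Maruyama integration of a damped Kerr resonator amplitude:
    da = (-kappa/2 - i*chi*|a|^2) a dt + dW, with a complex Wiener
    increment whose variance models vacuum (half-photon) noise."""
    rng = random.Random(seed)
    a = alpha0
    for _ in range(steps):
        drift = (-0.5 * kappa - 1j * chi * abs(a) ** 2) * a
        # complex Gaussian increment; each quadrature has variance kappa*dt/4
        dW = complex(rng.gauss(0, 1), rng.gauss(0, 1)) * math.sqrt(kappa * dt / 4)
        a = a + drift * dt + dW
    return a
```

    Averaging many such trajectories with independent seeds approximates moments of the Wigner quasi-probability distribution, which is the sense in which the semiclassical model captures quantum noise.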

  13. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses...... on avoiding redundancy for users working on the same task. While this improves the effectiveness of the user work process, the underlying query processing engine is typically considered a "black box" and left unchanged. Research in multiple query processing, on the other hand, ignores the application...... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems....

  14. Optimal algorithms for scheduling large scale application on heterogeneous systems

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper studies optimal algorithms for scheduling large-scale applications on heterogeneous systems using Divisible Load Theory. A more realistic and general model is introduced, in which both processors and communication links may have different speeds and arbitrary start-up costs, and communication is in non-blocking mode. Under such an environment, the following results are obtained: ① a mathematical model and closed-form expressions for both the processing time and the fraction of load for each processor are derived; ② the influence of start-up costs on the optimal processing time is analyzed; ③ for a given heterogeneous system and a large-scale computing problem, optimal algorithms are proposed.
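
    In the simplest divisible-load setting (linear processor speeds, negligible start-up and communication costs), the closed-form optimal fractions follow from requiring all processors to finish simultaneously. The sketch below covers only that degenerate case; the paper's model with heterogeneous link speeds and start-up costs is more involved.

```python
def optimal_fractions(speeds):
    """Load fractions alpha_i for divisible-load scheduling on heterogeneous
    processors, assuming negligible start-up and communication costs.
    Equal finish times alpha_i/speed_i = const give alpha_i ~ speed_i."""
    total = sum(speeds)
    return [s / total for s in speeds]

# a processor twice as fast receives twice the load
fracs = optimal_fractions([4.0, 2.0, 1.0])
```

    With start-up costs, the finish-time equations become affine rather than linear, and some slow processors may optimally receive no load at all.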

  15. Optimization of Survivability Analysis for Large-Scale Engineering Networks

    CERN Document Server

    Poroseva, S V

    2012-01-01

    Engineering networks fall into the category of large-scale networks with heterogeneous nodes such as sources and sinks. The survivability analysis of such networks requires the analysis of the connectivity of the network components for every possible combination of faults to determine a network response to each combination of faults. From the computational complexity point of view, the problem belongs to the class of exponential time problems at least. Partially, the problem complexity can be reduced by mapping the initial topology of a complex large-scale network with multiple sources and multiple sinks onto a set of smaller sub-topologies with multiple sources and a single sink connected to the network of sources by a single link. In this paper, the mapping procedure is applied to the Florida power grid.

  16. Severe heat waves in Southern Australia: synoptic climatology and large scale connections

    Science.gov (United States)

    Pezza, Alexandre Bernardes; van Rensch, Peter; Cai, Wenju

    2012-01-01

    This paper brings a new perspective on the large scale dynamics of severe heat wave (HW) events that commonly affect southern Australia. Through an automatic tracking scheme, the cyclones and anticyclones associated with HWs affecting Melbourne, Adelaide and Perth are tracked at both the surface and upper levels, producing for the first time a synoptic climatology that reveals the broader connections associated with these extreme phenomena. The results show that a couplet (or pressure dipole) formed by transient cyclones and anticyclones can reinforce the HW similarly to what is observed in cold surges (CS), with an obvious opposite polarity. Our results show that there is a large degree of mobility in the synoptic signature associated with the passage of the upper level ridges before they reach Australia and the blocking is established, with HW-associated surface anticyclones often initiating over the west Indian Ocean and decaying in the eastern Pacific. In contrast to this result the 500 hPa anticyclone tracks show a very small degree of mobility, responding to the dominance of the upper level blocking ridge. An important feature of HWs is that most of the cyclones are formed inland in association with heat troughs, while in CS the cyclones are typically maritime (often explosive), associated with a strong cold front. Hence the influence of the cyclone is indirect, contributing to reinforce the blocking ridge through hot and dry advection on the ridge's western flank. Additional insights are drawn for the record Adelaide case of March 2008 with fifteen consecutive days above 35°C breaking the previous record by 7 days. Sea surface temperatures suggest a significant air-sea interaction mechanism, with a broad increase in the meridional temperature gradient over the Indian Ocean amplifying the upstream Rossby waves that can trigger HW events. A robust cooling of the waters close to the Australian coast also contributes to the maintenance of the blocking highs

  17. A thermal energy storage process for large scale electric applications

    OpenAIRE

    Desrues, T; Ruer, J; Marty, P.; Fourmigué, JF

    2009-01-01

    A new type of thermal energy storage process for large scale electric applications is presented, based on a high temperature heat pump cycle which transforms electrical energy into thermal energy and stores it inside two large regenerators, followed by a thermal engine cycle which transforms the stored thermal energy back into electrical energy. The storage principle is described, and its thermodynamic cycle is analyzed, leading to the theoretical efficiency of the storage...

  18. Large scale cross-drive correlation of digital media

    OpenAIRE

    Bruaene, Joseph Van

    2016-01-01

    Approved for public release; distribution is unlimited. Traditional digital forensic practices have focused on individual hard disk analysis. As the digital universe continues to grow, and cyber crimes become more prevalent, the ability to make large scale cross-drive correlations among a large corpus of digital media becomes increasingly important. We propose a methodology that builds on bulk-analysis techniques to avoid operating system- and file-system-specific parsing. In addition, we a...

  19. GroFi: Large-scale fiber placement research facility

    OpenAIRE

    Krombholz, Christian; Kruse, Felix; Wiedemann, Martin

    2016-01-01

    GroFi is a large research facility operated by the German Aerospace Center’s Center for Lightweight-Production-Technology in Stade. A combination of different layup technologies, namely (dry) fiber placement and tape laying, allows the development and validation of new production technologies and processes for large-scale composite components. Due to the use of coordinated and simultaneously working layup units, a high flexibility of the research platform is achieved. This allows the investiga...

  20. Network of Experts for Large-Scale Image Categorization

    OpenAIRE

    Ahmed, Karim; Baig, Mohammad Haris; Torresani, Lorenzo

    2016-01-01

    We present a tree-structured network architecture for large-scale image classification. The trunk of the network contains convolutional layers optimized over all classes. At a given depth, the trunk splits into separate branches, each dedicated to discriminate a different subset of classes. Each branch acts as an expert classifying a set of categories that are difficult to tell apart, while the trunk provides common knowledge to all experts in the form of shared features. The training of our ...

  1. The Large-Scale Sugarcane Stripper with Automatic Feeding

    OpenAIRE

    Jiaxiang Lin; Wenjie Yan; Jiaping Lin

    2012-01-01

    This study introduces a large-scale sugarcane stripper with automatic feeding, which includes an automatic feeding module, a leaf-cleaning module, a collecting module and a control module. The machine is an important part of the segmental-type sugarcane harvester, used to address the most labor-intensive task: cleaning leaves. Collecting sugarcane in hilly areas and cleaning its leaves can greatly improve labor productivity and change the current mode of sugarcane harvest.

  2. Split Architecture for Large Scale Wide Area Networks

    OpenAIRE

    John, Wolfgang; Devlic, Alisa; Ding, Zhemin; Jocha, David; Kern, Andras; Kind, Mario; Köpsel, Andreas; Nordell, Viktor; Sharma, Sachin; Sköldström, Pontus; Staessens, Dimitri; Takacs, Attila; Topp, Steffen; Westphal, F. -Joachim; Woesner, Hagen

    2014-01-01

    This report defines a carrier-grade split architecture based on requirements identified during the SPARC project. It presents the SplitArchitecture proposal, the SPARC concept for Software Defined Networking (SDN) introduced for large-scale wide area networks such as access/aggregation networks, and evaluates technical issues against architectural trade-offs. First we present the control and management architecture of the proposed SplitArchitecture. Here, we discuss a recursive control archit...

  3. Multivariate Clustering of Large-Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Critchlow, T

    2003-06-13

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatio-temporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial dimensions is important since "similar" characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n^2) to O(n x g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building clusters, it is desirable to associate each cluster with its correct spatial region. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.
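
    The threshold-based use of cosine similarity described above can be sketched as a greedy single-pass clustering of field-variable vectors. This is a simplified illustration, not the paper's implementation; the user-defined threshold plays the role of f(u).

```python
import numpy as np

def cosine_cluster(X, threshold=0.95):
    """Greedy single-pass clustering by cosine similarity: each point joins
    the most similar existing cluster centroid above `threshold`, otherwise
    it seeds a new cluster. Returns one cluster label per row of X."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    U = X / np.where(norms == 0, 1.0, norms)   # unit vectors (dot = cosine)
    centroids, labels = [], []
    for u in U:
        best, best_sim = -1, threshold
        for k, c in enumerate(centroids):
            sim = float(u @ c) / float(np.linalg.norm(c))
            if sim >= best_sim:
                best, best_sim = k, sim
        if best < 0:
            centroids.append(u.copy())         # seed a new cluster
            labels.append(len(centroids) - 1)
        else:
            centroids[best] = centroids[best] + u  # running (unnormalized) mean
            labels.append(best)
    return labels

# two well-separated directions fall into two clusters
labels = cosine_cluster(np.array([[1.0, 0.0], [0.99, 0.1],
                                  [0.0, 1.0], [0.1, 0.99]]), threshold=0.9)
```

    Because only points passing the threshold are compared against existing centroids, the cost scales with the number of threshold-passing points rather than with all n^2 pairs, in the spirit of the O(n x g(f(u))) bound above.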

  4. Multivariate Clustering of Large-Scale Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Critchlow, T

    2003-03-04

    Simulations of complex scientific phenomena involve the execution of massively parallel computer programs. These simulation programs generate large-scale data sets over the spatiotemporal space. Modeling such massive data sets is an essential step in helping scientists discover new information from their computer simulations. In this paper, we present a simple but effective multivariate clustering algorithm for large-scale scientific simulation data sets. Our algorithm utilizes the cosine similarity measure to cluster the field variables in a data set. Field variables include all variables except the spatial (x, y, z) and temporal (time) variables. The exclusion of the spatial space is important since "similar" characteristics could be located (spatially) far from each other. To scale our multivariate clustering algorithm for large-scale data sets, we take advantage of the geometrical properties of the cosine similarity measure. This allows us to reduce the modeling time from O(n^2) to O(n x g(f(u))), where n is the number of data points, f(u) is a function of the user-defined clustering threshold, and g(f(u)) is the number of data points satisfying the threshold f(u). We show that on average g(f(u)) is much less than n. Finally, even though spatial variables do not play a role in building a cluster, it is desirable to associate each cluster with its correct spatial space. To achieve this, we present a linking algorithm for connecting each cluster to the appropriate nodes of the data set's topology tree (where the spatial information of the data set is stored). Our experimental evaluations on two large-scale simulation data sets illustrate the value of our multivariate clustering and linking algorithms.

  5. Large-Scale Post-Crisis Corporate Sector Restructuring

    OpenAIRE

    Mark R. Stone

    2000-01-01

    This paper summarizes the objectives, tasks, and modalities of large-scale, post-crisis corporate restructuring based on nine recent episodes, with a view to organizing the policy choices and drawing some general conclusions. These episodes suggest that government-led restructuring efforts should integrate corporate and bank restructuring in a holistic and transparent strategy based on clearly defined objectives and including sunset provisions.

  6. Learning Compact Visual Attributes for Large-Scale Image Classification

    OpenAIRE

    Su, Yu; Jurie, Frédéric

    2012-01-01

    Attribute-based image classification has received a lot of attention recently as an interesting tool to share knowledge across different categories or to produce compact signatures of images. However, when high classification performance is expected, state-of-the-art results are typically obtained by combining Fisher Vectors (FV) and Spatial Pyramid Matching (SPM), leading to image signatures with dimensionality up to 262,144 [1]. This is a hindrance to large-scale ...

  7. Punishment sustains large-scale cooperation in prestate warfare

    OpenAIRE

    Mathew, Sarah; Boyd, Robert

    2011-01-01

    Understanding cooperation and punishment in small-scale societies is crucial for explaining the origins of human cooperation. We studied warfare among the Turkana, a politically uncentralized, egalitarian, nomadic pastoral society in East Africa. Based on a representative sample of 88 recent raids, we show that the Turkana sustain costly cooperation in combat at a remarkably large scale, at least in part, through punishment of free-riders. Raiding parties comprised several hundred warriors an...

  8. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT--Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas--Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as ``reduce then sample'' and ``sample then reduce.'' In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  9. Large scale ocean models beyond the traditional approximation

    OpenAIRE

    Lucas, Carine; Mcwilliams, Jim; Rousseau, Antoine

    2016-01-01

    This work corresponds to classes given by A. Rousseau in February 2014 in Toulouse, in the framework of the CIMI labex. The objective is to describe and question the models that are traditionally used for large scale oceanography, whether in 2D or 3D. Starting from the fundamental equations (mass and momentum conservation), it is explained how, thanks to approximations for which we provide justifications, one can build simpler models that allow a realistic numerical implem...

  10. Large-scale Alfvén vortices

    Energy Technology Data Exchange (ETDEWEB)

    Onishchenko, O. G., E-mail: onish@ifz.ru [Institute of Physics of the Earth, 10 B. Gruzinskaya, 123242 Moscow, Russian Federation and Space Research Institute, 84/32 Profsouznaya str., 117997 Moscow (Russian Federation); Pokhotelov, O. A., E-mail: pokh@ifz.ru [Institute of Physics of the Earth, 10 B. Gruzinskaya, 123242 Moscow (Russian Federation); Horton, W., E-mail: wendell.horton@gmail.com [Institute for Fusion Studies and Applied Research Laboratory, University of Texas at Austin, Austin, Texas 78713 (United States); Scullion, E., E-mail: scullie@tcd.ie [School of Physics, Trinity College Dublin, Dublin 2 (Ireland); Fedun, V., E-mail: v.fedun@sheffield.ac.uk [Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield S13JD (United Kingdom)

    2015-12-15

    A new type of large-scale vortex structure of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius, characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, the fluid and magnetic field vorticity, and the longitudinal electric current in the plane orthogonal to the external magnetic field is discussed.

  11. Large-Scale Cortical Dynamics of Sleep Slow Waves

    OpenAIRE

    Botella-Soler, Vicente; Valderrama, Mario; Crépon, Benoît; Navarro, Vincent; Le Van Quyen, Michel

    2012-01-01

    Slow waves constitute the main signature of sleep in the electroencephalogram (EEG). They reflect alternating periods of neuronal hyperpolarization and depolarization in cortical networks. While recent findings have demonstrated their functional role in shaping and strengthening neuronal networks, a large-scale characterization of these two processes remains elusive in the human brain. In this study, by using simultaneous scalp EEG and intracranial recordings in 10 epileptic subjects, we exam...

  12. A Large-Scale Study of Online Shopping Behavior

    OpenAIRE

    Nalchigar, Soroosh; Weber, Ingmar

    2012-01-01

    The continuous growth of electronic commerce has stimulated great interest in studying online consumer behavior. Given the significant growth in online shopping, better understanding of customers allows better marketing strategies to be designed. While studies of online shopping attitudes are widespread in the literature, studies of differences in browsing habits in relation to online shopping are scarce. This research performs a large scale study of the relationship between Internet browsing hab...

  13. Unsupervised Deep Hashing for Large-scale Visual Search

    OpenAIRE

    Xia, Zhaoqiang; Feng, Xiaoyi; Peng, Jinye; Hadid, Abdenour

    2016-01-01

    Learning based hashing plays a pivotal role in large-scale visual search. However, most existing hashing algorithms tend to learn shallow models that do not seek representative binary codes. In this paper, we propose a novel hashing approach based on unsupervised deep learning to hierarchically transform features into hash codes. Within the heterogeneous deep hashing framework, the autoencoder layers with specific constraints are considered to model the nonlinear mapping between features and ...
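    As a simpler point of reference for what binary codes buy in large-scale search, the sketch below produces hash codes from random-hyperplane sign bits (an LSH-style baseline, not the paper's learned autoencoder hashing); all names and values are illustrative:

```python
import random

random.seed(0)

def make_hasher(dim, bits):
    """Random hyperplane projections: one sign bit per hyperplane.
    This is a classic unsupervised baseline, not the deep model
    proposed in the paper."""
    planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]
    def h(x):
        return tuple(int(sum(p * v for p, v in zip(plane, x)) >= 0)
                     for plane in planes)
    return h

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return sum(x != y for x, y in zip(a, b))

h = make_hasher(dim=4, bits=16)
q = h([0.9, 0.1, 0.0, 0.2])       # query feature
near = h([0.88, 0.12, 0.01, 0.19])  # nearly identical feature
far = h([-0.5, 0.9, -0.3, 0.1])     # dissimilar feature
print(hamming(q, near), hamming(q, far))
```

    Search then reduces to ranking database codes by Hamming distance to the query code, which is why compact, representative binary codes matter.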

  14. Policy Driven Development: Flexible Policy Insertion for Large Scale Systems

    OpenAIRE

    Demchak, Barry; Krüger, Ingolf

    2012-01-01

    The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection.

  15. Experimental simulation of microinteractions in large scale explosions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety

    1998-01-01

    This paper presents data and analysis of recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large scale explosions. Specifically, the fragmentation behavior of a high temperature molten steel drop under high pressure (beyond critical) conditions is investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea, and the ESPROSE.m prediction of fragmentation rate. (author)

  16. Measuring Large-Scale Social Networks with High Resolution

    DEFF Research Database (Denmark)

    Stopczynski, Arkadiusz; Sekara, Vedran; Sapiezynski, Piotr;

    2014-01-01

    This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years-the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions....... The paper is concluded with early results from data analysis, illustrating the importance of multi-channel high-resolution approach to data collection....

  17. Foundations of Large-Scale Multimedia Information Management and Retrieval

    CERN Document Server

    Chang, Edward Y

    2011-01-01

    "Foundations of Large-Scale Multimedia Information Management and Retrieval - Mathematics of Perception" covers knowledge representation and semantic analysis of multimedia data and scalability in signal extraction, data mining, and indexing. The book is divided into two parts: Part I - Knowledge Representation and Semantic Analysis focuses on the key components of mathematics of perception as it applies to data management and retrieval. These include feature selection/reduction, knowledge representation, semantic analysis, distance function formulation for measuring similarity, and

  18. Turbulent large-scale structure effects on wake meandering

    Science.gov (United States)

    Muller, Y.-A.; Masson, C.; Aubrun, S.

    2015-06-01

    This work studies the effects of large-scale turbulent structures on wake meandering using Large Eddy Simulations (LES) over an actuator disk. Other potential sources of wake meandering, such as the instability mechanisms associated with tip vortices, are not treated in this study. A crucial element of efficient, pragmatic and successful simulations of large-scale turbulent structures in the Atmospheric Boundary Layer (ABL) is the generation of the stochastic turbulent atmospheric flow. This is an essential capability, since one source of wake meandering is these large (larger than the turbine diameter) turbulent structures. The unsteady wind turbine wake in the ABL is simulated using a combination of LES and actuator disk approaches. In order to dedicate the large majority of the available computing power to the wake, the ABL ground region of the flow is not part of the computational domain. Instead, mixed Dirichlet/Neumann boundary conditions are applied at all the computational surfaces except at the outlet. Prescribed values for the Dirichlet contribution of these boundary conditions are provided by a stochastic turbulent wind generator. This makes it possible to simulate large-scale turbulent structures (larger than the computational domain), leading to an efficient simulation technique for wake meandering. Since the stochastic wind generator includes shear, turbulence production is included in the analysis without the necessity of resolving the flow near the ground. The classical Smagorinsky sub-grid model is used. The resulting numerical methodology has been implemented in OpenFOAM. Comparisons with experimental measurements in porous-disk wakes have been undertaken, and the agreement is good. While temporal resolution in experimental measurements is high, the spatial resolution is often too low. LES numerical results provide a more complete spatial description of the flow. They tend to demonstrate that inflow low-frequency content (or large-scale turbulent structures) is

  19. Combining p-values in large scale genomics experiments

    OpenAIRE

    Dmitri V Zaykin; Zhivotovsky, Lev A.; Czika, Wendy; Shao, Susan; Wolfinger, Russell D.

    2007-01-01

    In large-scale genomics experiments involving thousands of statistical tests, such as association scans and microarray expression experiments, a key question is: Which of the L tests represent true associations (TAs)? The traditional way to control false findings is via individual adjustments. In the presence of multiple TAs, p-value combination methods offer certain advantages. Both Fisher’s and Lancaster’s combination methods use an inverse gamma transformation. We identify the relation of ...
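    Fisher's combination method mentioned above can be computed without a statistics library, because the chi-square survival function with an even number of degrees of freedom (2L) has a closed form; a minimal sketch:

```python
import math

def fisher_combine(pvalues):
    """Fisher's method: X = -2 * sum(ln p_i) follows a chi-square
    distribution with 2L degrees of freedom under the joint null.
    For even df, P(chi2_{2L} > x) = exp(-x/2) * sum_{i<L} (x/2)^i / i!,
    so the combined p-value is exact with stdlib math only."""
    L = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(L))

# three tests, one of them individually significant
print(round(fisher_combine([0.01, 0.2, 0.5]), 4))  # → 0.0318
```

    A useful sanity check is that combining a single p-value returns it unchanged, e.g. `fisher_combine([0.5]) == 0.5`.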

  20. Large-Scale Weather Disturbances in Mars’ Southern Extratropics

    Science.gov (United States)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2015-11-01

    Between late autumn and early spring, Mars’ middle and high latitudes within its atmosphere support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation. Such wave-like disturbances act as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adapt Mars’ full topography compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes indicate that Mars’ transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre

  1. Design study on sodium-cooled large-scale reactor

    International Nuclear Information System (INIS)

    In Phase 1 of the 'Feasibility Study on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the potential to fulfill the design requirements of the F/S. In Phase 2 of the F/S, it is planned to proceed with a preliminary conceptual design of a sodium-cooled large-scale reactor based on the design of the advanced loop type reactor. Through the design study, it is intended to construct a plant concept that can demonstrate its attractiveness and competitiveness as a commercialized reactor. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2001, the first year of Phase 2. In the JFY2001 design study, a plant concept was constructed based on the design of the advanced loop type reactor, and fundamental specifications of the main systems and components were set. Furthermore, critical subjects related to safety, structural integrity, thermal hydraulics, operability, maintainability and economy were examined and evaluated. As a result of this study, a plant concept for the sodium-cooled large-scale reactor has been constructed that has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection of candidate concepts to be narrowed down at the end of Phase 2. (author)

  2. The Phoenix series large scale LNG pool fire experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate and therefore the physics and hazards of large LNG spills and fires.

  3. Large Scale Relationship between Aquatic Insect Traits and Climate

    OpenAIRE

    Bhowmik, Avit Kumar; Schäfer, Ralf B.

    2015-01-01

    Climate is the predominant environmental driver of freshwater assemblage pattern on large spatial scales, and traits of freshwater organisms have shown considerable potential to identify impacts of climate change. Although several studies suggest traits that may indicate vulnerability to climate change, the empirical relationship between freshwater assemblage trait composition and climate has been rarely examined on large scales. We compared the responses of the assumed climate-associated tra...

  4. A Cloud Computing Platform for Large-Scale Forensic Computing

    Science.gov (United States)

    Roussev, Vassil; Wang, Liqiang; Richard, Golden; Marziale, Lodovico

    The timely processing of massive digital forensic collections demands the use of large-scale distributed computing resources and the flexibility to customize the processing performed on the collections. This paper describes MPI MapReduce (MMR), an open implementation of the MapReduce processing model that outperforms traditional forensic computing techniques. MMR provides linear scaling for CPU-intensive processing and super-linear scaling for indexing-related workloads.
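    The MapReduce processing model that MMR implements can be illustrated in-process (a single-machine sketch with hypothetical names; MMR itself distributes the map and reduce phases over MPI ranks):

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal single-process MapReduce: map each record to (key, value)
    pairs, group values by key (the 'shuffle'), then reduce each group."""
    groups = defaultdict(list)
    for rec in records:                      # map + shuffle
        for key, value in mapper(rec):
            groups[key].append(value)
    return {k: reducer(k, vs) for k, vs in groups.items()}  # reduce

# toy indexing-style workload: term -> occurrence count across documents
docs = ["the cat sat", "the dog sat"]
counts = map_reduce(
    docs,
    mapper=lambda doc: [(w, 1) for w in doc.split()],
    reducer=lambda key, values: sum(values),
)
print(counts["the"], counts["sat"], counts["cat"])  # → 2 2 1
```

    Indexing-related workloads fit this shape naturally, which is consistent with the super-linear scaling the paper reports for them.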

  5. Petascale computations for Large-scale Atomic and Molecular collisions

    OpenAIRE

    McLaughlin, Brendan M.; Ballance, Connor P.

    2014-01-01

    Petaflop architectures are currently being utilized efficiently to perform large scale computations in Atomic, Molecular and Optical Collisions. We solve the Schroedinger or Dirac equation for the appropriate collision problem using the R-matrix or R-matrix with pseudo-states approach. We briefly outline the parallel methodology used and implemented for the current suite of Breit-Pauli and DARC codes. Various examples are shown of our theoretical results compared with those obtained from Sync...

  6. Fast transient stability simulation of large scale power systems

    OpenAIRE

    Kumar, Sreerama R; Ramanujam, R.; Khincha, HP; Jenkins, L

    1992-01-01

    This paper describes a computationally efficient algorithm for transient stability simulation of large scale power system dynamics. The simultaneous implicit approach proposed by H.V. Dommel and N. Sato [1] has become the state-of-the-art technique for production-grade transient stability simulation programs. This paper proposes certain modifications to the Dommel-Sato method with which significant improvement in computational efficiency could be achieved. Preliminary investigations on a sta...

  7. Large Scale Synthesis of Carbon Nanofibres on Sodium Chloride Support

    OpenAIRE

    Ravindra Rajarao; Badekai Ramachandra Bhat

    2012-01-01

    Large scale synthesis of carbon nanofibres (CNFs) on a sodium chloride support has been achieved. CNFs have been synthesized using metal oxalates (Ni, Co and Fe) as catalyst precursors at 680 °C by the chemical vapour deposition method. Upon pyrolysis, these catalyst precursors yield catalyst nanoparticles directly. Sodium chloride was used as the catalyst support; it was chosen because of its non-toxic and water-soluble nature. Problems, such as the detrimental effect of CNFs, the detrimental ef...

  8. Topic modeling for large-scale text data

    Institute of Scientific and Technical Information of China (English)

    Xi-ming LI; Ji-hong OUYANG; You LU

    2015-01-01

    This paper develops a novel online algorithm, namely moving average stochastic variational inference (MASVI), which applies the results obtained by previous iterations to smooth out noisy natural gradients. We analyze the convergence property of the proposed algorithm and conduct a set of experiments on two large-scale collections that contain millions of documents. Experimental results indicate that, in contrast to the algorithms named ‘stochastic variational inference’ and ‘SGRLD’, our algorithm achieves a faster convergence rate and better performance.
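    The core smoothing idea, averaging the most recent noisy gradient estimates before applying an update, can be sketched on a scalar toy stream (illustrative only; MASVI's actual update operates on natural gradients of the variational objective):

```python
from collections import deque

def smoothed_gradients(noisy_grads, window=3):
    """Moving-average smoothing of a noisy gradient stream: each
    reported estimate is the mean of up to `window` recent values,
    which damps the noise of individual stochastic estimates."""
    buf = deque(maxlen=window)
    out = []
    for g in noisy_grads:
        buf.append(g)
        out.append(sum(buf) / len(buf))
    return out

# noisy estimates scattered around a true gradient of 1.0
print(smoothed_gradients([1.3, 0.8, 1.1, 0.7, 1.2], window=3))
```

    The smoothed sequence stays closer to the underlying value than the raw stream, which is the mechanism behind the faster convergence claimed above.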

  9. HECTR analyses of large-scale premixed hydrogen combustion experiments

    International Nuclear Information System (INIS)

    The HECTR (Hydrogen Event: Containment Transient Response) computer code is a reactor accident analysis tool designed to calculate the transport and combustion of hydrogen and the transient response of the containment. As part of the assessment effort, HECTR has been used to analyze the Nevada Test Site (NTS) large-scale premixed hydrogen combustion experiments. The results of these analyses and a critical review of the combustion model in HECTR are presented in this paper.

  10. Large scale optimization algorithms : applications to solution of inverse problems

    OpenAIRE

    Repetti, Audrey

    2015-01-01

    An efficient approach for solving an inverse problem is to define the recovered signal/image as a minimizer of a penalized criterion which is often split in a sum of simpler functions composed with linear operators. In the situations of practical interest, these functions may be neither convex nor smooth. In addition, large scale optimization problems often have to be faced. This thesis is devoted to the design of new methods to solve such difficult minimization problems, while paying attenti...

  11. Large-scale control of mosquito vectors of disease

    International Nuclear Information System (INIS)

    By far the most important vector borne disease is malaria, transmitted by Anopheles mosquitoes and causing an estimated 300-500 million clinical cases per year and 1.4-2.6 million deaths, mostly in tropical Africa (WHO 1995). The second most important mosquito borne disease is lymphatic filariasis, but there are now such effective, convenient and cheap drugs for its treatment that vector control will now have at most a supplementary role (Maxwell et al. 1999a). The only other mosquito borne disease likely to justify large-scale vector control is dengue, which is carried in urban areas of Southeast Asia and Latin America by Aedes aegypti L., which was also the urban vector of yellow fever in Latin America. This mosquito was eradicated from most countries of Latin America between the 1930s and 60s but, unfortunately, in recent years it has been allowed to re-infest and cause serious dengue epidemics, except in Cuba where it has been held close to eradication (Reiter and Gubler 1997). In the 1930s and 40s, invasions by An. gambiae Giles s.l., the main tropical African malaria vector, were eradicated from Brazil (Soper and Wilson 1943) and Egypt (Shousha 1947). It is surprising that greatly increased air traffic has not led to more such invasions of apparently climatically suitable areas, e.g., of Polynesia, which has no anophelines and therefore no malaria. The above mentioned temporary or permanent eradications were achieved before the advent of DDT, using larvicidal methods (of a kind which would now be considered environmentally unacceptable) carried out by rigorously disciplined teams. MALARIA: Between the end of the Second World War and the 1960s, the availability of DDT for spraying of houses allowed eradication of malaria from the Soviet Union, southern Europe, the USA, northern Venezuela and Guyana, Taiwan and the Caribbean Islands, apart from Hispaniola. Its range and intensity were also greatly reduced in China, India and South Africa and, at least temporarily, in

  12. Impact of Large-scale Geological Architectures On Recharge

    Science.gov (United States)

    Troldborg, L.; Refsgaard, J. C.; Engesgaard, P.; Jensen, K. H.

    Geological and hydrogeological data constitute the basis for assessment of groundwater flow patterns and recharge zones. The accessibility and applicability of hard geological data is often a major obstacle in deriving plausible conceptual models. Nevertheless, focus is often on parameter uncertainty caused by the effect of geological heterogeneity due to lack of hard geological data, thus neglecting the possibility of alternative conceptualizations of the large-scale geological architecture. For a catchment in the eastern part of Denmark we have constructed different geological models based on different conceptualizations of the major geological trends and facies architecture. The geological models are equally plausible in a conceptual sense and they are all calibrated to well head and river flow measurements. Comparison of differences in recharge zones and subsequently well protection zones emphasizes the importance of assessing large-scale geological architecture in hydrological modeling on a regional scale in a non-deterministic way. Geostatistical modeling carried out in a transition probability framework shows the possibility of assessing multiple realizations of large-scale geological architecture from a combination of soft and hard geological information.
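    The transition-probability idea behind generating multiple equally plausible architectures can be illustrated with a 1-D Markov-chain facies simulation (the facies names and probabilities below are invented for illustration and are not the study's calibrated model):

```python
import random

random.seed(1)

# Hypothetical facies and transition probabilities P(next | current).
FACIES = ["sand", "clay", "gravel"]
TRANS = {
    "sand":   {"sand": 0.6, "clay": 0.3, "gravel": 0.1},
    "clay":   {"sand": 0.3, "clay": 0.6, "gravel": 0.1},
    "gravel": {"sand": 0.4, "clay": 0.4, "gravel": 0.2},
}

def realization(n, start="sand"):
    """Draw one realization of a facies column by sampling the
    transition probabilities cell by cell."""
    seq, cur = [start], start
    for _ in range(n - 1):
        r, acc = random.random(), 0.0
        for f in FACIES:
            acc += TRANS[cur][f]
            if r < acc:
                cur = f
                break
        seq.append(cur)
    return seq

# several alternative, equally plausible architectures
for _ in range(3):
    print(realization(8))
```

    Running the recharge model over many such realizations, rather than one deterministic architecture, is what makes the assessment non-deterministic.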

  13. A visualization framework for large-scale virtual astronomy

    Science.gov (United States)

    Fu, Chi-Wing

    Motivated by advances in modern positional astronomy, this research attempts to digitally model the entire Universe through computer graphics technology. Our first challenge is space itself. The gigantic size of the Universe makes it impossible to put everything into a typical graphics system at its own scale. The graphics rendering process can easily fail because of limited computational precision. The second challenge is that the enormous amount of data could slow down the graphics; we need clever techniques to speed up the rendering. Third, since the Universe is dominated by empty space, objects are widely separated; this makes navigation difficult. We attempt to tackle these problems through various techniques designed to extend and optimize the conventional graphics framework, including the following: power homogeneous coordinates for large-scale spatial representations, generalized large-scale spatial transformations, and rendering acceleration via environment caching and object disappearance criteria. Moreover, we implemented an assortment of techniques for modeling and rendering a variety of astronomical bodies, ranging from the Earth up to faraway galaxies, and attempted to visualize cosmological time; a method we call the Lightcone representation was introduced to visualize the whole space-time of the Universe at a single glance. In addition, several navigation models were developed to handle the large-scale navigation problem. Our final results include a collection of visualization tools, two educational animations appropriate for planetarium audiences, and state-of-the-art-advancing rendering techniques that can be transferred to practice in digital planetarium systems.

  14. Large-scale flow experiments for managing river systems

    Science.gov (United States)

    Konrad, Christopher P.; Olden, Julian D.; Lytle, David A.; Melis, Theodore S.; Schmidt, John C.; Bray, Erin N.; Freeman, Mary C.; Gido, Keith B.; Hemphill, Nina P.; Kennard, Mark J.; McMullen, Laura E.; Mims, Meryl C.; Pyron, Mark; Robinson, Christopher T.; Williams, John G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems.

  15. Systematic Literature Review of Agile Scalability for Large Scale Projects

    Directory of Open Access Journals (Sweden)

    Hina saeeda

    2015-09-01

    Among recent methods, “agile” has emerged as the leading approach in the software industry for the development of software. In its different forms, agile is applied to handle issues such as low cost, tight time-to-market schedules, continuously changing requirements, communication and coordination, team size, and distributed environments. Agile has proved to be successful in small and medium size projects; however, it has several limitations when applied to large size projects. The purpose of this study is to examine agile techniques in detail, finding and highlighting their restrictions for large size projects with the help of a systematic literature review. The systematic literature review is going to find answers to the following research questions: 1) How can agile approaches be made scalable and adoptable for large projects? 2) What existing methods, approaches, frameworks and practices support the agile process in large scale projects? 3) What are the limitations of existing agile approaches, methods, frameworks and practices with reference to large scale projects? This study will identify the current research problems of agile scalability for large size projects by giving a detailed literature review of the identified problems and of existing work providing solutions to them, and will find out the limitations of the existing work in covering the identified problems in agile scalability. All the results gathered will be summarized statistically; based on these findings, remedial work will be planned in future for handling the identified limitations of agile approaches for large scale projects.

  16. A Model of Plasma Heating by Large-Scale Flow

    CERN Document Server

    Pongkitiwanichakul, P; Boldyrev, S; Mason, J; Perez, J C

    2015-01-01

    In this work we study the process of energy dissipation triggered by a slow large scale motion of a magnetized conducting fluid. Our consideration is motivated by the problem of heating the solar corona, which is believed to be governed by fast reconnection events set off by the slow motion of magnetic field lines anchored in the photospheric plasma. To elucidate the physics governing the disruption of the imposed laminar motion and the energy transfer to small scales, we propose a simplified model where the large-scale motion of magnetic field lines is prescribed not at the footpoints but rather imposed volumetrically. As a result, the problem can be treated numerically with an efficient, highly-accurate spectral method, allowing us to use a resolution and statistical ensemble exceeding those of the previous work. We find that, even though the large-scale deformations are slow, they eventually lead to reconnection events that drive a turbulent state at smaller scales. The small-scale turbulence displays many...

  17. Topology of large scale structure as test of modified gravity

    CERN Document Server

    Wang, Xin; Park, Changbom

    2010-01-01

    The genus of the iso-density contours is a robust measure of the topology of large-scale structure, and is relatively insensitive to galaxy biasing and redshift-space distortions. We show that the growth of density fluctuations is scale-dependent even in the linear regime in some modified gravity theories, which opens the possibility of testing these theories observationally. We propose to use the genus of the iso-density contours, an intrinsic measure of the topology of large-scale structure, as a statistic in such tests. In Einstein's general theory of relativity, density fluctuations grow at the same rate on all scales in the linear regime, and the topology of large-scale structure is conserved in time in comoving space because structures grow homologously. In this theory we expect the genus-smoothing-scale relation to be time-independent. However, in modified gravity models where structures grow at different rates on different scales, the genus-smoothing-scale relation should change in ti...

  18. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances.

    Directory of Open Access Journals (Sweden)

    V Thomas Parker

    Full Text Available Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by asking whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and require fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a depth sufficient to survive the killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the burial of cached seed from a dispersal process into a fire-adaptive trait of the plant, and provides the context for stimulating subsequent life-history evolution in the plant host.

  19. Large scale structure around a z=2.1 cluster

    CERN Document Server

    Hung, Chao-Ling; Chiang, Yi-Kuan; Capak, Peter; Cowley, Michael J; Darvish, Behnam; Kacprzak, Glenn G; Kovac, K; Lilly, Simon J; Nanayakkara, Themiya; Spitler, Lee R; Tran, Kim-Vy H; Yuan, Tiantian

    2016-01-01

    The most prodigious starburst galaxies are absent in massive galaxy clusters today, but their connection with large-scale environments is less clear at $z\gtrsim2$. We present a search for large-scale structure around a galaxy cluster core at $z=2.095$ using a set of spectroscopically confirmed galaxies. We find that both color-selected star-forming galaxies (SFGs) and dusty star-forming galaxies (DSFGs) show significant overdensities around the $z=2.095$ cluster. A total of 8 DSFGs (including 3 X-ray luminous active galactic nuclei, AGNs) and 34 SFGs are found within a 10 arcmin radius (corresponding to $\sim$15 cMpc at $z\sim2.1$) of the cluster center and within a redshift range of $\Delta z=0.02$, which leads to galaxy overdensities of $\delta_{\rm DSFG}\sim12.3$ and $\delta_{\rm SFG}\sim2.8$. The cluster core and the extended DSFG- and SFG-rich structure together demonstrate an active cluster formation phase, in which the cluster is accreting a significant amount of material from large scale structure whi...

  20. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products against ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria at various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
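The multi-scale verification idea above can be sketched with synthetic data: aggregate a paired daily gauge/product series to coarser temporal blocks and score each aggregation with RMSE and correlation. The data and scoring choices here are our illustration, not the study's actual criteria or datasets.

```python
import numpy as np

def verify(product, gauge, block):
    """Aggregate paired daily series into `block`-day totals, then score them."""
    n = (len(product) // block) * block
    p = product[:n].reshape(-1, block).sum(axis=1)
    g = gauge[:n].reshape(-1, block).sum(axis=1)
    rmse = np.sqrt(np.mean((p - g) ** 2))
    corr = np.corrcoef(p, g)[0, 1]
    return rmse, corr

rng = np.random.default_rng(0)
gauge = rng.gamma(shape=0.5, scale=4.0, size=3650)       # synthetic daily gauge record
product = gauge + rng.normal(0.0, 2.0, size=gauge.size)  # gridded product = gauge + noise

daily = verify(product, gauge, 1)     # (RMSE, correlation) at daily scale
monthly = verify(product, gauge, 30)  # same scores for ~monthly totals
```

Scoring the same product at several aggregation scales, as done here, is what lets one dataset look best for climate-scale totals while another wins at synoptic scales.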

  1. Large-scale data mining pilot project in human genome

    Energy Technology Data Exchange (ETDEWEB)

    Musick, R.; Fidelis, R.; Slezak, T.

    1997-05-01

    This whitepaper briefly describes a new, aggressive effort in large-scale data mining at Livermore National Labs. The implications of `large-scale` will be clarified in a later section. In the short term, this effort will focus on several mission-critical questions of the Genome project. We will adapt current data mining techniques to the Genome domain, quantify the accuracy of inference results, and lay the groundwork for a more extensive effort in large-scale data mining. A major aspect of the approach is that it will build on a fully-staffed data warehousing effort in the human Genome area. The long term goal is a strong applications-oriented research program in large-scale data mining. The tools and skill set gained will be directly applicable to a wide spectrum of tasks involving large spatial and multidimensional data. This includes applications in ensuring non-proliferation, stockpile stewardship, enabling Global Ecology (Materials Database, Industrial Ecology), advancing the Biosciences (Human Genome Project), and supporting data for others (Battlefield Management, Health Care).

  2. Large Scale Magnetic Fields: Density Power Spectrum in Redshift Space

    Indian Academy of Sciences (India)

    Rajesh Gopal; Shiv K. Sethi

    2003-09-01

    We compute the density redshift-space power spectrum in the presence of tangled magnetic fields and compare it with existing observations. Our analysis shows that if these magnetic fields originated in the early universe then it is possible to construct models for which the shape of the power spectrum agrees with the large scale slope of the observed power spectrum. However, requiring compatibility with observed CMBR anisotropies, the normalization of the power spectrum is too low for magnetic fields to have significant impact on the large scale structure at present. Magnetic fields of a more recent origin generically give a density power spectrum ∝ k⁴, which doesn’t agree with the shape of the observed power spectrum at any scale. Magnetic fields generate curl modes of the velocity field which increase both the quadrupole and hexadecapole of the redshift space power spectrum. For curl modes, the hexadecapole dominates over the quadrupole. So the presence of curl modes could be indicated by an anomalously large hexadecapole, which has not yet been computed from observation. It appears difficult to construct models in which tangled magnetic fields could have played a major role in shaping the large scale structure in the present epoch. However if they did, one of the best ways to infer their presence would be from the redshift space effects in the density power spectrum.
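For reference, the quadrupole and hexadecapole mentioned above are the standard multipole moments of the redshift-space power spectrum. In the linear, curl-free (Kaiser) limit they take the familiar form below, with β the linear distortion parameter; this is textbook background quoted only to define the quantities, not a result of the paper:

```latex
% Kaiser (linear, irrotational) multipoles of the redshift-space power spectrum.
P_s(k,\mu) = \left(1 + \beta\mu^2\right)^2 P(k), \qquad
P_0 = \left(1 + \tfrac{2\beta}{3} + \tfrac{\beta^2}{5}\right) P(k), \quad
P_2 = \left(\tfrac{4\beta}{3} + \tfrac{4\beta^2}{7}\right) P(k), \quad
P_4 = \tfrac{8\beta^2}{35}\, P(k).
```

Curl modes of the velocity field sourced by tangled magnetic fields add power to $P_2$ and $P_4$ beyond these curl-free values, which is why the abstract flags an anomalously large hexadecapole as a possible signature.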

  3. Critical thinking, politics on a large scale and media democracy

    Directory of Open Access Journals (Sweden)

    José Antonio IBÁÑEZ-MARTÍN

    2015-06-01

    Full Text Available A first approximation to current social reality offers us numerous reasons for worry. The spectacle of violence and immorality can easily frighten us. More worrying still is to verify that the horizon of conviviality, peace and wellbeing that Europe had been developing since the Treaty of Rome of 1957 has been seriously compromised by the economic crisis. Today we face an assault on democratic politics, which is branded by the media democracy as an exhausted system that must be changed into a new and great politics, a politics on a large scale. The article analyses the concept of a politics on a large scale, attending primarily to Nietzsche, and noting its union with the great philosophy and the great education. The study of Nietzsche's texts leads us to the conclusion that in them we often find an interesting analysis of the problems but a misguided proposal for solutions. We cannot pretend to suggest solutions to all the problems, but we outline various proposals for changes in political activity that can reasonably be defended against the media democracy. In conclusion, we point out that a politics on a large scale requires statesmen, able to suggest modes of life in common that can structure a long-term coexistence.

  4. Star formation associated with a large-scale infrared bubble

    CERN Document Server

    Xu, Jin-Long

    2014-01-01

    Using data from the Galactic Ring Survey (GRS) and the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE), we performed a study of a large-scale infrared bubble with a size of about 16 pc at a distance of 2.0 kpc. We present 12CO J=1-0, 13CO J=1-0 and C18O J=1-0 observations of the HII region G53.54-0.01 (Sh2-82), obtained at the Purple Mountain Observatory (PMO) 13.7 m radio telescope, to investigate the detailed distribution of the associated molecular material. The large-scale infrared bubble shows a half-shell morphology at 8 um. The H II regions G53.54-0.01, G53.64+0.24, and G54.09-0.06 are situated on the bubble. Comparing the radio recombination line velocities and associated 13CO J=1-0 components of the three H II regions, we found that the 8 um emission associated with H II region G53.54-0.01 belongs to the foreground, and only overlaps with the large-scale infrared bubble along the line of sight. Three extended green objects (EGOs, candidate massive young stellar objects), ...

  5. Line segment extraction for large scale unorganized point clouds

    Science.gov (United States)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
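The geometric core of the plane-intersection step described above is simple: the direction of the segment where two fitted planes meet is the cross product of their normals, and a point on the line satisfies both plane equations. A minimal sketch of that step (our own construction; the paper's LSHP fitting is considerably richer):

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Intersection line of planes n1.x = d1 and n2.x = d2.

    Returns (point_on_line, unit_direction); raises if the planes are parallel."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)          # line direction lies in both planes
    norm = np.linalg.norm(direction)
    if norm < 1e-12:
        raise ValueError("planes are parallel or coincident")
    # Minimum-norm point satisfying both plane equations.
    A = np.vstack([n1, n2])
    point, *_ = np.linalg.lstsq(A, np.array([d1, d2], float), rcond=None)
    return point, direction / norm

# Example: the plane z = 0 and the plane y = 0 meet along the x-axis.
p, d = plane_intersection([0, 0, 1], 0.0, [0, 1, 0], 0.0)
```

In a real pipeline the normals and offsets would come from planes fitted (e.g. by RANSAC or region growing) to the raw scan points, and nearby points would then be gathered into the 3D line-support region.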

  6. Design and fabrication of a large-scale oedometer

    Institute of Scientific and Technical Information of China (English)

    Maryam Mokhtari; Nader Shariatmadari; Ali Akbar Heshmati R; Hossein Salehzadeh

    2015-01-01

    The most common apparatus used to investigate the load−deformation parameters of homogeneous fine-grained soils is a Casagrande-type oedometer. A typical Casagrande oedometer cell has an internal diameter of 76 mm and a height of 19 mm. However, the dimensions of this kind of apparatus do not meet the requirements of some civil engineering applications like studying load−deformation characteristics of specimens with large-diameter particles such as granular materials or municipal solid waste materials. Therefore, it is decided to design and develop a large-scale oedometer with an internal diameter of 490 mm. The new apparatus provides the possibility to evaluate the load−deformation characteristics of soil specimens with different diameter to height ratios. The designed apparatus is able to measure the coefficient of lateral earth pressure at rest. The details and capabilities of the developed oedometer are provided and discussed. To study the performance and efficiency, a number of consolidation tests were performed on Firoozkoh No. 161 sand using the newly developed large scale oedometer made and also the 50 mm diameter Casagrande oedometer. Benchmark test results show that measured consolidation parameters by large scale oedometer are comparable to values measured by Casagrande type oedometer.

  7. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight to large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathlines data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathlines segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
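The per-pixel linked list described above is the classic "A-buffer" layout: a flat node array plus a per-pixel head index, so any pixel's segments can be walked, filtered, and depth-sorted at display time without touching the original flow data. A minimal CPU sketch of that structure (names and toy sizes are ours; the thesis builds this on the GPU):

```python
# Per-pixel linked list: `heads[y][x]` holds the index of the most recently
# inserted node for that pixel, and each node stores its predecessor's index.
W, H = 4, 3
heads = [[-1] * W for _ in range(H)]   # -1 marks an empty list
nodes = []                             # entries: (pathline_id, depth, next_index)

def insert(x, y, pathline_id, depth):
    """Append a projected pathline segment to pixel (x, y)'s list."""
    nodes.append((pathline_id, depth, heads[y][x]))
    heads[y][x] = len(nodes) - 1

def pixel_segments(x, y, keep=lambda pid: True):
    """Walk pixel (x, y)'s list, filter by pathline id, sort front-to-back."""
    out, i = [], heads[y][x]
    while i != -1:
        pid, depth, nxt = nodes[i]
        if keep(pid):
            out.append((pid, depth))
        i = nxt
    return sorted(out, key=lambda seg: seg[1])

insert(1, 1, pathline_id=7, depth=0.9)
insert(1, 1, pathline_id=3, depth=0.2)
insert(0, 2, pathline_id=7, depth=0.5)
```

Because filtering and color-coding happen while walking the lists, visual changes never require re-reading the original large-scale pathline dataset, which is the point of the EI approach.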

  8. Searching for Large Scale Structure in Deep Radio Surveys

    CERN Document Server

    Baleisis, A; Loan, A J; Wall, J V; Baleisis, Audra; Lahav, Ofer; Loan, Andrew J.; Wall, Jasper V.

    1997-01-01

    (Abridged Abstract) We calculate the expected amplitude of the dipole and higher spherical harmonics in the angular distribution of radio galaxies. The median redshift of radio sources in existing catalogues is z=1, which allows us to study large scale structure on scales between those accessible to present optical and infrared surveys, and that of the Cosmic Microwave Background (CMB). The dipole is due to 2 effects which turn out to be of comparable magnitude: (i) our motion with respect to the CMB, and (ii) large scale structure, parameterised here by a family of Cold Dark Matter power-spectra. We make specific predictions for the Green Bank (87GB) and Parkes-MIT-NRAO (PMN) catalogues. For these relatively sparse catalogues both the motion and large scale structure dipole effects are expected to be smaller than the Poisson shot-noise. However, we detect dipole and higher harmonics in the combined 87GB-PMN catalogue which are far larger than expected. We attribute this to a 2 % flux mismatch between the two...
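The Poisson shot-noise floor mentioned above can be checked directly: for N sources scattered isotropically, the vector sum of unit position vectors has expected squared length N, so a dipole estimator of the form 3|Σn̂|/N has a noise amplitude of about 3/√N. A Monte Carlo sketch (our construction with an arbitrary N, not the catalogues' actual source counts):

```python
import numpy as np

rng = np.random.default_rng(1)

def dipole_amplitude(n_src):
    """Dipole estimator 3|sum of unit vectors|/N for an isotropic mock catalogue."""
    v = rng.normal(size=(n_src, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform points on the sphere
    return 3.0 * np.linalg.norm(v.sum(axis=0)) / n_src

N = 40000
trials = [dipole_amplitude(N) for _ in range(200)]
rms = np.sqrt(np.mean(np.square(trials)))
shot_noise = 3.0 / np.sqrt(N)   # analytic expectation for the RMS amplitude
```

A measured dipole is only meaningful once it clearly exceeds this floor, which is why the excess harmonics in the combined 87GB-PMN catalogue point to a systematic (the flux mismatch) rather than to structure.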

  9. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
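The multi-criteria overlay at the heart of such a siting tool can be sketched simply: rescale each criterion raster to [0, 1], combine them with user-defined weights, mask out exclusions, and rank the remaining cells. The layers, weights, and grid below are hypothetical illustrations, not the NREL tool's actual data or algorithm:

```python
import numpy as np

# Hypothetical criterion rasters on a common grid (higher score = better site).
rng = np.random.default_rng(2)
shape = (50, 50)
solar = rng.uniform(4.0, 7.5, shape)      # resource, kWh/m^2/day
slope = rng.uniform(0.0, 30.0, shape)     # terrain slope, degrees (lower is better)
dist_tx = rng.uniform(0.0, 80.0, shape)   # km to transmission (lower is better)
protected = rng.random(shape) < 0.1       # exclusion mask (e.g. protected land)

def rescale(x, invert=False):
    """Min-max normalize a raster to [0, 1]; invert for lower-is-better criteria."""
    x = (x - x.min()) / (x.max() - x.min())
    return 1.0 - x if invert else x

weights = {"solar": 0.5, "slope": 0.2, "tx": 0.3}   # user-defined priorities
score = (weights["solar"] * rescale(solar)
         + weights["slope"] * rescale(slope, invert=True)
         + weights["tx"] * rescale(dist_tx, invert=True))
score[protected] = -np.inf                 # hard exclusions are never selected
best = np.unravel_index(np.argmax(score), shape)
```

Letting stakeholders edit `weights` and re-rank interactively is what makes such a tool user-driven and transparent, in contrast to fixed published siting guidelines.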

  10. BILGO: Bilateral greedy optimization for large scale semidefinite programming

    KAUST Repository

    Hao, Zhifeng

    2013-10-03

    Many machine learning tasks (e.g. metric and manifold learning problems) can be formulated as convex semidefinite programs. To enable the application of these tasks on a large-scale, scalability and computational efficiency are considered as desirable properties for a practical semidefinite programming algorithm. In this paper, we theoretically analyze a new bilateral greedy optimization (denoted BILGO) strategy in solving general semidefinite programs on large-scale datasets. As compared to existing methods, BILGO employs a bilateral search strategy during each optimization iteration. In such an iteration, the current semidefinite matrix solution is updated as a bilateral linear combination of the previous solution and a suitable rank-1 matrix, which can be efficiently computed from the leading eigenvector of the descent direction at this iteration. By optimizing for the coefficients of the bilateral combination, BILGO reduces the cost function in every iteration until the KKT conditions are fully satisfied, thus, it tends to converge to a global optimum. In fact, we prove that BILGO converges to the global optimal solution at a rate of O(1/k), where k is the iteration counter. The algorithm thus successfully combines the efficiency of conventional rank-1 update algorithms and the effectiveness of gradient descent. Moreover, BILGO can be easily extended to handle low rank constraints. To validate the effectiveness and efficiency of BILGO, we apply it to two important machine learning tasks, namely Mahalanobis metric learning and maximum variance unfolding. Extensive experimental results clearly demonstrate that BILGO can solve large-scale semidefinite programs efficiently.
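The bilateral update described above mixes the current iterate with a rank-1 matrix built from the leading eigenvector of the descent direction. The toy below illustrates that idea on a trivially small PSD-matrix problem; where BILGO derives the two mixing coefficients in closed form, we use a coarse grid search for brevity, so this is a simplification of the scheme, not the paper's algorithm:

```python
import numpy as np

# Toy objective: f(X) = 0.5*||X - M||_F^2 over PSD X, whose optimum is X = M.
rng = np.random.default_rng(3)
A = rng.normal(size=(5, 5))
M = A @ A.T                                  # PSD target

def f(X):
    return 0.5 * np.sum((X - M) ** 2)

X = np.zeros((5, 5))
coeffs = np.linspace(0.0, 2.0, 41)           # candidate mixing coefficients
for _ in range(200):
    G = M - X                                # negative gradient of f at X
    w, V = np.linalg.eigh(G)
    v = V[:, np.argmax(w)]                   # leading eigenvector of descent direction
    R = np.outer(v, v)                       # rank-1 candidate
    # Bilateral step X' = a*X + b*v v^T; a, b >= 0 keeps X' PSD by construction.
    a, b = min(((f(ai * X + bi * R), ai, bi)
                for ai in coeffs for bi in coeffs))[1:]
    X = a * X + b * R
```

The key property preserved from the paper is that every iterate is a nonnegative combination of PSD matrices, so feasibility never has to be enforced by projection.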

  11. Robust regression for large-scale neuroimaging studies.

    Science.gov (United States)

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies.
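The benefit of robust regression for cohorts with artifacts can be illustrated with a standard Huber M-estimator fitted by iteratively reweighted least squares. This is a generic sketch of the technique on synthetic data, not the paper's exact estimator or pipeline:

```python
import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50):
    """Huber M-estimation via iteratively reweighted least squares (IRLS):
    residuals beyond `delta` robust-scale units are downweighted."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]                    # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12   # MAD scale
        u = np.abs(r) / s
        w = np.where(u <= delta, 1.0, delta / np.maximum(u, 1e-12))  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

# Synthetic brain-behavior-style fit: 5% gross outliers barely move the robust fit.
rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 200)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.3, x.size)   # true intercept 1, slope 2
y[::20] += 25.0                                    # every 20th subject is corrupted
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_rob = huber_irls(X, y)
```

On this example the ordinary least-squares intercept is pulled upward by the corrupted subjects while the robust fit stays near the true coefficients, mirroring the false-positive control the paper reports at cohort scale.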

  12. Foundational perspectives on causality in large-scale brain networks.

    Science.gov (United States)

    Mannino, Michael; Bressler, Steven L

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  14. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
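The attraction of Broyden's method noted above is that the Jacobian is never evaluated after an initial seed: rank-1 secant updates maintain the approximation from function values alone. A small dense sketch on a toy system (our example, not one of the Sandia applications; limited-memory variants replace the explicit matrix at large scale):

```python
import numpy as np

def fd_jacobian(F, x, h=1e-7):
    """One finite-difference Jacobian, used only to seed the approximation."""
    Fx = F(x)
    J = np.empty((Fx.size, x.size))
    for j in range(x.size):
        e = np.zeros(x.size)
        e[j] = h
        J[:, j] = (F(x + e) - Fx) / h
    return J

def broyden(F, x0, n_iter=100, tol=1e-12):
    """Broyden's 'good' method: quasi-Newton steps with rank-1 secant updates."""
    x = np.asarray(x0, float)
    B = fd_jacobian(F, x)                       # initial Jacobian approximation
    Fx = F(x)
    for _ in range(n_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)             # quasi-Newton step
        x = x + s
        Fx_new = F(x)
        y = Fx_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s)   # secant condition: B s = y
        Fx = Fx_new
    return x

# Toy system: x^2 + y^2 = 2 and x = y, with a root at (1, 1).
def F(v):
    x, y = v
    return np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])

root = broyden(F, [2.0, 0.5])
```

Because only `F` is ever called after the seed, the same loop works for codes that cannot form a Jacobian at all, which is the scenario where the report finds Broyden's method converging when Newton's method fails.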

  15. Status of large-scale analysis of post-translational modifications by mass spectrometry

    DEFF Research Database (Denmark)

    Olsen, Jesper V; Mann, Matthias

    2013-01-01

    , but it has only been in the last ten years that mass spectrometry (MS)-based proteomics has begun to reveal the true extent of the PTM universe. In this overview for the special PTM issue in Molecular and Cellular Proteomics, we take stock of where MS-based proteomics stands in the large-scale analysis......, with label-free methods showing particular promise. It is also becoming possible to determine the absolute occupancy or stoichiometry of PTM sites on a large scale. Powerful software for the bioinformatic analysis of thousands of PTM sites has been developed. However, a complete inventory of sites has...... not been established for any PTM and this situation will persist into the foreseeable future. Furthermore, although PTM coverage by MS-based methods is impressive, it still needs to be improved, especially in tissues and in clinically relevant systems. The central challenge for the field is to develop...

  16. Predicting protein functions from redundancies in large-scale protein interaction networks

    Science.gov (United States)

    Samanta, Manoj Pratim; Liang, Shoudan

    2003-01-01

    Interpreting data from large-scale protein interaction experiments has been a challenging task because of the widespread presence of random false positives. Here, we present a network-based statistical algorithm that overcomes this difficulty and allows us to derive functions of unannotated proteins from large-scale interaction data. Our algorithm uses the insight that if two proteins share significantly larger number of common interaction partners than random, they have close functional associations. Analysis of publicly available data from Saccharomyces cerevisiae reveals >2,800 reliable functional associations, 29% of which involve at least one unannotated protein. By further analyzing these associations, we derive tentative functions for 81 unannotated proteins with high certainty. Our method is not overly sensitive to the false positives present in the data. Even after adding 50% randomly generated interactions to the measured data set, we are able to recover almost all (approximately 89%) of the original associations.
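The "significantly larger number of common interaction partners than random" test above is naturally expressed as a hypergeometric tail probability: given two proteins with n1 and n2 partners in an N-protein network, how likely is an overlap of at least m by chance? A stdlib sketch with toy numbers of our choosing (the paper's statistic is of this general form, though its exact formulation may differ):

```python
from math import comb

def shared_partner_pvalue(N, n1, n2, m):
    """P(two random proteins with n1 and n2 partners, out of N possible,
    share >= m partners): a hypergeometric tail. Small p suggests a
    functional association."""
    total = comb(N, n2)
    return sum(comb(n1, i) * comb(N - n1, n2 - i)
               for i in range(m, min(n1, n2) + 1)) / total

# Toy numbers: in a 1000-protein network, two proteins with 20 partners each
# expect ~0.4 shared partners by chance, so sharing 5 is very unlikely.
p_shared = shared_partner_pvalue(1000, 20, 20, 5)
p_one = shared_partner_pvalue(1000, 20, 20, 1)
```

Ranking all protein pairs by such a p-value, and propagating annotations across the lowest-p pairs, is what lets the approach tolerate random false-positive interactions: spurious edges rarely produce a significant overlap.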

  17. Modeling and experiments of biomass combustion in a large-scale grate boiler

    DEFF Research Database (Denmark)

    Yin, Chungen; Rosendahl, Lasse; Kær, Søren Knudsen;

    2007-01-01

    is exposed to preheated inlet air while the top of the bed resides within the furnace. Mathematical modeling is an efficient way to understand and improve the operation and design of combustion systems. Compared to modeling of pulverized fuel furnaces, CFD modeling of biomass-fired grate furnaces...... is inherently more difficult due to the complexity of the solid biomass fuel bed on the grate, the turbulent reacting flow in the combustion chamber and the intensive interaction between the two. This paper presents the CFD validation efforts for a modern large-scale biomass-fired grate boiler. Modeling...... quite substantially with the conditions in the real furnace. Combustion instabilities in the fuel bed make it challenging to specify reliable grate inlet BCs for the CFD modeling; the deposits formed on furnace walls and air nozzles make it difficult to define precisely the wall BCs and air jet BCs...

  18. Estimation of Source Parameters of Large-Scale Chemical Surface Explosions and Recent Underground Nuclear Tests

    Science.gov (United States)

    Gitterman, Y.; Kim, S.; Hofstetter, R.

    2013-12-01

    Large-scale surface explosions were conducted by the Geophysical Institute of Israel at Sayarim Military Range (SMR), Negev desert: 82 tons of strong HE explosives in August 2009, and 10 and 100 tons of ANFO explosives in January 2011. The main goal was to provide strong controlled sources under different wind conditions for the calibration of IMS infrasound stations. Numerous dense observations of blast waves were provided by high-pressure, acoustic and seismic sensors at near-source ( 2000 tons) ANFO surface shots at White Sands Missile Range (WSMR) were analyzed for SS time delay. The Secondary Shocks (SS) were revealed on the records in the range 1.5-60 km and showed consistency with the SMR data, thus extending the charge and distance range for the developed SS delay relationship. The obtained results suggest that measured SS delays can provide important information about the character of an explosion source, and can be used as a new, simple, cost-effective yield estimator for explosions with a known type of explosive. The new results are compared with analogous available data from surface nuclear explosions. Distinct features of the air-blast waves are revealed and analyzed, resulting from the different source phenomenology (energy release). Two underground nuclear explosions conducted by North Korea in 2009 and 2013 were recorded by several stations of the Israel Seismic Network (ISN). Pronounced minima (spectral nulls) at 1.2-1.3 Hz were revealed in the spectra of teleseismic P-waves. For a ground-truth explosion with a shallow source depth (relative to an earthquake), this phenomenon can be interpreted in terms of interference between the down-going P-wave energy and the pP phase reflected from the Earth's surface. A similar effect was observed before at ISN stations for the Pakistan explosion (28.05.1998) at a different frequency, 1.7 Hz, indicating a source effect rather than a site effect.
Based on the dependency of the null frequency on the near-surface acoustic velocity and the source depth, the depth of

  19. LARGE-SCALE CO2 TRANSPORTATION AND DEEP OCEAN SEQUESTRATION

    Energy Technology Data Exchange (ETDEWEB)

    Hamid Sarv

    1999-03-01

    Technical and economic feasibility of large-scale CO{sub 2} transportation and ocean sequestration at depths of 3000 meters or greater was investigated. Two options were examined for transporting and disposing of the captured CO{sub 2}. In one case, CO{sub 2} was pumped from a land-based collection center through long pipelines laid on the ocean floor. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating structure for vertical injection to the ocean floor. In the latter case, a novel concept based on subsurface towing of a 3000-meter pipe and attaching it to the offshore structure was considered. Budgetary cost estimates indicate that for distances greater than 400 km, tanker transportation and offshore injection through a 3000-meter vertical pipe provides the best method for delivering liquid CO{sub 2} to deep ocean floor depressions. For shorter distances, CO{sub 2} delivery by parallel-laid subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines and tankers were 1.5 and 1.4 dollars per ton of stored CO{sub 2}, respectively. At these prices, the economics of ocean disposal are highly favorable. Future work should focus on addressing technical issues that are critical to the deployment of a large-scale CO{sub 2} transportation and disposal system. Pipe corrosion, structural design of the transport pipe, and dispersion characteristics of sinking CO{sub 2} effluent plumes have been identified as areas that require further attention. Our planned activities in the next phase include laboratory-scale corrosion testing, structural analysis of the pipeline, analytical and experimental simulations of CO{sub 2} discharge and dispersion, and the conceptual economic and engineering evaluation of large-scale implementation.

  20. Statistical Modeling of Large-Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer size of the generated data sets makes efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatio-temporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of on the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (the univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second technique (the univariate goodness-of-fit modeler) applies the Anderson-Darling goodness-of-fit method to systematic partitions of the data. Finally, AQSim's third technique (the multivariate clusterer) uses the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
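    As an illustration of the third technique, here is a minimal greedy clusterer built on cosine similarity. AQSim's actual partitioning scheme and threshold are not given in the abstract, so the structure and the 0.99 cutoff below are assumptions for illustration only:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cluster(points, threshold=0.99):
    """Greedy single-pass clustering: each point joins the first
    cluster whose representative it resembles closely enough,
    otherwise it starts a new cluster."""
    reps, clusters = [], []
    for p in points:
        for i, r in enumerate(reps):
            if cosine_similarity(p, r) >= threshold:
                clusters[i].append(p)
                break
        else:
            reps.append(p)
            clusters.append([p])
    return clusters

data = [(1.0, 2.0), (2.0, 4.1), (-1.0, 0.5)]
print(len(cluster(data)))  # the first two points group together
```

    Because cosine similarity ignores magnitude, nearly collinear data points fall into the same group even when their values differ by a scale factor.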

  1. On the Phenomenology of an Accelerated Large-Scale Universe

    Directory of Open Access Journals (Sweden)

    Martiros Khurshudyan

    2016-10-01

    In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe are discussed. Inflation is the early-time accelerated era, and the universe is symmetric in the sense of accelerated expansion at early and late times. The accelerated expansion of the universe is one of the long-standing problems in modern cosmology, and in physics in general. There are several well-defined approaches to solving this problem. One of them is the assumption that dark energy exists in the recent universe. It is believed that dark energy is responsible for antigravity, while dark matter has a gravitational nature and is responsible, in general, for structure formation. A different approach is an appropriate modification of general relativity, including, for instance, f(R) and f(T) theories of gravity. On the other hand, attempts to build theories of quantum gravity and assumptions about the existence of extra dimensions, possible variability of the gravitational constant and of the speed of light (among others) provide interesting modifications of general relativity applicable to problems of modern cosmology, too. In particular, two groups of cosmological models are discussed here. In the first group, the problem of the accelerated expansion of the large-scale universe is addressed with a new idea, named varying ghost dark energy. The second group contains cosmological models addressing the same problem with either new parameterizations of the equation-of-state parameter of dark energy (like a varying polytropic gas) or nonlinear interactions between dark energy and dark matter. Moreover, for cosmological models involving varying ghost dark energy, massless particle creation in an appropriate radiation-dominated universe (when the background dynamics is governed by general relativity) is demonstrated as well.
Exploring the nature of the accelerated expansion of the large-scale universe involving generalized

  2. Large scale PV plants - also in Denmark. Project report

    Energy Technology Data Exchange (ETDEWEB)

    Ahm, P. (PA Energy, Malling (Denmark)); Vedde, J. (SiCon. Silicon and PV consulting, Birkeroed (Denmark))

    2011-04-15

    Large-scale PV (LPV) plants, i.e. plants with a capacity of more than 200 kW, have since 2007 constituted an increasing share of global PV installations. In 2009, large-scale PV plants with a cumulative power of more than 1.3 GWp were connected to the grid. The necessary design data for LPV plants in Denmark are available or can be found, although irradiance data could be improved. There seem to be very few institutional barriers for LPV projects, but as no real LPV projects have yet been processed, these findings have to be regarded as preliminary. The fast-growing number of very large-scale solar thermal plants for district heating applications supports these findings. It has further been investigated how to optimize the layout of LPV plants. Under Danish irradiance conditions, with several winter months of very low solar height, PV installations on flat surfaces will have to balance the requirements of physical space and cost against the loss of electricity production due to shadowing effects. The potential for LPV plants in Denmark falls into three main categories: PV installations on the flat roofs of large commercial buildings, PV installations on other large-scale infrastructure such as noise barriers, and ground-mounted PV installations. The technical potential for all three categories is found to be significant, in the range of 50-250 km2. In terms of energy harvest, PV plants under Danish conditions exhibit an overall efficiency of about 10% in converting the energy content of the light, compared to about 0.3% for biomass. The theoretical ground area needed to produce the present annual electricity consumption of Denmark, 33-35 TWh, is about 300 km2. The Danish grid codes and the electricity safety regulations say very little about PV and nothing about LPV plants. It is expected that LPV plants will be treated similarly to big wind turbines.
A number of LPV plant scenarios have been investigated in detail based on real commercial offers and
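    The land-area figure can be sanity-checked with rough numbers. Assuming a typical Danish horizontal insolation of about 1000 kWh/m2 per year (an assumption, not a figure from the report) and the 10% overall efficiency quoted above:

```python
# Rough consistency check of the 300 km2 land-area estimate.
insolation_kwh_per_m2 = 1000   # assumed annual insolation, typical for Denmark
system_efficiency = 0.10       # overall light-to-electricity, from the text
area_m2 = 300e6                # 300 km2 expressed in m2

# kWh -> TWh: divide by 1e9
annual_twh = area_m2 * insolation_kwh_per_m2 * system_efficiency / 1e9
print(round(annual_twh, 1))  # ~30 TWh, the same order as the 33-35 TWh quoted
```

    The result lands within roughly 10% of the national consumption figure, consistent with the report's estimate.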

  3. GroFi: Large-scale fiber placement research facility

    OpenAIRE

    Krombholz, Christian; Kruse, Felix; Wiedemann, Martin

    2016-01-01

    GroFi is a large research facility operated by the German Aerospace Center’s Center for Lightweight-Production-Technology in Stade. A combination of different layup technologies, namely (dry) fiber placement and tape laying, allows the development and validation of new production technologies and processes for large-scale composite components. Due to the use of coordinated and simultaneously working layup units, a high flexibility of the research platform is achieved. This allows the investigation of ...

  4. Enabling Large-Scale Biomedical Analysis in the Cloud

    Directory of Open Access Journals (Sweden)

    Ying-Chih Lin

    2013-01-01

    Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and the complexity of biomedical data collected from various sources. Data on this scale bring serious challenges to storage and computing technologies. Cloud computing is a promising alternative because it addresses both storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should help biomedical research make its vast amount of diverse data meaningful and usable.

  5. Large-scale magnetic fields from inflation in teleparallel gravity

    CERN Document Server

    Bamba, Kazuharu; Luo, Ling-Wei

    2013-01-01

    Generation of large-scale magnetic fields in inflationary cosmology is studied in teleparallelism, where instead of the scalar curvature in general relativity, the torsion scalar describes the gravity theory. In particular, we investigate a coupling of the electromagnetic field to the torsion scalar during inflation, which leads to the breaking of conformal invariance of the electromagnetic field. We demonstrate that for a power-law type coupling, the current magnetic field strength of $\\sim 10^{-9}$ G on 1 Mpc scale can be generated, if the backreaction effects and strong coupling problem are not taken into consideration.

  6. Large-scale glaciation on Earth and on Mars

    OpenAIRE

    Greve, Ralf

    2007-01-01

    This habilitation thesis combines ten publications of the author which are concerned with the large-scale dynamics and thermodynamics of ice sheets and ice shelves. Ice sheets are ice masses with a minimum area of 50,000 km2 which rest on solid land, whereas ice shelves consist of floating ice nourished by the mass flow from an adjacent ice sheet, typically stabilized by large bays. Together, they represent the major part of the cryosphere of the Earth. Furthermore, ice on Earth occurs in the...

  7. Hijacking Bitcoin: Large-scale Network Attacks on Cryptocurrencies

    OpenAIRE

    Apostolaki, Maria; Zohar, Aviv; Vanbever, Laurent

    2016-01-01

    Bitcoin is without a doubt the most successful cryptocurrency in circulation today, making it an extremely valuable target for attackers. Indeed, many studies have highlighted ways to compromise one or several Bitcoin nodes. In this paper, we take a different perspective and study the effect of large-scale network-level attacks such as the ones that may be launched by Autonomous Systems (ASes). We show that attacks that are commonly believed to be hard, such as isolating 50% of the mining pow...

  8. Cosmological parameters from large scale structure - geometric versus shape information

    CERN Document Server

    Hamann, Jan; Lesgourgues, Julien; Rampf, Cornelius; Wong, Yvonne Y Y

    2010-01-01

    The matter power spectrum as derived from large scale structure (LSS) surveys contains two important and distinct pieces of information: an overall smooth shape and the imprint of baryon acoustic oscillations (BAO). We investigate the separate impact of these two types of information on cosmological parameter estimation, and show that for the simplest cosmological models, the broad-band shape information currently contained in the SDSS DR7 halo power spectrum (HPS) is by far superseded by geometric information derived from the baryonic features. An immediate corollary is that, contrary to popular belief, the upper limit on the neutrino mass $m_\nu$

  9. Practical Optimal Control of Large-scale Water Distribution Network

    Institute of Scientific and Technical Information of China (English)

    Lv Mou(吕谋); Song Shuang

    2004-01-01

    According to the network characteristics and the actual state of water supply systems in China, an implicit model was established that can be solved by a hierarchical optimization method. In particular, based on analyses of water supply systems containing variable-speed pumps, software has been developed successfully. The application of this model to the city of Hangzhou (China) was compared to an experience-based strategy. The results of this study showed that the developed model is a promising optimization method for controlling large-scale water supply systems.

  10. Floodplain management in Africa: Large scale analysis of flood data

    Science.gov (United States)

    Padi, Philip Tetteh; Baldassarre, Giuliano Di; Castellarin, Attilio

    2011-01-01

    To mitigate a continuously increasing flood risk in Africa, sustainable actions are urgently needed. In this context, we describe a comprehensive statistical analysis of flood data for the African continent. The study refers to quality-controlled, large and consistent databases of flood data, i.e. maximum discharge values and time series of annual maximum flows. Probabilistic envelope curves are derived for the African continent by means of a large-scale regional analysis. Moreover, some initial insights into the statistical characteristics of African floods are provided. The results of this study are relevant and can be used to derive indications to support flood management in Africa.
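    To make the envelope-curve idea concrete, here is a toy sketch: with an assumed regional slope, the intercept of a log-log curve is raised until no observed flood exceeds it. The slope value and the sample records are illustrative assumptions, not data from the study:

```python
import math

def envelope(records):
    """Fit a simple envelope curve log10(Q) = a + b*log10(A):
    hold the slope b fixed (a common regional choice) and lift
    the intercept a until every observed maximum discharge Q
    (m3/s) lies on or below the curve for its basin area A (km2)."""
    b = 0.5  # assumed regional slope, for illustration only
    a = max(math.log10(q) - b * math.log10(area) for area, q in records)
    return a, b

# Hypothetical (area, peak discharge) records.
records = [(1000.0, 500.0), (250.0, 400.0), (10000.0, 1200.0)]
a, b = envelope(records)

# Every record sits on or below the fitted envelope.
assert all(q <= 10 ** (a + b * math.log10(area)) + 1e-9 for area, q in records)
print(round(a, 3))
```

    By construction the curve passes through the most extreme scaled flood, so it bounds the observed record rather than averaging it.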

  11. Large scale obscuration and related climate effects open literature bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  12. Application of methanol synthesis reactor to large-scale plants

    Institute of Scientific and Technical Information of China (English)

    LOU Ren; XU Rong-liang; LOU Shou-lin

    2006-01-01

    The status of development of world large-scale methanol production technology is analyzed, and Linda's JW low-pressure methanol synthesis reactor with uniform temperature is described. JW serial reactors have been successfully introduced and applied in the Harbin Gasification Plant, where productivity has been increased by 50%; nine sets of equipment are now running successfully at the Harbin Gasification Plant, Jiangsu Xinya, Shandong Kenli, Henan Zhongyuan, Handan Xinyangguang, Shanxi Weihua and Inner Mongolia Tianye. Reactors of 300,000 t/a are now being manufactured for Liaoning Dahua. Some solutions for the structural problems of 1000-5000 t/d methanol synthesis reactors are put forward.

  13. An Atmospheric Large-Scale Cold Plasma Jet

    Institute of Scientific and Technical Information of China (English)

    吕晓桂; 任春生; 马腾才; 冯岩; 王德真

    2012-01-01

    This letter reports on the generation and characteristics of a large-scale dielectric barrier discharge plasma jet at atmospheric pressure. With appropriate parameters, a diffuse plasma with a 50×5 mm2 cross-sectional area is obtained. The characteristics of the discharges are diagnosed by electrical and optical methods. In addition to being generated in helium, plasma is also generated in a mixed gas of helium and oxygen. The atomic oxygen radiant intensity (3p5P→3s5S, 3p3P→3s3S transitions) is not proportional to the proportion of oxygen in the gas mixture, as shown by the experimental results.

  14. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred

    2010-08-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one billion elements. We investigate communication protocols for the GPU cluster to compensate for the slow Gigabit Ethernet network between the GPU compute nodes and to maintain overall efficiency. A diesel engine intake-port and a nozzle, meshed in different resolutions, give good real world examples for the scalability tests on the GPU cluster. © 2010 IEEE.

  15. Synthesis and sensing application of large scale bilayer graphene

    Science.gov (United States)

    Hong, Sung Ju; Yoo, Jung Hoon; Baek, Seung Jae; Park, Yung Woo

    2012-02-01

    We have synthesized large-scale bilayer graphene by chemical vapor deposition (CVD) at atmospheric pressure. The bilayer graphene was grown using CH4, H2 and Ar gases at a growth temperature of 1050 °C. Conventional FET measurements show ambipolar transfer characteristics. Results of Raman spectroscopy, atomic force microscopy (AFM) and transmission electron microscopy (TEM) indicate that the film is bilayer graphene. In particular, the adlayer structure, which interrupts uniformity, was reduced under low methane flow conditions. Furthermore, the large-size CVD bilayer graphene film was investigated for application in sensor devices. Using a conventional photolithography process, we fabricated a device array structure and studied its sensing behavior.

  16. Large scale solar cooling plants in America, Asia and Europe

    Energy Technology Data Exchange (ETDEWEB)

    Holter, Christian; Olsacher, Nicole [S.O.L.I.D. GmbH, Graz (Austria)

    2010-07-01

    Large scale solar cooling plants with areas between 120 and 1600 m{sup 2} are representative examples used to illustrate S.O.L.I.D.'s experience. The three selected reference solar cooling plants are located on three different continents: America, Asia and Europe. Every region has different framework conditions and its own unforeseen challenges, but professional experience and innovative ideas form the basis on which each plant operates well and satisfies the customer's demand. This verifies that solar cooling is already a proven technology. (orig.)

  17. ROSA-IV large scale test facility (LSTF) system description

    International Nuclear Information System (INIS)

    The ROSA-IV Program's large scale test facility (LSTF) is a test facility for integral simulation of the thermal-hydraulic response of a pressurized water reactor (PWR) during a small-break loss-of-coolant accident (LOCA) or an operational transient. This document provides the necessary background information to interpret the experimental data obtained from the LSTF experiments. The information provided includes the LSTF test objectives and approach, the LSTF design philosophy, the component and geometry description, the instrumentation and data acquisition system description, and the outline of experiments to be performed. (author)

  18. Large-Scale Environmental Effects of the Cluster Distribution

    CERN Document Server

    Plionis, M

    2001-01-01

    Using the APM cluster distribution we find interesting alignment effects: (1) cluster substructure is strongly correlated with the tendency of clusters to be aligned with their nearest neighbour and in general with the nearby clusters that belong to the same supercluster; (2) clusters belonging to superclusters show a statistically significant tendency to be aligned with the major-axis orientation of their parent supercluster. Furthermore, we find that dynamically young clusters are more clustered than the overall cluster population. These are strong indications that clusters develop in a hierarchical fashion by merging along the large-scale filamentary superclusters within which they are embedded.

  19. Large-Scale Self-Consistent Nuclear Mass Calculations

    CERN Document Server

    Stoitsov, M V; Dobaczewski, J; Nazarewicz, W

    2006-01-01

    The program of systematic large-scale self-consistent nuclear mass calculations that is based on the nuclear density functional theory represents a rich scientific agenda that is closely aligned with the main research directions in modern nuclear structure and astrophysics, especially the radioactive nuclear beam physics. The quest for the microscopic understanding of the phenomenon of nuclear binding represents, in fact, a number of fundamental and crucial questions of the quantum many-body problem, including the proper treatment of correlations and dynamics in the presence of symmetry breaking. Recent advances and open problems in the field of nuclear mass calculations are presented and discussed.

  20. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  1. Solar cycle changes of large-scale solar wind structure

    OpenAIRE

    Manoharan, P. K

    2011-01-01

    In this paper, I present the results on large-scale evolution of density turbulence of solar wind in the inner heliosphere during 1985 - 2009. At a given distance from the Sun, the density turbulence is maximum around the maximum phase of the solar cycle and it reduces to ~70%, near the minimum phase. However, in the current minimum of solar activity, the level of turbulence has gradually decreased, starting from the year 2005, to the present level of ~30%. These results suggest that the sour...

  2. Laser Welding of Large Scale Stainless Steel Aircraft Structures

    Science.gov (United States)

    Reitemeyer, D.; Schultz, V.; Syassen, F.; Seefeld, T.; Vollertsen, F.

    In this paper a welding process for large scale stainless steel structures is presented. The process was developed according to the requirements of an aircraft application: stringers are welded onto a skin sheet in a T-joint configuration. The 0.6 mm thick parts are welded with a thin-disk laser; seam lengths of up to 1920 mm are demonstrated. The welding process causes angular distortions of the skin sheet, which are compensated by a subsequent laser straightening process. Based on a model, straightening process parameters matching the induced welding distortion are predicted. The process combination is successfully applied to stringer-stiffened specimens.

  3. Petascale computations for Large-scale Atomic and Molecular collisions

    CERN Document Server

    McLaughlin, Brendan M

    2014-01-01

    Petaflop architectures are currently being utilized efficiently to perform large scale computations in Atomic, Molecular and Optical Collisions. We solve the Schroedinger or Dirac equation for the appropriate collision problem using the R-matrix or R-matrix with pseudo-states approach. We briefly outline the parallel methodology used and implemented for the current suite of Breit-Pauli and DARC codes. Various examples are shown of our theoretical results compared with those obtained from Synchrotron Radiation facilities and from Satellite observations. We also indicate future directions and implementation of the R-matrix codes on emerging GPU architectures.

  4. Search for Large Scale Anisotropies with the Pierre Auger Observatory

    Science.gov (United States)

    Bonino, R.; Pierre Auger Collaboration

    The Pierre Auger Observatory studies the nature and the origin of Ultra High Energy Cosmic Rays (>3×10^18 eV). Completed at the end of 2008, it has been continuously operating for more than six years. Using data collected from 1 January 2004 until 31 March 2009, we search for large scale anisotropies with two complementary analyses in different energy windows. No significant anisotropies are observed, resulting in bounds on the first harmonic amplitude at the 1% level at EeV energies.

  5. Simple Method for Large-Scale Fabrication of Plasmonic Structures

    CERN Document Server

    Makarov, Sergey V; Mukhin, Ivan S; Shishkin, Ivan I; Mozharov, Alexey M; Krasnok, Alexander E; Belov, Pavel A

    2015-01-01

    A novel method for single-step, lithography-free, large-scale laser writing of nanoparticle-based plasmonic structures has been developed. By changing the energy of the femtosecond laser pulses and the thickness of the irradiated gold film, it is possible to vary the diameter of the gold nanoparticles, while the distance between them can be varied via the laser scanning parameters. This method has an advantage over most previously demonstrated methods in its simplicity and versatility, while the quality of the structures is good enough for many applications. In particular, resonant light absorption/scattering and surface-enhanced Raman scattering have been demonstrated on the fabricated nanostructures.

  6. Large scale obscuration and related climate effects open literature bibliography

    International Nuclear Information System (INIS)

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest

  7. Reliability Evaluation considering Structures of a Large Scale Wind Farm

    DEFF Research Database (Denmark)

    Shin, Je-Seok; Cha, Seung-Tae; Wu, Qiuwei;

    2012-01-01

    Wind energy is one of the most widely used renewable energy resources. Wind power has been connected to the grid in the form of large-scale wind farms made up of dozens of wind turbines, and the scale of wind farms has increased further in recent years. Due to the intermittent and variable wind source, reliability...... wind farm which is able to enhance the capability of delivering power rather than controlling an uncontrollable output of wind power. Therefore, this paper introduces a method to evaluate reliability depending upon the structure of the wind farm and to reflect the result in the planning stage of the wind farm....

  8. Testing Dark Energy Models through Large Scale Structure

    CERN Document Server

    Avsajanishvili, Olga; Arkhipova, Natalia A; Kahniashvili, Tina

    2015-01-01

    We explore the scalar field quintessence freezing model of dark energy with the inverse Ratra-Peebles potential. We study the cosmic expansion and the large scale structure growth rate. We use recent measurements of the growth rate and the baryon acoustic oscillation peak positions to constrain the matter density parameter $\Omega_\mathrm{m}$ and the model parameter $\alpha$ that describes the steepness of the scalar field potential. We solve jointly the equations for the background expansion and for the growth rate of matter perturbations. The obtained theoretical results are compared with the observational data. We perform a Bayesian data analysis to derive constraints on the model parameters.

  9. Quantum computation for large-scale image classification

    Science.gov (United States)

    Ruan, Yue; Chen, Hanwu; Tan, Jianing; Li, Xi

    2016-10-01

    Due to the lack of an effective quantum feature extraction method, there is currently no effective way to perform quantum image classification or recognition. In this paper, for the first time, a global quantum feature extraction method based on Schmidt decomposition is proposed. A revised quantum learning algorithm is also proposed that classifies images by computing the Hamming distance between these features. Based on experimental results on the benchmark database Caltech 101 and an analysis of the algorithm, an effective approach to large-scale image classification in the context of big data is proposed.
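
The classical core of the classification step (assigning the label of the stored feature vector nearest in Hamming distance) can be sketched as follows; the binary feature strings and class names are invented for illustration, and the quantum speed-up of the paper's algorithm is of course not reproduced:

```python
def hamming(a, b):
    """Number of positions at which two equal-length binary strings differ."""
    return sum(x != y for x, y in zip(a, b))

def classify(features, labelled):
    """Assign the label of the training vector closest in Hamming distance."""
    return min(labelled, key=lambda item: hamming(features, item[0]))[1]

# hypothetical binary feature vectors extracted from images
train = [("110010", "faces"), ("001101", "airplanes"), ("011100", "chairs")]
label = classify("110011", train)
```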

  10. Large-Scale Purification of Peroxisomes for Preparative Applications.

    Science.gov (United States)

    Cramer, Jana; Effelsberg, Daniel; Girzalsky, Wolfgang; Erdmann, Ralf

    2015-09-01

    This protocol is designed for large-scale isolation of highly purified peroxisomes from Saccharomyces cerevisiae using two consecutive density gradient centrifugations. Instructions are provided for harvesting up to 60 g of oleic acid-induced yeast cells for the preparation of spheroplasts and generation of organellar pellets (OPs) enriched in peroxisomes and mitochondria. The OPs are loaded onto eight continuous 36%-68% (w/v) sucrose gradients. After centrifugation, the peak peroxisomal fractions are determined by measurement of catalase activity. These fractions are subsequently pooled and subjected to a second density gradient centrifugation using 20%-40% (w/v) Nycodenz. PMID:26330621

  11. A relativistic view on large scale N-body simulations

    International Nuclear Information System (INIS)

    We discuss the relation of the output of Newtonian N-body simulations, on scales that approach or exceed the particle horizon, to the description of general relativity. At leading order, the Zeldovich approximation is correct on large scales, coinciding with the general relativistic result. At second order in the initial metric potential, the trajectories of particles deviate from the second order Newtonian result, and hence the validity of second order Lagrangian perturbation theory initial conditions should be reassessed when used in very large simulations. We also advocate using the expression for the synchronous gauge density as a well behaved measure of density fluctuations on such scales. (paper)

  12. Controlled growth of large-scale silver nanowires

    Institute of Scientific and Technical Information of China (English)

    Xiao Cong-Wen; Yang Hai-Tao; Shen Cheng-Min; Li Zi-An; Zhang Huai-Ruo; Liu Fei; Yang Tian-Zhong; Chen Shu-Tang; Gao Hong-Jun

    2005-01-01

    Large-scale silver nanowires with controlled aspect ratio were synthesized via reducing silver nitrate with 1,2-propanediol in the presence of poly(vinyl pyrrolidone) (PVP). Scanning electron microscopy, transmission electron microscopy and x-ray powder diffraction were employed to characterize these silver nanowires. The diameter of the silver nanowires can be readily controlled in the range of 100 to 400 nm by varying the experimental conditions. X-ray photoelectron spectroscopy and Fourier transform infrared spectroscopy results show that there exists no chemical bond between the silver and the nitrogen atoms. The interaction between PVP and silver nanowires is mainly through the oxygen atom in the carbonyl group.

  13. Climate variability rather than overstocking causes recent large scale cover changes of Tibetan pastures

    Science.gov (United States)

    Lehnert, Lukas; Wesche, Karsten; Trachte, Katja; Reudenbach, Christoph; Miehe, Georg; Bendix, Jörg

    2016-04-01

    The Tibetan Plateau has been called the "Third-Pole-Environment" because of its outstanding importance for the climate and the hydrology of East and South-east Asia. Its climatological and hydrological influences are strongly affected by the local grassland vegetation, which is thought to be subject to ongoing degradation. On a local scale, numerous studies have focused on grassland degradation of the Tibetan pastures. However, because methods and scales differed substantially among previous studies, the overall pattern of degradation on the Tibetan Plateau is unknown. Consequently, a satellite-based approach was selected to cope with the spatial limitations. A MODIS-based vegetation cover product was developed and fully validated against 600 in situ measurements covering a wide extent of the Tibetan Plateau. The vegetation cover, as a proxy for grassland degradation, is modelled with low error rates using support vector machine regressions. To identify changes in the vegetation cover, trends in the new vegetation cover product since the beginning of the new millennium were analysed. The drivers of the vegetation changes were identified by analysing trends of climatic variables (precipitation and 2 m air temperature) and land use (livestock numbers) over the same period. The results reveal that - in contrast to the prevailing opinion - pasture degradation on the Tibetan Plateau is not a generally proceeding process, because areas of positive and negative changes are almost equal in extent. The positive and negative vegetation changes have regionally different triggers: while the vegetation cover has increased in the north-eastern part of the Tibetan Plateau since 2000 due to increasing precipitation, it has declined in the central and western parts due to rising air temperature and declining precipitation. Increasing livestock numbers as a result of land use changes exacerbated the negative trends but, contrary to the assumptions of
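
The per-pixel trend analysis described in the abstract reduces, in its simplest form, to an ordinary least-squares slope of annual vegetation cover against year. A minimal sketch (with synthetic data; the MODIS product itself is not used here):

```python
def trend_slope(years, cover):
    """Ordinary least-squares slope of vegetation cover [%] per year."""
    n = len(years)
    my = sum(years) / n
    mc = sum(cover) / n
    num = sum((y - my) * (c - mc) for y, c in zip(years, cover))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(2000, 2010))
greening = [40 + 0.5 * (y - 2000) for y in years]   # synthetic "greening" pixel
browning = [60 - 0.3 * (y - 2000) for y in years]   # synthetic "browning" pixel
s_green = trend_slope(years, greening)
s_brown = trend_slope(years, browning)
```

Classifying pixels by the sign (and significance) of such slopes is what yields the balance of positive and negative cover changes reported above.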

  14. Relationship between Eurasian large-scale patterns and regional climate variability over the Black and Baltic Seas

    Energy Technology Data Exchange (ETDEWEB)

    Stankunavicius, G.; Pupienis, D. [Vilnius Univ. (Lithuania). Dept. of Hydrology and Climatology; Basharin, D. [National Academy of Science of Ukraine, Sevastopol (Ukraine). Sevastopol Marine Hydrophysical Inst.

    2012-11-01

    Using the NCEP/NCAR Reanalysis dataset and the empirical orthogonal function (EOF) analysis approach, we studied interannual to decadal variability of the sea-level air pressure (SLP) and surface air temperature (SAT) fields over Eurasia during the second half of the 20th century. Our results agree with those of previous studies, which conclude that Eurasian trends are the result of storm-path changes driven by the interdecadal behaviour of the NAO-like meridional dipole pattern in the Atlantic. On interannual and decadal time scales, significant synchronous correlations between corresponding modes of the SAT and SLP EOF patterns were found. This suggests a strong and stable interrelationship between the Eurasian large-scale SAT and SLP fields which affects the local climate of two sub-regions: the Black and Baltic Seas. The climate variability in these sub-regions was studied in terms of its response to the Eurasian large-scale surface-temperature and air-pressure patterns. We conclude that the sub-regional climate variability differs substantially between the Black and Baltic Seas and depends on different Eurasian large-scale patterns. We show that the Baltic Sea region is influenced by patterns arising primarily from the NAO-like meridional dipole, as well as by Scandinavian patterns, while the Black Sea's SAT/SLP variability is influenced mainly by the second EOF mode (eastern Atlantic) and large-scale tropospheric wave structures. (orig.)
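
EOF analysis of an anomaly field amounts to an eigendecomposition of the spatial covariance matrix. A minimal stdlib-only sketch with a synthetic time-by-space field containing one planted seasonal mode (grid size, noise level and the power-iteration approach are arbitrary illustrative choices, not those of the paper):

```python
import math
import random

random.seed(1)
T, P = 240, 8                                   # months, grid points
pattern = [math.cos(j) for j in range(P)]       # single fixed spatial mode
field = [[math.sin(2 * math.pi * t / 12) * pattern[j] + 0.05 * random.gauss(0, 1)
          for j in range(P)] for t in range(T)]

# anomalies: remove the time mean at each grid point
means = [sum(row[j] for row in field) / T for j in range(P)]
anom = [[row[j] - means[j] for j in range(P)] for row in field]

# spatial covariance matrix C = A^T A / T
C = [[sum(anom[t][i] * anom[t][j] for t in range(T)) / T for j in range(P)]
     for i in range(P)]

# leading EOF via power iteration on C
v = [1.0] * P
for _ in range(300):
    w = [sum(C[i][j] * v[j] for j in range(P)) for i in range(P)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

# alignment of the recovered EOF with the planted pattern (|cosine| of angle)
pnorm = math.sqrt(sum(p * p for p in pattern))
align = abs(sum(v[j] * pattern[j] for j in range(P))) / pnorm
```

Projecting the anomalies onto the leading EOFs gives the principal-component time series whose synchronous correlations the study examines.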

  15. The large scale magnetic fields of thin accretion disks

    CERN Document Server

    Cao, Xinwu

    2013-01-01

    A large scale magnetic field threading an accretion disk is a key ingredient in the jet formation model. The most attractive scenario for the origin of such a large scale field is advection of the field by the gas in the accretion disk from the interstellar medium or a companion star. However, it is realized that outward diffusion of the accreted field is fast compared to the inward accretion velocity in a geometrically thin accretion disk if the value of the Prandtl number Pm is around unity. In this work, we revisit this problem considering that the angular momentum of the disk is removed predominantly by magnetically driven outflows. The radial velocity of the disk is significantly increased due to the presence of the outflows. Using a simplified model for the vertical disk structure, we find that even moderately weak fields can cause sufficient angular momentum loss via a magnetic wind to balance outward diffusion. There are two equilibrium points, one at low field strengths corresponding to a plasma-bet...

  16. Evaluating Unmanned Aerial Platforms for Cultural Heritage Large Scale Mapping

    Science.gov (United States)

    Georgopoulos, A.; Oikonomou, C.; Adamopoulos, E.; Stathopoulou, E. K.

    2016-06-01

    Large scale mapping of limited areas, especially cultural heritage sites, is a demanding task. Optical and non-optical sensors, e.g. LiDAR units, have been developed to sizes and weights that can be lifted by unmanned aerial platforms. At the same time there is increasing emphasis on solutions that enable users to get access to 3D information faster and cheaper. Considering the multitude of platforms and cameras, and the advancement of algorithms in conjunction with the increase of available computing power, this challenge should be, and indeed is, further investigated. In this paper a short review of today's UAS technologies is attempted. A discussion follows as to their applicability and advantages, depending on their specifications, which vary immensely. The on-board cameras available are also compared and evaluated for large scale mapping. Furthermore, a thorough analysis, review and experimentation with different software implementations of Structure from Motion and Multiple View Stereo algorithms, able to process such dense and mostly unordered sequences of digital images, is conducted and presented. As test data set, we use a rich optical and thermal data set from both fixed wing and multi-rotor platforms over an archaeological excavation with adverse height variations, acquired with different cameras. Dense 3D point clouds, digital terrain models and orthophotos have been produced and evaluated for their radiometric as well as metric qualities.

  17. Alignment of quasar polarizations with large-scale structures

    Science.gov (United States)

    Hutsemékers, D.; Braibant, L.; Pelgrims, V.; Sluse, D.

    2014-12-01

    We have measured the optical linear polarization of quasars belonging to Gpc scale quasar groups at redshift z ~ 1.3. Out of 93 quasars observed, 19 are significantly polarized. We found that quasar polarization vectors are either parallel or perpendicular to the directions of the large-scale structures to which they belong. Statistical tests indicate that the probability that this effect can be attributed to randomly oriented polarization vectors is on the order of 1%. We also found that quasars with polarization perpendicular to the host structure preferentially have large emission line widths while objects with polarization parallel to the host structure preferentially have small emission line widths. Considering that quasar polarization is usually either parallel or perpendicular to the accretion disk axis depending on the inclination with respect to the line of sight, and that broader emission lines originate from quasars seen at higher inclinations, we conclude that quasar spin axes are likely parallel to their host large-scale structures. Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under program ID 092.A-0221.Table 1 is available in electronic form at http://www.aanda.org

  18. Simulating the Large-Scale Structure of HI Intensity Maps

    CERN Document Server

    Seehars, Sebastian; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2015-01-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations, the halo model, and a phenomenological prescription for assigning HI mass to halos. The simulations span a redshift range of 0.35 < z < 0.9 in redshift bins of width $\\Delta z \\approx 0.05$ and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects on the angular clustering of HI. We apply and compare several estimators for the angular power spectrum and its covariance. We verify that they agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.

  19. Simulating the large-scale structure of HI intensity maps

    Science.gov (United States)

    Seehars, Sebastian; Paranjape, Aseem; Witzemann, Amadeus; Refregier, Alexandre; Amara, Adam; Akeret, Joel

    2016-03-01

    Intensity mapping of neutral hydrogen (HI) is a promising observational probe of cosmology and large-scale structure. We present wide field simulations of HI intensity maps based on N-body simulations of a 2.6 Gpc/h box with 2048^3 particles (particle mass 1.6 × 10^11 M_sun/h). Using a conditional mass function to populate the simulated dark matter density field with halos below the mass resolution of the simulation (10^8 M_sun/h), we assign HI to those halos according to a phenomenological halo to HI mass relation. The simulations span a redshift range of 0.35 ≲ z ≲ 0.9 in redshift bins of width Δz ≈ 0.05 and cover a quarter of the sky at an angular resolution of about 7'. We use the simulated intensity maps to study the impact of non-linear effects and redshift space distortions on the angular clustering of HI. Focusing on the autocorrelations of the maps, we apply and compare several estimators for the angular power spectrum and its covariance. We verify that these estimators agree with analytic predictions on large scales and study the validity of approximations based on Gaussian random fields, particularly in the context of the covariance. We discuss how our results and the simulated maps can be useful for planning and interpreting future HI intensity mapping surveys.
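
The angular power spectrum estimators compared in the paper operate on curved-sky maps; as a toy stand-in, a naive one-dimensional periodogram of a periodic "map" already illustrates the basic estimator logic (the signal and mode number below are invented for illustration):

```python
import cmath
import math

def power_spectrum(signal):
    """Naive periodogram of a 1-D periodic signal: |DFT|^2 / n per mode k."""
    n = len(signal)
    mean = sum(signal) / n
    d = [x - mean for x in signal]          # remove the mean (monopole)
    spec = []
    for k in range(n // 2 + 1):
        coef = sum(d[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        spec.append(abs(coef) ** 2 / n)
    return spec

n = 64
signal = [math.sin(2 * math.pi * 8 * j / n) for j in range(n)]  # single mode k = 8
spec = power_spectrum(signal)
```

Real analyses replace the direct sum with FFTs or spherical-harmonic transforms and correct for masks and noise bias, but the principle, squaring mode amplitudes and averaging, is the same.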

  20. Survey of large-scale isotope applications: nuclear technology field

    Energy Technology Data Exchange (ETDEWEB)

    Dewitt, R.

    1977-01-21

    A preliminary literature survey of potential large-scale isotope applications was made according to topical fields, i.e., nuclear, biological, medical, environmental, agricultural, geological, and industrial. Other than the possible expansion of established large-scale isotope applications such as uranium, boron, lithium, and hydrogen, no new immediate isotope usage appears to be developing. Over the long term, a change in emphasis for isotope applications was identified which appears to be more responsive to societal concerns for health, the environment, and the conservation of materials and energy. For gram-scale applications, a variety of isotopes may be required for use as nonradioactive "activable" tracers. A more detailed survey of the nuclear field identified a potential need for large amounts (tons) of special isotopic materials for advanced reactor components and structures. As this need for special materials and the development of efficient separation methods progresses, the utilization of isotopes from nuclear wastes for beneficial uses should also progress.

  1. Large-scale magnetic fields in magnetohydrodynamic turbulence.

    Science.gov (United States)

    Alexakis, Alexandros

    2013-02-22

    High Reynolds number magnetohydrodynamic turbulence in the presence of zero-flux large-scale magnetic fields is investigated as a function of the magnetic field strength. For a variety of flow configurations, the energy dissipation rate ε follows the scaling ε ∝ U_rms^3/ℓ even when the large-scale magnetic field energy is twenty times larger than the kinetic energy. A further increase of the magnetic energy showed a transition to the ε ∝ U_rms^2 B_rms/ℓ scaling, implying that at this point magnetic shear becomes more efficient than the velocity fluctuations at cascading the energy. Strongly helical configurations form nonturbulent helicity condensates that deviate from these scalings. Weak turbulence scaling was absent from the investigation. Finally, the magnetic energy spectra support the Kolmogorov spectrum k^(-5/3) while kinetic energy spectra are closer to the Iroshnikov-Kraichnan spectrum k^(-3/2), as observed in the solar wind.

  2. Online education in a large scale rehabilitation institution.

    Science.gov (United States)

    Mazzoleni, M Cristina; Rognoni, Carla; Pagani, Marco; Imbriani, Marcello

    2012-01-01

    Large scale, multiple-venue institutions face problems when delivering education to their healthcare staff. The present study is aimed at evaluating the feasibility of relying on e-learning for at least part of the training of the Salvatore Maugeri Foundation healthcare staff. The paper reports the results of the delivery of e-learning courses to the personnel over a span of 7 months in order to assess the attitude toward online course attendance, the proportion between administered online education and administered traditional education, and the economic sustainability of the online education delivery process. 37% of the total healthcare staff attended online courses, and 46% of the nurses proved to be very active. The ratios between the total number of credits and the total number of courses for online and traditional education are respectively 18268/5 and 20354/96. These results point out that e-learning is not at all a niche tool used (or usable) by a limited number of people. Economic sustainability, assessed via personnel work hours saved, has been demonstrated. When distance learning is appropriate, online education is an effective, sustainable and well accepted means to support and promote healthcare staff education in a large scale institution. PMID:22491113

  3. Large scale environments of z<0.4 active galaxies

    CERN Document Server

    Lietzen, H; Nurmi, P; Liivamägi, L J; Saar, E; Tago, E; Takalo, L O; Einasto, M

    2011-01-01

    Properties of galaxies depend on their large-scale environment. As the influence of active galactic nuclei (AGN) on galaxy evolution is becoming more evident, their large scale environments may help us understand the evolutionary processes leading to activity. The effect of activity can be probed in particular by showing whether different types of active galaxies are formed by similar mechanisms. Our aim is to study the supercluster-scale environments of active galaxies up to redshift 0.4. Our data include quasars, BL Lac objects, Seyfert and radio galaxies. We use a three-dimensional low-resolution luminosity-density field constructed from a sample of luminous red galaxies in the seventh data release of the Sloan Digital Sky Survey. We calculate the average density of this field in a volume of a 3\,$h^{-1}$Mpc sphere around each AGN to estimate the environmental density levels of different types of AGN. This analysis gives us the distribution of AGN in the global environment of superclusters, filaments, and voids....

  4. Ecohydrological modeling for large-scale environmental impact assessment.

    Science.gov (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach-level accuracy similar to regional-scale models, thereby allowing for impact assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds, and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in the development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. PMID:26595397

  5. Using Large Scale Structure to test Multifield Inflation

    CERN Document Server

    Ferraro, Simone

    2014-01-01

    Primordial non-Gaussianity of local type is known to produce a scale-dependent contribution to the galaxy bias. Several classes of multi-field inflationary models predict non-Gaussian bias which is stochastic, in the sense that dark matter and halos don't trace each other perfectly on large scales. In this work, we forecast the ability of next-generation Large Scale Structure surveys to constrain common types of primordial non-Gaussianity like $f_{NL}$, $g_{NL}$ and $\\tau_{NL}$ using halo bias, including stochastic contributions. We provide fitting functions for statistical errors on these parameters which can be used for rapid forecasting or survey optimization. A next-generation survey with volume $V = 25 h^{-3}$Gpc$^3$, median redshift $z = 0.7$ and mean bias $b_g = 2.5$, can achieve $\sigma(f_{NL}) = 6$, $\sigma(g_{NL}) = 10^5$ and $\sigma(\\tau_{NL}) = 10^3$ if no mass information is available. If halo masses are available, we show that optimally weighting the halo field in order to reduce sample variance...
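
For local-type non-Gaussianity, the scale-dependent bias correction underlying such forecasts has the well-known form Δb(k) ∝ f_NL (b_g − 1) / k². A schematic sketch (the transfer function and growth factor are set to unity here, so the overall normalization is illustrative only, and the fiducial parameter values are assumptions, not the paper's):

```python
# fiducial values (illustrative choices, not taken from the paper)
C_KM_S = 299792.458      # speed of light [km/s]
H0 = 70.0                # Hubble constant [km/s/Mpc]
OMEGA_M = 0.3            # matter density parameter
DELTA_C = 1.686          # spherical-collapse threshold

def delta_b(k, f_nl, b_g, transfer=1.0, growth=1.0):
    """Scale-dependent halo bias correction from local-type f_NL.

    k in 1/Mpc; transfer function and growth factor default to 1 purely
    for illustration, so the amplitude is schematic. The 1/k^2 shape is
    the robust feature exploited in halo-bias constraints."""
    return (3.0 * f_nl * (b_g - 1.0) * DELTA_C * OMEGA_M * H0 ** 2
            / (C_KM_S ** 2 * k ** 2 * transfer * growth))
```

The 1/k² divergence on large scales is why the largest survey volumes drive the σ(f_NL) forecasts quoted above.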

  6. Summarizing Large-Scale Database Schema Using Community Detection

    Institute of Scientific and Technical Information of China (English)

    Xue Wang; Xuan Zhou; Shan Wang

    2012-01-01

    Schema summarization on large-scale databases is a challenge. In a typical large database schema, a great proportion of the tables are closely connected through a few high-degree tables. It is thus difficult to separate these tables into clusters that represent different topics. Moreover, as a schema can be very big, the schema summary needs to be structured into multiple levels to further improve usability. In this paper, we introduce a new schema summarization approach utilizing the techniques of community detection in social networks. Our approach contains three steps. First, we use a community detection algorithm to divide a database schema into subject groups, each representing a specific subject. Second, we cluster the subject groups into abstract domains to form a multi-level navigation structure. Third, we discover representative tables in each cluster to label the schema summary. We evaluate our approach on Freebase, a real-world large-scale database. The results show that our approach can identify subject groups precisely. The generated abstract schema layers are very helpful for users to explore the database.
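
The first step (dividing the table graph into subject groups with a community detection algorithm) can be sketched with simple label propagation over a foreign-key adjacency graph. The mini-schema below is hypothetical and the paper's actual algorithm may differ; label propagation is used here only as one representative community-detection technique:

```python
import random

def label_propagation(edges, seed=0, sweeps=100):
    """Group nodes into communities: each node repeatedly adopts the most
    common label among its neighbours (ties broken lexicographically)."""
    rng = random.Random(seed)
    nodes = sorted({n for e in edges for n in e})
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    label = {n: n for n in nodes}           # start: every table is its own group
    for _ in range(sweeps):
        changed = False
        order = nodes[:]
        rng.shuffle(order)
        for n in order:
            counts = {}
            for m in adj[n]:
                counts[label[m]] = counts.get(label[m], 0) + 1
            best = max(counts, key=lambda l: (counts[l], l))
            if best != label[n]:
                label[n] = best
                changed = True
        if not changed:
            break
    return label

# hypothetical mini-schema: two unrelated subject areas linked by foreign keys
edges = [("user", "order"), ("order", "payment"), ("user", "profile"),
         ("film", "actor"), ("film", "genre"), ("actor", "genre")]
groups = label_propagation(edges)
```

Each resulting label set is a candidate subject group; clustering the groups and picking representative tables would follow as in the paper's later steps.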

  7. Extending large-scale forest inventories to assess urban forests.

    Science.gov (United States)

    Corona, Piermaria; Agrimi, Mariagrazia; Baffetta, Federica; Barbati, Anna; Chiriacò, Maria Vincenza; Fattorini, Lorenzo; Pompei, Enrico; Valentini, Riccardo; Mattioli, Walter

    2012-03-01

    Urban areas are continuously expanding today, extending their influence over an increasingly large proportion of woods and trees located in or near urban and urbanizing areas, the so-called urban forests. Although these forests have the potential to significantly improve the quality of the urban environment and the well-being of the urban population, data to quantify the extent and characteristics of urban forests are still lacking or fragmentary on a large scale. In this regard, an expansion of the domain of multipurpose forest inventories like National Forest Inventories (NFIs) towards urban forests would be required. To this end, it would be convenient to exploit the same sampling scheme applied in NFIs to assess the basic features of urban forests. This paper considers approximately unbiased estimators of the abundance and coverage of urban forests, together with estimators of the corresponding variances, which can be obtained from the first phase of most large-scale forest inventories. A simulation study is carried out in order to check the performance of the considered estimators under various situations involving the spatial distribution of urban forests over the study area. An application is worked out on data from the Italian NFI.
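
For a systematic grid of first-phase sample points, the coverage estimator mentioned above reduces to the familiar sample proportion with its usual unbiased variance estimator. A minimal sketch (the point counts are invented, and the paper's estimators may include further corrections):

```python
def coverage_estimate(hits, n):
    """Estimate land coverage as the proportion of n first-phase sample
    points falling in urban forest, with variance estimator p(1-p)/(n-1)."""
    p = hits / n
    var = p * (1.0 - p) / (n - 1)
    return p, var

# hypothetical first phase: 1000 photo-interpreted points, 120 on urban forest
p_hat, v_hat = coverage_estimate(120, 1000)
```

Multiplying p_hat by the total study-area extent turns the coverage proportion into an abundance (area) estimate.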

  8. Power suppression at large scales in string inflation

    Energy Technology Data Exchange (ETDEWEB)

    Cicoli, Michele [Dipartimento di Fisica ed Astronomia, Università di Bologna, via Irnerio 46, Bologna, 40126 (Italy); Downes, Sean; Dutta, Bhaskar, E-mail: mcicoli@ictp.it, E-mail: sddownes@physics.tamu.edu, E-mail: dutta@physics.tamu.edu [Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX, 77843-4242 (United States)

    2013-12-01

    We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters.

  9. Large Scale and Performance tests of the ATLAS Online Software

    Institute of Scientific and Technical Information of China (English)

    Alexandrov; H. Wolters; et al.

    2001-01-01

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system. Feedback is received and returned into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs in a configuration which is approaching the final size. Large scale and performance tests of the integrated system were performed on this setup, with emphasis on investigating the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from the other Trigger/DAQ sub-systems was emulated. This paper presents a brief overview of the online system structure, its components, and the large scale integration tests and their results.

  10. Power suppression at large scales in string inflation

    International Nuclear Information System (INIS)

    We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters

  11. State-of-the-art of large scale biogas plants

    International Nuclear Information System (INIS)

    A survey of the technological state of large scale biogas plants in Europe treating manure is given. 83 plants are in operation at present. Of these, 16 are centralised digestion plants. Transport costs at centralised digestion plants amount to between 25 and 40 percent of the total operational costs. Various transport equipment is used. Most large scale digesters are CSTRs, but serial, contact, 2-step, and plug-flow digesters are also found. Construction materials are mostly steel and concrete. Mesophilic digestion is most common (56%), thermophilic digestion is used in 17% of the plants, and combined mesophilic and thermophilic digestion is used in 28% of the centralised plants. Mixing of digester content is performed with gas injection, propellers, and gas-liquid displacement. Heating is carried out using external or internal heat exchangers. Heat recovery is used only in Denmark. Gas purification equipment is commonplace, but not often needed. Several plants use separation of the digested manure, often as part of a post-treatment/purification process or for the production of 'compost'. Screens, sieve belt separators, centrifuges and filter presses are employed. The use of biogas varies considerably. In some cases, combined heat and power stations supply the grid and district heating systems. Other plants use only the electricity or only the heat. (au)

  12. The effective field theory of cosmological large scale structures

    Energy Technology Data Exchange (ETDEWEB)

    Carrasco, John Joseph M. [Stanford Univ., Stanford, CA (United States); Hertzberg, Mark P. [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States); Senatore, Leonardo [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s^2 ≈ 10^-6 c^2 and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)^4. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≃ 0.24 h Mpc^-1.
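The counterterm structure quoted in the abstract can be sketched numerically. The snippet below applies the leading speed-of-sound correction, P_EFT(k) = P_SPT(k) - 2 c_s^2 k^2 P_lin(k), to a stand-in one-loop result; the power-law `p_lin` and the value of `cs2` are illustrative toys, not the measured spectrum or the fitted parameter.

```python
# Sketch of the leading EFT counterterm correction to the matter power
# spectrum.  The linear spectrum below is a toy power law (arbitrary
# normalization), and cs2 is an illustrative coefficient, not the fitted
# c_s^2 ~ 1e-6 c^2 quoted in the abstract.

def p_lin(k):
    """Toy linear power spectrum (hypothetical shape for illustration)."""
    return k / (1.0 + (k / 0.02) ** 2) ** 1.3

def p_eft(k, p_spt, cs2=1.0):
    """Apply the speed-of-sound counterterm to a one-loop SPT prediction:
    P_EFT(k) = P_SPT(k) - 2 * cs2 * k^2 * P_lin(k)."""
    return p_spt - 2.0 * cs2 * k ** 2 * p_lin(k)

k = 0.1
p_spt = 1.05 * p_lin(k)          # stand-in for the SPT one-loop result
print(p_eft(k, p_spt, cs2=0.5))  # counterterm removes spurious UV power
```

Because the counterterm scales as k^2, its relative importance grows toward shorter scales, which is why the corrected prediction stays accurate up to higher k than plain perturbation theory.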

  13. Large-scale mapping of mutations affecting zebrafish development

    Directory of Open Access Journals (Sweden)

    Neuhauss Stephan C

    2007-01-01

    Full Text Available Abstract Background Large-scale mutagenesis screens in the zebrafish employing the mutagen ENU have isolated several hundred mutant loci that represent putative developmental control genes. In order to realize the potential of such screens, systematic genetic mapping of the mutations is necessary. Here we report on a large-scale effort to map the mutations generated in mutagenesis screening at the Max Planck Institute for Developmental Biology by genome scanning with microsatellite markers. Results We have selected a set of microsatellite markers and developed methods and scoring criteria suitable for efficient, high-throughput genome scanning. We have used these methods to successfully obtain a rough map position for 319 mutant loci from the Tübingen I mutagenesis screen and subsequent screening of the mutant collection. For 277 of these the corresponding gene is not yet identified. Mapping was successful for 80 % of the tested loci. By comparing 21 mutation and gene positions of cloned mutations we have validated the correctness of our linkage group assignments and estimated the standard error of our map positions to be approximately 6 cM. Conclusion By obtaining rough map positions for over 300 zebrafish loci with developmental phenotypes, we have generated a dataset that will be useful not only for cloning of the affected genes, but also to suggest allelism of mutations with similar phenotypes that will be identified in future screens. Furthermore this work validates the usefulness of our methodology for rapid, systematic and inexpensive microsatellite mapping of zebrafish mutations.
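Rough map positions like those reported above come from converting recombination fractions between a mutation and a microsatellite marker into genetic distance. A minimal sketch using the Haldane map function, one standard choice; the screen's actual scoring pipeline is not specified here, and the counts below are hypothetical:

```python
import math

def haldane_cM(r):
    """Genetic distance in centimorgans from recombination fraction r (< 0.5),
    via the Haldane map function d = -50 * ln(1 - 2r)."""
    return -50.0 * math.log(1.0 - 2.0 * r)

def recomb_fraction(n_recombinant, n_total):
    """Observed recombination fraction between mutation and marker."""
    return n_recombinant / n_total

# Hypothetical example: 9 recombinants among 100 scored meioses.
r = recomb_fraction(9, 100)
print(round(haldane_cM(r), 1))  # -> 9.9 (i.e. roughly 10 cM from the marker)
```

A standard error of ~6 cM, as estimated in the abstract, thus corresponds to an uncertainty of a few percent in the recombination fraction at these distances.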

  14. Large scale petroleum reservoir simulation and parallel preconditioning algorithms research

    Institute of Scientific and Technical Information of China (English)

    SUN Jiachang; CAO Jianwen

    2004-01-01

    Solving large scale linear systems efficiently plays an important role in a petroleum reservoir simulator, and the key part is how to choose an effective parallel preconditioner. Properly choosing a good preconditioner has gone beyond the purely algebraic field. An integrated preconditioner should include such components as the physical background, characteristics of the PDE mathematical model, the nonlinear solving method, the linear solving algorithm, domain decomposition and parallel computation. We first discuss some parallel preconditioning techniques, and then construct an integrated preconditioner which is based on large-scale distributed parallel processing and oriented toward reservoir simulation. The infrastructure of this preconditioner contains such well-known preconditioning construction techniques as coarse grid correction, constraint residual correction and subspace projection correction. We essentially use multi-step means to integrate a total of eight types of preconditioning components in order to produce the final preconditioner. Million-grid-cell scale industrial reservoir data were tested on native high-performance computers. Numerical statistics and analyses show that this preconditioner achieves satisfactory parallel efficiency and acceleration.
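One of the ingredients named above, preconditioned Krylov iteration, can be illustrated in miniature. The sketch below runs Jacobi-preconditioned conjugate gradients on a 1D Poisson stencil standing in for a reservoir pressure system; the real integrated preconditioner combines several further components (coarse-grid and constraint-residual corrections) not shown here.

```python
def cg(A_mul, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients: solve A x = b, applying a
    preconditioner M_inv (here simple Jacobi) at every iteration."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual r = b - A x (x starts at 0)
    z = M_inv(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = A_mul(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = M_inv(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Toy 1D Poisson stencil [-1, 2, -1] as a stand-in for a pressure system.
n = 8
def A_mul(v):
    return [2 * v[i] - (v[i - 1] if i else 0) - (v[i + 1] if i < n - 1 else 0)
            for i in range(n)]

def jacobi(r):
    return [ri / 2.0 for ri in r]  # diagonal of A is 2 everywhere

x = cg(A_mul, [1.0] * n, jacobi)
print(round(x[0], 4))  # exact solution is x_i = i*(n+1-i)/2, so x[0] -> 4.0
```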

  15. A Novel Approach Towards Large Scale Cross-Media Retrieval

    Institute of Scientific and Technical Information of China (English)

    Bo Lu; Guo-Ren Wang; Ye Yuan

    2012-01-01

    With the rapid development of Internet and multimedia technology, cross-media retrieval aims to retrieve all the related media objects with multi-modality by submitting a query media object. Unfortunately, the complexity and the heterogeneity of multi-modality have posed the following two major challenges for cross-media retrieval: 1) how to construct a unified and compact model for media objects with multi-modality, and 2) how to improve the performance of retrieval for large scale cross-media databases. In this paper, we propose a novel method which is dedicated to solving these issues to achieve effective and accurate cross-media retrieval. Firstly, a multi-modality semantic relationship graph (MSRG) is constructed using the semantic correlation amongst the media objects with multi-modality. Secondly, all the media objects in MSRG are mapped onto an isomorphic semantic space. Further, an efficient indexing MK-tree based on heterogeneous data distribution is proposed to manage the media objects within the semantic space and improve the performance of cross-media retrieval. Extensive experiments on real large scale cross-media datasets indicate that our proposal dramatically improves the accuracy and efficiency of cross-media retrieval, outperforming the existing methods significantly.

  16. What determines large scale clustering: halo mass or environment?

    CERN Document Server

    Pujol, Arnau; Jiménez, Noelia; Gaztañaga, Enrique

    2015-01-01

    We study the large scale halo bias b as a function of the environment (defined here as the background dark matter density fluctuation, d) and show that environment, and not halo mass m, is the main cause of large scale clustering. More massive haloes have a higher clustering because they live in denser regions, while low mass haloes can be found in a wide range of environments, and hence they have a lower clustering. Using a Halo Occupation Distribution (HOD) test, we can predict b(m) from b(d), but we cannot predict b(d) from b(m), which shows that environment is more fundamental for bias than mass. This has implications for the HOD model interpretation of the galaxy clustering, since when a galaxy selection is affected by environment, the standard HOD implementation fails. We show that the effects of environment are very important for colour selected samples in semi-analytic models of galaxy formation. In these cases, bias can be better recovered if we use environmental density instead of mass as the HOD va...
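The large-scale bias b itself is commonly estimated as the ratio of the halo-matter cross power to the matter auto power. A toy real-space version of that estimator, with hypothetical overdensity samples (the paper's measurements use Fourier-space spectra from simulations):

```python
def bias_estimator(delta_h, delta_m):
    """Linear bias estimate b = <d_h d_m> / <d_m d_m> from paired samples
    of halo and matter overdensity fields."""
    cross = sum(h * m for h, m in zip(delta_h, delta_m)) / len(delta_m)
    auto = sum(m * m for m in delta_m) / len(delta_m)
    return cross / auto

# Hypothetical fields where haloes trace matter with bias 2:
dm = [0.1, -0.2, 0.3, -0.1, 0.05]
dh = [2.0 * d for d in dm]
print(bias_estimator(dh, dm))  # -> 2.0
```

In the abstract's language, computing this estimator in bins of environment density d rather than halo mass m is what distinguishes b(d) from b(m).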

  17. Large-Scale Mass Distribution in the Illustris-Simulation

    CERN Document Server

    Haider, Markus; Vogelsberger, Mark; Genel, Shy; Springel, Volker; Torrey, Paul; Hernquist, Lars

    2015-01-01

    Observations at low redshifts thus far fail to account for all of the baryons expected in the Universe according to cosmological constraints. A large fraction of the baryons presumably resides in a thin and warm-hot medium between the galaxies, where they are difficult to observe due to their low densities and high temperatures. Cosmological simulations of structure formation can be used to verify this picture and provide quantitative predictions for the distribution of mass in different large-scale structure components. Here we study the distribution of baryons and dark matter at different epochs using data from the Illustris Simulation. We identify regions of different dark matter density with the primary constituents of large-scale structure, allowing us to measure mass and volume of haloes, filaments and voids. At redshift zero, we find that 49 % of the dark matter and 23 % of the baryons are within haloes. The filaments of the cosmic web host a further 45 % of the dark matter and 46 % of the baryons. The...

  18. Topographically Engineered Large Scale Nanostructures for Plasmonic Biosensing

    Science.gov (United States)

    Xiao, Bo; Pradhan, Sangram K.; Santiago, Kevin C.; Rutherford, Gugu N.; Pradhan, Aswini K.

    2016-04-01

    We demonstrate that a nanostructured metal thin film can achieve enhanced transmission efficiency and sharp resonances and use a large-scale and high-throughput nanofabrication technique for the plasmonic structures. The fabrication technique combines the features of nanoimprint and soft lithography to topographically construct metal thin films with nanoscale patterns. Metal nanogratings developed using this method show significantly enhanced optical transmission (up to a one-order-of-magnitude enhancement) and sharp resonances with full width at half maximum (FWHM) of ~15 nm in the zero-order transmission using an incoherent white light source. These nanostructures are sensitive to the surrounding environment, and the resonance can shift as the refractive index changes. We derive an analytical method using a spatial Fourier transformation to understand the enhancement phenomenon and the sensing mechanism. The use of real-time monitoring of protein-protein interactions in microfluidic cells integrated with these nanostructures is demonstrated to be effective for biosensing. The perpendicular transmission configuration and large-scale structures provide a feasible platform without sophisticated optical instrumentation to realize label-free surface plasmon resonance (SPR) sensing.
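The resonance shift with refractive index can be estimated from the usual grating-coupling condition for surface plasmons: at normal incidence the m-th order resonance sits near λ ≈ (d/m)·Re[√(ε_m ε_d/(ε_m + ε_d))]. The permittivities and period below are illustrative stand-ins, not the paper's measured values:

```python
import cmath

def spr_wavelength(period_nm, m, eps_metal, eps_dielectric):
    """Grating-coupled SPR resonance wavelength at normal incidence:
    lambda ~ (d/m) * Re(sqrt(em*ed / (em + ed)))."""
    n_eff = cmath.sqrt(eps_metal * eps_dielectric /
                       (eps_metal + eps_dielectric))
    return period_nm / m * n_eff.real

# Illustrative values: a gold-like permittivity (~ -12 + 1j) in water (~1.77),
# 450 nm grating period, first diffraction order.
w = spr_wavelength(450.0, 1, -12 + 1j, 1.77)
print(round(w, 1))  # resonance in the visible/near-IR for these toy numbers
```

Raising the dielectric permittivity (a higher-index analyte) increases the effective index and red-shifts the resonance, which is the sensing mechanism exploited for the label-free protein measurements.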

  19. Large scale and performance tests of the ATLAS online software

    International Nuclear Information System (INIS)

    One of the sub-systems of the Trigger/DAQ system of the future ATLAS experiment is the Online Software system. It encompasses the functionality needed to configure, control and monitor the DAQ. Its architecture is based on a component structure described in the ATLAS Trigger/DAQ technical proposal. Regular integration tests ensure its smooth operation in test beam setups during its evolutionary development towards the final ATLAS online system. Feedback is received and returned into the development process. Studies of the system behavior have been performed on a set of up to 111 PCs on a configuration which is getting closer to the final size. Large scale and performance test of the integrated system were performed on this setup with emphasis on investigating the aspects of the inter-dependence of the components and the performance of the communication software. Of particular interest were the run control state transitions in various configurations of the run control hierarchy. For the purpose of the tests, the software from other Trigger/DAQ sub-systems has been emulated. The author presents a brief overview of the online system structure, its components and the large scale integration tests and their results

  20. Halo detection via large-scale Bayesian inference

    Science.gov (United States)

    Merson, Alexander I.; Jasche, Jens; Abdalla, Filipe B.; Lahav, Ofer; Wandelt, Benjamin; Jones, D. Heath; Colless, Matthew

    2016-08-01

    We present a proof-of-concept of a novel and fully Bayesian methodology designed to detect haloes of different masses in cosmological observations subject to noise and systematic uncertainties. Our methodology combines the previously published Bayesian large-scale structure inference algorithm, HAmiltonian Density Estimation and Sampling algorithm (HADES), and a Bayesian chain rule (the Blackwell-Rao estimator), which we use to connect the inferred density field to the properties of dark matter haloes. To demonstrate the capability of our approach, we construct a realistic galaxy mock catalogue emulating the wide-area 6-degree Field Galaxy Survey, which has a median redshift of approximately 0.05. Application of HADES to the catalogue provides us with accurately inferred three-dimensional density fields and corresponding quantification of uncertainties inherent to any cosmological observation. We then use a cosmological simulation to relate the amplitude of the density field to the probability of detecting a halo with mass above a specified threshold. With this information, we can sum over the HADES density field realisations to construct maps of detection probabilities and demonstrate the validity of this approach within our mock scenario. We find that the probability of successful detection of haloes in the mock catalogue increases as a function of the signal to noise of the local galaxy observations. Our proposed methodology can easily be extended to account for more complex scientific questions and is a promising novel tool to analyse the cosmic large-scale structure in observations.
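The Blackwell-Rao-style step, averaging a per-voxel detection probability over posterior density-field samples, can be sketched as follows; the Gaussian posterior draws and the linear calibration function are hypothetical stand-ins for the HADES output and the simulation-calibrated relation:

```python
import random

def detection_probability(realisations, prob_given_density):
    """Average a per-realisation detection probability over posterior
    samples of the density field (Blackwell-Rao style marginalisation)."""
    return sum(prob_given_density(d) for d in realisations) / len(realisations)

def calib(delta):
    """Hypothetical simulation-calibrated P(halo above threshold | density),
    rising linearly with local overdensity and clipped to [0, 1]."""
    return min(1.0, max(0.0, 0.5 * (1.0 + delta)))

random.seed(0)
# Stand-in posterior draws of the density contrast at one voxel:
samples = [random.gauss(0.4, 0.2) for _ in range(1000)]
p = detection_probability(samples, calib)
print(round(p, 2))  # close to calib(mean) = 0.5 * (1 + 0.4) = 0.7
```

Repeating this average voxel by voxel yields the detection-probability maps described in the abstract, with the spread of the posterior samples propagating observational uncertainty into the map.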

  1. Modelling large-scale halo bias using the bispectrum

    CERN Document Server

    Pollack, Jennifer E; Porciani, Cristiano

    2011-01-01

    We study the relation between the halo and matter density fields -- commonly termed bias -- in the LCDM framework. In particular, we examine the local model of biasing at quadratic order in matter density. This model is characterized by parameters b_1 and b_2. Using an ensemble of N-body simulations, we apply several statistical methods to estimate the parameters. We measure halo and matter fluctuations smoothed on various scales and find that the parameters vary with smoothing scale. We argue that, for real-space measurements, owing to the mixing of wavemodes, no scale can be found for which the parameters are independent of smoothing. However, this is not the case in Fourier space. We measure halo power spectra and construct estimates for an effective large-scale bias. We measure the configuration dependence of the halo bispectra B_hhh and reduced bispectra Q_hhh for very large-scale k-space triangles. From this we constrain b_1 and b_2. Using the lowest-order perturbation theory, we find that for B_hhh the...

  2. Large-scale direct shear testing of geocell reinforced soil

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The tests on the shear properties of geocell-reinforced soils were carried out by using large-scale direct shear equipment with shear-box dimensions of 500 mm×500 mm×400 mm (length×width×height). Three types of specimens, silty gravel soil, geocell-reinforced silty gravel soil and geocell-reinforced cement-stabilized silty gravel soil, were used to investigate the shear stress-displacement behavior, the shear strength and the strengthening mechanism of geocell-reinforced soils. Comparisons of the large-scale shear test with the triaxial compression test for the same type of soil were conducted to evaluate the influence of the testing method on the shear strength as well. The test results show that the unreinforced soil and the geocell-reinforced soil exhibit similar nonlinear features in the behavior of shear stress and displacement. The geocell-reinforced cement-stabilized soil has a quasi-elastic characteristic in the case of normal stress coming up to 1.0 GPa. The tests with the reinforcement of geocell result in an increase of 244% in cohesion, and the tests with the geocell and the cement stabilization result in an increase of 10 times in cohesion compared with the unreinforced soil. The friction angle does not change markedly. The geocell reinforcement develops a large amount of cohesion in the shear strength of soils.
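Cohesion and friction angle of the kind reported above follow from fitting the Mohr-Coulomb envelope τ = c + σ·tan φ to shear-strength data. A minimal least-squares fit with illustrative stress pairs (hypothetical numbers, not the paper's data):

```python
import math

def mohr_coulomb_fit(sigma, tau):
    """Least-squares fit of tau = c + sigma * tan(phi) to (normal stress,
    shear strength) pairs; returns cohesion c and friction angle in degrees."""
    n = len(sigma)
    sx, sy = sum(sigma), sum(tau)
    sxx = sum(s * s for s in sigma)
    sxy = sum(s * t for s, t in zip(sigma, tau))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - slope * sx) / n
    return c, math.degrees(math.atan(slope))

# Illustrative normal/shear stress pairs (kPa) from three shear tests:
sigma = [100.0, 200.0, 300.0]
tau = [95.0, 160.0, 225.0]
c, phi = mohr_coulomb_fit(sigma, tau)
print(round(c, 1), round(phi, 1))  # -> 30.0 33.0 (cohesion kPa, phi degrees)
```

The abstract's finding, cohesion up 244% with geocell while the friction angle barely moves, corresponds to the fitted intercept c rising while the slope tan φ stays nearly constant.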

  3. Systematic renormalization of the effective theory of Large Scale Structure

    Science.gov (United States)

    Akbar Abolhasani, Ali; Mirbabayi, Mehrdad; Pajer, Enrico

    2016-05-01

    A perturbative description of Large Scale Structure is a cornerstone of our understanding of the observed distribution of matter in the universe. Renormalization is an essential and defining step to make this description physical and predictive. Here we introduce a systematic renormalization procedure, which neatly associates counterterms to the UV-sensitive diagrams order by order, as it is commonly done in quantum field theory. As a concrete example, we renormalize the one-loop power spectrum and bispectrum of both density and velocity. In addition, we present a series of results that are valid to all orders in perturbation theory. First, we show that while systematic renormalization requires temporally non-local counterterms, in practice one can use an equivalent basis made of local operators. We give an explicit prescription to generate all counterterms allowed by the symmetries. Second, we present a formal proof of the well-known general argument that the contribution of short distance perturbations to large scale density contrast δ and momentum density π(k) scale as k^2 and k, respectively. Third, we demonstrate that the common practice of introducing counterterms only in the Euler equation when one is interested in correlators of δ is indeed valid to all orders.

  4. ANTITRUST ISSUES IN THE LARGE-SCALE FOOD DISTRIBUTION SECTOR

    Directory of Open Access Journals (Sweden)

    Enrico Adriano Raffaelli

    2014-12-01

    Full Text Available In light of the slow modernization of the Italian large-scale food distribution sector, of the fragmentation at national level, of the significant roles of the cooperatives at local level and of the alliances between food retail chains, the ICA during the recent years has developed a strong interest in this sector. After having analyzed the peculiarities of the Italian large-scale food distribution sector, this article shows the recent approach taken by the ICA toward the main antitrust issues in this sector. In the analysis of such issues, mainly the contractual relations between the GDO retailers and their suppliers, the introduction of Article 62 of Law no. 27 dated 24th March 2012 is crucial, because, by facilitating and encouraging complaints by the interested parties, it should allow the developing of normal competitive dynamics within the food distribution sector, where companies should be free to enter the market using the tools at their disposal, without undue restrictions.

  5. Nearly incompressible fluids: hydrodynamics and large scale inhomogeneity.

    Science.gov (United States)

    Hunana, P; Zank, G P; Shaikh, D

    2006-08-01

    A system of hydrodynamic equations in the presence of large-scale inhomogeneities for a high plasma beta solar wind is derived. The theory is derived under the assumption of low turbulent Mach number and is developed for the flows where the usual incompressible description is not satisfactory and a full compressible treatment is too complex for any analytical studies. When the effects of compressibility are incorporated only weakly, a new description, referred to as "nearly incompressible hydrodynamics," is obtained. The nearly incompressible theory was originally applied to homogeneous flows. However, large-scale gradients in density, pressure, temperature, etc., are typical in the solar wind and it was unclear how inhomogeneities would affect the usual incompressible and nearly incompressible descriptions. In the homogeneous case, the lowest order expansion of the fully compressible equations leads to the usual incompressible equations, followed at higher orders by the nearly incompressible equations, as introduced by Zank and Matthaeus. With this work we show that the inclusion of large-scale inhomogeneities (in this case time-independent and radially symmetric background solar wind) modifies the leading-order incompressible description of solar wind flow. We find, for example, that the divergence of velocity fluctuations is nonsolenoidal and that density fluctuations can be described to leading order as a passive scalar. Locally (for small lengthscales), this system of equations converges to the usual incompressible equations and we therefore use the term "locally incompressible" to describe the equations. This term should be distinguished from the term "nearly incompressible," which is reserved for higher-order corrections. Furthermore, we find that density fluctuations scale with Mach number linearly, in contrast to the original homogeneous nearly incompressible theory, in which density fluctuations scale with the square of Mach number.
Inhomogeneous nearly

  6. Modeling dynamic functional information flows on large-scale brain networks.

    Science.gov (United States)

    Lv, Peili; Guo, Lei; Hu, Xintao; Li, Xiang; Jin, Changfeng; Han, Junwei; Li, Lingjiang; Liu, Tianming

    2013-01-01

    Growing evidence from the functional neuroimaging field suggests that human brain functions are realized via dynamic functional interactions on large-scale structural networks. Even in resting state, functional brain networks exhibit remarkable temporal dynamics. However, it has rarely been explored to computationally model such dynamic functional information flows on large-scale brain networks. In this paper, we present a novel computational framework to explore this problem using multimodal resting state fMRI (R-fMRI) and diffusion tensor imaging (DTI) data. Basically, recent literature reports including our own studies have demonstrated that the resting state brain networks dynamically undergo a set of distinct brain states. Within each quasi-stable state, functional information flows from one set of structural brain nodes to other sets of nodes, which is analogous to message package routing on the Internet from a source node to a destination. Therefore, based on the large-scale structural brain networks constructed from DTI data, we employ a dynamic programming strategy to infer functional information transition routines on structural networks, based on which hub routers that most frequently participate in these routines are identified. It is interesting that a majority of those hub routers are located within the default mode network (DMN), revealing a possible mechanism of the critical functional hub roles played by the DMN in resting state. Also, application of this framework on a post-traumatic stress disorder (PTSD) dataset demonstrated interesting differences in hub router distributions between PTSD patients and healthy controls. PMID:24579202
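The routing idea, finding paths on the structural network and counting how often each node relays them, can be sketched with breadth-first search on a toy unweighted graph; the paper's dynamic-programming formulation on DTI-derived networks is more elaborate, and the graph below is purely hypothetical:

```python
from collections import deque, Counter

def shortest_path(graph, src, dst):
    """BFS shortest path on an unweighted structural network."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in graph[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def hub_routers(graph, routes):
    """Count how often each node relays traffic between route endpoints."""
    hits = Counter()
    for s, d in routes:
        for node in shortest_path(graph, s, d)[1:-1]:
            hits[node] += 1
    return hits

# Toy star network: every route between peripheral nodes passes through H.
g = {"A": ["H"], "B": ["H"], "C": ["H"], "D": ["H"],
     "H": ["A", "B", "C", "D"]}
print(hub_routers(g, [("A", "B"), ("A", "D"), ("C", "B")]).most_common(1))
```

Nodes with the highest relay counts play the "hub router" role; in the study, those turn out to concentrate in the default mode network.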

  7. Rolling up of Large-scale Laminar Vortex Ring from Synthetic Jet Impinging onto a Wall

    Science.gov (United States)

    Xu, Yang; Pan, Chong; Wang, Jinjun; Flow Control Lab Team

    2015-11-01

    Vortex ring impinging onto a wall exhibits a wide range of interesting behaviors. The present work is devoted to an experimental investigation of a series of small-scale vortex rings impinging onto a wall. These laminar vortex rings were generated by a piston-cylinder driven synthetic jet in a water tank. Laser Induced Fluorescence (LIF) and Particle Image Velocimetry (PIV) were used for flow visualization/quantification. A special scenario of vortex dynamics was found for the first time: a large-scale laminar vortex ring is formed above the wall, on the outboard side of the jet. This large-scale structure is topologically stable and continuously grows in strength and size over time, thus dominating the dynamics of the near-wall flow. To quantify its spatial/temporal characteristics, Finite-Time Lyapunov Exponent (FTLE) fields were calculated from the PIV velocity fields. It is shown that the flow pattern revealed by the FTLE fields is similar to the visualization. The size of this large-scale vortex ring can be up to one order of magnitude larger than the jet vortices, and its rolling-up speed and entrainment strength are correlated with the constant vorticity flux issued from the jet. This work was supported by the National Natural Science Foundation of China (Grants No. 11202015 and 11327202).

  8. Debottlenecking recombinant protein production in Bacillus megaterium under large-scale conditions--targeted precursor feeding designed from metabolomics.

    Science.gov (United States)

    Korneli, Claudia; Bolten, Christoph Josef; Godard, Thibault; Franco-Lara, Ezequiel; Wittmann, Christoph

    2012-06-01

    In the present work the impact of large production scale was investigated for Bacillus megaterium expressing green fluorescent protein (GFP). Specifically designed scale-down studies, mimicking the intermittent and continuous nutrient supply of large- and small-scale processes, were carried out for this purpose. The recombinant strain revealed a 40% reduced GFP yield under the large-scale conditions. In line with extended carbon loss via formation of acetate and carbon dioxide, this indicated obvious limitations in the underlying metabolism of B. megaterium under the large-scale conditions. Quantitative analysis of intracellular amino acids via validated fast-filtration protocols revealed that their levels strongly differed between the two scenarios. During cultivation in the large-scale set-up, the availability of most amino acids, serving as key building blocks of the recombinant protein, was substantially reduced. This was most pronounced for tryptophan, aspartate, histidine, glutamine, and lysine. In contrast, alanine was increased, probably related to a bottleneck at the level of pyruvate which also triggered acetate overflow metabolism. The precursor quantifications could then be exploited to verify the presumed bottlenecks and improve recombinant protein production under large-scale conditions. Addition of only 5 mM tryptophan, aspartate, histidine, glutamine, and lysine to the feed solution increased the GFP yield by 100%. This rational concept of improving the lab-scale productivity of recombinant microorganisms under suboptimal feeding conditions emulating large scale can easily be extended to other processes and production hosts. PMID:22252649

  9. Anisotropic shrinkage of insect air sacs revealed in vivo by X-ray microtomography

    Science.gov (United States)

    Xu, Liang; Chen, Rongchang; Du, Guohao; Yang, Yiming; Wang, Feixiang; Deng, Biao; Xie, Honglan; Xiao, Tiqiao

    2016-09-01

    Air sacs are thought to be the bellows for insect respiration. However, their exact mechanism of action as a bellows remains unclear. A direct way to investigate this problem is in vivo observation of the changes in their three-dimensional structures. Therefore, four-dimensional X-ray phase contrast microtomography is employed to solve this puzzle. Quantitative analysis of three-dimensional image series reveals that the compression of the air sac during respiration in bell crickets exhibits obvious anisotropic characteristics both longitudinally and transversely. Volumetric changes of the tracheal trunks in the prothorax further strengthen the evidence of this finding. As a result, we conclude that the shrinkage and expansion of the insect air sac is anisotropic, contrary to the hypothesis of isotropy, thereby providing new knowledge for further research on the insect respiratory system.

  10. Anisotropic shrinkage of insect air sacs revealed in vivo by X-ray microtomography

    Science.gov (United States)

    Xu, Liang; Chen, Rongchang; Du, Guohao; Yang, Yiming; Wang, Feixiang; Deng, Biao; Xie, Honglan; Xiao, Tiqiao

    2016-01-01

    Air sacs are thought to be the bellows for insect respiration. However, their exact mechanism of action as a bellows remains unclear. A direct way to investigate this problem is in vivo observation of the changes in their three-dimensional structures. Therefore, four-dimensional X-ray phase contrast microtomography is employed to solve this puzzle. Quantitative analysis of three-dimensional image series reveals that the compression of the air sac during respiration in bell crickets exhibits obvious anisotropic characteristics both longitudinally and transversely. Volumetric changes of the tracheal trunks in the prothorax further strengthen the evidence of this finding. As a result, we conclude that the shrinkage and expansion of the insect air sac is anisotropic, contrary to the hypothesis of isotropy, thereby providing new knowledge for further research on the insect respiratory system. PMID:27580585

  11. Climatological context for large-scale coral bleaching

    Science.gov (United States)

    Barton, A. D.; Casey, K. S.

    2005-12-01

    Large-scale coral bleaching was first observed in 1979 and has occurred throughout virtually all of the tropics since that time. Severe bleaching may result in the loss of live coral and in a decline of the integrity of the impacted coral reef ecosystem. Despite the extensive scientific research and increased public awareness of coral bleaching, uncertainties remain about the past and future of large-scale coral bleaching. In order to reduce these uncertainties and place large-scale coral bleaching in the longer-term climatological context, specific criteria and methods for using historical sea surface temperature (SST) data to examine coral bleaching-related thermal conditions are proposed by analyzing three, 132 year SST reconstructions: ERSST, HadISST1, and GISST2.3b. These methodologies are applied to case studies at Discovery Bay, Jamaica (77.27°W, 18.45°N), Sombrero Reef, Florida, USA (81.11°W, 24.63°N), Academy Bay, Galápagos, Ecuador (90.31°W, 0.74°S), Pearl and Hermes Reef, Northwest Hawaiian Islands, USA (175.83°W, 27.83°N), Midway Island, Northwest Hawaiian Islands, USA (177.37°W, 28.25°N), Davies Reef, Australia (147.68°E, 18.83°S), and North Male Atoll, Maldives (73.35°E, 4.70°N). The results of this study show that (1) The historical SST data provide a useful long-term record of thermal conditions in reef ecosystems, giving important insight into the thermal history of coral reefs and (2) While coral bleaching and anomalously warm SSTs have occurred over much of the world in recent decades, case studies in the Caribbean, Northwest Hawaiian Islands, and parts of other regions such as the Great Barrier Reef exhibited SST conditions and cumulative thermal stress prior to 1979 that were comparable to those conditions observed during the strong, frequent coral bleaching events since 1979. This climatological context and knowledge of past environmental conditions in reef ecosystems may foster a better understanding of how coral reefs will
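Cumulative thermal stress of the kind analyzed here is typically computed by summing SST excesses above a bleaching threshold over a fixed window, in the spirit of NOAA's degree heating weeks metric. A simplified weekly version with toy numbers (the study's exact criteria and 132-year reconstructions are not reproduced):

```python
def degree_heating_weeks(sst_weekly, bleaching_threshold):
    """Simplified DHW-like metric: sum of weekly SST excesses above the
    bleaching threshold over the most recent 12-week window (deg C-weeks)."""
    window = sst_weekly[-12:]
    return sum(max(0.0, t - bleaching_threshold) for t in window)

# Toy 12-week series; threshold taken as climatological max + 1 C = 30.0 C.
sst = [29.2, 29.5, 30.1, 30.4, 30.8, 31.0,
       30.6, 30.2, 29.9, 30.5, 30.7, 30.3]
print(round(degree_heating_weeks(sst, 30.0), 1))  # -> 4.6 deg C-weeks
```

Applying such a metric to reconstructed SST series before and after 1979 is what allows the abstract's comparison of pre- and post-1979 thermal stress at the case-study reefs.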

  12. Large-Scale Graphene Film Deposition for Monolithic Device Fabrication

    Science.gov (United States)

    Al-shurman, Khaled

    Since 1958, the concept of the integrated circuit (IC) has driven great technological developments and helped shrink electronic devices. Nowadays, an IC consists of more than a million densely packed transistors. The majority of current ICs use silicon as the semiconductor material. According to Moore's law, the number of transistors built into a microchip doubles roughly every two years. However, silicon device manufacturing is reaching its physical limits. As circuitry shrinks toward seven nanometers, many poorly understood quantum effects, such as tunneling, can no longer be controlled. Hence, there is an urgent need for a new platform material to replace Si. Graphene is considered a promising material with enormous potential for applications in many electronic and optoelectronic devices due to its superior properties. There are several techniques to produce graphene films. Among these, chemical vapor deposition (CVD) offers a very convenient method to fabricate large-scale graphene films. Though the CVD method is suitable for large-area growth of graphene, the resulting films must then be transferred to silicon-based substrates. Furthermore, the graphene films thus achieved are, in fact, not single crystalline. Also, graphene fabrication utilizing Cu and Ni at high growth temperature contaminates both the substrate that holds the Si CMOS circuitry and the CVD chamber. So, lowering the deposition temperature is another technological milestone for the successful adoption of graphene in integrated circuit fabrication. In this research, direct large-scale graphene film fabrication on silicon-based platforms (i.e. SiO2 and Si3N4) at low temperature was achieved. With a focus on low-temperature graphene growth, hot-filament chemical vapor deposition (HF-CVD) was utilized to synthesize graphene film using a 200 nm thick nickel film.
Raman spectroscopy was utilized to examine graphene formation on the bottom side of the Ni film

  13. Statistics of Caustics in Large-Scale Structure Formation

    Science.gov (United States)

    Feldbrugge, Job L.; Hidding, Johan; van de Weygaert, Rien

    2016-10-01

    The cosmic web is a complex spatial pattern of walls, filaments, cluster nodes and underdense void regions. It emerged through gravitational amplification from the Gaussian primordial density field. Here we infer analytical expressions for the spatial statistics of caustics in the evolving large-scale mass distribution. In our analysis, following the quasi-linear Zel'dovich formalism and confined to the 1D and 2D situation, we compute number density and correlation properties of caustics in cosmic density fields that evolve from Gaussian primordial conditions. The analysis can be straightforwardly extended to the 3D situation. Moreover, we are currently extending the approach to the non-linear regime of structure formation by including higher order Lagrangian approximations and Lagrangian effective field theory.
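
    The caustic condition in the quasi-linear Zel'dovich setting referenced above can be sketched as follows (standard textbook form; the paper's own notation may differ):

```latex
% Zel'dovich map from Lagrangian coordinate q to Eulerian position x,
% with D_+ the linear growth factor and \Psi the displacement potential:
x(q, t) = q - D_+(t)\, \nabla_q \Psi(q).
% Mass conservation gives the density
\rho(x, t) = \frac{\bar\rho}{\left|\det\!\left(\delta_{ij}
             - D_+(t)\, \partial_i \partial_j \Psi\right)\right|},
% so caustics (formally infinite density) occur where the map folds:
1 - D_+(t)\, \lambda_i(q) = 0,
% with \lambda_i the eigenvalues of the deformation tensor
% \partial_i \partial_j \Psi.
```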

  14. Optimal Wind Energy Integration in Large-Scale Electric Grids

    Science.gov (United States)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion, to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect electric grid reliability and economic operation: 1. congestion of transmission lines, 2. transmission line expansion, 3. large-scale wind energy integration, and 4. optimal placement of Phasor Measurement Units (PMUs) for highest electric grid observability. Congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, any expansion of transmission line capacity must be evaluated against its effect on optimal grid operation: it must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject, and next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "where" to add capacity, "how much" to add, and "at which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as it is now open to investors whose main interest is to generate revenue. Adding new transmission capacity will help the system relieve congestion, create...
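
    The kind of congestion screen the abstract refers to can be illustrated with a lossless DC power-flow approximation. The sketch below uses a hypothetical 3-bus network with made-up limits and injections; the dissertation's models are of course far larger:

```python
import numpy as np

# Hypothetical 3-bus network: lines (0-1), (0-2), (1-2), reactance 1.0 p.u. each.
lines = [(0, 1), (0, 2), (1, 2)]
x = {line: 1.0 for line in lines}          # per-line reactance (illustrative)
limit = {line: 0.5 for line in lines}      # thermal limits, p.u. (illustrative)

# Bus injections (p.u.); bus 2 acts as the slack and absorbs the imbalance.
P = np.array([1.0, -0.4, 0.0])

# Build the susceptance (Laplacian) matrix B, where P = B @ theta.
n = 3
B = np.zeros((n, n))
for (i, j), xij in x.items():
    b = 1.0 / xij
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

# Solve the reduced system with the slack angle fixed to zero.
keep = [0, 1]
theta = np.zeros(n)
theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], P[keep])

# Line flows and the congestion screen.
flow = {(i, j): (theta[i] - theta[j]) / x[(i, j)] for (i, j) in lines}
congested = [l for l in lines if abs(flow[l]) > limit[l]]
print(flow)        # line (0, 2) carries ~0.533 p.u., above its 0.5 limit
print(congested)   # -> [(0, 2)]
```

    A long-term study would repeat such screens over many demand and generation scenarios to decide where added capacity pays off.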

  15. On the Hyperbolicity of Large-Scale Networks

    CERN Document Server

    Kennedy, W Sean; Saniee, Iraj

    2013-01-01

    Through detailed analysis of scores of publicly available data sets corresponding to a wide range of large-scale networks, from communication and road networks to various forms of social networks, we explore a little-studied geometric characteristic of real-life networks, namely their hyperbolicity. In smooth geometry, hyperbolicity captures the notion of negative curvature; within the more abstract context of metric spaces, it can be generalized as δ-hyperbolicity. This generalized definition can be applied to graphs, which we explore in this report. We provide strong evidence that communication and social networks exhibit this fundamental property, and through extensive computations we quantify the degree of hyperbolicity of each network in comparison to its diameter. By contrast, and as evidence of the validity of the methodology, applying the same methods to the road networks shows that they are not hyperbolic, which is as expected. Finally, we present practical computational means for detection of hyperb...
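
    The Gromov four-point condition behind δ-hyperbolicity is easy to evaluate exactly on small graphs. A minimal sketch (brute force over all quadruples, so only practical for toy networks; the paper uses far more scalable machinery):

```python
from itertools import combinations
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src in an unweighted graph (adjacency dict)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def hyperbolicity(adj):
    """Exact worst-case Gromov four-point delta over all quadruples, O(n^4)."""
    nodes = list(adj)
    d = {u: bfs_distances(adj, u) for u in nodes}
    delta = 0.0
    for w, x, y, z in combinations(nodes, 4):
        # Sort the three pairwise distance sums; delta is half the gap
        # between the two largest.
        s = sorted([d[w][x] + d[y][z], d[w][y] + d[x][z], d[w][z] + d[x][y]])
        delta = max(delta, (s[2] - s[1]) / 2)
    return delta

def graph(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

path = graph([(i, i + 1) for i in range(5)])           # a tree
cycle = graph([(i, (i + 1) % 6) for i in range(6)])    # the 6-cycle
print(hyperbolicity(path))   # trees are 0-hyperbolic -> 0.0
print(hyperbolicity(cycle))  # cycles are "fat": C6 gives 1.0
```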

  16. Large Scale 3D Image Reconstruction in Optical Interferometry

    CERN Document Server

    Schutz, Antony; Mary, David; Thiébaut, Eric; Soulez, Ferréol

    2015-01-01

    Astronomical optical interferometers (OI) sample the Fourier transform of the intensity distribution of a source at the observation wavelength. Because of rapid atmospheric perturbations, the phases of the complex Fourier samples (visibilities) cannot be directly exploited, and instead linear relationships between the phases are used (phase closures and differential phases). Consequently, specific image reconstruction methods have been devised in the last few decades. Modern polychromatic OI instruments are now paving the way to multiwavelength imaging. This paper presents the derivation of a spatio-spectral ("3D") image reconstruction algorithm called PAINTER (Polychromatic opticAl INTErferometric Reconstruction software). The algorithm is able to solve large scale problems. It relies on an iterative process, which alternates estimation of polychromatic images and of complex visibilities. The complex visibilities are not only estimated from squared moduli and closure phases, but also from differential phase...
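
    A toy numerical illustration (not the PAINTER algorithm) of why closure phases are usable when individual phases are not: per-telescope atmospheric phase errors cancel exactly in the sum around a baseline triangle.

```python
import numpy as np

rng = np.random.default_rng(42)

# True Fourier phases on the three baselines of a telescope triangle
# (baselines 1-2, 2-3, 3-1), in radians.
phi_true = rng.uniform(-np.pi, np.pi, 3)

# Random per-telescope atmospheric phase errors e_i; an observed baseline
# phase is corrupted as phi_ij -> phi_ij + e_i - e_j.
e = rng.uniform(-np.pi, np.pi, 3)
phi_obs = np.array([
    phi_true[0] + e[0] - e[1],   # baseline 1-2
    phi_true[1] + e[1] - e[2],   # baseline 2-3
    phi_true[2] + e[2] - e[0],   # baseline 3-1
])

# The closure phase (sum around the triangle) is immune to the e_i terms,
# since (e0-e1) + (e1-e2) + (e2-e0) = 0.
closure_true = phi_true.sum()
closure_obs = phi_obs.sum()
print(abs(closure_obs - closure_true))  # -> 0 up to floating-point rounding
```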

  17. Lightweight computational steering of very large scale molecular dynamics simulations

    International Nuclear Information System (INIS)

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages

  18. Large-scale structure non-Gaussianities with modal methods

    Science.gov (United States)

    Schmittfull, Marcel

    2016-10-01

    Relying on a separable modal expansion of the bispectrum, the implementation of a fast estimator for the full bispectrum of a 3d particle distribution is presented. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3d structure formation. Our measured bispectra are determined by ~ 50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).
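
    The separable modal expansion underlying the estimator can be sketched as follows (generic notation from the modal-decomposition literature; the paper's basis and weighting may differ):

```latex
% Expand the (weighted) bispectrum in a finite basis of separable modes:
B(k_1, k_2, k_3) \approx \sum_{n} \alpha_n\, Q_n(k_1, k_2, k_3),
\qquad
Q_n(k_1, k_2, k_3) = \tfrac{1}{6}\left[ q_r(k_1)\, q_s(k_2)\, q_t(k_3)
                     + \text{5 perms} \right].
% Separability turns the triangle sum over all (k_1, k_2, k_3) into
% products of one-dimensional, FFT-friendly integrals, which is why the
% estimator's cost is negligible next to the simulation itself, and why
% ~50 coefficients \alpha_n suffice to characterize the measured shapes.
```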

  19. Hashkat: Large-scale simulations of online social networks

    CERN Document Server

    Ryczko, Kevin; Buhagiar, Nicholas; Tamblyn, Isaac

    2016-01-01

    Hashkat (http://hashkat.org) is a free, open source, agent based simulation software package designed to simulate large-scale online social networks (e.g. Twitter, Facebook, LinkedIn, etc). It allows for dynamic agent generation, edge creation, and information propagation. The purpose of hashkat is to study the growth of online social networks and how information flows within them. Like real life online social networks, hashkat incorporates user relationships, information diffusion, and trending topics. Hashkat was implemented in C++, and was designed with extensibility in mind. The software includes Shell and Python scripts for easy installation and usability. In this report, we describe all of the algorithms and features integrated into hashkat before moving on to example use cases. In general, hashkat can be used to understand the underlying topology of social networks, validate sampling methods of such networks, develop business strategy for advertising on online social networks, and test new features of ...
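
    A toy preferential-attachment sketch of the kind of follow-network growth such simulators model (illustrative only; hashkat's actual agent types, rates, and information-diffusion mechanics are far richer):

```python
import random

def grow_network(n_agents, seed=0):
    """Each arriving agent follows one existing agent, chosen with
    probability proportional to (1 + current follower count), so
    already-popular agents attract disproportionately many followers."""
    rng = random.Random(seed)
    followers = {0: 0}          # agent id -> follower count
    edges = []                  # (follower, followee) pairs
    for new in range(1, n_agents):
        candidates = list(followers)
        weights = [1 + followers[a] for a in candidates]
        target = rng.choices(candidates, weights=weights)[0]
        edges.append((new, target))
        followers[target] += 1
        followers[new] = 0
    return edges, followers

edges, followers = grow_network(200, seed=1)
print(len(edges))                 # one follow per arrival -> 199 edges
print(max(followers.values()))    # hubs emerge from the rich-get-richer rule
```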

  20. Automatic Installation and Configuration for Large Scale Farms

    CERN Document Server

    Novák, J

    2005-01-01

    Since the early appearance of commodity hardware, the utilization of computers rose rapidly, and they became essential in all areas of life. Soon it was realized that nodes could work cooperatively in order to solve new, more complex tasks. This concept materialized in coherent aggregations of computers called farms and clusters. The collective application of nodes, being efficient and economical, was soon adopted in education, research and industry. But maintenance, especially at large scale, emerged as a problem to be resolved. New challenges needed new methods and tools. Development work has been started to build farm management applications and frameworks. In the first part of the thesis, these systems are introduced. After a general description of the matter, a comparative analysis of different approaches and tools illustrates the practical aspects of the theoretical discussion. CERN, the European Organization for Nuclear Research, is the largest particle physics laboratory in the world....

  1. Towards online multiresolution community detection in large-scale networks.

    Directory of Open Access Journals (Sweden)

    Jianbin Huang

    Full Text Available The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. Many existing methods tend to be accurate only when a priori assumptions about network properties and predefined parameters hold. In this paper, we introduce a new quality function of local community and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex, or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks.
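
    The general greedy local-expansion pattern the abstract describes can be sketched as follows. The paper's own quality function is not reproduced here; plain conductance is used as a stand-in local quality measure:

```python
def conductance(adj, S, total_vol):
    """Conductance of node set S: cut edges / min(vol(S), vol(complement))."""
    vol = sum(len(adj[u]) for u in S)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    return cut / min(vol, total_vol - vol)

def local_expand(adj, seed):
    """Greedily grow a community from `seed`, adding the boundary node that
    most lowers conductance; stop when no addition improves it."""
    total_vol = sum(len(nbrs) for nbrs in adj.values())
    S = {seed}
    score = conductance(adj, S, total_vol)
    while True:
        boundary = {v for u in S for v in adj[u]} - S
        best, best_score = None, score
        for v in boundary:
            s = conductance(adj, S | {v}, total_vol)
            if s < best_score:
                best, best_score = v, s
        if best is None:
            return S
        S.add(best)
        score = best_score

# Two 4-cliques bridged by a single edge (3-4): seeding in one clique
# recovers exactly that clique, using only local information.
edges = [(a, b) for a in range(4) for b in range(a + 1, 4)]
edges += [(a, b) for a in range(4, 8) for b in range(a + 1, 8)]
edges += [(3, 4)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)
print(sorted(local_expand(adj, 0)))  # -> [0, 1, 2, 3]
```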

  2. An optimal design methodology for large-scale gas liquefaction

    International Nuclear Information System (INIS)

    Highlights: ► Configuration selection and parametric optimization carried out simultaneously for gas liquefaction systems. ► Effective Heat Transfer Factor proposed to indicate the performance of heat exchanger networks. ► Relatively high exergy efficiency of liquefaction process achievable under some general assumptions. -- Abstract: This paper presents an optimization methodology for the thermodynamic design of large scale gas liquefaction systems. Such a methodology enables configuration selection and parametric optimization to be implemented simultaneously. Exergy efficiency and a genetic algorithm have been chosen as the evaluation index and the evaluation criterion, respectively. The methodology has been applied to the design of expander cycle based liquefaction processes. Liquefaction processes for hydrogen, methane and nitrogen are selected as case studies, and the simulation results show that relatively high exergy efficiencies (52% for hydrogen and 58% for methane and nitrogen) are achievable under very general assumptions.
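
    The exergy efficiency used as the evaluation index is conventionally defined as the ratio of the reversible (minimum) liquefaction work to the actual work input (standard definition; the paper's exact formulation may differ):

```latex
% Minimum (reversible) specific work to liquefy a gas from ambient
% conditions (T_0, p_0), per unit mass:
w_{\mathrm{min}} = (h_{\mathrm{liq}} - h_{\mathrm{gas}})
                 - T_0\,(s_{\mathrm{liq}} - s_{\mathrm{gas}}),
% and the exergy efficiency compares it to the actual work input:
\eta_{\mathrm{ex}} = \frac{w_{\mathrm{min}}}{w_{\mathrm{actual}}}.
% The quoted 52% (H2) and 58% (CH4, N2) are values of \eta_{ex}.
```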

  3. Alignment of quasar polarizations with large-scale structures

    CERN Document Server

    Hutsemékers, Damien; Pelgrims, Vincent; Sluse, Dominique

    2014-01-01

    We have measured the optical linear polarization of quasars belonging to Gpc-scale quasar groups at redshift z ~ 1.3. Out of 93 quasars observed, 19 are significantly polarized. We found that quasar polarization vectors are either parallel or perpendicular to the directions of the large-scale structures to which they belong. Statistical tests indicate that the probability that this effect can be attributed to randomly oriented polarization vectors is of the order of 1%. We also found that quasars with polarization perpendicular to the host structure preferentially have large emission line widths while objects with polarization parallel to the host structure preferentially have small emission line widths. Considering that quasar polarization is usually either parallel or perpendicular to the accretion disk axis depending on the inclination with respect to the line of sight, and that broader emission lines originate from quasars seen at higher inclinations, we conclude that quasar spin axes are likely parallel ...

  4. Planck intermediate results. XLII. Large-scale Galactic magnetic fields

    CERN Document Server

    Adam, R; Alves, M I R; Ashdown, M; Aumont, J; Baccigalupi, C; Banday, A J; Barreiro, R B; Bartolo, N; Battaner, E; Benabed, K; Benoit-Lévy, A; Bernard, J -P; Bersanelli, M; Bielewicz, P; Bonavera, L; Bond, J R; Borrill, J; Bouchet, F R; Boulanger, F; Bucher, M; Burigana, C; Butler, R C; Calabrese, E; Cardoso, J -F; Catalano, A; Chiang, H C; Christensen, P R; Colombo, L P L; Combet, C; Couchot, F; Crill, B P; Curto, A; Cuttaia, F; Danese, L; Davis, R J; de Bernardis, P; de Rosa, A; de Zotti, G; Delabrouille, J; Dickinson, C; Diego, J M; Dolag, K; Doré, O; Ducout, A; Dupac, X; Elsner, F; Enßlin, T A; Eriksen, H K; Ferrière, K; Finelli, F; Forni, O; Frailis, M; Fraisse, A A; Franceschi, E; Galeotta, S; Ganga, K; Ghosh, T; Giard, M; Gjerløw, E; González-Nuevo, J; Górski, K M; Gregorio, A; Gruppuso, A; Gudmundsson, J E; Hansen, F K; Harrison, D L; Hernández-Monteagudo, C; Herranz, D; Hildebrandt, S R; Hobson, M; Hornstrup, A; Hurier, G; Jaffe, A H; Jaffe, T R; Jones, W C; Juvela, M; Keihänen, E; Keskitalo, R; Kisner, T S; Knoche, J; Kunz, M; Kurki-Suonio, H; Lamarre, J -M; Lasenby, A; Lattanzi, M; Lawrence, C R; Leahy, J P; Leonardi, R; Levrier, F; Lilje, P B; Linden-Vørnle, M; López-Caniego, M; Lubin, P M; Macías-Pérez, J F; Maggio, G; Maino, D; Mandolesi, N; Mangilli, A; Maris, M; Martin, P G; Masi, S; Melchiorri, A; Mennella, A; Migliaccio, M; Miville-Deschênes, M -A; Moneti, A; Montier, L; Morgante, G; Munshi, D; Murphy, J A; Naselsky, P; Nati, F; Natoli, P; Nørgaard-Nielsen, H U; Oppermann, N; Orlando, E; Pagano, L; Pajot, F; Paladini, R; Paoletti, D; Pasian, F; Perotto, L; Pettorino, V; Piacentini, F; Piat, M; Pierpaoli, E; Plaszczynski, S; Pointecouteau, E; Polenta, G; Ponthieu, N; Pratt, G W; Prunet, S; Puget, J -L; Rachen, J P; Reinecke, M; Remazeilles, M; Renault, C; Renzi, A; Ristorcelli, I; Rocha, G; Rossetti, M; Roudier, G; Rubiño-Martín, J A; Rusholme, B; Sandri, M; Santos, D; Savelainen, M; Scott, D; Spencer, L D; Stolyarov, V; Stompor, R; Strong, A 
W; Sudiwala, R; Sunyaev, R; Suur-Uski, A -S; Sygnet, J -F; Tauber, J A; Terenzi, L; Toffolatti, L; Tomasi, M; Tristram, M; Tucci, M; Valenziano, L; Valiviita, J; Van Tent, B; Vielva, P; Villa, F; Wade, L A; Wandelt, B D; Wehus, I K; Yvon, D; Zacchei, A; Zonca, A

    2016-01-01

    Recent models for the large-scale Galactic magnetic fields in the literature were largely constrained by synchrotron emission and Faraday rotation measures. We select three different but representative models and compare their predicted polarized synchrotron and dust emission with that measured by the Planck satellite. We first update these models to match the Planck synchrotron products using a common model for the cosmic-ray leptons. We discuss the impact on this analysis of the ongoing problems of component separation in the Planck microwave bands and of the uncertain cosmic-ray spectrum. In particular, the inferred degree of ordering in the magnetic fields is sensitive to these systematic uncertainties. We then compare the resulting simulated emission to the observed dust emission and find that the dust predictions do not match the morphology in the Planck data, particularly the vertical profile in latitude. We show how the dust data can then be used to further improve these magnetic field models, particu...

  5. Large scale protein separations: engineering aspects of chromatography.

    Science.gov (United States)

    Chisti, Y; Moo-Young, M

    1990-01-01

    The engineering considerations common to large scale chromatographic purification of proteins are reviewed. A discussion of the industrial chromatography fundamentals is followed by aspects which affect the scale of separation. The separation column geometry, the effect of the main operational parameters on separation performance, and the physical characteristics of column packing are treated. Throughout, the emphasis is on ion exchange and size exclusion techniques which together constitute the major portion of commercial chromatographic protein purifications. In all cases, the state of current technology is examined and areas in need of further development are noted. The physico-chemical advances now underway in chromatographic separation of biopolymers would ensure a substantially enhanced role for these techniques in industrial production of products of new biotechnology.

  6. Large scale oil lease automation and electronic custody transfer

    International Nuclear Information System (INIS)

    Typically, oil field production operations have only been automated at fields with long term production profiles and enhanced recovery. The automation generally consists of monitoring and control at the wellhead and centralized facilities. However, Union Pacific Resources Co. (UPRC) has successfully implemented a large scale automation program for rapid-decline primary recovery Austin Chalk wells where purchasers buy and transport oil from each individual wellsite. This project has resulted in two significant benefits. First, operators are using the system to re-engineer their work processes. Second, an inter-company team created a new electronic custody transfer method. This paper will describe: the progression of the company's automation objectives in the area; the field operator's interaction with the system, and the related benefits; the research and development of the new electronic custody transfer method

  7. Cosmic Ray Acceleration during Large Scale Structure Formation

    CERN Document Server

    Blasi, P

    2004-01-01

    Clusters of galaxies are storage rooms of cosmic rays. They confine the hadronic component of cosmic rays over cosmological time scales due to diffusion, and the electron component due to energy losses. Hadronic cosmic rays can be accelerated during the process of structure formation because of the supersonic motion of gas in the potential wells created by dark matter. At the shock waves that result from this motion, charged particles can be energized through the first-order Fermi process. After discussing the most important evidence for non-thermal phenomena in large scale structures, we describe in some detail the main issues related to the acceleration of particles at these shock waves, emphasizing the possible role of the dynamical backreaction of the accelerated particles on the plasmas involved.
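
    The first-order Fermi mechanism mentioned above yields, in the test-particle limit, the familiar power-law spectrum (standard result, not specific to this paper):

```latex
% For a shock with compression ratio r = u_1 / u_2, diffusive (first-order
% Fermi) acceleration of test particles gives a power law in momentum,
f(p) \propto p^{-3r/(r-1)},
% equivalently, in energy,
N(E) \propto E^{-(r+2)/(r-1)},
% so a strong adiabatic shock (r = 4) gives the canonical N(E) ∝ E^{-2}.
% Backreaction of the accelerated particles (discussed in the paper)
% modifies the shock structure and curves this spectrum.
```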

  8. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    Transport models are becoming more and more disaggregate to facilitate a realistic representation of individuals and their travel patterns. In line with this development, the PhD study focuses on facilitating the deployment of traffic assignment models in fully disaggregate activity-based model...... focuses on large-scale applications and contributes with methods to actualise the true potential of disaggregate models. To achieve this target, contributions are given to several components of traffic assignment modelling, by (i) enabling the utilisation of the increasingly available data sources......-perceptions in the choice set generation for complex multi-modal networks, and (iv) addressing the difficulty of choice set generation by making available a theoretical framework, and corresponding operational solution methods, which consistently distinguishes between used and unused paths. The availability of data...

  9. Bonus algorithm for large scale stochastic nonlinear programming problems

    CERN Document Server

    Diwekar, Urmila

    2015-01-01

    This book presents the details of the BONUS algorithm and its real-world applications in areas like sensor placement in large-scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming, based on a sampling-based approach for uncertainty analysis and statistical reweighting to obtain probability information, is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve dealing with optimization and uncertainty loops. There are two fundamental approaches used to solve such problems: the first is decomposition techniques; the second identifies problem-specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions for the uncertain variables. Moreover, these ...

  10. Optimization of large scale food production using Lean Manufacturing principles

    DEFF Research Database (Denmark)

    Engelund, Eva Høy; Friis, Alan; Breum, Gitte

    2009-01-01

    This paper discusses how the production principles of Lean Manufacturing (Lean) can be applied in a large-scale meal production. Lean principles are briefly presented, followed by a field study of how a kitchen at a Danish hospital has implemented Lean in the daily production. In the kitchen......, the main purposes of implementing Lean were to rationalise internal procedures and to increase production efficiency following a change from cook-serve production to cook-chill, and a reduction in the number of employees. It was also important that product quality and working environment should...... not be negatively affected by the rationalisation of production procedures. The field study shows that Lean principles can be applied in meal production and can result in increased production efficiency and systematic improvement of product quality without negative effects on the working environment. The results...

  11. Statistics of Caustics in Large-Scale Structure Formation

    CERN Document Server

    Feldbrugge, Job; van de Weygaert, Rien

    2014-01-01

    The cosmic web is a complex spatial pattern of walls, filaments, cluster nodes and underdense void regions. It emerged through gravitational amplification from the Gaussian primordial density field. Here we infer analytical expressions for the spatial statistics of caustics in the evolving large-scale mass distribution. In our analysis, following the quasi-linear Zel'dovich formalism and confined to the 1D and 2D situation, we compute number density and correlation properties of caustics in cosmic density fields that evolve from Gaussian primordial conditions. The analysis can be straightforwardly extended to the 3D situation. Moreover, we are currently extending the approach to the non-linear regime of structure formation by including higher order Lagrangian approximations and Lagrangian effective field theory.

  12. High pressure sheet metal forming of large scale body structures

    Energy Technology Data Exchange (ETDEWEB)

    Trompeter, M.; Krux, R.; Homberg, W.; Kleiner, M. [Dortmund Univ. (Germany). Inst. of Forming Technology and Lightweight Construction

    2005-07-01

    An important trend in the automotive industry is the weight reduction of car bodies by lightweight construction. One approach to realise lightweight structures is the use of load optimised sheet metal parts (e.g. tailored blanks), especially for crash relevant car body structures. To form such parts which are mostly complex and primarily made of high strength steels, the use of working media based forming processes is favorable. The paper presents the manufacturing of a large scale structural component made of tailor rolled blanks (TRB) by high pressure sheet metal forming (HBU). The paper focuses mainly on the tooling system, which is integrated into a specific 100 MN hydroform press at the IUL. The HBU tool basically consists of a multipoint blankholder, a specially designed flange draw-in sensor, which is necessary to determine the material flow, and a sealing system. Furthermore, the paper presents a strategy for an effective closed loop flange draw-in control. (orig.)

  13. Split Bregman method for large scale fused Lasso

    CERN Document Server

    Ye, Gui-Bo

    2010-01-01

    Ordering of regression or classification coefficients occurs in many real-world applications. Fused Lasso exploits this ordering by explicitly regularizing the differences between neighboring coefficients through an $\ell_1$ norm regularizer. However, due to nonseparability and nonsmoothness of the regularization term, solving the fused Lasso problem is computationally demanding. Existing solvers can only deal with problems of small or medium size, or a special case of the fused Lasso problem in which the predictor matrix is the identity matrix. In this paper, we propose an iterative algorithm based on the split Bregman method to solve a class of large-scale fused Lasso problems, including a generalized fused Lasso and a fused Lasso support vector classifier. We derive our algorithm using the augmented Lagrangian method and prove its convergence properties. The performance of our method is tested on both artificial data and real-world applications including proteomic data from mass spectrometry and genomic data from array...
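
    For reference, the fused Lasso objective that the split Bregman iteration targets can be written as follows (standard form; the paper also treats generalized variants):

```latex
% Fused Lasso: sparsity on the coefficients plus sparsity on their
% successive differences,
\min_{\beta}\; \tfrac{1}{2}\,\| y - X\beta \|_2^2
  + \lambda_1 \sum_{j} |\beta_j|
  + \lambda_2 \sum_{j \ge 2} |\beta_j - \beta_{j-1}|.
% Split Bregman decouples the two nonseparable \ell_1 terms by introducing
% auxiliary variables a = \beta and b = D\beta (D the first-difference
% matrix) and enforcing the constraints through Bregman / augmented
% Lagrangian updates, so each subproblem reduces to a smooth solve plus
% cheap soft-thresholding.
```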

  14. SOLVING TRUST REGION PROBLEM IN LARGE SCALE OPTIMIZATION

    Institute of Scientific and Technical Information of China (English)

    Bing-sheng He

    2000-01-01

    This paper presents a new method for solving the basic problem in the “model trust region” approach to large scale minimization: compute a vector x such that (1/2)x^THx + c^Tx = min, subject to the constraint ‖x‖_2 ≤ a. The method is a combination of the CG method and a projection and contraction (PC) method. The first (CG) method, with x0 = 0 as the start point, either directly offers a solution of the problem or, as soon as the norm of the iterate grows greater than a, gives a suitable starting point and a favourable choice of a crucial scaling parameter in the second (PC) method. Some numerical examples are given, which indicate that the method is applicable.
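
    A standard truncated-CG (Steihaug-type) treatment of the same subproblem is sketched below. Note this is not the paper's CG plus projection-contraction hybrid; it only illustrates the key idea of running CG until an iterate would leave the ball ‖x‖ ≤ a:

```python
import numpy as np

def trust_region_cg(H, c, a, tol=1e-10, max_iter=100):
    """Approximately solve min 0.5*x'Hx + c'x  s.t. ||x|| <= a with conjugate
    gradients, stepping to the boundary if an iterate would leave the ball."""
    x = np.zeros_like(c)
    r = c.copy()              # gradient at x = 0 is Hx + c = c
    d = -r
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            return x          # interior solution found
        Hd = H @ d
        dHd = d @ Hd
        alpha = (r @ r) / dHd
        if dHd <= 0 or np.linalg.norm(x + alpha * d) >= a:
            # Step to the trust-region boundary along d: solve ||x + tau*d|| = a
            # for the positive root tau.
            xd, dd = x @ d, d @ d
            tau = (-xd + np.sqrt(xd**2 + dd * (a**2 - x @ x))) / dd
            return x + tau * d
        x = x + alpha * d
        r_new = r + alpha * Hd
        beta = (r_new @ r_new) / (r @ r)
        d = -r_new + beta * d
        r = r_new
    return x

H = 2.0 * np.eye(2)
c = np.array([-2.0, 0.0])
print(trust_region_cg(H, c, a=2.0))   # unconstrained minimiser (1, 0) is interior
print(trust_region_cg(H, c, a=0.5))   # constraint active: solution (0.5, 0)
```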

  15. Solar cycle, solar rotation and large-scale circulation

    International Nuclear Information System (INIS)

    The Glossary is designed to be a technical dictionary that will provide solar workers of various specialties, students, other astronomers and theoreticians with concise information on the nature and the properties of phenomena of solar and solar-terrestrial physics. Each term, or group of related terms, is given a concise phenomenological and quantitative description, including the relationship to other phenomena and an interpretation in terms of physical processes. The references are intended to lead the non-specialist reader into the literature. This section deals with: solar (activity) cycle; Hale cycle; long-term activity variations; dynamos; differential rotation; rotation of the convective zone; Carrington rotation; oblateness; meridional flow; and giant cells or large-scale circulation. (B.R.H.)

  16. Theoretical expectations for bulk flows in large-scale surveys

    Science.gov (United States)

    Feldman, Hume A.; Watkins, Richard

    1994-01-01

    We calculate the theoretical expectation for the bulk motion of a large-scale survey of the type recently carried out by Lauer and Postman. Included are the effects of survey geometry, errors in the distance measurements, clustering properties of the sample, and different assumed power spectra. We considered the power spectrum calculated from the Infrared Astronomical Satellite (IRAS)-QDOT survey, as well as spectra from hot + cold and standard cold dark matter models. We find that measurement uncertainty, sparse sampling, and clustering can lead to a much larger expectation for the bulk motion of a cluster sample than for the volume as a whole. However, our results suggest that the expected bulk motion is still inconsistent with that reported by Lauer and Postman at the 95%-97% confidence level.

  17. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
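
    The two-stage stochastic linear program with recourse discussed above has the following standard form (generic notation, not necessarily the author's):

```latex
% First stage: choose x before the uncertainty \omega is revealed,
\min_{x}\; c^{\top} x + \mathbb{E}_{\omega}\!\left[ Q(x, \omega) \right]
\quad \text{s.t.} \quad A x = b,\; x \ge 0,
% Second stage: the recourse cost is itself a linear program,
Q(x, \omega) = \min_{y}\; q(\omega)^{\top} y
\quad \text{s.t.} \quad W y = h(\omega) - T(\omega)\, x,\; y \ge 0.
% Benders (L-shaped) decomposition approximates Q with cuts built from
% subproblem duals, while Monte Carlo importance sampling estimates the
% expectation over \omega -- the combination described in the abstract.
```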

  18. Large-scale comparative visualisation of sets of multidimensional data

    CERN Document Server

    Vohl, Dany; Fluke, Christopher J; Poudel, Govinda; Georgiou-Karistianis, Nellie; Hassan, Amr H; Benovitski, Yuri; Wong, Tsz Ho; Kaluza, Owen; Nguyen, Toan D; Bonnington, C Paul

    2016-01-01

    We present encube, a qualitative, quantitative and comparative visualisation and analysis system, with application to high-resolution, immersive three-dimensional environments and desktop displays. encube extends previous comparative visualisation systems by considering: 1) the integration of comparative visualisation and analysis into a unified system; 2) the documentation of the discovery process; and 3) an approach that enables scientists to continue the research process once back at their desktop. Our solution enables tablets, smartphones or laptops to be used as interaction units for manipulating, organising, and querying data. We highlight the modularity of encube, allowing additional functionalities to be included as required. Additionally, our approach supports a high level of collaboration within the physical environment. We show how our implementation of encube operates in a large-scale, hybrid visualisation and supercomputing environment using the CAVE2 at Monash University, and on a local deskt...

  19. Large-scale Structure in f(T) Gravity

    CERN Document Server

    Li, Baojiu; Barrow, John D

    2011-01-01

    In this work we study the cosmology of the general f(T) gravity theory. We express the modified Einstein equations using covariant quantities, and derive the gauge-invariant perturbation equations in covariant form. We consider a specific choice of f(T), designed to explain the observed late-time accelerating cosmic expansion without including an exotic dark energy component. Our numerical solution shows that the extra degree of freedom of such f(T) gravity models generally decays as one goes to smaller scales, and consequently its effects on scales such as galaxies and galaxy clusters are small. But on large scales, this degree of freedom can produce large deviations from the standard LCDM scenario, leading to severe constraints on f(T) gravity models as an explanation of the cosmic acceleration.

  20. Preliminary design study of a large scale graphite oxidation loop

    International Nuclear Information System (INIS)

    A preliminary design study of a large scale graphite oxidation loop was performed in order to assess feasibility and to estimate capital costs. The nominal design operates at 50 atmospheres helium and 1800 F with a graphite specimen 30 inches long and 10 inches in diameter. It was determined that a simple single walled design was not practical at this time because of a lack of commercially available thick walled high temperature alloys. Two alternative concepts, at reduced operating pressure, were investigated. Both were found to be readily fabricable to operate at 1800 F and capital cost estimates for these are included. A design concept, which is outside the scope of this study, was briefly considered