WorldWideScience

Sample records for large scale variability

  1. Hydrometeorological variability on a large French catchment and its relation to large-scale circulation across temporal scales

    Science.gov (United States)

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    In the present context of global change, considerable effort has been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, or provide future scenarios of water resources. With the aim of better understanding hydrological changes, it is of crucial importance to determine how and to what extent the trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach combining large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed and of North Atlantic sea level pressure (SLP), in order to gain additional insight into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) defining those hydrological/circulation relationships for each temporal wavelength may improve the identification of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant across time scales (i.e. for the different frequencies characterizing the signals), resulting in spatial patterns that change across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach that integrates discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) from a large-scale predictor (SLP over the Euro-Atlantic sector) at a monthly time step. 
This approach
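
    The wavelet multiresolution step described above can be illustrated with a minimal sketch. This is not the authors' code; it uses the simple Haar wavelet (pairwise means and differences) purely to show how a monthly series splits into additive components, one per time scale, each of which could then be regressed on a large-scale predictor:

```python
import numpy as np

def haar_inverse(approx, detail):
    """One inverse Haar step: interleave sums and differences."""
    out = np.empty(2 * len(approx))
    out[0::2] = approx + detail
    out[1::2] = approx - detail
    return out

def haar_mra(x, levels):
    """Split x (length divisible by 2**levels) into `levels` detail
    components plus a smooth residual; the components sum back to x."""
    x = np.asarray(x, dtype=float)
    a, details = x, []
    for _ in range(levels):
        details.append((a[0::2] - a[1::2]) / 2.0)  # fluctuation at this scale
        a = (a[0::2] + a[1::2]) / 2.0              # smoothed series
    comps = []
    for j, d in enumerate(details):
        v = haar_inverse(np.zeros_like(d), d)      # keep this scale only
        for _ in range(j):                         # upsample to full length
            v = haar_inverse(v, np.zeros_like(v))
        comps.append(v)
    v = a                                          # smooth residual
    for _ in range(levels):
        v = haar_inverse(v, np.zeros_like(v))
    comps.append(v)
    return comps

# e.g. 64 synthetic "months" of streamflow: slow oscillation plus noise
rng = np.random.default_rng(0)
flow = np.sin(np.arange(64) / 6.0) + 0.3 * rng.normal(size=64)
components = haar_mra(flow, levels=3)              # 3 detail scales + residual
```

    Each component isolates one band of time scales, so a separate downscaling relationship can be fitted per band, which is the spirit of the wavelet-based ESD approach described in the abstract.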

  2. Intelligent control for large-scale variable speed variable pitch wind turbines

    Institute of Scientific and Technical Information of China (English)

    Xinfang ZHANG; Daping XU; Yibing LIU

    2004-01-01

    Large-scale wind turbine generator systems have strongly nonlinear multivariable characteristics, with many uncertain factors and disturbances. Automatic control is crucial for the efficiency and reliability of wind turbines. On the basis of a simplified but adequate model of variable speed variable pitch wind turbines, the effective wind speed is estimated using an extended Kalman filter. The intelligent control scheme proposed in the paper includes two loops which operate in synchronism with each other. At below-rated wind speed, the inner loop adopts adaptive fuzzy control based on a variable universe for generator torque regulation, to realize maximum wind energy capture. At above-rated wind speed, a controller based on least-squares support vector machines is proposed to adjust the pitch angle and maintain rated output power. Simulations show the effectiveness of the intelligent control.
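
    As a point of reference for this two-loop structure, the classic baseline it refines can be sketched in a few lines: below rated wind speed, generator torque tracks the maximum power point (here via the textbook T = k·ω² law rather than the paper's variable-universe fuzzy controller), and above rated wind speed the blades are pitched to shed power (here a crude proportional rule rather than the paper's LS-SVM controller). All turbine parameters are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative turbine parameters (hypothetical machine, not from the paper)
RHO, R = 1.225, 40.0             # air density [kg/m^3], rotor radius [m]
CP_MAX, LAMBDA_OPT = 0.48, 8.0   # peak power coefficient, optimal tip-speed ratio
RATED_POWER = 2.0e6              # rated electrical power [W]
RATED_WIND = 12.0                # rated wind speed [m/s]
K_OPT = 0.5 * RHO * np.pi * R**5 * CP_MAX / LAMBDA_OPT**3

def control_step(wind_speed, rotor_speed, kp_pitch=2.0):
    """Return (generator torque [Nm], pitch angle [deg]) for one step."""
    if wind_speed < RATED_WIND:
        # below rated: track maximum power, keep blades at optimal (0 deg) pitch
        return K_OPT * rotor_speed**2, 0.0
    # above rated: hold rated power, pitch proportionally to the wind excess
    return RATED_POWER / rotor_speed, kp_pitch * (wind_speed - RATED_WIND)

torque_lo, pitch_lo = control_step(wind_speed=8.0, rotor_speed=1.6)
torque_hi, pitch_hi = control_step(wind_speed=16.0, rotor_speed=2.2)
```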

  3. Characterizing Temperature Variability and Associated Large Scale Meteorological Patterns Across South America

    Science.gov (United States)

    Detzer, J.; Loikith, P. C.; Mechoso, C. R.; Barkhordarian, A.; Lee, H.

    2017-12-01

    South America's climate varies considerably owing to its large geographic range and diverse topographical features. Spanning the tropics to the mid-latitudes and from high peaks to tropical rainforest, the continent experiences an array of climate and weather patterns. Due to this considerable spatial extent, assessing temperature variability at the continent scale is particularly challenging. It is well documented in the literature that temperatures have been increasing across portions of South America in recent decades, and while there have been many studies that have focused on precipitation variability and change, temperature has received less scientific attention. Therefore, a more thorough understanding of the drivers of temperature variability is critical for interpreting future change. First, k-means cluster analysis is used to identify four primary modes of temperature variability across the continent, stratified by season. Next, composites of large scale meteorological patterns (LSMPs) are calculated for months assigned to each cluster. Initial results suggest that LSMPs, defined using meteorological variables such as sea level pressure (SLP), geopotential height, and wind, are able to identify synoptic scale mechanisms important for driving temperature variability at the monthly scale. Some LSMPs indicate a relationship with known recurrent modes of climate variability. For example, composites of geopotential height suggest that the Southern Annular Mode is an important, but not necessarily dominant, component of temperature variability over southern South America. This work will be extended to assess the drivers of temperature extremes across South America.
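
    The clustering-plus-composite workflow described above can be sketched with a tiny NumPy k-means (a stand-in for whatever implementation the study uses; the fields below are synthetic):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means with evenly spaced deterministic initialization."""
    step = max(1, len(X) // k)
    centers = X[::step][:k].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# synthetic "monthly temperature-anomaly maps" (120 months x 50 grid points)
rng = np.random.default_rng(1)
fields = np.vstack([rng.normal(+1.0, 0.3, (60, 50)),    # warm-type months
                    rng.normal(-1.0, 0.3, (60, 50))])   # cold-type months
labels, _ = kmeans(fields, k=2)

# composite a co-located circulation field (e.g. SLP) over each cluster's months
slp = rng.normal(size=(120, 50))
composites = [slp[labels == j].mean(0) for j in range(2)]
```

    The composites over the months assigned to each cluster are the analogue of the large-scale meteorological patterns (LSMPs) discussed in the abstract.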

  4. Variability in large-scale wind power generation

    Energy Technology Data Exchange (ETDEWEB)

    Kiviluoma, Juha [VTT Technical Research Centre of Finland, Espoo Finland; Holttinen, Hannele [VTT Technical Research Centre of Finland, Espoo Finland; Weir, David [Energy Department, Norwegian Water Resources and Energy Directorate, Oslo Norway; Scharff, Richard [KTH Royal Institute of Technology, Electric Power Systems, Stockholm Sweden; Söder, Lennart [Royal Institute of Technology, Electric Power Systems, Stockholm Sweden; Menemenlis, Nickie [Institut de recherche Hydro-Québec, Montreal Canada; Cutululis, Nicolaos A. [DTU, Wind Energy, Roskilde Denmark; Danti Lopez, Irene [Electricity Research Centre, University College Dublin, Dublin Ireland; Lannoye, Eamonn [Electric Power Research Institute, Palo Alto California USA; Estanqueiro, Ana [LNEG, Laboratorio Nacional de Energia e Geologia, UESEO, Lisbon Portugal; Gomez-Lazaro, Emilio [Renewable Energy Research Institute and DIEEAC/EDII-AB, Castilla-La Mancha University, Albacete Spain; Zhang, Qin [State Grid Corporation of China, Beijing China; Bai, Jianhua [State Grid Energy Research Institute Beijing, Beijing China; Wan, Yih-Huei [National Renewable Energy Laboratory, Transmission and Grid Integration Group, Golden Colorado USA; Milligan, Michael [National Renewable Energy Laboratory, Transmission and Grid Integration Group, Golden Colorado USA

    2015-10-25

    The paper demonstrates the characteristics of wind power variability and net load variability in multiple power systems, based on real data from multiple years. The demonstrated characteristics include probability distributions for different ramp durations, seasonal and diurnal variability, and low-net-load events. The comparison shows regions with low variability (Sweden, Spain and Germany), medium variability (Portugal, Ireland, Finland and Denmark) and regions with higher variability (Quebec, Bonneville Power Administration and the Electric Reliability Council of Texas in North America; Gansu, Jilin and Liaoning in China; and Norway and offshore wind power in Denmark). For regions with low variability, the maximum 1 h wind ramps are below 10% of nominal capacity, while for regions with high variability they may be close to 30%. Wind power variability is mainly explained by the extent of geographical spread, but a higher capacity factor also leads to higher variability. It was also shown that wind power ramps are autocorrelated and depend on the operating output level. When wind power was concentrated in a smaller area, there were outliers with large changes in wind output, which were not present in larger areas with well-dispersed wind power.
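
    The ramp statistics discussed above reduce to differencing the per-unit generation series over the chosen duration; a minimal sketch on a synthetic series (the random-walk series and numbers are illustrative, not the paper's data):

```python
import numpy as np

def ramp_stats(gen_pu, duration_h=1):
    """Ramps of a per-unit generation series over `duration_h` hours:
    returns the largest up-ramp and the 99th percentile of |ramp|."""
    ramps = gen_pu[duration_h:] - gen_pu[:-duration_h]
    return ramps.max(), np.percentile(np.abs(ramps), 99)

# synthetic hourly wind generation for one year, as a fraction of capacity
rng = np.random.default_rng(0)
gen = np.clip(0.4 + np.cumsum(rng.normal(0, 0.01, 8760)), 0.0, 1.0)
max_ramp_1h, p99_1h = ramp_stats(gen, 1)
max_ramp_4h, p99_4h = ramp_stats(gen, 4)
```

    Comparing the maximum 1 h ramp against installed capacity across regions is exactly the 10%-versus-30% contrast reported in the abstract.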

  5. European Wintertime Windstorms and Their Links to Large-Scale Variability Modes

    Science.gov (United States)

    Befort, D. J.; Wild, S.; Walz, M. A.; Knight, J. R.; Lockwood, J. F.; Thornton, H. E.; Hermanson, L.; Bett, P.; Weisheimer, A.; Leckebusch, G. C.

    2017-12-01

    Winter storms associated with extreme wind speeds and heavy precipitation are the costliest natural hazard in several European countries. Improved understanding and seasonal forecast skill for winter storms will thus help society, policy-makers and the (re)insurance industry to be better prepared for such events. We first assess the ability of three seasonal forecast ensemble suites (ECMWF System3, ECMWF System4 and GloSea5) to represent extra-tropical windstorms over the Northern Hemisphere. Our results show significant skill for the inter-annual variability of windstorm frequency over parts of Europe in two of these suites (ECMWF-S4 and GloSea5), indicating the potential usefulness of current seasonal forecast systems. In a regression model we further derive windstorm variability from the NAO forecast by the seasonal model suites, thus estimating the suitability of the NAO as the sole predictor. We find that the NAO, as the main large-scale mode over Europe, can explain some of the achieved skill and is therefore an important source of variability in the seasonal models. However, our results show that the regression model fails to reproduce the skill level of the directly forecast windstorm frequency over large areas of central Europe. This suggests that the seasonal models also capture sources of windstorm variability and predictability other than the NAO. To investigate which other large-scale variability modes steer the interannual variability of windstorms, we develop a statistical model using a Poisson generalized linear model (GLM). We find that the Scandinavian Pattern (SCA) in fact explains a larger share of the variability for central Europe during the 20th century than the NAO. This statistical model skilfully reproduces the interannual variability of windstorm frequency, especially for the British Isles and central Europe, with correlations of up to 0.8.
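
    The Poisson GLM at the heart of such a statistical model can be sketched as follows: a log-link Poisson regression of seasonal storm counts on circulation indices, fitted by Newton iterations (iteratively reweighted least squares). The data and coefficients below are synthetic and illustrative, not the paper's:

```python
import numpy as np

def poisson_glm(X, y, iters=25):
    """Fit a log-link Poisson regression by Newton's method (IRLS)."""
    X1 = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        mu = np.exp(X1 @ beta)                   # expected counts
        # Newton step: (X' W X) delta = X'(y - mu), with W = diag(mu)
        H = X1.T @ (X1 * mu[:, None])
        beta = beta + np.linalg.solve(H, X1.T @ (y - mu))
    return beta

# synthetic seasonal windstorm counts driven by two circulation indices,
# with the "SCA" index weighted more strongly than the "NAO" index
rng = np.random.default_rng(0)
nao = rng.normal(size=300)
sca = rng.normal(size=300)
counts = rng.poisson(np.exp(1.0 + 0.2 * nao + 0.5 * sca))
beta = poisson_glm(np.column_stack([nao, sca]), counts)
```

    The fitted coefficients recover the generating values, and their relative sizes play the role of the NAO-versus-SCA comparison made in the abstract.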

  6. The influence of Seychelles Dome on the large scale Tropical Variability

    Science.gov (United States)

    Manola, Iris; Selten, Frank; Hazeleger, Wilco

    2013-04-01

    The Seychelles Dome (SD) is the thermocline ridge just south of the equator in the western Indian Ocean basin. It is characterized by strong atmospheric convection and a shallow thermocline, and is associated with large intraseasonal convection and SST variability (Harrison and Vecchi 2001). The SD is influenced by surface and subsurface processes, such as air-sea fluxes, Ekman upwelling driven by wind stress curl, ocean dynamics (vertical mixing) and oceanic Rossby waves from the southeastern Indian Ocean. The favored season for a strong SD is boreal winter, when the thermocline is shallowest. Then the southeasterly trade winds converge with the northwesterly monsoonal winds over the intertropical convergence zone and cause a cyclonic wind stress curl that drives Ekman divergence and a ridging of the thermocline. It has been found that the subseasonal and interannual variability of the SD is influenced by large-scale events, such as the Indian Ocean Dipole (IOD), ENSO and the Madden-Julian Oscillation (MJO) (Tozuka et al., 2010, Lloyd and Vecchi, 2010). The SD is enhanced by cooling events in the western Indian Ocean and easterly winds that raise the thermocline and increase the upwelling. This can be associated with a strong Walker circulation, as under negative-IOD or La Nina-like conditions. So far, studies have focused on the origins of the SD's variability, but the influence of the SD itself on regional or large-scale climate is largely unknown. In this study we focus on the influence of SD variations on the large-scale tropical circulation. We analyze the covariance of SD variations and the tropical circulation in a 200-year control simulation of the climate model EC-EARTH and perform idealized SST-forced simulations to study the character of the atmospheric response and its relation to ENSO, the IOD and the MJO. References: -Harrison, D. E. and G. A. Vecchi, 2001: January 1999 Indian Ocean cooling event. Geophys. Res. Lett., 28, 3717-3720. -Lloyd, I. 
D., and G. A

  7. Storm-tracks interannual variability and large-scale climate modes

    Science.gov (United States)

    Liberato, Margarida L. R.; Trigo, Isabel F.; Trigo, Ricardo M.

    2013-04-01

    In this study we focus on the interannual variability and observed changes of Northern Hemisphere mid-latitude storm-tracks and relate them to large-scale atmospheric circulation variability modes. Extratropical storminess and the dominant paths, frequency and intensity of cyclones have long been objects of climatological study. The analysis of storm characteristics and historical trends presented here is based on the cyclone detection and tracking algorithm first developed for the Mediterranean region (Trigo et al. 1999) and recently extended to a larger Euro-Atlantic region (Trigo 2006). The objective methodology, which identifies and follows individual lows as minima in SLP fields fulfilling a set of conditions on the central pressure and the pressure gradient, is applied to Northern Hemisphere 6-hourly geopotential data at 1000 hPa from the 20th Century Reanalysis (20CRv2) project and from reanalysis datasets provided by the European Centre for Medium-Range Weather Forecasts (ECMWF): the ERA-40 and ERA-Interim reanalyses. First, we assess the interannual variability and cyclone frequency trends for each of the datasets, for the 20th century and for the period between 1958 and 2002, using the highest spatial resolution available (1.125° x 1.125°) from the ERA-40 data. Results show that the winter variability of storm paths, cyclone frequency and travel times is in agreement with the reported variability of a number of large-scale climate patterns (including the North Atlantic Oscillation, the East Atlantic Pattern and the Scandinavian Pattern). In addition, three storm-track databases are built spanning the common available extended winter seasons from October 1979 to March 2002. Although relatively short, this common period allows a comparison of the systems represented in reanalysis datasets with distinct horizontal resolutions. This exercise is mostly focused on the key areas of cyclogenesis and cyclolysis and main cyclone characteristics over the northern
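
    The detection step of such tracking algorithms, flagging SLP grid points that are local minima subject to a central-pressure condition, can be sketched as follows. The full Trigo et al. scheme adds pressure-gradient and tracking criteria; the threshold and field here are illustrative:

```python
import numpy as np

def slp_minima(slp, threshold=1010.0):
    """Grid points that are strict local minima of the SLP field
    (lower than their 4 neighbours) and below a central-pressure
    threshold [hPa]; returns (row, col) indices."""
    core = slp[1:-1, 1:-1]
    is_min = ((core < slp[:-2, 1:-1]) & (core < slp[2:, 1:-1]) &
              (core < slp[1:-1, :-2]) & (core < slp[1:-1, 2:]) &
              (core < threshold))
    rows, cols = np.nonzero(is_min)
    return [(int(r) + 1, int(c) + 1) for r, c in zip(rows, cols)]

# synthetic SLP field [hPa] with one deep low centred at (20, 30)
y, x = np.mgrid[0:40, 0:60]
slp = 1015.0 - 20.0 * np.exp(-((y - 20) ** 2 + (x - 30) ** 2) / 40.0)
lows = slp_minima(slp)
```

    Tracking then links the minima found at consecutive 6-hourly time steps into cyclone trajectories.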

  8. Relationship between Eurasian large-scale patterns and regional climate variability over the Black and Baltic Seas

    Energy Technology Data Exchange (ETDEWEB)

    Stankunavicius, G.; Pupienis, D. [Vilnius Univ. (Lithuania). Dept. of Hydrology and Climatology; Basharin, D. [National Academy of Science of Ukraine, Sevastopol (Ukraine). Sevastopol Marine Hydrophysical Inst.

    2012-11-01

    Using the NCEP/NCAR Reanalysis dataset and the empirical orthogonal function (EOF) analysis approach, we studied interannual to decadal variability of the sea-level pressure (SLP) and surface air temperature (SAT) fields over Eurasia during the second half of the 20th century. Our results agree with those of previous studies, which conclude that Eurasian trends are the result of storm-path changes driven by the interdecadal behaviour of the NAO-like meridional dipole pattern in the Atlantic. On interannual and decadal time scales, significant synchronous correlations between corresponding modes of the SAT and SLP EOF patterns were found. This suggests that there is a strong and stable Eurasian interrelationship between the large-scale SAT and SLP fields which affects the local climate of two sub-regions: the Black and Baltic Seas. The climate variability in these sub-regions was studied in terms of its response to the Eurasian large-scale surface-temperature and air-pressure patterns. We concluded that the sub-regional climate variability differs substantially between the Black and Baltic Seas, and depends on different Eurasian large-scale patterns. We showed that the Baltic Sea region is influenced by patterns arising primarily from the NAO-like meridional dipole, as well as the Scandinavian pattern, while the Black Sea's SAT/SLP variability is influenced mainly by the second EOF mode (eastern Atlantic) and large-scale tropospheric wave structures. (orig.)
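
    EOF analysis of this kind is, computationally, a singular value decomposition of the time-by-space anomaly matrix; a compact sketch on a synthetic field (not the study's data):

```python
import numpy as np

def eof_analysis(field):
    """EOFs of a (time x space) field via SVD of the anomalies.
    Returns spatial patterns, principal-component time series, and the
    fraction of variance explained by each mode."""
    anom = field - field.mean(axis=0)              # remove the time mean
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    variance_frac = s**2 / np.sum(s**2)
    return vt, u * s, variance_frac

# synthetic field dominated by one spatial dipole oscillating in time
t = np.linspace(0.0, 20.0, 200)[:, None]
space = np.linspace(0.0, np.pi, 30)[None, :]
field = np.sin(t) * np.cos(space) \
        + 0.05 * np.random.default_rng(0).normal(size=(200, 30))
patterns, pcs, frac = eof_analysis(field)
```

    Correlating the leading PCs of the SAT and SLP fields is the kind of mode-by-mode comparison the abstract reports.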

  9. R Aquarii - the large-scale optical nebula and the Mira variable position

    International Nuclear Information System (INIS)

    Michalitsianos, A.G.; Oliversen, R.J.; Hollis, J.M.; Kafatos, M.; Crull, H.E.

    1988-01-01

    The R Aquarii symbiotic star system is surrounded by a large-scale optical nebula. Observations of the nebular forbidden O III structure are presented, and their morphological significance is discussed in the context of previously observed small-scale radio-continuum features, which may be related. It is suggested that a precessing accretion disk may explain the global features of both the large-scale optical emission and the small-scale radio emission. Moreover, an accurate position has been determined for the system's Mira variable, which suggests that a recent theoretical model, yielding an egg-shaped central H II region for symbiotic systems with certain physical parameters, may apply to R Aquarii. The optical position of the 387-day-period Mira variable is consistent with previous radio findings that the SiO maser emission is far removed from the Mira photosphere. 25 references

  10. Hydroclimatic variability in the Lake Mondsee region and its relationships with large-scale climate anomaly patterns

    Science.gov (United States)

    Rimbu, Norel; Ionita, Monica; Swierczynski, Tina; Brauer, Achim; Kämpf, Lucas; Czymzik, Markus

    2017-04-01

    Flood-triggered detrital layers in the varved sediments of Lake Mondsee, located at the northern fringe of the European Alps (47°48'N, 13°23'E), provide an important archive of regional hydroclimatic variability during the mid- to late Holocene. To improve the interpretation of the flood layer record in terms of large-scale climate variability, we investigate the relationships of observational hydrological records from the region, such as the Mondsee lake level and the runoff of the lake's main inflow, the Griesler Ache, with observed precipitation and global climate patterns. The lake level shows a strong positive linear trend during the observational period in all seasons. Additionally, the lake level shows important interannual to multidecadal variations. These variations are associated with distinct seasonal atmospheric circulation patterns. A pronounced anomalous anticyclonic center over the Iberian Peninsula is associated with high lake levels during winter. This center moves southwestward during spring, summer and autumn. At the same time, a cyclonic anomaly center is recorded over central and western Europe. This anomalous circulation extends southwestward from winter to autumn. Similar atmospheric circulation patterns are associated with river runoff and precipitation variability in the region. High lake levels are associated with positive local precipitation anomalies in all seasons, as well as with negative local temperature anomalies during spring, summer and autumn. A correlation analysis reveals that lake level, runoff and precipitation variability are related to large-scale sea surface temperature anomaly patterns in all seasons, suggesting a possible impact of large-scale climatic modes, such as the North Atlantic Oscillation and the Atlantic Multidecadal Oscillation, on hydroclimatic variability in the Lake Mondsee region. 
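
    The correlation analysis mentioned above amounts to correlating a hydrological time series with every grid point of a climate field; a minimal sketch (synthetic data, with one "grid point" deliberately tied to the index):

```python
import numpy as np

def corr_map(index, field):
    """Pearson correlation of a time series with every column (grid
    point) of a (time x space) field."""
    i = (index - index.mean()) / index.std()
    f = (field - field.mean(0)) / field.std(0)
    return (i[:, None] * f).mean(0)

# synthetic: lake level strongly tied to one "SST grid point", not the other
rng = np.random.default_rng(0)
level = rng.normal(size=500)
sst = np.column_stack([level + 0.1 * rng.normal(size=500),  # linked point
                       rng.normal(size=500)])               # unrelated point
r = corr_map(level, sst)
```

    Mapping `r` over an SST grid gives the anomaly patterns from which links to modes such as the NAO and AMO are inferred.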
The results presented in this study can be used for a more robust interpretation of the long flood layer record from Lake Mondsee sediments

  11. Assessing large-scale weekly cycles in meteorological variables: a review

    Directory of Open Access Journals (Sweden)

    A. Sanchez-Lorenzo

    2012-07-01

    Several studies have claimed to find significant weekly cycles of meteorological variables over large domains, which can hardly be attributed to urban effects alone. Nevertheless, there is an ongoing scientific debate as to whether these large-scale weekly cycles exist, and some studies fail to reproduce them with statistical significance. In addition to the lack of positive proof of the existence of these cycles, their possible physical explanations have been discussed controversially in recent years. In this work we review the main results on this topic published during the past two decades, including a summary of the existence or non-existence of significant weekly weather cycles across different regions of the world, mainly over the US, Europe and Asia. In addition, some shortcomings of common statistical methods for analyzing weekly cycles are listed. Finally, we present a brief summary of the proposed causes of the weekly cycles, focusing on aerosol-cloud-radiation interactions and their impact on meteorological variables as a result of the weekly cycle of anthropogenic activities, together with possible directions for future research.

  12. Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.

    Science.gov (United States)

    Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong

    2017-08-09

    For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has shown promise in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., an analytical model of the objective function is unknown), and they attempt to learn a variable grouping that would allow for a better decomposition of the problem. In such cases, these methods make no direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of their objective functions are known a priori. These formulas provide rich information which can be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of the objective function, which usually consists of a finite number of operations (the four arithmetic operations "+", "-", "*", "/" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: those resulting in nonseparable variables, and those resulting in separable variables. With FBG, variables can be automatically grouped into a suitable number of non-interacting subcomponents, with the variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. 
Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (so-called CCF) is proposed in this article for decomposing a large-scale white-box problem
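
    The grouping idea can be sketched independently of any particular formula parser: treat the white-box objective as a sum of terms, record which variables interact nonseparably within each term (through multiplication, division or function composition), and merge overlapping terms with a union-find. This is only a hand-rolled illustration of the FBG idea, not the authors' implementation:

```python
def group_variables(terms):
    """Given an objective written as a sum of terms, where each term
    lists the variables that interact nonseparably inside it, merge
    overlapping terms into variable groups (union-find)."""
    parent = {}

    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    def union(a, b):
        parent[find(a)] = find(b)

    for term in terms:
        vs = list(term)
        find(vs[0])                          # register singletons too
        for v in vs[1:]:
            union(vs[0], v)
    groups = {}
    for v in list(parent):
        groups.setdefault(find(v), set()).add(v)
    return sorted(sorted(g) for g in groups.values())

# f(x) = x1*x2 + sin(x2*x3) + x4 + x5/x6  ->  {x1,x2,x3}, {x4}, {x5,x6}
groups = group_variables([{"x1", "x2"}, {"x2", "x3"}, {"x4"}, {"x5", "x6"}])
```

    Each resulting group would then be optimized as one subcomponent inside the cooperative coevolution framework.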

  13. Scheduling of power generation a large-scale mixed-variable model

    CERN Document Server

    Prékopa, András; Strazicky, Beáta; Deák, István; Hoffer, János; Németh, Ágoston; Potecz, Béla

    2014-01-01

    The book describes a real-life application of modern mathematical optimization tools to an important problem for power networks. The objective is the modelling and calculation of an optimal daily schedule of power generation by thermal power plants, satisfying all demands at minimum cost, in such a way that the generation and transmission capacities as well as the demands at the nodes of the system appear in an integrated form. The physical parameters of the network are also taken into account. The resulting large-scale mixed-variable problem is relaxed in a smart, practical way to allow fast numerical solution.

  14. African aerosol and large-scale precipitation variability over West Africa

    International Nuclear Information System (INIS)

    Huang Jingfeng; Zhang Chidong; Prospero, Joseph M

    2009-01-01

    We investigated the large-scale connection between African aerosol and precipitation in the West African Monsoon (WAM) region using 8-year (2000-2007) monthly and daily Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol products (aerosol optical depth, fine mode fraction) and Tropical Rainfall Measuring Mission (TRMM) precipitation and rain type. These high-quality data further confirmed our previous results that the large-scale link between aerosol and precipitation in this region undergoes distinct seasonal and spatial variability. Previously detected suppression of precipitation during months of high aerosol concentration occurs in both convective and stratiform rain, but not systematically in shallow rain. This suggests the suppression of deep convection due to the aerosol. Based on the seasonal cycle of dust and smoke and their geographical distribution, our data suggest that both dust (coarse mode aerosol) and smoke (fine mode aerosol) contribute to the precipitation suppression. However, the dust effect is evident over the Gulf of Guinea while the smoke effect is evident over both land and ocean. A back trajectory analysis further demonstrates that the precipitation reduction is statistically linked to the upwind aerosol concentration. This study suggests that African aerosol outbreaks in the WAM region can influence precipitation in the local monsoon system which has direct societal impact on the local community. It calls for more systematic investigations to determine the modulating mechanisms using both observational and modeling approaches.

  15. Interannual Variability in the Position and Strength of the East Asian Jet Stream and Its Relation to Large-scale Circulation

    Science.gov (United States)

    Chan, Duo; Zhang, Yang; Wu, Qigang

    2013-04-01

    The East Asian Jet Stream (EASJ) is characterized by marked interannual variability in strength and position (latitude), with broad impacts on East Asian climate in all seasons. In this study, two indices are established to measure the interannual variability in the intensity and position of the EASJ. Possible causal factors, including both local signals and the non-local large-scale circulation, are examined using NCEP-NCAR reanalysis data to investigate their relations with jet variations. Our analysis shows that the relationship between the interannual variations of the EASJ and these factors depends on the season. In summer, both the intensity and position of the EASJ are closely related to the meridional gradient of the local surface temperature, but display no apparent relationship with the large-scale circulation. In the cold seasons (autumn, winter and spring), both the local factor and the large-scale circulation, i.e. the Pacific/North American teleconnection pattern (PNA), play important roles in the interannual variability of the jet intensity. The variability in the jet position, however, is more strongly correlated with the Arctic Oscillation (AO), especially in winter. Diagnostic analysis indicates that transient eddy activity plays an important role in connecting the interannual variability of the EASJ position with the AO.
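
    Jet indices of this kind typically reduce to locating the maximum of the zonal-mean zonal wind; a minimal sketch on synthetic 300 hPa winds (the jet shape, latitude and amplitude are invented for illustration, and the exact index definitions in the paper may differ):

```python
import numpy as np

def jet_indices(u, lats):
    """Jet intensity = maximum of the zonal-mean zonal wind;
    jet position = latitude of that maximum."""
    prof = u.mean(axis=-1)                   # zonal mean: (time, lat)
    return prof.max(axis=-1), lats[prof.argmax(axis=-1)]

# synthetic 300 hPa zonal wind (time x lat x lon): jet near 35N, wobbling
rng = np.random.default_rng(0)
lats = np.linspace(0.0, 60.0, 61)
t = np.arange(24)
center = 35.0 + 3.0 * np.sin(2 * np.pi * t / 12.0)   # seasonal wobble [deg]
jet = 40.0 * np.exp(-(lats[None, :, None] - center[:, None, None]) ** 2 / 50.0)
u300 = jet + rng.normal(0.0, 0.5, (24, 61, 144))
intensity, position = jet_indices(u300, lats)
```

    Correlating `intensity` and `position` against PNA and AO indices would mirror the analysis in the abstract.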

  16. Nature of global large-scale sea level variability in relation to atmospheric forcing: A modeling study

    Science.gov (United States)

    Fukumori, Ichiro; Raghunath, Ramanujam; Fu, Lee-Lueng

    1998-03-01

    The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to January 1994. The physical nature of sea level's temporal variability at periods from days to a year is examined on the basis of spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements. The study elucidates and diagnoses the inhomogeneous physics of sea level change in the space and frequency domains. At midlatitudes, large-scale sea level variability is primarily due to steric changes associated with the seasonal heating and cooling cycle of the surface layer. In comparison, changes in the tropics and at high latitudes are mainly wind driven. Wind-driven variability itself exhibits a strong latitudinal dependence: wind-driven changes are largely baroclinic in the tropics but barotropic at higher latitudes. Baroclinic changes are dominated by the annual harmonic of the first baroclinic mode and are largest off the equator; variability associated with equatorial waves is smaller in comparison. Wind-driven barotropic changes exhibit a notable enhancement over several abyssal plains in the Southern Ocean, which is likely due to resonant planetary wave modes in basins semienclosed by discontinuities in potential vorticity. Otherwise, barotropic sea level changes are typically dominated by high frequencies, with as much as half the total variance at periods shorter than 20 days, reflecting the frequency spectra of the wind stress curl. Implications of the findings for analyzing observations and for data assimilation are discussed.
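
    The "half the total variance at periods shorter than 20 days" statement corresponds to a simple spectral calculation; a sketch on a synthetic daily series (the two-sinusoid series is illustrative only):

```python
import numpy as np

def high_freq_variance_fraction(x, dt_days=1.0, cutoff_days=20.0):
    """Fraction of a series' variance at periods shorter than
    `cutoff_days`, from the one-sided FFT power spectrum (mean removed)."""
    x = np.asarray(x, float) - np.mean(x)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt_days)      # cycles per day
    return power[freqs > 1.0 / cutoff_days].sum() / power[1:].sum()

# synthetic daily sea level: slow annual cycle plus a fast 10-day oscillation
days = np.arange(730)
slow = 5.0 * np.sin(2 * np.pi * days / 365.0)       # period >> 20 days
fast = 1.0 * np.sin(2 * np.pi * days / 10.0)        # period < 20 days
frac = high_freq_variance_fraction(slow + fast)      # ≈ 0.04 for this series
```

    Applied to modeled barotropic sea level at high latitudes, this fraction would approach the ~0.5 value quoted in the abstract.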

  17. RECOVERY OF LARGE ANGULAR SCALE CMB POLARIZATION FOR INSTRUMENTS EMPLOYING VARIABLE-DELAY POLARIZATION MODULATORS

    Energy Technology Data Exchange (ETDEWEB)

    Miller, N. J.; Marriage, T. A.; Appel, J. W.; Bennett, C. L.; Eimer, J.; Essinger-Hileman, T.; Harrington, K.; Rostem, K.; Watts, D. J. [Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218 (United States); Chuss, D. T. [Department of Physics, Villanova University, 800 E Lancaster, Villanova, PA 19085 (United States); Wollack, E. J.; Fixsen, D. J.; Moseley, S. H.; Switzer, E. R., E-mail: Nathan.J.Miller@nasa.gov [Observational Cosmology Laboratory, Code 665, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2016-02-20

    Variable-delay Polarization Modulators (VPMs) are currently being implemented in experiments designed to measure the polarization of the cosmic microwave background on large angular scales because of their capability for providing rapid, front-end polarization modulation and control over systematic errors. Despite the advantages provided by the VPM, it is important to identify and mitigate any time-varying effects that leak into the synchronously modulated component of the signal. In this paper, the effect of emission from a 300 K VPM on the system performance is considered and addressed. Though instrument design can greatly reduce the influence of modulated VPM emission, some residual modulated signal is expected. VPM emission is treated in the presence of rotational misalignments and temperature variation. Simulations of time-ordered data are used to evaluate the effect of these residual errors on the power spectrum. The analysis and modeling in this paper guides experimentalists on the critical aspects of observations using VPMs as front-end modulators. By implementing the characterizations and controls as described, front-end VPM modulation can be very powerful for mitigating 1/f noise in large angular scale polarimetric surveys. None of the systematic errors studied fundamentally limit the detection and characterization of B-modes on large scales for a tensor-to-scalar ratio of r = 0.01. Indeed, r < 0.01 is achievable with commensurately improved characterizations and controls.

  18. The Nature of Global Large-scale Sea Level Variability in Relation to Atmospheric Forcing: A Modeling Study

    Science.gov (United States)

    Fukumori, I.; Raghunath, R.; Fu, L. L.

    1996-01-01

    The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability at periods from days to a year is examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.
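
    Spectral analysis of this kind reduces, in its simplest form, to a periodogram of the mean-removed series. A minimal numpy sketch on a synthetic daily sea level record (the series, periods, and amplitudes are illustrative stand-ins, not model output):

    ```python
    import numpy as np

    def periodogram(x, dt=1.0):
        """One-sided FFT power spectrum of a mean-removed series sampled every dt days."""
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        power = np.abs(np.fft.rfft(x))**2 / x.size
        freqs = np.fft.rfftfreq(x.size, d=dt)      # cycles per day
        return freqs, power

    # Synthetic daily "sea level": an annual cycle, a 30-day oscillation, and noise.
    t = np.arange(4 * 365)
    rng = np.random.default_rng(0)
    series = 5.0*np.sin(2*np.pi*t/365) + 1.5*np.sin(2*np.pi*t/30) + rng.normal(0, 0.5, t.size)

    freqs, power = periodogram(series)
    peak = freqs[1:][np.argmax(power[1:])]         # skip the zero-frequency bin
    print(f"dominant period: {1/peak:.0f} days")
    ```

    Applied to model output and to altimeter or tide gauge series at the same locations, spectra like this support the model-data comparison across timescales described above.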

  19. A Poisson regression approach to model monthly hail occurrence in Northern Switzerland using large-scale environmental variables

    Science.gov (United States)

    Madonna, Erica; Ginsbourger, David; Martius, Olivia

    2018-05-01

    In Switzerland, hail regularly causes substantial damage to agriculture, cars and infrastructure; however, little is known about its long-term variability. To study this variability, the monthly number of days with hail in northern Switzerland is modeled in a regression framework using large-scale predictors derived from the ERA-Interim reanalysis. The model is developed and verified using radar-based hail observations for the extended summer season (April-September) in the period 2002-2014. The seasonality of hail is explicitly modeled with a categorical predictor (month), and monthly anomalies of several large-scale predictors are used to capture the year-to-year variability. Several regression models are applied and their performance tested with respect to standard scores and cross-validation. The chosen model includes four predictors: the monthly anomaly of the two-meter temperature, the monthly anomaly of the logarithm of the convective available potential energy (CAPE), the monthly anomaly of the wind shear, and the month. This model captures the intra-annual variability well and slightly underestimates the inter-annual variability. The regression model is applied to the reanalysis data back to 1980. The resulting hail-day time series shows an increase in the number of hail days per month, which is (in the model) related to an increase in temperature and CAPE. The trend corresponds to approximately 0.5 days per month per decade. The results of the regression model have been compared to two independent data sets. All data sets agree on the sign of the trend, but the trend is weaker in the other data sets.
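
    The regression setup can be sketched as a Poisson GLM with log link, here fitted by a minimal numpy-only iteratively reweighted least squares loop. The three anomaly predictors are synthetic stand-ins for the ERA-Interim-derived ones, and the categorical month predictor is omitted for brevity:

    ```python
    import numpy as np

    def fit_poisson(X, y, n_iter=50):
        """Fit a Poisson GLM with log link by iteratively reweighted least squares.
        X: (n, p) design matrix including an intercept column; y: (n,) counts."""
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            mu = np.exp(X @ beta)                # expected counts
            W = mu                               # Poisson working weights
            z = X @ beta + (y - mu) / mu         # working response
            beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
        return beta

    rng = np.random.default_rng(1)
    n = 400
    # Hypothetical monthly anomalies of 2 m temperature, log(CAPE) and wind shear.
    anoms = rng.normal(size=(n, 3))
    X = np.column_stack([np.ones(n), anoms])
    true_beta = np.array([1.0, 0.4, 0.3, -0.2])
    y = rng.poisson(np.exp(X @ true_beta))       # simulated monthly hail-day counts

    beta_hat = fit_poisson(X, y)
    print(np.round(beta_hat, 2))
    ```

    In practice one would use a GLM library (e.g. statsmodels) to obtain standard errors and deviance-based diagnostics alongside the coefficients.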

  20. State of the Art in Large-Scale Soil Moisture Monitoring

    Science.gov (United States)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.; et al.

    2013-01-01

    Soil moisture is an essential climate variable influencing land-atmosphere interactions, an essential hydrologic variable impacting rainfall-runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years, creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made so far in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  1. Large Scale Variability of Phytoplankton Blooms in the Arctic and Peripheral Seas: Relationships with Sea Ice, Temperature, Clouds, and Wind

    Science.gov (United States)

    Comiso, Josefino C.; Cota, Glenn F.

    2004-01-01

    Spatially detailed satellite data of ocean color, sea ice concentration, surface temperature, clouds, and wind have been analyzed to quantify and study the large-scale regional and temporal variability of phytoplankton blooms in the Arctic and peripheral seas from 1998 to 2002. In the Arctic basin, phytoplankton chlorophyll displays a large asymmetry, with the Eastern Arctic having about fivefold higher concentrations than the Western Arctic. Large monthly and yearly variability is also observed in the peripheral seas, with the largest blooms occurring in the Bering Sea, Sea of Okhotsk, and the Barents Sea during spring. There is large interannual and seasonal variability in biomass, with average chlorophyll concentrations in 2002 and 2001 being higher than in earlier years in spring and summer. The seasonality in the latitudinal distribution of blooms is also very different: the North Atlantic bloom is usually most expansive in spring, while the North Pacific bloom is more extensive in autumn. Environmental factors that influence phytoplankton growth were examined, and results show relatively high negative correlation with sea ice retreat and strong positive correlation with temperature in early spring. Plankton growth, as indicated by biomass accumulation, in the Arctic and subarctic increases up to a threshold surface temperature of about 276-277 K (3-4 °C), beyond which concentrations start to decrease, suggesting an optimal temperature or nutrient depletion. The correlation with clouds is significant in some areas but negligible in others, while the correlations with wind speed and its components are generally weak. The effects of clouds and winds are less predictable with weekly climatologies because of the unknown effects of averaging variable and intermittent physical forcing (e.g. over storm event scales, with mixing and upwelling of nutrients) and the time scales of acclimation by the phytoplankton.

  2. Regression-based season-ahead drought prediction for southern Peru conditioned on large-scale climate variables

    Science.gov (United States)

    Mortensen, Eric; Wu, Shu; Notaro, Michael; Vavrus, Stephen; Montgomery, Rob; De Piérola, José; Sánchez, Carlos; Block, Paul

    2018-01-01

    Located at a complex topographic, climatic, and hydrologic crossroads, southern Peru is a semiarid region that exhibits high spatiotemporal variability in precipitation. The economic viability of the region hinges on this water, yet southern Peru is prone to water scarcity caused by seasonal meteorological drought. Meteorological droughts in this region are often triggered during El Niño episodes; however, other large-scale climate mechanisms also play a noteworthy role in controlling the region's hydrologic cycle. An extensive season-ahead precipitation prediction model is developed to help bolster the existing capacity of stakeholders to plan for and mitigate the deleterious impacts of drought. In addition to existing climate indices, large-scale climatic variables, such as sea surface temperature, are investigated to identify potential drought predictors. A principal component regression framework is applied to 11 potential predictors to produce an ensemble forecast of regional January-March precipitation totals. Model hindcasts over 51 years, compared to climatology and to another model conditioned solely on an El Niño-Southern Oscillation index, achieve notable skill and perform better for several metrics, including ranked probability skill score and a hit-miss statistic. The information provided by the developed model and ancillary modeling efforts, such as extending the lead time of precipitation predictions, spatially disaggregating them to the local level, and forecasting the number of wet and dry days per rainy season, may further assist regional stakeholders and policymakers in preparing for drought.
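
    The principal component regression framework can be sketched as PCA of the standardized predictor matrix (via SVD) followed by OLS on the leading scores. The 11 candidate predictors and the 51-year precipitation record below are synthetic stand-ins:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_years, n_pred = 51, 11

    # Standardized candidate predictors (e.g. SST indices) sharing a common signal.
    signal = rng.normal(size=n_years)
    X = 0.8 * signal[:, None] + rng.normal(size=(n_years, n_pred))
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    y = 2.0 * signal + rng.normal(size=n_years)    # toy JFM precipitation totals

    # PCA of the predictor matrix via SVD; keep the leading k components.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 3
    pcs = U[:, :k] * s[:k]                         # principal component scores

    # OLS of precipitation on the retained PCs (intercept + k scores).
    A = np.column_stack([np.ones(n_years), pcs])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    y_hat = A @ coef
    skill = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
    print(f"in-sample R^2: {skill:.2f}")
    ```

    A real hindcast evaluation would score leave-one-out cross-validated predictions (as with the ranked probability skill score above) rather than the in-sample fit shown here.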

  3. North Atlantic explosive cyclones and large scale atmospheric variability modes

    Science.gov (United States)

    Liberato, Margarida L. R.

    2015-04-01

    Extreme windstorms are one of the major natural catastrophes in the extratropics and one of the most costly natural hazards in Europe, responsible for substantial economic damages and even fatalities. During the last decades Europe witnessed major damage from winter storms such as Lothar (December 1999), Kyrill (January 2007), Klaus (January 2009), Xynthia (February 2010), Gong (January 2013) and Stephanie (February 2014), which exhibited uncommon characteristics. In fact, most of these storms crossed the Atlantic toward Europe, experiencing explosive development at unusually low latitudes along the edge of the dominant North Atlantic storm track and reaching Iberia with uncommon intensity (Liberato et al., 2011; 2013; Liberato 2014). Results show that the explosive cyclogenesis of most of these storms at such low latitudes is driven by: (i) the southerly displacement of a very strong polar jet stream; and (ii) the presence of an atmospheric river (AR), that is, a (sub)tropical moisture export over the western and central (sub)tropical Atlantic which converges into the cyclogenesis region and then moves along with the storm towards Iberia. Previous studies have pointed to a link between the North Atlantic Oscillation (NAO) and intense European windstorms. On the other hand, the NAO exerts a decisive control on the average latitudinal location of the jet stream over the North Atlantic basin (Woollings et al. 2010). In this work the link between North Atlantic explosive cyclogenesis, atmospheric rivers and large-scale atmospheric variability modes is reviewed and discussed. Liberato MLR (2014) The 19 January 2013 windstorm over the North Atlantic: Large-scale dynamics and impacts on Iberia. Weather and Climate Extremes, 5-6, 16-28. doi: 10.1016/j.wace.2014.06.002. Liberato MRL, Pinto JG, Trigo IF, Trigo RM (2011) Klaus - an exceptional winter storm over Northern Iberia and Southern France. Weather 66:330-334. doi: 10.1002/wea.755. Liberato

  4. North Atlantic cyclones; trends, impacts and links to large-scale variability

    Science.gov (United States)

    Trigo, R. M.; Trigo, I. F.; Ramos, A. M.; Paredes, D.; Garcia-Herrera, R.; Liberato, M. L. R.; Valente, M. A.

    2009-04-01

    Based on the cyclone detection and tracking algorithm previously developed (Trigo, 2006), we have assessed the inter-annual variability and cyclone frequency trends between 1960 and 2000 for the Euro-Atlantic sector using the highest spatial resolution available (1.125° x 1.125°) from the ERA-40 sea level pressure. Additionally, trends for the u and v wind speed components are computed at the monthly and seasonal scales, using the same dataset. All cyclone and wind speed trend maps were computed with the corresponding statistical significance field. Results reveal a significant frequency decrease (increase) in the western Mediterranean (Greenland and Scandinavia), particularly in December, February and March. Seasonal and monthly analysis of wind speed trends shows similar spatial patterns. We show that these changes in the frequency of low pressure centers and the associated wind patterns are partially responsible for trends in significant wave height. Throughout the extended winter months (ONDJFM), regions with positive (negative) wind magnitude trends, of up to 5 cm/s per year, often correspond to regions of positive (negative) significant wave height trends. The cyclone and wind speed trends computed for the JFM months are well matched by the corresponding trends in significant wave height, with February being the month with the highest trends (negative south of 50°N, up to -3 cm/year, and positive, up to 5 cm/year, just north of Scotland). Using precipitation data from ECMWF reanalyses and a CRU high-resolution dataset, we show the impact of these trends in cyclone frequencies upon the corresponding precipitation trends in the affected areas. It is also shown that these changes are partially linked to major shifts in the indices of large-scale variability modes, namely the North Atlantic Oscillation (NAO), the East Atlantic (EA) and the Scandinavian (SCAN) patterns. Trigo, I. F. 2006: Climatology and Interannual Variability of Storm-Tracks in
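
    The per-series trend-with-significance computation behind such maps can be sketched as an OLS slope plus its t-statistic; the declining monthly cyclone-count series below is synthetic:

    ```python
    import numpy as np

    def trend_with_t(y):
        """OLS linear trend of y per time step, plus its t-statistic.
        |t| > ~2 marks roughly 95% significance for long series."""
        t = np.arange(y.size, dtype=float)
        t -= t.mean()
        slope = (t @ (y - y.mean())) / (t @ t)
        resid = y - y.mean() - slope * t
        dof = y.size - 2
        se = np.sqrt((resid @ resid) / dof / (t @ t))
        return slope, slope / se

    rng = np.random.default_rng(3)
    years = np.arange(1960, 2001)
    # Toy annual cyclone counts with an imposed decline of 0.1 per year.
    counts = 20 - 0.1 * (years - 1960) + rng.normal(0, 1.0, years.size)

    slope, tstat = trend_with_t(counts)
    print(f"trend: {slope:.3f} cyclones/yr, t = {tstat:.1f}")
    ```

    Applying this at every grid point, and masking where |t| falls below the chosen threshold, yields trend maps with the significance field described above.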

  5. Sensitivity of tree ring growth to local and large-scale climate variability in a region of Southeastern Brazil

    Science.gov (United States)

    Venegas-González, Alejandro; Chagas, Matheus Peres; Anholetto Júnior, Claudio Roberto; Alvares, Clayton Alcarde; Roig, Fidel Alejandro; Tomazello Filho, Mario

    2016-01-01

    We explored the relationship between tree growth in two tropical species and local and large-scale climate variability in Southeastern Brazil. Tree-ring width chronologies of Tectona grandis (teak) and Pinus caribaea (Caribbean pine) trees were compared with local indices (Water Requirement Satisfaction Index, WRSI; Standardized Precipitation Index, SPI; and Palmer Drought Severity Index, PDSI) and with large-scale climate indices describing equatorial Pacific sea surface temperature (Trans-Niño Index, TNI; Niño-3.4, N3.4) and atmospheric circulation variations in the Southern Hemisphere (Antarctic Oscillation, AAO). Teak trees showed positive correlations with the three local indices in the current summer and fall. A significant correlation between the WRSI index and Caribbean pine was observed in the dry season preceding tree-ring formation. The influence of large-scale climate patterns was observed only for TNI and AAO: radial growth in teak was reduced in the months preceding the growing season under positive values of the TNI, and radial growth increased (decreased) during December (March) to February (May) of the previous (current) growing season under the positive phase of the AAO in teak (Caribbean pine) trees. This new dendroclimatological study in Southeastern Brazil sheds light on local and large-scale climate influences on tree growth in recent decades, contributing to future climate change studies.

  6. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so large-scale hydrogen production plants will need to be installed. In this context, development of low-cost large-scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large-scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, and a state of the art of currently available electrolysis modules was compiled. A review of the large-scale electrolysis plants installed around the world was also carried out, and the main projects related to large-scale electrolysis were listed. The economics of large-scale electrolysers are discussed, and the influence of energy prices on the hydrogen production cost by large-scale electrolysis is evaluated. (authors)

  7. Variability in large-scale wind power generation

    DEFF Research Database (Denmark)

    Kiviluoma, Juha; Holttinen, Hannele; Weir, David

    2016-01-01

    The paper demonstrates the characteristics of wind power variability and net load variability in multiple power systems based on real data from multiple years. Demonstrated characteristics include probability distribution for different ramp durations, seasonal and diurnal variability and low net ...... with well-dispersed wind power. Copyright © 2015 John Wiley & Sons, Ltd....

  8. Impact of the Dominant Large-scale Teleconnections on Winter Temperature Variability over East Asia

    Science.gov (United States)

    Lim, Young-Kwon; Kim, Hae-Dong

    2013-01-01

    Monthly mean geopotential height for the past 33 DJF seasons, archived in the Modern-Era Retrospective analysis for Research and Applications (MERRA) reanalysis, is decomposed into large-scale teleconnection patterns to explain their impacts on winter temperature variability over East Asia. After the Arctic Oscillation (AO), which explains the largest variance, the East Atlantic/West Russia (EA/WR), West Pacific (WP) and El Niño-Southern Oscillation (ENSO) patterns complete the first four leading modes that significantly explain East Asian winter temperature variation. While the northern part of East Asia north of 50°N is dominated by AO and EA/WR impacts, temperature in the midlatitudes (30°N-50°N), which include Mongolia, northeastern China, the Shandong area, Korea, and Japan, is influenced by the combined effect of the four leading teleconnections. The ENSO impact, averaged over the 33 winters, is relatively weaker than that of the other three teleconnections. The WP impact, which has received less attention than ENSO in earlier studies, characterizes winter temperatures over Korea, Japan, and central to southern China south of 30°N mainly through advection from the Pacific. Upper-level wave activity fluxes reveal that, for the AO case, the height and circulation anomalies affecting midlatitude East Asian winter temperature are mainly located at higher latitudes north of East Asia. The distribution of the fluxes also shows that the stationary wave train associated with EA/WR propagates southeastward from western Russia, affecting East Asian winter temperature. Investigation of the impact of each teleconnection in selected years reveals that the most dominant teleconnection over East Asia is not the same in all years, indicating a great deal of interannual variability. Comparison of temperature anomaly distributions between observations and anomalies constructed from the combined effect of the four leading teleconnections clearly shows a reasonable consistency between

  9. Effects of climate variability on global scale flood risk

    Science.gov (United States)

    Ward, P.; Dettinger, M. D.; Kummu, M.; Jongman, B.; Sperna Weiland, F.; Winsemius, H.

    2013-12-01

    In this contribution we demonstrate the influence of climate variability on flood risk. Globally, flooding is one of the worst natural hazards in terms of economic damages; Munich Re estimates global losses in the last decade to be in excess of $240 billion. As a result, scientifically sound estimates of flood risk at the largest scales are increasingly needed by industry (including multinational companies and the insurance industry) and policy communities. Several assessments of global-scale flood risk under current conditions have recently become available, and this year has seen the first studies assessing how flood risk may change in the future due to global change. However, the influence of climate variability on flood risk has as yet hardly been studied, despite the fact that: (a) in other fields (drought, hurricane damage, food production) this variability is as important for policy and practice as long-term change; and (b) climate variability has a strong influence on peak river flows around the world. To address this issue, this contribution illustrates the influence of ENSO-driven climate variability on flood risk, at both the globally aggregated scale and the scale of countries and large river basins. Although ENSO exerts significant and widespread influences on flood peak discharges in many parts of the world, we show that it does not have a statistically significant influence on flood risk once aggregated to global totals. At the scale of individual countries, though, strong relationships exist over large parts of the Earth's surface. For example, we find particularly strong anomalies of flood risk in El Niño or La Niña years (compared to all years) in southern Africa, parts of western Africa, Australia, parts of Central Eurasia (especially for El Niño), the western USA (especially for La Niña), and parts of South America. These findings have large implications for both decadal climate-risk projections and long-term future climate change
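
    A composite test of the kind described, comparing risk in El Niño or La Niña years against all years with a resampling significance check, can be sketched as follows; the annual loss series and ENSO phase labels are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_years = 60
    enso = rng.choice(["nino", "nina", "neutral"], size=n_years, p=[0.25, 0.25, 0.5])
    # Toy annual flood losses, elevated in La Niña years in this hypothetical region.
    losses = rng.gamma(2.0, 1.0, n_years) + np.where(enso == "nina", 1.5, 0.0)

    def composite_anomaly(losses, mask, n_boot=5000):
        """Mean anomaly of masked years vs all years, with a resampling p-value."""
        obs = losses[mask].mean() - losses.mean()
        k = int(mask.sum())
        boot = np.array([losses[rng.choice(losses.size, k, replace=False)].mean()
                         - losses.mean() for _ in range(n_boot)])
        p = float((np.abs(boot) >= np.abs(obs)).mean())   # two-sided
        return obs, p

    anom, p = composite_anomaly(losses, enso == "nina")
    print(f"La Nina loss anomaly: {anom:.2f}, p = {p:.3f}")
    ```

    Repeating this per country (rather than on the global total) reproduces the contrast reported above: significant composites regionally, but not once losses are aggregated worldwide.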

  10. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    Science.gov (United States)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-01

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors in these model simulations can be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. Gridded large-scale forcing data from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site are used for analysis and to drive the single-column version of the Community Atmosphere Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture convective systems that are partly located in the domain or that occupy only part of the domain. This problem is largely reduced by using the gridded forcing data to run SCAM5 in each subcolumn and then averaging the results over the domain, because the subcolumns have a better chance of capturing the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
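
    The core point, that an SCM driven by the domain-mean forcing differs from per-subcolumn SCM runs averaged afterwards whenever the column physics is nonlinear, can be illustrated with a toy threshold precipitation scheme (all numbers are illustrative):

    ```python
    import numpy as np

    def precip(omega):
        """Toy column physics: precipitation only where large-scale ascent
        (negative omega) exceeds a trigger threshold of 0.1."""
        return np.maximum(-omega - 0.1, 0.0)

    # Vertical-velocity "forcing" across 9 subcolumns during a frontal passage:
    # strong ascent in a few columns, weak descent elsewhere.
    omega_grid = np.array([-1.2, -0.8, -0.5, 0.1, 0.2, 0.1, 0.05, 0.1, 0.15])

    mean_forcing_run = precip(omega_grid.mean())   # SCM driven by the domain mean
    subcolumn_run = precip(omega_grid).mean()      # SCM per subcolumn, then averaged

    print(f"domain-mean forcing: {mean_forcing_run:.3f}")
    print(f"averaged subcolumns: {subcolumn_run:.3f}")
    ```

    Because the trigger is convex, averaging the forcing before applying the physics washes out the strongly ascending columns, so the domain-mean run precipitates less than the averaged subcolumn runs.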

  11. Observing the Cosmic Microwave Background Polarization with Variable-delay Polarization Modulators for the Cosmology Large Angular Scale Surveyor

    Science.gov (United States)

    Harrington, Kathleen; CLASS Collaboration

    2018-01-01

    The search for inflationary primordial gravitational waves and the optical depth to reionization, both through their imprint on the large angular scale correlations in the polarization of the cosmic microwave background (CMB), has created the need for high-sensitivity measurements of polarization across large fractions of the sky at millimeter wavelengths. These measurements are subject to instrumental and atmospheric 1/f noise, which has motivated the development of polarization modulators to facilitate the rejection of these large systematic effects. Variable-delay polarization modulators (VPMs) are used in the Cosmology Large Angular Scale Surveyor (CLASS) telescopes as the first element in the optical chain to rapidly modulate the incoming polarization. VPMs consist of a linearly polarizing wire grid in front of a moveable flat mirror; varying the distance between the grid and the mirror produces a changing phase shift between polarization states parallel and perpendicular to the grid, which modulates Stokes U (linear polarization at 45°) and Stokes V (circular polarization). The reflective and scalable nature of the VPM enables its placement as the first optical element in a reflecting telescope. This simultaneously allows a lock-in style polarization measurement and the separation of sky polarization from any instrumental polarization farther along in the optical chain. The Q-band CLASS VPM was the first VPM to begin observing the CMB full time, in 2016. I will present its design and characterization, and demonstrate how modulating polarization significantly rejects atmospheric and instrumental long-time-scale noise.
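
    The lock-in style measurement can be illustrated with an idealized transfer function d = ½[I + U cos φ + V sin φ], where φ is the phase delay set by the grid-mirror distance; this is only a schematic, ignoring the real VPM's wavelength-dependent transfer:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    I, U, V = 10.0, 0.8, 0.1     # toy Stokes parameters, arbitrary units

    # Sweep the grid-mirror phase delay over three modulation cycles and
    # synthesize detector samples with white noise.
    phi = 2 * np.pi * np.linspace(0, 3, 3000)
    d = 0.5 * (I + U*np.cos(phi) + V*np.sin(phi)) + rng.normal(0, 0.05, phi.size)

    # Lock-in recovery: least-squares fit of the modulation templates.
    A = np.column_stack([np.ones_like(phi), 0.5*np.cos(phi), 0.5*np.sin(phi)])
    offset, U_hat, V_hat = np.linalg.lstsq(A, d, rcond=None)[0]
    print(f"U = {U_hat:.2f}, V = {V_hat:.2f}")
    ```

    Because U and V ride on the rapidly swept templates while slow drifts project onto the constant term, low-frequency (1/f-like) fluctuations largely decouple from the recovered polarization.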

  12. Interannual variability of Central European mean temperature in January / February and its relation to the large-scale circulation

    International Nuclear Information System (INIS)

    Werner, P.C.; Storch, H. von

    1993-01-01

    The Central European temperature distribution field, as given by 11 stations (Fanoe, Hamburg, Potsdam, Jena, Frankfurt, Uccle, Hohenpeissenberg, Praha, Wien, Zuerich and Geneve), is analysed with respect to its year-to-year variability. January-February (JF) average temperatures are considered for the interval 1901-80. An Empirical Orthogonal Function (EOF) analysis reveals that the JF temperature variability is almost entirely controlled by one EOF with uniform sign. The second EOF represents only 7% of the total variance and describes a north-south gradient. The time coefficient of the first EOF is almost stationary, whereas the second pattern describes a slight downward trend at the northern stations and a slight upward trend at the southern stations. The relationship of the temperature field to the large-scale circulation, represented by the North Atlantic/European sea-level pressure (SLP) field, is investigated by means of Canonical Correlation Analysis (CCA). Two CCA pairs are identified which account for most of the temperature year-to-year variance and which suggest plausible mechanisms. The CCA pairs fail, however, to consistently link the long-term temperature trends to changes in the large-scale circulation. In the output of a 100-year run with a coupled atmosphere-ocean model (ECHAM1/LSG), the same CCA pairs are found, but the strength of the link between Central European temperature and North Atlantic SLP is markedly weaker than in the observed data. (orig.)
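
    An EOF analysis of a station-by-year anomaly matrix reduces to an SVD. The 11-station synthetic example below reproduces the qualitative result above, a dominant uniform-sign first EOF:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_years, n_stations = 80, 11

    # Synthetic JF temperature anomalies: one dominant basin-wide mode plus noise.
    common = rng.normal(size=n_years)
    anoms = 2.0 * common[:, None] * np.ones(n_stations) \
            + rng.normal(0, 0.5, (n_years, n_stations))
    anoms -= anoms.mean(axis=0)          # remove station climatologies

    U, s, Vt = np.linalg.svd(anoms, full_matrices=False)
    explained = s**2 / np.sum(s**2)      # fraction of variance per EOF
    eof1 = Vt[0]                         # spatial pattern (uniform sign here)
    pc1 = U[:, 0] * s[0]                 # time coefficient
    print(f"EOF1 explains {100*explained[0]:.0f}% of the variance")
    ```

    The same score/pattern pairs, computed jointly for temperature and SLP via their cross-covariance, are what a Canonical Correlation Analysis couples.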

  13. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  14. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large-scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small-scale (10-100 m) habitat variability on large-scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
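
    In ecological diffusion, u_t = ∇²(μ(x)u), the motility μ sits inside the Laplacian (unlike Fickian ∇·(μ∇u)), so at steady state μu is spatially constant and individuals accumulate where motility is low. This is the small-scale structure the homogenization procedure averages over. A toy 1-D periodic finite-difference integration (grid, motilities, and habitat layout are illustrative):

    ```python
    import numpy as np

    nx, dx = 200, 1.0
    x = np.arange(nx) * dx
    # Alternating habitat blocks: low motility (preferred habitat) vs high motility.
    mu = np.where((x // 20) % 2 == 0, 0.2, 1.0)

    u = np.full(nx, 1.0 / nx)             # uniform initial density, total mass 1
    dt = 0.2 * dx**2 / mu.max()           # stable explicit time step
    for _ in range(20000):
        w = mu * u
        lap = np.roll(w, 1) - 2*w + np.roll(w, -1)   # periodic Laplacian of mu*u
        u = u + dt * lap / dx**2

    # At quasi-steady state mu*u is nearly constant, so density scales as 1/mu.
    ratio = u[mu == 0.2].mean() / u[mu == 1.0].mean()
    print(f"density ratio (low-mu / high-mu cells): {ratio:.1f}")
    ```

    The homogenized large-scale equation replaces this residence-time structure with effective coefficients built from small-scale averages of μ, which is why fine habitat maps can inform movement models at 10-100 km scales.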

  15. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff therefore provides a foundation to approach European hydrology with respect to observed patterns on large scales, and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but are also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is the prerequisite for a reliable interpretation of simulation results. Model evaluations may also make it possible to detect shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large-scale

  16. Large-Scale Processes Associated with Inter-Decadal and Inter-Annual Early Spring Rainfall Variability in Taiwan

    Directory of Open Access Journals (Sweden)

    Jau-Ming Chen

    2016-02-01

    Early spring (March-April) rainfall in Taiwan exhibits evident and distinct inter-annual and inter-decadal variability. The inter-annual variability has a positive correlation with the El Niño/Southern Oscillation, while the inter-decadal variability features a phase change beginning in the late 1970s, coherent with the major phase change in the Pacific Decadal Oscillation. Rainfall variability on both timescales is regulated by large-scale processes showing consistent dynamic features. Rainfall increases are associated with positive sea surface temperature (SST) anomalies in the tropical eastern Pacific and negative SST anomalies in the tropical central Pacific. An anomalous lower-level divergent center appears in the tropical central Pacific. Via a Rossby-wave-like response, an anomalous lower-level anticyclone appears to the southeast of Taiwan over the Philippine Sea-tropical western Pacific region, accompanied by an anomalous cyclone to the north-northeast of Taiwan. Both circulation anomalies induce anomalous southwesterly flows that enhance moisture flux from the South China Sea onto Taiwan, resulting in significant moisture convergence near Taiwan. With enhanced moisture supplied by the anomalous southwesterly flows, significant increases in early spring rainfall over Taiwan occur on both inter-annual and inter-decadal timescales.

  17. Energy modeling and analysis for optimal grid integration of large-scale variable renewables using hydrogen storage in Japan

    International Nuclear Information System (INIS)

    Komiyama, Ryoichi; Otsuki, Takashi; Fujii, Yasumasa

    2015-01-01

    Although the extensive introduction of VRs (variable renewables) will play an essential role in resolving energy and environmental issues in Japan after the Fukushima nuclear accident, their large-scale integration would pose a technical challenge for grid management; as one technical countermeasure, hydrogen storage, like rechargeable batteries, receives much attention for controlling the intermittency of VR power output. For properly planning renewable energy policies, energy system modeling is important to quantify and qualitatively understand its potential benefits and impacts. This paper analyzes the optimal grid integration of large-scale VRs using hydrogen storage in Japan by developing a high time-resolution optimal power generation mix model. Simulation results suggest that the installation of hydrogen storage is promoted both by its cost reduction and by CO2 regulation policy. In addition, hydrogen storage turns out to be suitable for storing VR energy over long periods of time. Finally, a sensitivity analysis of rechargeable battery cost shows that hydrogen storage can be economically competitive with rechargeable batteries; the costs of both technologies should be more precisely assessed when formulating effective energy policies to integrate massive VRs into the country's power system in an economical manner. - Highlights: • Authors analyze hydrogen storage coupled with VRs (variable renewables). • Simulation analysis is done by developing an optimal power generation mix model. • Hydrogen storage installation is promoted by its cost decline and CO2 regulation. • Hydrogen storage is suitable for storing VR energy over long periods of time. • Hydrogen storage is economically competitive with rechargeable batteries.
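
    To make the storage-dispatch idea concrete, the following is a minimal toy sketch, not the authors' model: a least-cost linear program in which a thermal unit and a lossless storage device jointly serve demand net of a variable-renewable profile. All numbers (demand, PV profile, fuel cost, capacity bounds) are hypothetical.

```python
# Toy least-cost dispatch with storage, illustrating the kind of
# optimization behind an "optimal power generation mix" model.
import numpy as np
from scipy.optimize import linprog

T = 6
demand = np.array([3.0, 3.0, 4.0, 5.0, 5.0, 4.0])   # MW, hypothetical
pv     = np.array([0.0, 2.0, 6.0, 6.0, 2.0, 0.0])   # MW, hypothetical
fuel_cost = 50.0                                     # $/MWh for the gas unit

# Decision vector x = [g_0..g_{T-1}, ch_0.., dis_0.., s_0..]  (all >= 0)
n = 4 * T
c = np.zeros(n); c[:T] = fuel_cost                   # only fuel is costed

A_eq, b_eq = [], []
for t in range(T):
    # Power balance: g_t + pv_t + dis_t - ch_t = demand_t
    row = np.zeros(n)
    row[t] = 1; row[2*T + t] = 1; row[T + t] = -1
    A_eq.append(row); b_eq.append(demand[t] - pv[t])
    # Storage bookkeeping: s_t - s_{t-1} - ch_t + dis_t = 0  (s_{-1} = 0)
    row = np.zeros(n)
    row[3*T + t] = 1; row[T + t] = -1; row[2*T + t] = 1
    if t > 0:
        row[3*T + t - 1] = -1
    A_eq.append(row); b_eq.append(0.0)

bounds = [(0, 8)] * T + [(0, 5)] * T + [(0, 5)] * T + [(0, 10)] * T
res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=bounds)
g, ch, dis, s = np.split(res.x, 4)
print(res.status, round(res.fun, 1))                 # status and minimum cost
```

    In this toy, the surplus PV at midday is charged into storage and discharged in the evening, reducing the total fuel burned by the thermal unit.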

  18. Approaches to large scale unsaturated flow in heterogeneous, stratified, and fractured geologic media

    International Nuclear Information System (INIS)

    Ababou, R.

    1991-08-01

    This report develops a broad review and assessment of quantitative modeling approaches and data requirements for large-scale subsurface flow in a geologic repository for radioactive waste. The data review includes discussions of controlled field experiments, existing contamination sites, and site-specific hydrogeologic conditions at Yucca Mountain. Local-scale constitutive models for the unsaturated hydrodynamic properties of geologic media are analyzed, with particular emphasis on the effect of structural characteristics of the medium. The report further reviews and analyzes large-scale hydrogeologic spatial variability from aquifer data, unsaturated soil data, and fracture network data gathered from the literature. Finally, various modeling strategies for large-scale flow simulations are assessed, including direct high-resolution simulation and coarse-scale simulation based on auxiliary hydrodynamic models such as the single equivalent continuum and the dual-porosity continuum. The roles of anisotropy, fracturing, and broad-band spatial variability are emphasized. 252 refs

  19. Large-scale climatic anomalies affect marine predator foraging behaviour and demography

    Science.gov (United States)

    Bost, Charles A.; Cotté, Cedric; Terray, Pascal; Barbraud, Christophe; Bon, Cécile; Delord, Karine; Gimenez, Olivier; Handrich, Yves; Naito, Yasuhiko; Guinet, Christophe; Weimerskirch, Henri

    2015-10-01

    Determining the links between the behavioural and population responses of wild species to environmental variations is critical for understanding the impact of climate variability on ecosystems. Using long-term data sets, we show how large-scale climatic anomalies in the Southern Hemisphere affect the foraging behaviour and population dynamics of a key marine predator, the king penguin. When large-scale subtropical dipole events occur simultaneously in both the subtropical Southern Indian and Atlantic Oceans, they generate tropical anomalies that shift the foraging zone southward. Consequently, the distances that penguins foraged from the colony and their feeding depths increased, and the population size decreased. This represents an example of a robust and fast impact of large-scale climatic anomalies on a marine predator through changes in its at-sea behaviour and demography, despite the lack of information on prey availability. Our results highlight a possible behavioural mechanism through which climate variability may affect population processes.

  20. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    Science.gov (United States)

    Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie

    2016-01-01

    In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient for large-scale nonsmooth problems; several problems are tested, with dimensions of up to 100,000 variables.
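
    The classical HZ direction update that the paper builds on can be sketched as follows on a smooth convex quadratic with an exact line search; the modified (MHZ) variant for nonsmooth problems is not reproduced here.

```python
# Sketch of the Hager-Zhang (HZ) conjugate gradient update minimizing
# f(x) = 0.5 x'Ax - b'x for a symmetric positive definite A.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)        # SPD Hessian, well conditioned
b = rng.standard_normal(50)

grad = lambda x: A @ x - b             # gradient of the quadratic

x = np.zeros(50)
g = grad(x)
d = -g
for k in range(100):
    if np.linalg.norm(g) < 1e-10:
        break
    t = -(g @ d) / (d @ (A @ d))       # exact minimizer along d (quadratic)
    x = x + t * d
    g_new = grad(x)
    y = g_new - g
    dy = d @ y
    # HZ formula: beta = (y - 2 d ||y||^2 / (d'y))' g_new / (d'y)
    beta = ((y - 2.0 * d * (y @ y) / dy) @ g_new) / dy
    d = -g_new + beta * d
    g = g_new

print(np.linalg.norm(grad(x)) < 1e-8)  # converged
```

    A notable property of the HZ update is that it guarantees a sufficient descent direction independently of the line search, which is one reason it is an attractive starting point for nonsmooth extensions.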

  2. Environment and host as large-scale controls of ectomycorrhizal fungi.

    Science.gov (United States)

    van der Linde, Sietse; Suz, Laura M; Orme, C David L; Cox, Filipa; Andreae, Henning; Asi, Endla; Atkinson, Bonnie; Benham, Sue; Carroll, Christopher; Cools, Nathalie; De Vos, Bruno; Dietrich, Hans-Peter; Eichhorn, Johannes; Gehrmann, Joachim; Grebenc, Tine; Gweon, Hyun S; Hansen, Karin; Jacob, Frank; Kristöfel, Ferdinand; Lech, Paweł; Manninger, Miklós; Martin, Jan; Meesenburg, Henning; Merilä, Päivi; Nicolas, Manuel; Pavlenda, Pavel; Rautio, Pasi; Schaub, Marcus; Schröck, Hans-Werner; Seidling, Walter; Šrámek, Vít; Thimonier, Anne; Thomsen, Iben Margrete; Titeux, Hugues; Vanguelova, Elena; Verstraeten, Arne; Vesterdal, Lars; Waldner, Peter; Wijk, Sture; Zhang, Yuxin; Žlindra, Daniel; Bidartondo, Martin I

    2018-06-06

    Explaining the large-scale diversity of soil organisms that drive biogeochemical processes, and their responses to environmental change, is critical. However, identifying consistent drivers of belowground diversity and abundance for some soil organisms at large spatial scales remains problematic. Here we investigate a major guild, the ectomycorrhizal fungi, across European forests at a spatial scale and resolution that is, to our knowledge, unprecedented, to explore key biotic and abiotic predictors of ectomycorrhizal diversity and to identify dominant responses and thresholds for change across complex environmental gradients. We show the effect of 38 host, environment, climate and geographical variables on ectomycorrhizal diversity, and define thresholds of community change for key variables. We quantify host specificity and reveal plasticity in functional traits involved in soil foraging across gradients. We conclude that environmental and host factors explain most of the variation in ectomycorrhizal diversity, that the environmental thresholds used as major ecosystem assessment tools need adjustment and that the importance of belowground specificity and plasticity has previously been underappreciated.

  3. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    Science.gov (United States)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected during the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models which inherently provide uncertainty information, like the BT-FLEMO model used in this study, is the way forward.
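
    The contrast between a uni-variable stage-damage function and a multi-variable loss model can be illustrated with a toy sketch; the depth-damage support points and the two modifiers below are hypothetical and are not the FLEMO model coefficients.

```python
# Hypothetical illustration of the two model classes the abstract contrasts:
# a uni-variable stage-damage function versus a simple multi-variable model.
import numpy as np

# Stage-damage function: relative loss as a function of water depth alone
depths = np.array([0.0, 0.5, 1.0, 2.0, 3.0])      # m, hypothetical support points
ratios = np.array([0.0, 0.10, 0.25, 0.45, 0.60])  # assumed loss ratios

def stage_damage(depth_m):
    return np.interp(depth_m, depths, ratios)

def multi_variable(depth_m, contamination, precaution):
    """Toy multi-variable model: depth plus two modifiers (not FLEMO)."""
    loss = stage_damage(depth_m)
    if contamination:          # contamination raises the loss ratio
        loss *= 1.3
    if precaution:             # private precaution lowers it
        loss *= 0.8
    return min(loss, 1.0)

print(round(stage_damage(1.5), 3))                 # 0.35
print(round(multi_variable(1.5, True, False), 3))  # 0.455
```

    The extra inputs improve the micro-scale estimate, but on the meso-scale each additional variable (here, contamination and precaution shares) must itself be estimated area-wide, which is the uncertainty source the abstract highlights.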

  4. The variability of tropical ice cloud properties as a function of the large-scale context from ground-based radar-lidar observations over Darwin, Australia

    Science.gov (United States)

    Protat, A.; Delanoë, J.; May, P. T.; Haynes, J.; Jakob, C.; O'Connor, E.; Pope, M.; Wheeler, M. C.

    2011-08-01

    The high complexity of the cloud parameterizations now held in models puts more pressure on observational studies to provide useful means to evaluate them. One approach to the problem put forth in the modelling community is to evaluate under what atmospheric conditions the parameterizations fail to simulate the cloud properties and under what conditions they do a good job. It is the ambition of this paper to characterize the variability of the statistical properties of tropical ice clouds in different tropical "regimes" recently identified in the literature, to aid the development of better process-oriented parameterizations in models. For this purpose, the statistical properties of non-precipitating tropical ice clouds over Darwin, Australia are characterized using ground-based radar-lidar observations from the Atmospheric Radiation Measurement (ARM) Program. The ice cloud properties analysed are the frequency of ice cloud occurrence, the morphological properties (cloud top height and thickness), and the microphysical and radiative properties (ice water content, visible extinction, effective radius, and total concentration). The variability of these tropical ice cloud properties is then studied as a function of the large-scale cloud regimes derived from the International Satellite Cloud Climatology Project (ISCCP), the amplitude and phase of the Madden-Julian Oscillation (MJO), and the large-scale atmospheric regime as derived from a long-term record of radiosonde observations over Darwin. The vertical variability of ice cloud occurrence and microphysical properties is largest in all regimes (typically 1.5 orders of magnitude for ice water content and extinction, a factor of 3 in effective radius, and three orders of magnitude in concentration). 98 % of ice clouds in our dataset are characterized by either a small cloud fraction (smaller than 0.3) or a very large cloud fraction (larger than 0.9). In the ice part of the troposphere three distinct layers characterized by

  5. Resolving meso-scale seabed variability using reflection measurements from an autonomous underwater vehicle.

    Science.gov (United States)

    Holland, Charles W; Nielsen, Peter L; Dettmer, Jan; Dosso, Stan

    2012-02-01

    Seabed geoacoustic variability is driven by geological processes that occur over a wide spectrum of space-time scales. While the acoustics community has some understanding of horizontal fine-scale geoacoustic variability, less than O(10^0) m, and large-scale variability, greater than O(10^3) m, there is a paucity of data resolving the geoacoustic meso-scale, O(10^0-10^3) m. Measurements of the meso-scale along an ostensibly "benign" portion of the outer shelf reveal three classes of variability. The first class was expected and is due to horizontal variability of layer thicknesses: this was the only class that could be directly tied to seismic reflection data. The second class is due to rapid changes in layer properties and/or boundaries, occurring over scales of meters to hundreds of meters. The third class was observed as rapid variations of the angle/frequency dependent reflection coefficient within a single observation and is suggestive of variability at scales of a meter or less. Though generally assumed to be negligible in acoustic modeling, the second and third classes are indicative of strong horizontal geoacoustic variability within a given layer. The observations give early insight into possible effects of horizontal geoacoustic variability on long-range acoustic propagation and reverberation. © 2012 Acoustical Society of America

  6. The interannual precipitation variability in the southern part of Iran as linked to large-scale climate modes

    Energy Technology Data Exchange (ETDEWEB)

    Pourasghar, Farnaz; Jahanbakhsh, Saeed; Sari Sarraf, Behrooz [The University of Tabriz, Department of Physical Geography, Faculty of Humanities and Social Science, Tabriz (Iran, Islamic Republic of); Tozuka, Tomoki [The University of Tokyo, Department of Earth and Planetary Science, Graduate School of Science, Tokyo (Japan); Ghaemi, Hooshang [Iran Meteorological Organization, Tehran (Iran, Islamic Republic of); Yamagata, Toshio [The University of Tokyo, Department of Earth and Planetary Science, Graduate School of Science, Tokyo (Japan); Application Laboratory/JAMSTEC, Yokohama, Kanagawa (Japan)

    2012-11-15

    The interannual variation of precipitation in the southern part of Iran and its link with large-scale climate modes are examined using monthly data from 183 meteorological stations during 1974-2005. The majority of precipitation occurs during the rainy season from October to May. The interannual variation in fall and early winter, during the first part of the rainy season, apparently shows a significant positive correlation with both the Indian Ocean Dipole (IOD) and the El Niño-Southern Oscillation (ENSO). However, a partial correlation analysis used to extract the respective influences of the IOD and ENSO shows a significant positive correlation only with the IOD and not with ENSO. The southeasterly moisture flux anomaly over the Arabian Sea turns anti-cyclonically and transports more moisture to the southern part of Iran from the Arabian Sea, the Red Sea, and the Persian Gulf during the positive IOD. On the other hand, the moisture flux has a northerly anomaly over Iran during the negative IOD, which results in reduced moisture supply from the south. During the latter part of the rainy season, in late winter and spring, the interannual variation of precipitation is more strongly influenced by modes of variability over the Mediterranean Sea. The induced large-scale atmospheric circulation anomaly controls moisture supply from the Red Sea and the Persian Gulf. (orig.)
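
    The partial-correlation step used to separate the IOD and ENSO influences can be sketched as follows on synthetic indices; the series below are random stand-ins, not the station records used in the study.

```python
# Minimal sketch of partial correlation: correlation of rainfall with one
# index after linearly removing the influence of the other.
import numpy as np

def partial_corr(x, y, z):
    """r_{xy.z}: correlation of x and y controlling for z."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

rng = np.random.default_rng(1)
n = 300
enso = rng.standard_normal(n)                    # synthetic ENSO index
iod  = 0.5 * enso + rng.standard_normal(n)       # IOD partly tied to ENSO
rain = 0.8 * iod + 0.3 * rng.standard_normal(n)  # rainfall driven by IOD only

# Plain correlation makes ENSO look influential; partial correlation does not.
print(round(np.corrcoef(rain, enso)[0, 1], 2))
print(round(partial_corr(rain, enso, iod), 2))
```

    Because the two indices co-vary, the plain rainfall-ENSO correlation is inflated by the shared IOD signal; controlling for the IOD removes it, which is the diagnostic logic the abstract describes.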

  7. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ("the Task") and to assess whether involvement in the Task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large-scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large-scale solar purchasing amongst potential large-scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large-scale active solar heating purchasing activity within the UK. (author)

  8. Basin-scale heterogeneity in Antarctic precipitation and its impact on surface mass variability

    Directory of Open Access Journals (Sweden)

    J. Fyke

    2017-11-01

    Full Text Available Annually averaged precipitation in the form of snow, the dominant term of the Antarctic Ice Sheet surface mass balance, displays large spatial and temporal variability. Here we present an analysis of spatial patterns of regional Antarctic precipitation variability and their impact on integrated Antarctic surface mass balance variability simulated as part of a preindustrial 1800-year global, fully coupled Community Earth System Model simulation. Correlation and composite analyses based on this output allow for a robust exploration of Antarctic precipitation variability. We identify statistically significant relationships between precipitation patterns across Antarctica that are corroborated by climate reanalyses, regional modeling and ice core records. These patterns are driven by variability in large-scale atmospheric moisture transport, which itself is characterized by decadal- to centennial-scale oscillations around the long-term mean. We suggest that this heterogeneity in Antarctic precipitation variability has a dampening effect on overall Antarctic surface mass balance variability, with implications for regulation of Antarctic-sourced sea level variability, detection of an emergent anthropogenic signal in Antarctic mass trends and identification of Antarctic mass loss accelerations.

  10. Generation of large-scale PV scenarios using aggregated power curves

    DEFF Research Database (Denmark)

    Nuño Martinez, Edgar; Cutululis, Nicolaos Antonio

    2017-01-01

    The contribution of solar photovoltaic (PV) power to generation is becoming more relevant in modern power systems. Therefore, there is a need to model the variability of large-scale PV generation accurately. This paper presents a novel methodology to generate regional PV scenarios based on aggregated power curves rather than traditional physical PV conversion models. Our approach is based on hourly mesoscale reanalysis irradiation data and power measurements and does not require additional variables such as ambient temperature or wind speed. It was used to simulate the PV generation on the German system between 2012 and 2015, showing high levels of correlation with actual measurements (93.02–97.60%) and small deviations from the expected capacity factors (0.02–1.80%). Therefore, we are confident about the ability of the proposed model to accurately generate realistic large-scale PV scenarios.
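
    The aggregated-power-curve idea can be sketched as a simple lookup from irradiance to a regional capacity factor; the curve values and installed capacity below are hypothetical, not the fitted German curve.

```python
# Sketch of an aggregated power curve: regional PV output is read off an
# empirical irradiance-to-capacity-factor curve instead of a physical
# panel-level conversion model.
import numpy as np

# Empirical aggregated curve: regional capacity factor vs. irradiance (W/m^2)
irradiance_grid = np.array([0, 100, 200, 400, 600, 800, 1000])
capacity_factor = np.array([0.0, 0.06, 0.14, 0.32, 0.50, 0.66, 0.75])

def pv_output(irradiance_wm2, installed_mw):
    """Regional PV power (MW) from hourly reanalysis irradiance."""
    cf = np.interp(irradiance_wm2, irradiance_grid, capacity_factor)
    return cf * installed_mw

hourly_irradiance = np.array([0, 150, 500, 900, 300])  # synthetic day slice
print(pv_output(hourly_irradiance, installed_mw=40000.0))
```

    Because the curve is fitted directly to aggregate measurements, smoothing effects across the region are captured implicitly, which is why extra variables such as temperature can be dispensed with.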

  11. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    KAUST Repository

    Razafindrakoto, Hoby

    2015-04-22

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
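
    The MDS embedding step can be sketched with classical (Torgerson) scaling applied to a plain Euclidean misfit between toy slip fields; the paper's normalized squared and grey-scale metrics are not reproduced here.

```python
# Sketch of multidimensional scaling for rupture-model comparison:
# pairwise distances between toy 2-D "slip" fields are embedded in 2-D.
import numpy as np

rng = np.random.default_rng(2)
base = rng.random((20, 20))                       # toy slip distribution
models = [base + 0.05 * i * rng.random((20, 20)) for i in range(6)]

# Pairwise Euclidean misfit between flattened models
n = len(models)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D[i, j] = np.sqrt(np.sum((models[i] - models[j]) ** 2))

# Classical MDS: double-centre the squared distances, take top eigenvectors
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, v = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:2]                     # two leading axes
coords = v[:, idx] * np.sqrt(np.maximum(w[idx], 0))

print(coords.shape)                               # (6, 2)
```

    The spread of the resulting point cloud is the quantity interpreted in the abstract: tightly clustered points indicate similar rupture models, outliers indicate systematically deviating inversions.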

  13. An ultrahigh vacuum fast-scanning and variable temperature scanning tunneling microscope for large scale imaging.

    Science.gov (United States)

    Diaconescu, Bogdan; Nenchev, Georgi; de la Figuera, Juan; Pohl, Karsten

    2007-10-01

    We describe the design and performance of a fast-scanning, variable temperature scanning tunneling microscope (STM) operating from 80 to 700 K in ultrahigh vacuum (UHV), which routinely achieves large scale atomically resolved imaging of compact metallic surfaces. An efficient in-vacuum vibration isolation and cryogenic system allows for no external vibration isolation of the UHV chamber. The design of the sample holder and STM head permits imaging of the same nanometer-size area of the sample before and after sample preparation outside the STM base. Refractory metal samples are frequently annealed up to 2000 K and their cooldown time from room temperature to 80 K is 15 min. The vertical resolution of the instrument was found to be about 2 pm at room temperature. The coarse motor design allows both translation and rotation of the scanner tube. The total scanning area is about 8 × 8 μm². The sample temperature can be adjusted by a few tens of degrees while scanning over the same sample area.

  14. Energy transfers in large-scale and small-scale dynamos

    Science.gov (United States)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) regimes using numerical simulations of MHD turbulence for Pm = 20 (SSD) and Pm = 0.2 (LSD) on a 1024^3 grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic fields at the large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to SSD.
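
    The shell decomposition underlying such transfer analyses can be sketched on a synthetic 2-D field: Fourier energy is binned into integer-wavenumber shells. Computing the actual transfer functions would additionally require the triadic nonlinear terms, which are omitted here.

```python
# Sketch of wavenumber-shell decomposition: bin the spectral energy of a
# (synthetic) field into shells |k| ~ 1, 2, ..., N/2 - 1.
import numpy as np

N = 128
rng = np.random.default_rng(3)
u = rng.standard_normal((N, N))                 # synthetic velocity component
uk = np.fft.fft2(u) / N**2                      # normalized Fourier coefficients

kx = np.fft.fftfreq(N, d=1.0 / N)               # integer wavenumbers
KX, KY = np.meshgrid(kx, kx, indexing="ij")
kmag = np.sqrt(KX**2 + KY**2)

shells = np.arange(1, N // 2)
E = np.array([np.sum(np.abs(uk[(kmag >= k - 0.5) & (kmag < k + 0.5)])**2)
              for k in shells])

# Parseval: shell energies (plus the k~0 and corner modes left out of the
# shells) sum to the mean-square of the field.
print(E.sum() <= np.mean(u**2) + 1e-12)
```

    A shell-to-shell transfer diagnostic then pairs such shells of the velocity and magnetic fields through the nonlinear terms of the MHD equations, which is the quantity the abstract reports.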

  15. Impacts of Climate Variability on Latin American Small-scale Fisheries

    Directory of Open Access Journals (Sweden)

    Omar Defeo

    2013-12-01

    Full Text Available Small-scale fisheries (SSFs are social-ecological systems that play a critical role in terms of food security and poverty alleviation in Latin America. These fisheries are increasingly threatened by anthropogenic and climatic drivers acting at multiple scales. We review the effects of climate variability on Latin American SSFs, and discuss the combined effects of two additional human drivers: globalization of markets and governance. We show drastic long-term and large-scale effects of climate variability, e.g., sea surface temperature anomalies, wind intensity, sea level, and climatic indices, on SSFs. These variables, acting in concert with economic drivers, have exacerbated stock depletion rates in Latin American SSFs. The impact of these drivers varied according to the life cycle and latitudinal distribution of the target species, the characteristics of the oceanographic systems, and the inherent features of the social systems. Our review highlights the urgent need to improve management and governance systems to promote resilience as a way to cope with the increasing uncertainty about the impacts of climate and globalization of markets on Latin American SSFs.

  16. Mesoscale to Synoptic Scale Cloud Variability

    Science.gov (United States)

    Rossow, William B.

    1998-01-01

    The atmospheric circulation and its interaction with the oceanic circulation involve non-linear and non-local exchanges of energy and water over a very large range of space and time scales. These exchanges are revealed, in part, by the related variations of clouds, which occur on a similar range of scales as the atmospheric motions that produce them. Collection of comprehensive measurements of the properties of the atmosphere, clouds and surface allows for diagnosis of some of these exchanges. The multi-satellite-network approach of the International Satellite Cloud Climatology Project (ISCCP) comes closest to providing complete coverage of the relevant range of space and time scales over which the clouds, atmosphere and ocean vary. A nearly 15-yr dataset is now available that covers the range from 3 hr and 30 km to decadal and planetary scales. This paper considers three topics: (1) cloud variations at the smallest scales and how they may influence radiation-cloud interactions; (2) cloud variations at "moderate" scales and how they may cause natural climate variability; and (3) cloud variations at the largest scales and how they affect the climate. The emphasis in this discussion is on the more mature subject of cloud-radiation interactions. There is now a need to begin similar detailed diagnostic studies of water exchange processes.

  17. Probabilistic discrimination between large-scale environments of intensifying and decaying African Easterly Waves

    Energy Technology Data Exchange (ETDEWEB)

    Agudelo, Paula A. [Area Hidrometria e Instrumentacion Carrera, Empresas Publicas de Medellin, Medellin (Colombia); Hoyos, Carlos D.; Curry, Judith A.; Webster, Peter J. [School of Earth and Atmospheric Sciences, Georgia Institute of Technology, Atlanta, GA (United States)

    2011-04-15

    About 50-60% of Atlantic tropical cyclones (TCs), including nearly 85% of intense hurricanes, have their origins as African Easterly Waves (AEWs). However, predicting the likelihood of AEW intensification remains a difficult task. We have developed a Bayesian diagnostic methodology to understand the genesis of North Atlantic TCs spawned by AEWs through the examination of the characteristics of the AEW itself together with the large-scale environment, resulting in a probabilistic discrimination between large-scale environments associated with intensifying and decaying AEWs. The methodology is based on a new objective and automatic AEW tracking scheme applied to the period 1980 to 2001, based on spatio-temporally Fourier-filtered relative vorticity and meridional winds at different levels and outgoing longwave radiation. Using the AEW and Hurricane Best Track Files (HURDAT) data sets, probability density functions of environmental variables that discriminate between AEWs that decay, become TCs or become major hurricanes are determined. Results indicate that the initial amplitude of the AEWs is a major determinant of TC genesis, and that TC genesis risk increases when the wave enters an environment characterized by pre-existing large-scale convergence and moist convection. For the prediction of genesis, the most useful variables are column-integrated heating, vertical velocity and specific humidity, a combined form of divergence and vertical velocity, and SST. It is also found that the state of the large-scale environment modulates the annual cycle and interannual variability of the AEW intensification efficiency. (orig.)
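
    The probabilistic-discrimination idea, class-conditional densities of an environmental variable combined through Bayes' rule, can be sketched on synthetic data; the variable, the two sample populations and the base rate below are all hypothetical.

```python
# Hypothetical sketch of Bayesian discrimination between large-scale
# environments of intensifying and decaying waves, using one variable.
import numpy as np

rng = np.random.default_rng(4)
# Synthetic stand-ins for, e.g., column-integrated humidity samples
decaying     = rng.normal(45.0, 5.0, 500)
intensifying = rng.normal(55.0, 5.0, 500)

def gaussian_kde(samples, x, bw=2.0):
    """Simple fixed-bandwidth kernel density estimate at points x."""
    z = (x[:, None] - samples[None, :]) / bw
    return np.mean(np.exp(-0.5 * z**2), axis=1) / (bw * np.sqrt(2 * np.pi))

x = np.array([40.0, 50.0, 60.0])                 # environments to classify
prior_int = 0.3                                  # assumed base rate of genesis
p_int = gaussian_kde(intensifying, x) * prior_int
p_dec = gaussian_kde(decaying, x) * (1 - prior_int)
posterior = p_int / (p_int + p_dec)              # P(intensify | variable)
print(np.round(posterior, 2))                    # rises with the variable
```

    In the study, such class-conditional densities are estimated for several environmental variables, and the likelihood ratio between the "intensifying" and "decaying" populations is what makes a variable useful for genesis prediction.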

  18. Distributed and hierarchical control techniques for large-scale power plant systems

    International Nuclear Information System (INIS)

    Raju, G.V.S.; Kisner, R.A.

    1985-08-01

    In large-scale systems, integrated and coordinated control functions are required to maximize plant availability, to allow maneuverability through various power levels, and to meet externally imposed regulatory limitations. Nuclear power plants are large-scale systems. Prime subsystems are those that contribute directly to the behavior of the plant's ultimate output. The prime subsystems in a nuclear power plant include the reactor, primary and intermediate heat transport, steam generator, turbine generator, and feedwater system. This paper describes and discusses the continuous-variable control system developed to supervise prime plant subsystems for optimal control and coordination.

  19. Secondary Analysis of Large-Scale Assessment Data: An Alternative to Variable-Centred Analysis

    Science.gov (United States)

    Chow, Kui Foon; Kennedy, Kerry John

    2014-01-01

    International large-scale assessments are now part of the educational landscape in many countries and often feed into major policy decisions. Yet, such assessments also provide data sets for secondary analysis that can address key issues of concern to educators and policymakers alike. Traditionally, such secondary analyses have been based on a…

  20. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  1. Some effects of integrated production planning in large-scale kitchens

    DEFF Research Database (Denmark)

    Engelund, Eva Høy; Friis, Alan; Jacobsen, Peter

    2005-01-01

    Integrated production planning in large-scale kitchens proves advantageous for increasing the overall quality of the food produced and the flexibility in terms of a diverse food supply. The aim is to increase the flexibility and the variability in the production as well as the focus on freshness ...

  2. Variability of Snow Ablation: Consequences for Runoff Generation at the Process Scale and Lessons for Large Cold Regions Catchments

    Science.gov (United States)

    Pomeroy, J. W.; Carey, S. K.; Granger, R. J.; Hedstrom, N. R.; Janowicz, R.; Pietroniro, A.; Quinton, W. L.

    2002-12-01

    The supply of water to large northern catchments such as the Mackenzie and Yukon Rivers is dominated by snowmelt runoff from first order mountain catchments. In order to understand the timing, peak and duration of the snowmelt freshet at larger scale it is important to appreciate the spatial and temporal variability of snowmelt and runoff processes at the source. For this reason a comprehensive hydrology study of a Yukon River headwaters catchment, Wolf Creek Research Basin, near Whitehorse, has focussed on the spatial variability of snow ablation and snowmelt runoff generation and the consequences for the water balance in a mountain tundra zone. In northern mountain tundra, surface energetics vary with receipt of solar radiation, shrub vegetation cover and initial snow accumulation. Therefore the timing of snowmelt is controlled by aspect, in that south facing slopes become snow-free 4-5 weeks before the north facing. Runoff generation differs widely between the slopes; there is normally no spring runoff generated from the south facing slope as all meltwater evaporates or infiltrates. On the north facing slope, snowmelt provides substantial runoff to hillside macropores which rapidly route water to the stream channel. Macropore distribution is associated with organic terrain and discontinuous permafrost, which in turn result from the summer surface energetics. Therefore the influence of small-scale snow redistribution and energetics as controlled by topography must be accounted for when calculating contributing areas to larger scale catchments, and estimating the effectiveness of snowfall in generating streamflow. This concept is quite distinct from the drainage controlled contributing area that has been found useful in temperate-zone hydrology.

  3. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish a large-scale network management, a concept essentially characterized by (1) broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and out sourcing. The article is part of a planned series

  4. Large-scale circulation departures related to wet episodes in northeast Brazil

    Science.gov (United States)

    Sikdar, D. N.; Elsner, J. B.

    1985-01-01

    Large-scale circulation features are presented as related to wet spells over northeast Brazil (Nordeste) during the rainy season (March and April) of 1979. The rainy season is divided into dry and wet periods; the FGGE and geostationary satellite data were averaged; and mean and departure fields of basic variables and cloudiness were studied. Analysis of seasonal mean circulation features shows: lowest sea level easterlies beneath upper level westerlies; weak meridional winds; high relative humidity over the Amazon basin and relatively dry conditions over the South Atlantic Ocean. A fluctuation was found in the large-scale circulation features on time scales of a few weeks or so over Nordeste and the South Atlantic sector. Even the subtropical High SLPs have large departures during wet episodes, implying a short-period oscillation in the Southern Hemisphere Hadley circulation.

  5. How uncertainty in socio-economic variables affects large-scale transport model forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time to assess transport projects. However, by modelling complex systems transport models have an inherent uncertainty which increases over time. As a consequence, the longer...... the period forecasted the less reliable is the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature only few studies analyze uncertainty propagation patterns over...

  6. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  7. Variability, trends, and teleconnections of stream flows with large-scale climate signals in the Omo-Ghibe River Basin, Ethiopia.

    Science.gov (United States)

    Degefu, Mekonnen Adnew; Bewket, Woldeamlak

    2017-04-01

    This study assesses variability, trends, and teleconnections of stream flow with large-scale climate signals (global sea surface temperatures (SSTs)) for the Omo-Ghibe River Basin of Ethiopia. Fourteen hydrological indices of variability and extremes were defined from daily stream flow data series and analyzed for two common periods, 1972-2006 for 5 stations and 1982-2006 for 15 stations. The Mann-Kendall test was used to detect trends at the 0.05 significance level, and simple correlation analysis was applied to evaluate associations between the selected stream flow indices and SSTs. We found weak and mixed (upward and downward) trend signals for annual and wet (Kiremt) season flows. Indices generated for high-flow (flood) magnitudes showed the same weak trend signals. However, trend tests for flood frequencies and low-flow magnitudes showed little evidence of increasing change. It was also found that the El Niño-Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD) are the major anomalies affecting stream flow variability in the Omo-Ghibe Basin. The strongest associations are observed between ENSO/Niño3.4 and the stream flow in August and September, mean Kiremt flow (July-September), and flood frequency (peak over threshold on average three peaks per year (POT3_Fre)). The findings of this study provide a general overview of the long-term variability and predictability of stream flows for the Omo-Ghibe River Basin.
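
    The Mann-Kendall trend test used in this record can be sketched in a few lines. This simplified version omits the tie correction applied in practice, and the input series is illustrative only.

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic and its Z score.
    Simplified sketch: no correction for tied values."""
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A monotonically increasing series yields the maximum S and a large positive Z
s, z = mann_kendall([1, 2, 3, 5, 8, 13, 21, 34])
```

    At the 0.05 significance level used in the study, |Z| > 1.96 indicates a significant trend.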

  8. Identifying the Source of Large-Scale Atmospheric Variability in Jupiter

    Science.gov (United States)

    Orton, Glenn

    2011-01-01

    We propose to use the unique mid-infrared filtered imaging and spectroscopic capabilities of the Subaru COMICS instrument to determine the mechanisms associated with recent unusual rapid albedo and color transformations of several of Jupiter's bands, particularly its South Equatorial Belt (SEB), as a means to understand the coupling between its dynamics and chemistry. These observations will characterize the temperature, degree of cloud cover, and distribution of minor gases that serve as indirect tracers of vertical motions in regions that will be undergoing unusual large-scale changes in dynamics and chemistry: the SEB, as well as regions near the equator and Jupiter's North Temperate Belt. COMICS is ideal for this investigation because of its efficiency in doing both imaging and spectroscopy, its 24.5-μm filter that is unique to 8-meter-class telescopes, its wide field of view that allows imaging of nearly all of Jupiter's disk, coupled with a high diffraction-limited angular resolution and optimal mid-infrared atmospheric transparency.

  9. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting occupy an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy, and the political sphere. In this very position, large-scale research and policy consulting lack the institutional guarantees and rational background that characterize their sociological environment. Large-scale research can neither produce innovative goods with regard to profitability, nor can it hope for full recognition by the basis-oriented scientific community. Policy consulting neither holds the political system's competence to make decisions, nor can it be judged successfully by the critical standards of established social science, at least in the present situation. This intermediary position of large-scale research and policy consulting supports, on three points, the thesis that this is a new form of the institutionalization of science: 1) external control, 2) the form of organization, and 3) the theoretical conception of large-scale research and policy consulting. (orig.) [de

  10. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  11. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    Science.gov (United States)

    Hero, Alfred O.; Rajaratnam, Bala

    2015-01-01

    When can reliable inference be drawn in the “Big Data” context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for “Big Data”. Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exascale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks. PMID:27087700
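
    A minimal, hypothetical illustration of the sample-starved regime (n much smaller than p) described above: with few samples and many variables, thresholding the sample correlation matrix yields "discoveries" even when the true correlations are all zero. The names, sizes, and threshold are illustrative, not taken from the paper.

```python
import math
import random

def corr(u, v):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v))
    return num / den

def count_discoveries(data, rho):
    """Count variable pairs whose sample correlation exceeds rho in magnitude.
    In the n << p regime, many such 'discoveries' are spurious."""
    p = len(data)
    return sum(abs(corr(data[i], data[j])) > rho
               for i in range(p - 1) for j in range(i + 1, p))

random.seed(0)
n, p = 8, 40  # far fewer samples than variables
# Independent Gaussian variables: every true pairwise correlation is zero
data = [[random.gauss(0, 1) for _ in range(n)] for _ in range(p)]
spurious = count_discoveries(data, 0.8)
```

    Any nonzero count here is a false discovery, which is exactly the sample-complexity issue the framework above quantifies.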

  12. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.

    Science.gov (United States)

    Hero, Alfred O; Rajaratnam, Bala

    2016-01-01

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exascale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.

  13. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    Science.gov (United States)

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    The coming deluge of genome data, and the attendant needs for storing and processing large-scale genome data, easy access to biomedical analysis tools, and efficient data sharing and retrieval, present significant challenges. The variability in data volume results in variable computing and storage requirements; therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on the Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via the HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Large-scale circulation departures related to wet episodes in north-east Brazil

    Science.gov (United States)

    Sikdar, Dhirendra N.; Elsner, James B.

    1987-01-01

    Large scale circulation features are presented as related to wet spells over northeast Brazil (Nordeste) during the rainy season (March and April) of 1979. The rainy season is divided into dry and wet periods; the FGGE and geostationary satellite data were averaged; and mean and departure fields of basic variables and cloudiness were studied. Analysis of seasonal mean circulation features shows: lowest sea level easterlies beneath upper level westerlies; weak meridional winds; high relative humidity over the Amazon basin and relatively dry conditions over the South Atlantic Ocean. A fluctuation was found in the large scale circulation features on time scales of a few weeks or so over Nordeste and the South Atlantic sector. Even the subtropical High SLPs have large departures during wet episodes, implying a short period oscillation in the Southern Hemisphere Hadley circulation.

  15. Improving seasonal forecasts of hydroclimatic variables through the state of multiple large-scale climate signals

    Science.gov (United States)

    Castelletti, A.; Giuliani, M.; Block, P. J.

    2017-12-01

    Increasingly uncertain hydrologic regimes combined with more frequent and intense extreme events are challenging water systems management worldwide, emphasizing the need for accurate medium- to long-term predictions to prompt timely anticipatory operations. Although modern forecasts are skillful over short lead times (from hours to days), predictability generally tends to decrease over longer lead times. Global climate teleconnections, such as the El Niño Southern Oscillation (ENSO), may contribute to extending forecast lead times. However, the ENSO teleconnection is well defined in some locations, such as the Western USA and Australia, while there is no consensus on how it can be detected and used in other regions, particularly in Europe, Africa, and Asia. In this work, we generalize the Niño Index Phase Analysis (NIPA) framework by contributing the Multi Variate Niño Index Phase Analysis (MV-NIPA), which allows capturing the state of multiple large-scale climate signals (i.e., ENSO, North Atlantic Oscillation, Pacific Decadal Oscillation, Atlantic Multi-decadal Oscillation, Indian Ocean Dipole) to forecast hydroclimatic variables on a seasonal time scale. Specifically, our approach distinguishes the different phases of the considered climate signals and, for each phase, identifies relevant anomalies in Sea Surface Temperature (SST) that influence the local hydrologic conditions. The potential of the MV-NIPA framework is demonstrated through an application to the Lake Como system, a regulated lake in northern Italy which is mainly operated for flood control and irrigation supply. Numerical results show high correlations between seasonal SST values and one-season-ahead precipitation in the Lake Como basin. The skill of the resulting MV-NIPA forecast outperforms that of ECMWF products. This information represents a valuable contribution to partially anticipate the summer water availability, especially during drought events, ultimately supporting the improvement of the Lake Como
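
    A toy sketch of the phase-conditioning idea behind NIPA-style analysis (not the authors' MV-NIPA implementation): correlations between an SST predictor and a hydroclimatic variable are computed separately within each phase of a climate signal. All data below are hypothetical and constructed so that the relationship is strong in one phase and absent in the other.

```python
def pearson(u, v):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

def phase_correlations(phase, sst, precip):
    """Correlate an SST predictor with precipitation separately within each
    climate-signal phase, as in NIPA-style conditioning."""
    out = {}
    for ph in set(phase):
        idx = [i for i, p in enumerate(phase) if p == ph]
        out[ph] = pearson([sst[i] for i in idx], [precip[i] for i in idx])
    return out

# Hypothetical record: strong SST-precipitation link in "nino" years,
# essentially none in "nina" years (data constructed for illustration)
phase  = ["nino", "nina", "nino", "nina", "nino", "nina", "nino", "nina"]
sst    = [1.2, -0.8, 0.9, -1.1, 1.5, -0.4, 0.7, -0.9]
precip = [3.1, 0.8, 2.6, 1.9, 3.6, 1.8, 2.2, 1.5]
r = phase_correlations(phase, sst, precip)
```

    Conditioning on phase is what lets the full method pick out SST anomaly regions that are predictive only under particular signal states.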

  16. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...

  17. Local-scale models reveal ecological niche variability in amphibian and reptile communities from two contrasting biogeographic regions

    Directory of Open Access Journals (Sweden)

    Alberto Muñoz

    2016-10-01

    Full Text Available Ecological Niche Models (ENMs) are widely used to describe how environmental factors influence species distribution. Modelling at a local scale, compared to a large scale within a high environmental gradient, can improve our understanding of ecological species niches. The main goal of this study is to assess and compare the contribution of environmental variables to amphibian and reptile ENMs in two Spanish national parks located in contrasting biogeographic regions, i.e., the Mediterranean and the Atlantic area. The ENMs were built with maximum entropy modelling using 11 environmental variables in each territory. The contributions of these variables to the models were analysed and classified using various statistical procedures (Mann–Whitney U tests, Principal Components Analysis and General Linear Models). Distance to the hydrological network was consistently the most relevant variable for both parks and taxonomic classes. Topographic variables (i.e., slope and altitude) were the second most predictive variables, followed by climatic variables. Differences in variable contribution were observed between parks and taxonomic classes. Variables related to water availability had the larger contribution to the models in the Mediterranean park, while topography variables were decisive in the Atlantic park. Specific response curves to environmental variables were in accordance with the biogeographic affinity of species (Mediterranean and non-Mediterranean species) and taxonomy (amphibians and reptiles). Interestingly, these results were observed for species located in both parks, particularly those situated at their range limits. Our findings show that ecological niche models built at local scale reveal differences in habitat preferences within a wide environmental gradient. Therefore, modelling at local scales rather than assuming large-scale models could be preferable for the establishment of conservation strategies for herptile species in natural

  18. Local-scale models reveal ecological niche variability in amphibian and reptile communities from two contrasting biogeographic regions

    Science.gov (United States)

    Santos, Xavier; Felicísimo, Ángel M.

    2016-01-01

    Ecological Niche Models (ENMs) are widely used to describe how environmental factors influence species distribution. Modelling at a local scale, compared to a large scale within a high environmental gradient, can improve our understanding of ecological species niches. The main goal of this study is to assess and compare the contribution of environmental variables to amphibian and reptile ENMs in two Spanish national parks located in contrasting biogeographic regions, i.e., the Mediterranean and the Atlantic area. The ENMs were built with maximum entropy modelling using 11 environmental variables in each territory. The contributions of these variables to the models were analysed and classified using various statistical procedures (Mann–Whitney U tests, Principal Components Analysis and General Linear Models). Distance to the hydrological network was consistently the most relevant variable for both parks and taxonomic classes. Topographic variables (i.e., slope and altitude) were the second most predictive variables, followed by climatic variables. Differences in variable contribution were observed between parks and taxonomic classes. Variables related to water availability had the larger contribution to the models in the Mediterranean park, while topography variables were decisive in the Atlantic park. Specific response curves to environmental variables were in accordance with the biogeographic affinity of species (Mediterranean and non-Mediterranean species) and taxonomy (amphibians and reptiles). Interestingly, these results were observed for species located in both parks, particularly those situated at their range limits. Our findings show that ecological niche models built at local scale reveal differences in habitat preferences within a wide environmental gradient. Therefore, modelling at local scales rather than assuming large-scale models could be preferable for the establishment of conservation strategies for herptile species in natural parks. PMID

  19. Nearly incompressible fluids: Hydrodynamics and large scale inhomogeneity

    International Nuclear Information System (INIS)

    Hunana, P.; Zank, G. P.; Shaikh, D.

    2006-01-01

    incompressible equations for higher order fluctuation components are derived and it is shown that they converge to the usual homogeneous nearly incompressible equations in the limit of no large-scale background. We use a time and length scale separation procedure to obtain wave equations for the acoustic pressure and velocity perturbations propagating on fast-time-short-wavelength scales. On these scales, the pseudosound relation, used to relate density and pressure fluctuations, is also obtained. In both cases, the speed of propagation (sound speed) depends on background variables and therefore varies spatially. For slow-time scales, a simple pseudosound relation cannot be obtained and density and pressure fluctuations are implicitly related through a relation which can be solved only numerically. Subject to some simplifications, a generalized inhomogeneous pseudosound relation is derived. With this paper, we extend the theory of nearly incompressible hydrodynamics to flows, including the solar wind, which include large-scale inhomogeneities (in this case radially symmetric and in equilibrium)

  20. Assessing millennial-scale variability during the Holocene: A perspective from the western tropical Pacific

    Science.gov (United States)

    Khider, D.; Jackson, C. S.; Stott, L. D.

    2014-03-01

    We investigate the relationship between tropical Pacific and Southern Ocean variability during the Holocene using the stable oxygen isotope and magnesium/calcium records of co-occurring planktonic and benthic foraminifera from a marine sediment core collected in the western equatorial Pacific. The planktonic record exhibits millennial-scale sea surface temperature (SST) oscillations of 0.5°C over the Holocene, while the benthic δ18Oc documents 0.10‰ millennial-scale changes in Upper Circumpolar Deep Water (UCDW), a water mass which outcrops in the Southern Ocean. Solar forcing as an explanation for millennial-scale SST variability requires (1) a large climate sensitivity and (2) a long, 400 year delayed response, suggesting that if solar forcing is the cause of the variability, it would need to be considerably amplified by processes within the climate system, at least at the core location. We also explore the possibility that the SST variability arose from volcanic forcing using a simple red noise model. Our best estimate of volcanic forcing falls short of reproducing the amplitude of observed SST variations, although it produces power at low frequency similar to that observed in the MD81 record. Although we cannot totally discount the volcanic and solar forcing hypotheses, we are left to consider that the most plausible source of Holocene millennial-scale variability lies within the climate system itself. In particular, UCDW variability coincided with deep North Atlantic changes, indicating a role for the deep ocean in Holocene millennial-scale variability.
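
    The "simple red noise model" mentioned above is typically an AR(1) process, whose spectrum concentrates power at low frequencies much like the observed record. Here is a minimal sketch; the persistence and noise parameters are illustrative, not those of the study.

```python
import random

def ar1_series(n, phi, sigma, seed=42):
    """Red-noise (AR(1)) null model: x[t] = phi * x[t-1] + white noise.
    High phi gives strong persistence and low-frequency power."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, sigma))
    return x

# 500 steps of strongly persistent red noise (illustrative parameters)
series = ar1_series(500, phi=0.9, sigma=0.1)
```

    Comparing the spectrum of a proxy record against an ensemble of such series is a standard way to test whether apparent millennial-scale oscillations exceed what red noise alone would produce.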

  1. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  2. Consistency and Variability in Talk about "Diversity": An Empirical Analysis of Discursive Scope in Swiss Large Scale Enterprises

    Directory of Open Access Journals (Sweden)

    Anja Ostendorp

    2009-02-01

    Full Text Available Traditionally, discussions of "diversity" in organizations refer either to an ideal "management" of a diverse workforce or to the specific concerns of minorities. The term diversity, however, entails a growing number of translations. Highlighting this diversity of diversity, the concept cannot be conceived of as merely social-normative or economic-functional. Therefore, the present study empirically scrutinizes the current scope of diversity-talk in Swiss large-scale enterprises from a discursive psychological perspective. First, it provides five so-called interpretative repertoires, which focus on: image, market, minorities, themes, and difference. Second, it discusses why and how persons oscillate between consistency and variability whenever they draw upon these different repertoires. Finally, it points out possibilities for combining them. This empirical approach to diversity in organizations offers new aspects to the current debate on diversity and introduces crucial concepts of a discursive psychological analysis. URN: urn:nbn:de:0114-fqs090218

  3. Effect of Variable Spatial Scales on USLE-GIS Computations

    Science.gov (United States)

    Patil, R. J.; Sharma, S. K.

    2017-12-01

    Use of an appropriate spatial scale is very important in Universal Soil Loss Equation (USLE) based spatially distributed soil erosion modelling. This study assessed annual rates of soil erosion at different spatial scales/grid sizes and analysed how changes in spatial scale affect USLE-GIS computations, using simulation and statistical measures of variability. Efforts have been made in this study to recommend an optimum spatial scale for further USLE-GIS computations for management and planning in the study area. The research was conducted in the Shakkar River watershed, situated in the Narsinghpur and Chhindwara districts of Madhya Pradesh, India. Remote sensing and GIS techniques were integrated with the USLE to predict the spatial distribution of soil erosion in the study area at four spatial scales, viz., 30 m, 50 m, 100 m, and 200 m. Rainfall data, a soil map, a digital elevation model (DEM), an executable C++ program, and a satellite image of the area were used to prepare the thematic maps for the various USLE factors. Annual rates of soil erosion were estimated for 15 years (1992 to 2006) at the four grid sizes. Statistical analysis of the four estimated datasets showed that the sediment loss dataset at the 30 m spatial scale has the minimum standard deviation (2.16), variance (4.68), and percent deviation from observed values (2.68 - 18.91%), and the highest coefficient of determination (R2 = 0.874) among the four datasets. It is therefore recommended to adopt this spatial scale for USLE-GIS computations in the study area, owing to its minimum statistical variability and better agreement with the observed sediment loss data. This study also indicates large scope for the use of finer spatial scales in spatially distributed soil erosion modelling.
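
    The scale-comparison statistics reported above (standard deviation, variance, percent deviation from observations, and R²) can be sketched in a few lines. The sediment-loss numbers below are hypothetical stand-ins, not the study's data.

```python
import numpy as np

def evaluate_scale(predicted, observed):
    """Summary statistics of the kind used to compare USLE estimates
    across grid sizes: standard deviation, variance, percent deviation
    from observations, and coefficient of determination (R^2)."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    pct_dev = 100.0 * np.abs(predicted - observed) / observed
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return {
        "std": predicted.std(ddof=1),
        "var": predicted.var(ddof=1),
        "pct_dev_range": (pct_dev.min(), pct_dev.max()),
        "r2": 1.0 - ss_res / ss_tot,
    }

# Hypothetical annual sediment loss (t/ha) for 1992-2006 at one grid size.
observed = np.array([8.1, 6.4, 9.0, 7.2, 5.9, 8.8, 7.5, 6.1,
                     9.3, 7.0, 6.6, 8.2, 7.9, 6.8, 8.5])
predicted_30m = observed * 1.05  # stand-in for a 30 m USLE-GIS run
stats = evaluate_scale(predicted_30m, observed)
print(round(float(stats["r2"]), 3))
```

    Repeating this for each grid size and picking the dataset with the lowest variability and highest R² mirrors the selection logic described in the abstract.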

  4. Prediction of spatially variable unsaturated hydraulic conductivity using scaled particle-size distribution functions

    NARCIS (Netherlands)

    Nasta, P.; Romano, N.; Assouline, S; Vrugt, J.A.; Hopmans, J.W.

    2013-01-01

    Simultaneous scaling of soil water retention and hydraulic conductivity functions provides an effective means to characterize the heterogeneity and spatial variability of soil hydraulic properties in a given study area. The statistical significance of this approach largely depends on the number of

  5. Insertion Sequence-Caused Large Scale-Rearrangements in the Genome of Escherichia coli

    Science.gov (United States)

    2016-07-18

    affordable approach to genome-wide characterization of genetic variation in bacterial and eukaryotic genomes (1–3). In addition to small-scale...Paired-End Reads), that uses a graph-based algorithm (27) capable of detecting most large-scale variation involving repetitive regions, including novel...Avila, P., Grinsted, J. and De La Cruz, F. (1988) Analysis of the variable endpoints generated by one-ended transposition of Tn21. J. Bacteriol., 170

  6. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    Science.gov (United States)

    Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dunner, R.; Essinger-Hileman, T.; Eimer, J.

    2015-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe approximately 70% of the sky. A variable-delay polarization modulator provides modulation of the polarization at approximately 10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that spans both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  7. Simple Crosscutting Concerns Are Not So Simple : Analysing Variability in Large-Scale Idioms-Based Implementations

    NARCIS (Netherlands)

    Bruntink, M.; Van Deursen, A.; d’Hondt, M.; Tourwé, T.

    2007-01-01

    This paper describes a method for studying idioms-based implementations of crosscutting concerns, and our experiences with it in the context of a real-world, large-scale embedded software system. In particular, we analyse a seemingly simple concern, tracing, and show that it exhibits significant

  8. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes a method for evaluating the faultless operation of large scale integration circuits (LSI) and very large scale integration circuits (VLSI). The article presents a comparative analysis of the factors which determine the faultlessness of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless operation of LSI and VLSI. The main part describes a proposed algorithm and program for analysis of the fault rate in LSI and VLSI circuits.

  9. A Dynamic Optimization Strategy for the Operation of Large Scale Seawater Reverse Osmosis System

    Directory of Open Access Journals (Sweden)

    Aipeng Jiang

    2014-01-01

    In this work, an efficient strategy was proposed for solving the dynamic model of a seawater reverse osmosis (SWRO) system. Since the dynamic model is formulated as a set of differential-algebraic equations, simultaneous strategies based on collocation on finite elements were used to transform the dynamic optimization problem (DAOP) into a large scale nonlinear programming problem, named Opt2. Then, simulation of the RO process and storage tanks was carried out element by element and step by step with fixed control variables, and the obtained values of these variables were used as the initial values for the optimal solution of the SWRO system. Finally, in order to accelerate computing efficiency while keeping sufficient accuracy in the solution of Opt2, a simple but efficient finite element refinement rule was used to reduce the scale of Opt2. The proposed strategy was applied to a large scale SWRO system with 8 RO plants and 4 storage tanks as a case study. The computing results show that the proposed strategy is quite effective for optimal operation of the large scale SWRO system; the optimization problem can be solved within a few dozen iterations and several minutes when the load and other operating parameters fluctuate.

  10. Accounting for Unresolved Spatial Variability in Large Scale Models: Development and Evaluation of a Statistical Cloud Parameterization with Prognostic Higher Order Moments

    Energy Technology Data Exchange (ETDEWEB)

    Robert Pincus

    2011-05-17

    This project focused on the variability of clouds that is present across a wide range of scales, ranging from the synoptic to the millimeter. In particular, there is substantial variability in cloud properties at scales smaller than the grid spacing of models used to make climate projections (GCMs) and weather forecasts. These models represent clouds and other small-scale processes with parameterizations that describe how those processes respond to and feed back on the large-scale state of the atmosphere.

  11. Real tunneling geometries and the large-scale topology of the universe

    International Nuclear Information System (INIS)

    Gibbons, G.W.; Hartle, J.B.

    1990-01-01

    If the topology and geometry of spacetime are quantum-mechanically variable, then the particular classical large-scale topology and geometry observed in our universe must be statistical predictions of its initial condition. This paper examines the predictions of the ''no boundary'' initial condition for the present large-scale topology and geometry. Finite-action real tunneling solutions of Einstein's equation are important for such predictions. These consist of compact Riemannian (Euclidean) geometries joined to a Lorentzian cosmological geometry across a spacelike surface of vanishing extrinsic curvature. The classification of such solutions is discussed and general constraints on their topology derived. For example, it is shown that, if the Euclidean Ricci tensor is positive, then a real tunneling solution can nucleate only a single connected Lorentzian spacetime (the unique conception theorem). Explicit examples of real tunneling solutions driven by a cosmological constant are exhibited and their implications for cosmic baldness described. It is argued that the most probable large-scale spacetime predicted by the real tunneling solutions of the ''no-boundary'' initial condition has the topology R×S^3 with the de Sitter metric

  12. Phylogenetic distribution of large-scale genome patchiness

    Directory of Open Access Journals (Sweden)

    Hackenberg Michael

    2008-04-01

    Abstract. Background: The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness) has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to directly analyze patchiness on the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results: The local variations in the scaling exponent of the Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments of selected genome regions, computationally identified, isochore-like regions were identified as the biological source of the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies of eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris), birds (Gallus gallus), fishes (Danio rerio), invertebrates (Drosophila melanogaster and Caenorhabditis elegans), plants (Arabidopsis thaliana) and yeasts (Saccharomyces cerevisiae). We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion: Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
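
    The Detrended Fluctuation Analysis underlying this record can be sketched as follows. This is a generic textbook DFA (integrate, detrend in windows, fit the log-log slope of fluctuation versus window size), not the authors' implementation, and the input series is synthetic.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended Fluctuation Analysis scaling exponent (illustrative sketch).

    Integrates the mean-removed series, removes a linear trend within
    non-overlapping windows of each scale n, and fits the log-log slope
    of the average fluctuation F(n) versus n.
    """
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))
    F = []
    for n in scales:
        n_win = len(y) // n
        rms = []
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)          # local linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        F.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

# White noise should give an exponent near 0.5; long-range-correlated
# (patchy) sequences give larger exponents.
rng = np.random.default_rng(0)
noise = rng.normal(size=4096)
scales = [16, 32, 64, 128, 256]
print(round(dfa_exponent(noise, scales), 2))
```

    Mapping a DNA sequence to a numeric series (e.g. GC-content per window) and tracking local variations of this exponent along the chromosome is the kind of analysis the abstract describes.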

  13. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases

  14. Large-scale derived flood frequency analysis based on continuous simulation

    Science.gov (United States)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and co-variance between the variables. They are used as input into the catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large scale approach overcomes the several

  15. Numerical Investigation of Multiple-, Interacting-Scale Variable-Density Ground Water Flow Systems

    Science.gov (United States)

    Cosler, D.; Ibaraki, M.

    2004-12-01

    The goal of our study is to elucidate the nonlinear processes that are important for multiple-, interacting-scale flow and solute transport in subsurface environments. In particular, we are focusing on the influence of small-scale instability development on variable-density ground water flow behavior in large-scale systems. Convective mixing caused by these instabilities may mix the fluids to a greater extent than would be the case with classical, Fickian dispersion. Most current numerical schemes for interpreting field-scale variable-density flow systems do not explicitly account for the complexities caused by small-scale instabilities and treat such processes as "lumped" Fickian dispersive mixing. Such approaches may greatly underestimate the mixing behavior and misrepresent the overall large-scale flow field dynamics. The specific objectives of our study are: (i) to develop an adaptive (spatial and temporal scales) three-dimensional numerical model that is fully capable of simulating field-scale variable-density flow systems with fine resolution (~1 cm); and (ii) to evaluate the importance of scale-dependent process interactions by performing a series of simulations on different problem scales ranging from laboratory experiments to field settings, including an aquifer storage and freshwater recovery (ASR) system similar to those planned for the Florida Everglades and in-situ contaminant remediation systems. We are examining (1) methods to create instabilities in field-scale systems, (2) porous media heterogeneity effects, and (3) the relation between heterogeneity characteristics (e.g., permeability variance and correlation length scales) and the mixing scales that develop for varying degrees of unstable stratification. 
Applications of our work include the design of new water supply and conservation measures (e.g., ASR systems), assessment of saltwater intrusion problems in coastal aquifers, and the design of in-situ remediation systems for aquifer restoration

  16. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with a large number of high-capacity nodes and transmission links, shared by a large number of users...

  17. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA), along with other improvements, to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to the precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on March 3, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite-observed clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate the Q1 and Q2 in analysis/reanalysis, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid scale upward transport of moist static energy in the lower troposphere.
    With the uncertainties from large-scale forcing data and observations specified, we compare SCM results and observations and find that models have large biases in cloud properties which cannot be fully explained by the uncertainty from the large-scale forcing

  18. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h^-1 Mpc in the distribution of the visible matter of the universe is provided. The possibility to generate a periodic distribution with the characteristic scale 120 h^-1 Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  19. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage. For example, the manager carries a large workload, and much time must be spent on the management and maintenance of a large-scale cluster system. The nodes in a large-scale cluster system easily fall into disorder: with thousands of nodes placed in big rooms, managers can easily confuse machines. How can accurate management be carried out effectively on a large-scale cluster system? This article introduces ELFms in the large-scale cluster system and, furthermore, proposes to realize automatic management of the large-scale cluster system. (authors)

  20. Advanced Connectivity Analysis (ACA): a Large Scale Functional Connectivity Data Mining Environment.

    Science.gov (United States)

    Chen, Rong; Nixon, Erika; Herskovits, Edward

    2016-04-01

    Using resting-state functional magnetic resonance imaging (rs-fMRI) to study functional connectivity is of great importance to understand normal development and function as well as a host of neurological and psychiatric disorders. Seed-based analysis is one of the most widely used rs-fMRI analysis methods. Here we describe a freely available large scale functional connectivity data mining software package called Advanced Connectivity Analysis (ACA). ACA enables large-scale seed-based analysis and brain-behavior analysis. It can seamlessly examine a large number of seed regions with minimal user input. ACA has a brain-behavior analysis component to delineate associations among imaging biomarkers and one or more behavioral variables. We demonstrate applications of ACA to rs-fMRI data sets from a study of autism.
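
    A minimal sketch of the seed-based correlation analysis that tools like ACA automate: correlate a seed region's time series with every voxel's time series, then Fisher z-transform the map for group statistics. This is not ACA's actual code, and the data are toy arrays.

```python
import numpy as np

def seed_connectivity(seed_ts, voxel_ts):
    """Seed-based functional connectivity sketch.

    seed_ts: (T,) seed time series; voxel_ts: (T, V) voxel time series.
    Returns the Fisher z-transformed Pearson correlation of the seed
    with each voxel.
    """
    s = (seed_ts - seed_ts.mean()) / seed_ts.std()
    v = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    r = (s @ v) / len(s)                 # Pearson r with every voxel
    return np.arctanh(np.clip(r, -0.999999, 0.999999))  # Fisher z

# Toy data: 200 time points, 5 "voxels"; voxel 0 tracks the seed.
rng = np.random.default_rng(0)
T = 200
seed = rng.normal(size=T)
voxels = rng.normal(size=(T, 5))
voxels[:, 0] = seed + 0.3 * rng.normal(size=T)

z = seed_connectivity(seed, voxels)
print(int(np.argmax(np.abs(z))))
```

    Running this over many seed regions, and then regressing the z-maps against behavioral scores, corresponds to the large-scale seed-based and brain-behavior analyses the abstract describes.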

  1. Large-scale spatial distribution patterns of gastropod assemblages in rocky shores.

    Directory of Open Access Journals (Sweden)

    Patricia Miloslavich

    Gastropod assemblages from nearshore rocky habitats were studied over large spatial scales to (1) describe broad-scale patterns in assemblage composition, including patterns by feeding modes, (2) identify latitudinal patterns of biodiversity, i.e., richness and abundance of gastropods and/or regional hotspots, and (3) identify potential environmental and anthropogenic drivers of these assemblages. Gastropods were sampled from 45 sites distributed within 12 Large Marine Ecosystem regions (LMEs) following the NaGISA (Natural Geography in Shore Areas) standard protocol (www.nagisa.coml.org). A total of 393 gastropod taxa from 87 families were collected. Eight of these families (9.2%) appeared in four or more different LMEs. Among these, the Littorinidae was the most widely distributed (8 LMEs), followed by the Trochidae and the Columbellidae (6 LMEs). In all regions, assemblages were dominated by few species, the most diverse and abundant of which were herbivores. No latitudinal gradients were evident in relation to species richness or densities among sampling sites. Highest diversity was found in the Mediterranean and in the Gulf of Alaska, while highest densities were found at different latitudes and represented by few species within one genus (e.g. Afrolittorina in the Agulhas Current, Littorina in the Scotian Shelf, and Lacuna in the Gulf of Alaska). No significant correlation was found between species composition and environmental variables (r≤0.355, p>0.05). Contributing variables to this low correlation included invasive species, inorganic pollution, SST anomalies, and chlorophyll-a anomalies. Despite data limitations in this study, which restrict conclusions in a global context, this work represents the first effort to sample gastropod biodiversity on rocky shores using a standardized protocol across a wide scale.
Our results will generate more work to build global databases allowing for large-scale diversity comparisons of rocky intertidal assemblages.

  2. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  3. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (>~1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in >~1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble the radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  4. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    Science.gov (United States)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution. With the assumption that atmospheric variables can be separated into large-scale, mesoscale, and turbulent-scale components, a set of prognostic equations applicable in large-scale atmospheric models is developed for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes. Prognostic equations are also developed for these mesoscale fluxes, which exhibit a closure problem and therefore require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as Ẽ = 0.5⟨u′_i²⟩, where u′_i represents the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for Ẽ, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of Ẽ. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes.
    This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes

  5. Small-scale variability in tropical tropopause layer humidity

    Science.gov (United States)

    Jensen, E. J.; Ueyama, R.; Pfister, L.; Karcher, B.; Podglajen, A.; Diskin, G. S.; DiGangi, J. P.; Thornberry, T. D.; Rollins, A. W.; Bui, T. V.; Woods, S.; Lawson, P.

    2016-12-01

    Recent advances in statistical parameterizations of cirrus cloud processes for use in global models are highlighting the need for information about small-scale fluctuations in upper tropospheric humidity and the physical processes that control the humidity variability. To address these issues, we have analyzed high-resolution airborne water vapor measurements obtained in the Airborne Tropical TRopopause EXperiment over the tropical Pacific between 14 and 20 km. Using accurate and precise 1-Hz water vapor measurements along approximately level aircraft flight legs, we calculate structure functions spanning horizontal scales ranging from about 0.2 to 50 km, and we compare the water vapor variability in the lower (about 14 km) and upper (16-19 km) Tropical Tropopause Layer (TTL). We also compare the magnitudes and scales of variability inside TTL cirrus versus in clear-sky regions. The measurements show that in the upper TTL, water vapor concentration variance is stronger inside cirrus than in clear-sky regions. Using simulations of TTL cirrus formation, we show that small variability in clear-sky humidity is amplified by the strong sensitivity of ice nucleation rate to supersaturation, which results in highly-structured clouds that subsequently drive variability in the water vapor field. In the lower TTL, humidity variability is correlated with recent detrainment from deep convection. The structure functions indicate approximately power-law scaling with spectral slopes ranging from about -5/3 to -2.
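
    The structure-function calculation described above can be sketched as follows, assuming a uniformly sampled 1-D series. The synthetic random-walk input is illustrative, not flight data; a random walk has a second-order structure function growing linearly with separation, which corresponds to a spectral slope of -2, the steep end of the observed -5/3 to -2 range.

```python
import numpy as np

def structure_function(q_series, separations, order=2):
    """p-th order structure function S_p(r) = <|q(x + r) - q(x)|^p>
    of a 1-D series sampled at uniform spacing (illustrative sketch)."""
    q = np.asarray(q_series, dtype=float)
    return np.array([np.mean(np.abs(q[r:] - q[:-r]) ** order)
                     for r in separations])

# Synthetic series: a random walk, whose S_2(r) grows linearly with r.
rng = np.random.default_rng(0)
q = np.cumsum(rng.normal(size=20000))
seps = np.array([1, 2, 4, 8, 16, 32])
S2 = structure_function(q, seps)
slope, _ = np.polyfit(np.log(seps), np.log(S2), 1)
print(round(float(slope), 2))
```

    Fitting the log-log slope of S_2(r) over a range of separations, as done here, is how the power-law scaling exponents quoted in the abstract are estimated.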

  6. Scales of snow depth variability in high elevation rangeland sagebrush

    Science.gov (United States)

    Tedesche, Molly E.; Fassnacht, Steven R.; Meiman, Paul J.

    2017-09-01

    In high elevation semi-arid rangelands, sagebrush and other shrubs can affect transport and deposition of wind-blown snow, enabling the formation of snowdrifts. Datasets from three field experiments were used to investigate the scales of spatial variability of snow depth around big mountain sagebrush (Artemisia tridentata Nutt.) at a high elevation plateau rangeland in North Park, Colorado, during the winters of 2002, 2003, and 2008. Data were collected at multiple resolutions (0.05 to 25 m) and extents (2 to 1000 m). Finer scale data were collected specifically for this study to examine the correlation between snow depth, sagebrush microtopography, the ground surface, and the snow surface, as well as the temporal consistency of snow depth patterns. Variograms were used to identify the spatial structure and the Moran's I statistic was used to determine the spatial correlation. Results show some temporal consistency in snow depth at several scales. Plot scale snow depth variability is partly a function of the nature of individual shrubs, as there is some correlation between the spatial structure of snow depth and sagebrush, as well as between the ground and snow depth. The optimal sampling resolution appears to be 25 cm, but over a large area this would require a multitude of samples, and thus a random stratified approach is recommended with a fine measurement resolution of 5 cm.
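
    The Moran's I statistic used in the study can be sketched as below; the transect depths and adjacent-neighbour weight matrix are invented for illustration, not the study's data.

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I spatial autocorrelation statistic (illustrative sketch).

    values: (n,) attribute (e.g. snow depth at sample points);
    weights: (n, n) spatial weight matrix with zero diagonal.
    I = (n / W) * (z^T W z) / (z^T z), with W the sum of all weights.
    """
    z = np.asarray(values, dtype=float) - np.mean(values)
    w = np.asarray(weights, dtype=float)
    n = len(z)
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

# Toy transect: neighbouring points (adjacency on a line) with a smooth
# drift-like pattern should show strong positive autocorrelation.
depth = np.array([10., 12., 15., 18., 17., 14., 11., 9.])
n = len(depth)
w = np.zeros((n, n))
for i in range(n - 1):        # symmetric adjacent-neighbour weights
    w[i, i + 1] = w[i + 1, i] = 1.0

print(round(float(morans_i(depth, w)), 2))
```

    Values near +1 indicate clustering of similar depths (as around drifts), values near 0 indicate spatial randomness, and negative values indicate alternation between neighbours.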

  7. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele

    2015-08-23

The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales, already observed in other works, depends on the crosswise location. Large-scale positive fluctuations correlate with a stronger activity of the small scales on the low-speed side of the mixing layer, and with a reduced activity on the high-speed side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, the modulation of the small scales by the large-scale gradients, rather than by the large-scale fluctuations, has additionally been investigated.

  8. Governance of extended lifecycle in large-scale eHealth initiatives: analyzing variability of enterprise architecture elements.

    Science.gov (United States)

    Mykkänen, Juha; Virkanen, Hannu; Tuomainen, Mika

    2013-01-01

    The governance of large eHealth initiatives requires traceability of many requirements and design decisions. We provide a model which we use to conceptually analyze variability of several enterprise architecture (EA) elements throughout the extended lifecycle of development goals using interrelated projects related to the national ePrescription in Finland.

  9. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

Galactic conformity is an observed phenomenon whereby galaxies located in the same region have similar properties, such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos (“one-halo conformity”). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, have further revealed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  10. Small-scale variability in peatland pore-water biogeochemistry, Hudson Bay Lowland, Canada.

    Science.gov (United States)

    Ulanowski, T A; Branfireun, B A

    2013-06-01

The Hudson Bay Lowland (HBL) of northern Ontario, Manitoba and Quebec, Canada is the second largest contiguous peatland complex in the world, currently containing more than half of Canada's soil carbon. Recent concerns about the ecohydrological impacts to these large northern peatlands resulting from climate change and resource extraction have catalyzed a resurgence in scientific research into this ecologically important region. However, the sheer size, heterogeneity and elaborate landscape arrangements of this ecosystem raise important questions concerning representative sampling of environmental media for chemical or physical characterization. To begin to quantify such variability, this study assessed the small-scale spatial (1 m) and short temporal (21 day) variability of surface pore-water biogeochemistry (pH, dissolved organic carbon, and major ions) in a Sphagnum spp.-dominated, ombrotrophic raised bog, and a Carex spp.-dominated intermediate fen in the HBL. In general, pore-water pH and concentrations of dissolved solutes were similar to previously reported literature values from this region. However, systematic sampling revealed consistent statistically significant differences in pore-water chemistries between the bog and fen peatland types, and large within-site spatiotemporal variability. We found that microtopography in the bog was associated with consistent differences in most biogeochemical variables. Temporal changes in dissolved solute chemistry, particularly base cations (Na⁺, Ca²⁺ and Mg²⁺), were statistically significant in the intermediate fen, likely a result of a dynamic connection between surficial waters and mineral-rich deep groundwater. In both the bog and fen, concentrations of SO₄²⁻ showed considerable spatial variability, and a significant decrease in concentrations over the study period. The observed variability in peatland pore-water biogeochemistry over such small spatial and temporal scales suggests that under-sampling in

  11. Large scale air pollution estimation method combining land use regression and chemical transport modeling in a geostatistical framework.

    Science.gov (United States)

    Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey

    2014-04-15

    In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.

  12. Expected Future Conditions for Secure Power Operation with Large Scale of RES Integration

    International Nuclear Information System (INIS)

    Majstrovic, G.; Majstrovic, M.; Sutlovic, E.

    2015-01-01

    EU energy strategy is strongly focused on the large scale integration of renewable energy sources. The most dominant part here is taken by variable sources - wind power plants. Grid integration of intermittent sources along with keeping the system stable and secure is one of the biggest challenges for the TSOs. This part is often neglected by the energy policy makers, so this paper deals with expected future conditions for secure power system operation with large scale wind integration. It gives an overview of expected wind integration development in EU, as well as expected P/f regulation and control needs. The paper is concluded with several recommendations. (author).

  13. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  14. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  15. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.

    2016-02-15

© 2016 Cambridge University Press. The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale of , via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small scales) and from low-pass filtered (large scales) velocity fields tend to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.

  16. Global analysis of seagrass restoration: the importance of large-scale planting

    KAUST Repository

van Katwijk, Marieke M.; Thorhaug, Anitra; Marbà, Núria; Orth, Robert J.; Duarte, Carlos M.; Kendrick, Gary A.; Althuizen, Inge H. J.; Balestri, Elena; Bernard, Guillaume; Cambridge, Marion L.; Cunha, Alexandra; Durance, Cynthia; Giesen, Wim; Han, Qiuying; Hosokawa, Shinya; Kiswara, Wawan; Komatsu, Teruhisa; Lardicci, Claudio; Lee, Kun-Seop; Meinesz, Alexandre; Nakaoka, Masahiro; O'Brien, Katherine R.; Paling, Erik I.; Pickerell, Chris; Ransijn, Aryan M. A.; Verduin, Jennifer J.

    2015-01-01

For effective restoration of seagrass foundation species in its typically dynamic, stressful environment, introduction of large numbers is seen to be beneficial and probably serves two purposes. First, a large-scale planting increases trial survival - large numbers ensure the spread of risks, which is needed to overcome high natural variability. Secondly, a large-scale trial increases population growth rate by enhancing self-sustaining feedback, which is generally found in foundation species in stressful environments such as seagrass beds. Thus, by careful site selection and applying appropriate techniques, spreading of risks and enhancing self-sustaining feedback in concert increase success of seagrass restoration. Journal of Applied Ecology © 2016 British Ecological Society.

  17. Global analysis of seagrass restoration: the importance of large-scale planting

    KAUST Repository

    van Katwijk, Marieke M.

    2015-10-28

For effective restoration of seagrass foundation species in its typically dynamic, stressful environment, introduction of large numbers is seen to be beneficial and probably serves two purposes. First, a large-scale planting increases trial survival - large numbers ensure the spread of risks, which is needed to overcome high natural variability. Secondly, a large-scale trial increases population growth rate by enhancing self-sustaining feedback, which is generally found in foundation species in stressful environments such as seagrass beds. Thus, by careful site selection and applying appropriate techniques, spreading of risks and enhancing self-sustaining feedback in concert increase success of seagrass restoration. Journal of Applied Ecology © 2016 British Ecological Society.

  18. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study.
Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent
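
The water-balance screening described above reduces to a few vector comparisons once annual totals are assembled per basin. A minimal sketch; the basin values for P (precipitation), Q (discharge) and PET (potential evaporation) are invented for illustration:

```python
import numpy as np

# Hypothetical annual water-balance data for four basins, all in mm/yr.
P   = np.array([800., 600., 1200., 400.])   # precipitation
Q   = np.array([300., 700.,  500., 100.])   # discharge
PET = np.array([500., 400.,  600., 350.])   # potential evaporation

runoff_coeff = Q / P
too_high_rc  = runoff_coeff > 1.0    # more runoff than rain: possible snow undercatch
excess_loss  = (P - Q) > PET         # apparent losses exceed the potential-evaporation limit

print(too_high_rc)   # second basin is suspect
print(excess_loss)   # third basin is suspect
```

Basins flagged by either test carry potentially disinformative forcing or evaluation data and would be set aside or investigated before model calibration.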

  19. Large-scale matrix-handling subroutines 'ATLAS'

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Fujita, Keiichi; Matsuura, Toshihiko; Tahara, Nobuo

    1978-03-01

Subroutine package "ATLAS" has been developed for handling large-scale matrices. The package is composed of four kinds of subroutines, i.e., basic arithmetic routines, routines for solving linear simultaneous equations, routines for solving general eigenvalue problems, and utility routines. The subroutines are useful in large-scale plasma-fluid simulations. (auth.)

  20. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  1. Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR

    Science.gov (United States)

    Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.

    2017-12-01

Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional to local scales. While global climate models are generally capable of simulating mean climate at global to regional scales with reasonable skill, resiliency and adaptation decisions are made at local scales, where most state-of-the-art climate models are limited by coarse resolution. Characterization of large-scale meteorological patterns associated with extreme precipitation events at local scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research will use synoptic climatology as a tool by which to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of meteorological patterns associated with extreme precipitation days, and associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of the large-scale meteorological patterns associated with extreme precipitation events, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluating the ability of climate models to capture key patterns associated with extreme precipitation over Portland and to better interpret projections of future climate at impact-relevant scales.
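
The self-organizing maps approach mentioned above can be sketched with a minimal Kohonen SOM. Everything below is an illustrative assumption, not the study's configuration: the grid size, the linear decay schedule, and the toy two-pattern stand-in for gridded meteorological fields:

```python
import numpy as np

def train_som(data, rows, cols, iters=3000, lr0=0.5, seed=0):
    """Minimal Kohonen self-organizing map: learn a rows x cols grid of
    representative patterns from sample fields stored as rows of `data`."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
    W = data[rng.integers(0, n, rows * cols)].astype(float)   # init from samples
    sigma0 = max(rows, cols) / 2.0
    for t in range(iters):
        x = data[rng.integers(n)]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))           # best-matching unit
        frac = 1.0 - t / iters                                # linear decay
        lr, sigma = lr0 * frac, sigma0 * frac + 1e-3
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                        # pull nodes toward x
    return W

# Toy stand-in for flattened meteorological anomaly fields:
# 100 samples drawn from two well-separated patterns.
rng = np.random.default_rng(1)
pat_a, pat_b = np.ones(20), -np.ones(20)
data = np.vstack([pat_a + 0.1 * rng.normal(size=(50, 20)),
                  pat_b + 0.1 * rng.normal(size=(50, 20))])
W = train_som(data, rows=1, cols=2)
bmus = np.argmin(((data[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
```

On this toy dataset the two SOM nodes converge to the two underlying patterns, so each sample's best-matching unit (BMU) acts as a pattern classification, which is how SOM nodes are used to stratify within-composite variability.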

  2. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  3. A suite of global, cross-scale topographic variables for environmental and biodiversity modeling

    Science.gov (United States)

    Amatulli, Giuseppe; Domisch, Sami; Tuanmu, Mao-Ning; Parmentier, Benoit; Ranipeta, Ajay; Malczyk, Jeremy; Jetz, Walter

    2018-03-01

Topographic variation underpins a myriad of patterns and processes in hydrology, climatology, geography and ecology and is key to understanding the variation of life on the planet. A fully standardized and global multivariate product of different terrain features has the potential to support many large-scale research applications; however, to date, such datasets are unavailable. Here we used the digital elevation model products of global 250 m GMTED2010 and near-global 90 m SRTM4.1dev to derive a suite of topographic variables: elevation, slope, aspect, eastness, northness, roughness, terrain ruggedness index, topographic position index, vector ruggedness measure, profile/tangential curvature, first/second order partial derivatives, and 10 geomorphological landform classes. We aggregated each variable to 1, 5, 10, 50 and 100 km spatial grains using several aggregation approaches. While a cross-correlation underlines the high similarity of many variables, a more detailed view in four mountain regions reveals local differences, as well as scale variations in the aggregated variables at different spatial grains. All newly-developed variables are available for download at Data Citation 1 and for download and visualization at http://www.earthenv.org/topography.
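
As an illustration of how such terrain variables are derived from a DEM, the sketch below computes slope and aspect by central differences. Conventions vary between GIS packages; this sketch assumes raster rows running north to south and reports aspect clockwise from north, and is not the paper's processing chain:

```python
import numpy as np

def slope_aspect(dem, cellsize):
    """Slope (degrees) and aspect (degrees clockwise from north) from a DEM
    via central differences; the one-cell border is cropped for simplicity."""
    dzdx = (dem[1:-1, 2:] - dem[1:-1, :-2]) / (2 * cellsize)   # west -> east
    dzdy = (dem[2:, 1:-1] - dem[:-2, 1:-1]) / (2 * cellsize)   # north -> south
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    aspect = np.degrees(np.arctan2(-dzdx, dzdy)) % 360         # downslope direction
    return slope, aspect

# Check on an inclined plane rising to the east: slope atan(0.1) ~ 5.71 deg,
# downslope faces west, i.e. aspect 270 deg.
y, x = np.mgrid[0:50, 0:50]
dem = 0.1 * x
s, a = slope_aspect(dem, cellsize=1.0)
print(round(s.mean(), 2), round(a.mean(), 1))   # 5.71 270.0
```

Eastness and northness, also listed above, are simply sin(aspect) and cos(aspect) of the aspect in radians, which avoids the 0/360 discontinuity when aggregating.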

  4. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

The network companies in the Norwegian electricity industry now have to establish a large-scale network management, a concept essentially characterized by (1) broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  5. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  6. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results are described from testing the material resistance to fracture (non-ductile). The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. The results are also presented of nozzles of 850 mm inner diameter in a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  7. Why small-scale cannabis growers stay small: five mechanisms that prevent small-scale growers from going large scale.

    Science.gov (United States)

    Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy

    2012-11-01

Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large scale and 35 on a small scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high workload and a high risk of detection, and thus demand a higher level of organizational skills than small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production will imply having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed.

  8. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

Focuses on the latest insights into and challenges of distributed large scale dimensional metrology. Enables practitioners to study distributed large scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  9. Small-scale open ocean currents have large effects on wind wave heights

    Science.gov (United States)

    Ardhuin, Fabrice; Gille, Sarah T.; Menemenlis, Dimitris; Rocha, Cesar B.; Rascle, Nicolas; Chapron, Bertrand; Gula, Jonathan; Molemaker, Jeroen

    2017-06-01

Tidal currents and large-scale oceanic currents are known to modify ocean wave properties, causing extreme sea states that are a hazard to navigation. Recent advances in the understanding and modeling capability of open ocean currents have revealed the ubiquitous presence of eddies, fronts, and filaments at scales of 10-100 km. Based on realistic numerical models, we show that these structures can be the main source of variability in significant wave heights at scales less than 200 km, including important variations down to 10 km. Model results are consistent with wave height variations along satellite altimeter tracks, resolved at scales larger than 50 km. The spectrum of significant wave heights is found to be of the order of 70 Hs²/(g² T²) times the current spectrum, where Hs is the spatially averaged significant wave height, T is the energy-averaged period, and g is the gravity acceleration. This variability induced by currents has been largely overlooked in spite of its relevance for extreme wave heights and remote sensing. Plain Language Summary: We show that the variations in currents at scales of 10 to 100 km are the main source of variations in wave heights at the same scales. Our work uses a combination of realistic numerical models for currents and waves and data from the Jason-3 and SARAL/AltiKa satellites. This finding will be of interest for the investigation of extreme wave heights, remote sensing, and air-sea interactions. As an immediate application, the present results will help constrain the error budget of the upcoming satellite missions, in particular the Surface Water and Ocean Topography (SWOT) mission, and decide how the data will have to be processed to arrive at accurate sea level and wave measurements. It will also help in the analysis of wave measurements by the CFOSAT satellite.
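
Taking the reported scaling of roughly 70 Hs²/(g² T²) between the wave-height spectrum and the current spectrum at face value, a quick magnitude check is possible; the sea-state values here (Hs = 2 m, T = 10 s) are chosen only for illustration:

```python
# Ratio of the significant-wave-height spectrum to the current spectrum,
# ~70 * Hs^2 / (g^2 * T^2), for an illustrative open-ocean sea state.
Hs = 2.0    # spatially averaged significant wave height (m)
T = 10.0    # energy-averaged period (s)
g = 9.81    # gravity acceleration (m/s^2)

factor = 70 * Hs**2 / (g**2 * T**2)
print(f"spectral ratio ~ {factor:.3f}")   # ~0.029
```

So for this sea state the wave-height variance spectrum is about 3% of the current spectrum, which is still enough to dominate wave-height variability at 10-200 km scales given the strength of mesoscale and submesoscale currents.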

  10. Scaling laws for perturbations in the ocean-atmosphere system following large CO2 emissions

    Science.gov (United States)

    Towles, N.; Olson, P.; Gnanadesikan, A.

    2015-07-01

Scaling relationships are found for perturbations to atmosphere and ocean variables from large transient CO2 emissions. Using the Long-term Ocean-atmosphere-Sediment CArbon cycle Reservoir (LOSCAR) model (Zeebe et al., 2009; Zeebe, 2012b), we calculate perturbations to atmosphere temperature, total carbon, ocean temperature, total ocean carbon, pH, alkalinity, marine-sediment carbon, and carbon-13 isotope anomalies in the ocean and atmosphere resulting from idealized CO2 emission events. The peak perturbations in the atmosphere and ocean variables are then fit to power law functions of the form γ D^α E^β, where D is the event duration, E is its total carbon emission, and γ is a coefficient. Good power law fits are obtained for most system variables for E up to 50 000 PgC and D up to 100 kyr. Although all of the peak perturbations increase with emission rate E/D, we find no evidence of emission-rate-only scaling, α + β = 0. Instead, our scaling yields α + β ≃ 1 for total ocean and atmosphere carbon and 0 < α + β < 1 for most of the other system variables.
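
A power-law fit of the form P = γ D^α E^β can be recovered by ordinary least squares in log space, since log P = log γ + α log D + β log E is linear in the unknowns. The sketch below uses synthetic perturbation data, not LOSCAR output:

```python
import numpy as np

# Synthetic peak perturbations generated from known exponents plus noise.
rng = np.random.default_rng(1)
D = rng.uniform(1, 100, 200)            # event duration (kyr)
E = rng.uniform(100, 50000, 200)        # total carbon emission (PgC)
P = 0.5 * D**-0.3 * E**0.9 * rng.lognormal(sigma=0.05, size=200)

# Least-squares fit of log P = log(gamma) + alpha*log(D) + beta*log(E).
A = np.column_stack([np.ones_like(D), np.log(D), np.log(E)])
coef, *_ = np.linalg.lstsq(A, np.log(P), rcond=None)
gamma, alpha, beta = np.exp(coef[0]), coef[1], coef[2]
print(f"gamma={gamma:.2f} alpha={alpha:.2f} beta={beta:.2f}")  # ~ (0.50, -0.30, 0.90)
```

The recovered α + β then discriminates between emission-rate-only scaling (α + β = 0) and total-emission scaling (α + β ≃ 1), the distinction drawn in the abstract.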

  11. Scaling laws for perturbations in the ocean–atmosphere system following large CO2 emissions

    Directory of Open Access Journals (Sweden)

    N. Towles

    2015-07-01

Scaling relationships are found for perturbations to atmosphere and ocean variables from large transient CO2 emissions. Using the Long-term Ocean-atmosphere-Sediment CArbon cycle Reservoir (LOSCAR) model (Zeebe et al., 2009; Zeebe, 2012b), we calculate perturbations to atmosphere temperature, total carbon, ocean temperature, total ocean carbon, pH, alkalinity, marine-sediment carbon, and carbon-13 isotope anomalies in the ocean and atmosphere resulting from idealized CO2 emission events. The peak perturbations in the atmosphere and ocean variables are then fit to power law functions of the form γ D^α E^β, where D is the event duration, E is its total carbon emission, and γ is a coefficient. Good power law fits are obtained for most system variables for E up to 50 000 PgC and D up to 100 kyr. Although all of the peak perturbations increase with emission rate E/D, we find no evidence of emission-rate-only scaling, α + β = 0. Instead, our scaling yields α + β ≃ 1 for total ocean and atmosphere carbon and 0 < α + β < 1 for most of the other system variables.

  12. A global classification of coastal flood hazard climates associated with large-scale oceanographic forcing.

    Science.gov (United States)

    Rueda, Ana; Vitousek, Sean; Camus, Paula; Tomás, Antonio; Espejo, Antonio; Losada, Inigo J; Barnard, Patrick L; Erikson, Li H; Ruggiero, Peter; Reguero, Borja G; Mendez, Fernando J

    2017-07-11

    Coastal communities throughout the world are exposed to numerous and increasing threats, such as coastal flooding and erosion, saltwater intrusion and wetland degradation. Here, we present the first global-scale analysis of the main drivers of coastal flooding due to large-scale oceanographic factors. Given the large dimensionality of the problem (e.g. spatiotemporal variability in flood magnitude and the relative influence of waves, tides and surge levels), we have performed a computer-based classification to identify geographical areas with homogeneous climates. Results show that 75% of coastal regions around the globe have the potential for very large flooding events with low probabilities (unbounded tails), 82% are tide-dominated, and almost 49% are highly susceptible to increases in flooding frequency due to sea-level rise.

  13. A KPI-based process monitoring and fault detection framework for large-scale processes.

    Science.gov (United States)

    Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Yang, Xu; Ding, Steven X; Peng, Kaixiang

    2017-05-01

    Large-scale processes, consisting of multiple interconnected subprocesses, are commonly encountered in industrial systems, whose performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches are not developed with a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes, which considers the static and dynamic relationships between process and KPI variables. For the static case, a least squares-based approach is developed that provides an explicit link with least-squares regression, which gives better performance than partial least squares. For the dynamic case, using the kernel representation of each subprocess, an instrumental variable approach is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
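A minimal sketch of the static, least-squares KPI-monitoring idea described above. This is illustrative only; the variable dimensions, noise level, and 3-sigma residual threshold are assumptions, not the paper's settings:

```python
import numpy as np

# Toy "large-scale process": 5 process variables linearly related to one KPI.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))                 # process variables
theta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])  # assumed true link (invented)
y = X @ theta + 0.01 * rng.normal(size=500)   # KPI variable

# Static case: identify the process-to-KPI link by least squares.
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Fault detection: flag samples whose KPI residual exceeds a 3-sigma threshold.
resid = y - X @ theta_hat
faults = np.abs(resid) > 3.0 * resid.std()
```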

  14. A model for AGN variability on multiple time-scales

    Science.gov (United States)

    Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.

    2018-05-01

    We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/L_Edd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/L_Edd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
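The first-order structure function used to compile such variability measurements can be sketched as follows. This is an illustrative implementation on a toy random-walk light curve, not the authors' pipeline; the lag bins are invented:

```python
import numpy as np

def structure_function(t, m, tau_bins):
    """First-order structure function: mean |Δm| over pairs binned by lag τ."""
    i, j = np.triu_indices(len(t), k=1)      # all pairs of observations
    lags = np.abs(t[j] - t[i])
    dmag = np.abs(m[j] - m[i])
    which = np.digitize(lags, tau_bins)
    return np.array([dmag[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(1, len(tau_bins))])

# Toy light curve: a random walk, whose SF grows with time lag.
rng = np.random.default_rng(2)
t = np.arange(1000.0)
m = np.cumsum(rng.normal(size=1000))
sf = structure_function(t, m, np.array([0.5, 10.0, 100.0]))
```

For the random walk, the SF rises with lag, which is the qualitative behaviour a SF plot spanning days to Gyr is built to capture.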

  15. Normal variability of children's scaled scores on subtests of the Dutch Wechsler Preschool and Primary scale of Intelligence - third edition.

    Science.gov (United States)

    Hurks, P P M; Hendriksen, J G M; Dek, J E; Kooij, A P

    2013-01-01

    Intelligence tests are included in millions of assessments of children and adults each year (Watkins, Glutting, & Lei, 2007a, Applied Neuropsychology, 14, 13). Clinicians often interpret large amounts of subtest scatter, or large differences between the highest and lowest scaled subtest scores, on an intelligence test battery as an index of abnormality or cognitive impairment. The purpose of the present study is to characterize "normal" patterns of variability among subtests of the Dutch Wechsler Preschool and Primary Scale of Intelligence - Third Edition (WPPSI-III-NL; Wechsler, 2010). Therefore, the frequencies of WPPSI-III-NL scaled subtest scatter were reported for 1039 healthy children aged 4:0-7:11 years. Results indicated that large differences between highest and lowest scaled subtest scores (or subtest scatter) were common in this sample. Furthermore, degree of subtest scatter was related to: (a) the magnitude of the highest scaled subtest score, i.e., more scatter was seen in children with the highest WPPSI-III-NL scaled subtest scores, (b) Full Scale IQ (FSIQ) scores, i.e., higher FSIQ scores were associated with an increase in subtest scatter, and (c) sex differences, with boys showing a tendency to display more scatter than girls. In conclusion, viewing subtest scatter as an index of abnormality in WPPSI-III-NL scores is an oversimplification, as this fails to recognize the disparate subtest heterogeneity that occurs within a population of healthy children aged 4:0-7:11 years.

  16. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele; Attili, Antonio; Bisetti, Fabrizio; Elsinga, Gerrit E.

    2015-01-01

    from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the large-scale gradients modulation of the small scales has been additionally investigated.

  17. Effect of small-scale ionospheric variability on GNSS radio occultation data quality

    Science.gov (United States)

    Verkhoglyadova, O. P.; Mannucci, A. J.; Ao, C. O.; Iijima, B. A.; Kursinski, E. R.

    2015-09-01

    Global Navigation Satellite Systems (GNSS) radio occultation (RO) measurements are sensitive to thin ionization layers and small-scale ionosphere structures. To evaluate error bounds and possible biases in atmospheric retrievals, we characterized ionospheric irregularities encountered in the affected profiles by analyzing the L1 signal-to-noise ratio (SNR) variability at E layer altitudes (from 90 km to 130 km). New metrics to analyze statistical effects of small-scale ionospheric irregularities on refractivity retrievals are proposed. We analyzed refractivity (N) retrievals with Constellation Observing System for Meteorology, Ionosphere and Climate (COSMIC) ROs in 2011. Using refractivity from European Centre for Medium-Range Weather Forecasts (ECMWF) analysis (N_ECMWF) as the reference data set, we studied statistical properties of the fractional refractivity bias (ΔN) defined by the difference (N_ECMWF - N)/N_ECMWF and averaged in the altitude range from 20 to 25 km for each individual profile. We found that (1) persistently larger variability of the L1 SNR as measured by the interquartile range (IQR) existed when the occultation tangent point was in the 90 km to 110 km altitude range than at higher E layer altitudes; (2) the upper limits on the fractional refractivity bias for COSMIC ROs are 0.06% (for daytime local time), 0.1% (for nighttime local time), and ~0.01% (for all local times); (3) distributions of ΔN are non-Gaussian (leptokurtic); (4) latitudinal distributions of small and large ΔN for different levels of ionospheric variability show large tails (N_ECMWF > N) occurring around the Himalaya and the Andes regions, which are possibly due to biases in ECMWF analysis. We conclude that the refractivity bias due to small-scale irregularities is small below 25 km altitude and can be neglected.
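The two diagnostics defined above, the per-profile fractional refractivity bias and the interquartile range of the L1 SNR, can be sketched as follows. This is illustrative; the sample numbers are invented and do not correspond to any real profile:

```python
import numpy as np

def fractional_bias(n_ecmwf, n_ro):
    """Per-profile fractional refractivity bias (N_ECMWF - N) / N_ECMWF,
    averaged over the supplied samples (e.g. a 20-25 km altitude segment)."""
    return np.mean((n_ecmwf - n_ro) / n_ecmwf)

def iqr(x):
    """Interquartile range, the spread metric applied to the L1 SNR."""
    q75, q25 = np.percentile(x, [75, 25])
    return q75 - q25

# Invented refractivity samples standing in for one profile segment.
n_ref = np.array([100.0, 90.0, 80.0])
n_obs = np.array([99.9, 90.1, 79.9])
delta_n = fractional_bias(n_ref, n_obs)
```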

  18. Large-scale geographic variation in distribution and abundance of Australian deep-water kelp forests.

    Directory of Open Access Journals (Sweden)

    Ezequiel M Marzinelli

    Full Text Available Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution along the continent of these kelp forests, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia's Integrated Marine Observing System (IMOS) to survey 157,000 m² of seabed, of which ca 13,000 m² were used to quantify kelp covers at multiple spatial scales (10-100 m to 100-1,000 km) and depths (15-60 m) across several regions ca 2-6° latitude apart along the East and West coast of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. Maximum depth of kelp occurrence was 40-50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. This extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves.

  19. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost, in particular, limits its use. As computer models have grown in size, such as in the number of degrees of freedom, the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  20. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  1. Large-Scale 3D Printing: The Way Forward

    Science.gov (United States)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, where numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  2. Thermal anchoring of wires in large scale superconducting coil test experiment

    International Nuclear Information System (INIS)

    Patel, Dipak; Sharma, A.N.; Prasad, Upendra; Khristi, Yohan; Varmora, Pankaj; Doshi, Kalpesh; Pradhan, S.

    2013-01-01

    Highlights: • We address how thermal anchoring in a large-scale coil test differs from that in small cryogenic apparatus. • We precisely estimate the thermal anchoring lengths at the 77 K and 4.2 K heat sinks in a large-scale superconducting coil test experiment. • We address the quality of anchoring achieved without covering the entire wires with Kapton/Teflon tape. • We obtained excellent temperature measurement results without using GE varnish by doubling the estimated anchoring length. -- Abstract: Effective and precise thermal anchoring of wires in a cryogenic experiment is mandatory to measure temperature with millikelvin accuracy and to avoid unnecessary cooling power due to additional heat conduction from room temperature (RT) to operating temperature (OT) through potential, field, displacement and stress measurement instrumentation wires. Instrumentation wires used in large-scale superconducting coil test experiments differ from those in small cryogenic apparatus in construction and overall diameter/area, since error-free measurement in a large time-varying magnetic field often requires shielded wires. Hence, along with other variables, the anchoring techniques and required thermal anchoring lengths in such experiments are entirely different from those in small cryogenic apparatus. In the present paper, the estimation of the thermal anchoring lengths of five different types of instrumentation wires used in the coils test campaign at the Institute for Plasma Research (IPR), India, is discussed, and some temperature measurement results of the coils test campaign are presented

  3. Impacts of large-scale climatic disturbances on the terrestrial carbon cycle

    Directory of Open Access Journals (Sweden)

    Lucht Wolfgang

    2006-07-01

    Full Text Available Abstract Background The amount of carbon dioxide in the atmosphere steadily increases as a consequence of anthropogenic emissions, but with large interannual variability caused by the terrestrial biosphere. These variations in the CO2 growth rate are caused by large-scale climate anomalies, but the relative contributions of vegetation growth and soil decomposition are uncertain. We use a biogeochemical model of the terrestrial biosphere to differentiate the effects of temperature and precipitation on net primary production (NPP) and heterotrophic respiration (Rh) during the two largest anomalies in atmospheric CO2 increase during the last 25 years. One of these, the smallest atmospheric year-to-year increase (largest land carbon uptake) in that period, was caused by global cooling in 1992/93 after the Pinatubo volcanic eruption. The other, the largest atmospheric increase on record (largest land carbon release), was caused by the strong El Niño event of 1997/98. Results We find that the LPJ model correctly simulates the magnitude of terrestrial modulation of atmospheric carbon anomalies for these two extreme disturbances. The response of soil respiration to changes in temperature and precipitation explains most of the modelled anomalous CO2 flux. Conclusion Observed and modelled NEE anomalies are in good agreement; therefore we suggest that the temporal variability of heterotrophic respiration produced by our model is reasonably realistic. We therefore conclude that during the last 25 years the two largest disturbances of the global carbon cycle were strongly controlled by soil processes rather than the response of vegetation to these large-scale climatic events.

  4. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    The subject of large scale networks is approached from the perspective of the network planner. An analysis of the long-term planning problems is presented, with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems ... the fundamental technological resources in network technologies are analysed for scalability; here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its ...

  5. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need to address energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  6. Variability in population abundance is associated with thresholds between scaling regimes

    Science.gov (United States)

    Wardwell, D.; Allen, Craig R.

    2009-01-01

    Discontinuous structure in landscapes may result in discontinuous, aggregated species body-mass patterns, reflecting the scales of structure available to animal communities within a landscape. The edges of these body-mass aggregations reflect transitions between available scales of landscape structure. Such transitions, or scale breaks, are theoretically associated with increased biological variability. We hypothesized that variability in population abundance is greater in animal species near the edge of body-mass aggregations than it is in species that are situated in the interior of body-mass aggregations. We tested this hypothesis by examining both temporal and spatial variability in the abundance of species in the bird community of the Florida Everglades sub-ecoregion, USA. Analyses of both temporal and spatial variability in population abundance supported our hypothesis. Our results indicate that variability within complex systems may be non-random, and is heightened where transitions in scales of process and structure occur. This is the first explicit test of the hypothetical relationship between increased population variability and scale breaks. © 2009 by the author(s).

  7. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test with a view to ensuring the safety of light water reactors was started in fiscal 1976 based on the special account act for power source development promotion measures by the entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents by joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April, 1980. Thereupon, the large-scale reflood test is now included in this program. It consists of two tests using a cylindrical core testing apparatus for examining the overall system effect and a plate core testing apparatus for testing individual effects. Each apparatus is composed of the mock-ups of pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  8. Large Scale Cosmological Anomalies and Inhomogeneous Dark Energy

    Directory of Open Access Journals (Sweden)

    Leandros Perivolaropoulos

    2014-01-01

    Full Text Available A wide range of large scale observations hint towards possible modifications on the standard cosmological model which is based on a homogeneous and isotropic universe with a small cosmological constant and matter. These observations, also known as “cosmic anomalies”, include unexpected Cosmic Microwave Background perturbations on large angular scales, large dipolar peculiar velocity flows of galaxies (“bulk flows”), the measurement of inhomogeneous values of the fine structure constant on cosmological scales (“alpha dipole”), and other effects. The presence of the observational anomalies could either be a large statistical fluctuation in the context of ΛCDM or it could indicate a non-trivial departure from the cosmological principle on Hubble scales. Such a departure is very much constrained by cosmological observations for matter. For dark energy, however, there are no significant observational constraints for Hubble scale inhomogeneities. In this brief review I discuss some of the theoretical models that can naturally lead to inhomogeneous dark energy, their observational constraints and their potential to explain the large scale cosmic anomalies.

  9. Large-scale patterns in Rayleigh-Benard convection

    International Nuclear Information System (INIS)

    Hardenberg, J. von; Parodi, A.; Passoni, G.; Provenzale, A.; Spiegel, E.A.

    2008-01-01

    Rayleigh-Benard convection at large Rayleigh number is characterized by the presence of intense, vertically moving plumes. Both laboratory and numerical experiments reveal that the rising and descending plumes aggregate into separate clusters so as to produce large-scale updrafts and downdrafts. The horizontal scales of the aggregates reported so far have been comparable to the horizontal extent of the containers, but it has not been clear whether that represents a limitation imposed by domain size. In this work, we present numerical simulations of convection at sufficiently large aspect ratio to ascertain whether there is an intrinsic saturation scale for the clustering process when that ratio is large enough. From a series of simulations of Rayleigh-Benard convection with Rayleigh numbers between 10^5 and 10^8 and with aspect ratios up to 12π, we conclude that the clustering process has a finite horizontal saturation scale with at most a weak dependence on Rayleigh number in the range studied.

  10. Quantifying streamflow change caused by forest disturbance at a large spatial scale: A single watershed study

    Science.gov (United States)

    Wei, Xiaohua; Zhang, Mingfang

    2010-12-01

    Climatic variability and forest disturbance are commonly recognized as two major drivers influencing streamflow change in large-scale forested watersheds. The greatest challenge in evaluating quantitative hydrological effects of forest disturbance is the removal of climatic effect on hydrology. In this paper, a method was designed to quantify respective contributions of large-scale forest disturbance and climatic variability on streamflow using the Willow River watershed (2860 km²) located in the central part of British Columbia, Canada. Long-term (>50 years) data on hydrology, climate, and timber harvesting history represented by equivalent clear-cutting area (ECA) were available to discern climatic and forestry influences on streamflow by three steps. First, effective precipitation, an integrated climatic index, was generated by subtracting evapotranspiration from precipitation. Second, modified double mass curves were developed by plotting accumulated annual streamflow against annual effective precipitation, which presented a much clearer picture of the cumulative effects of forest disturbance on streamflow following removal of climatic influence. The average annual streamflow changes that were attributed to forest disturbances and climatic variability were then estimated to be +58.7 and -72.4 mm, respectively. The positive (increasing) and negative (decreasing) values in streamflow change indicated opposite change directions, which suggest an offsetting effect between forest disturbance and climatic variability in the study watershed. Finally, a multivariate Autoregressive Integrated Moving Average (ARIMA) model was generated to establish quantitative relationships between accumulated annual streamflow deviation attributed to forest disturbances and annual ECA. The model was then used to project streamflow change under various timber harvesting scenarios. The methodology can be effectively applied to any large-scale single watershed where long-term data (>50
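The first two steps above (effective precipitation, then a modified double mass curve whose deviations isolate the disturbance signal) can be sketched as follows. The series below are synthetic, and the runoff ratio and the 50 mm disturbance step are invented for illustration:

```python
import numpy as np

# Synthetic annual series (mm); magnitudes and the disturbance step are invented.
rng = np.random.default_rng(3)
years = 50
p  = rng.normal(800.0, 50.0, years)   # precipitation
et = rng.normal(400.0, 30.0, years)   # evapotranspiration
q  = 0.5 * (p - et) + rng.normal(0.0, 5.0, years)  # streamflow
q[25:] += 50.0  # step change mimicking a forest-disturbance effect

# Step 1: effective precipitation removes the evapotranspiration signal.
pe = p - et
# Step 2: modified double mass curve -- accumulated Q against accumulated Pe.
cq, cpe = np.cumsum(q), np.cumsum(pe)
# Fit the pre-disturbance slope; deviations then isolate the disturbance signal.
slope = cq[24] / cpe[24]
deviation = cq - slope * cpe  # accumulated streamflow change (mm)
```

The accumulated deviation stays near zero before the step and grows steadily afterwards, which is the signature the authors attribute to forest disturbance once climate is accounted for.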

  11. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a Nuclear Power Plant. The consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need for taking the hydrogen explosion phenomena into account in risk management. Thus combustion modelling in large-scale geometries is one of the remaining severe accident safety issues. At present, no combustion model exists that can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Therefore, the major attention in model development has to be paid to the adoption of existing approaches, or the creation of new ones, capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of the numerical simulations are presented together with comparisons, critical discussions and conclusions. (authors)

  12. Internal variability of fine-scale components of meteorological fields in extended-range limited-area model simulations with atmospheric and surface nudging

    Science.gov (United States)

    Separovic, Leo; Husain, Syed Zahid; Yu, Wei

    2015-09-01

    Internal variability (IV) in dynamical downscaling with limited-area models (LAMs) represents a source of error inherent to the downscaled fields, which originates from the sensitive dependence of the models on arbitrarily small modifications. If IV is large, it may impose the need for probabilistic verification of the downscaled information. Atmospheric spectral nudging (ASN) can reduce IV in LAMs as it constrains the large-scale components of LAM fields in the interior of the computational domain and thus prevents any considerable penetration of sensitively dependent deviations into the range of large scales. Using initial condition ensembles, the present study quantifies the impact of ASN on IV in LAM simulations in the range of fine scales that are not controlled by spectral nudging. Four simulation configurations that all include strong ASN but differ in the nudging settings are considered. In the fifth configuration, grid nudging of land surface variables toward high-resolution surface analyses is applied. The results show that the IV at scales larger than 300 km can be suppressed by selecting an appropriate ASN setup. At scales between 300 and 30 km, however, in all configurations, the hourly near-surface temperature, humidity, and winds are only partly reproducible. Nudging the land surface variables is found to have the potential to significantly reduce IV, particularly for fine-scale temperature and humidity. On the other hand, hourly precipitation accumulations at these scales are generally irreproducible in all configurations, and a probabilistic approach to downscaling is therefore recommended.

  13. Manufacturing test of large scale hollow capsule and long length cladding in the large scale oxide dispersion strengthened (ODS) martensitic steel

    International Nuclear Information System (INIS)

    Narita, Takeshi; Ukai, Shigeharu; Kaito, Takeji; Ohtsuka, Satoshi; Fujiwara, Masayuki

    2004-04-01

    Mass production capability of oxide dispersion strengthened (ODS) martensitic steel cladding (9Cr) has been evaluated in Phase II of the Feasibility Studies on Commercialized Fast Reactor Cycle System. The cost of manufacturing the mother tube (raw materials powder production, mechanical alloying (MA) by ball mill, canning, hot extrusion, and machining) is a dominant factor in the total cost of manufacturing ODS ferritic steel cladding. In this study, a large-scale 9Cr-ODS martensitic steel mother tube, made with a large-scale hollow capsule, and long-length claddings were manufactured, and the applicability of these processes was evaluated. The following results were obtained. (1) Manufacture of the large-scale mother tube, with dimensions of 32 mm OD, 21 mm ID, and 2 m length, was successfully carried out using a large-scale hollow capsule. This mother tube has a high degree of dimensional accuracy. (2) The chemical composition and microstructure of the manufactured mother tube are similar to those of the existing mother tube manufactured with a small-scale can, and no remarkable difference between the bottom and top sides of the manufactured mother tube was observed. (3) The long-length cladding was successfully manufactured from the large-scale mother tube made using a large-scale hollow capsule. (4) For reducing the manufacturing cost of ODS steel claddings, a manufacturing process for mother tubes using large-scale hollow capsules is promising. (author)

  14. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of a large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of a large-scale magnetic field. For this purpose, we perform a nonhelical magnetohydrodynamic (MHD) simulation and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at a scale of 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed from the numerical data show that the large-scale magnetic energy grows due to energy transfers from the velocity field at the forcing scales.

  15. Variability of cold season surface air temperature over northeastern China and its linkage with large-scale atmospheric circulations

    Science.gov (United States)

    Zhuang, Yuanhuang; Zhang, Jingyong; Wang, Lin

    2018-05-01

    Cold temperature anomalies and extremes have profound effects on the society, economy, and environment of northeastern China (NEC). In this study, we define the cold season as the months from October to April, and investigate the variability of cold season surface air temperature (CSAT) over NEC and its relationships with large-scale atmospheric circulation patterns for the period 1981-2014. Empirical orthogonal function (EOF) analysis shows that the first EOF mode of the CSAT over NEC is characterized by a homogeneous structure that explains 92.2% of the total variance. The regionally averaged CSAT over NEC is closely linked with the Arctic Oscillation (r = 0.62, significant at the 99% confidence level) and also has a statistically significant relation with the Polar/Eurasian pattern in the cold season. The positive phases of the Arctic Oscillation and the Polar/Eurasian pattern tend to produce a positive geopotential height anomaly over NEC and a weakened East Asian winter monsoon, which subsequently increase the CSAT over NEC by enhancing the downward solar radiation and strengthening the subsidence warming and warm air advection. Conversely, the negative phases of these two climate indices produce opposite regional atmospheric circulation anomalies and decrease the CSAT over NEC.
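The EOF-plus-index workflow this abstract describes can be sketched with a singular value decomposition of an anomaly field; the data here are synthetic (34 "years" of a spatially homogeneous mode plus noise, and an invented circulation index), so the numbers only mimic the structure of the analysis, not its results.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic cold-season temperature anomalies: 34 years x 100 grid points,
# dominated by one spatially homogeneous mode (illustrative stand-in data).
years, points = 34, 100
mode = rng.standard_normal(years)
field = np.outer(mode, np.ones(points)) + 0.3 * rng.standard_normal((years, points))
anom = field - field.mean(axis=0)

# EOF analysis via SVD: rows of vt are spatial patterns, u * s the PC time series.
u, s, vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)
pc1 = u[:, 0] * s[0]

# Correlate the leading PC with a (here synthetic) circulation index,
# analogous to regressing CSAT against the Arctic Oscillation index.
index = mode + 0.5 * rng.standard_normal(years)
r = np.corrcoef(pc1, index)[0, 1]
print(f"EOF1 variance fraction: {explained[0]:.2f}, |r(PC1, index)| = {abs(r):.2f}")
```

A homogeneous leading EOF, as in the paper, means one sign-consistent pattern dominates the whole region, so the PC1/index correlation summarizes the regional response.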

  16. Superconducting materials for large scale applications

    International Nuclear Information System (INIS)

    Dew-Hughes, D.

    1975-01-01

    Applications of superconductors capable of carrying large current densities in large-scale electrical devices are examined. Discussions are included on critical current density, superconducting materials available, and future prospects for improved superconducting materials. (JRD)

  17. Interannual Variability of the Meridional Width of the Baiu Rainband in June and the Associated Large-Scale Atmospheric Circulations

    Science.gov (United States)

    Tsuji, K.; Tomita, T.

    2016-12-01

    The baiu front, defined as the boundary between tropical and polar air masses in the East Asia-western North Pacific sector in boreal early summer, slowly migrates northward with daily meridional swings. The interannual variability of the meridional width of the baiu rainband thus reflects both the slow northward migration and the daily meridional swings of the baiu front. This study focuses on the meridional width of the baiu rainband in June, when the baiu front extends over Japan, and investigates how the width is related to the rainfall of Japan, with discussion of the associated anomalous large-scale atmospheric circulations. The meridional width of the baiu rainband is defined from the monthly-mean precipitation rate of June, using a threshold of 5 mm day-1 averaged over 130°-150°E. There is a significant positive correlation between the variations of the southern and northern edges of the baiu rainband in June. However, the interannual variance of the southern edge is almost twice as large as that of the northern one. That is, the interannual variability of the meridional width is chiefly caused by variations of the southern edge, and the contribution of the northern edge is small. When the meridional width is narrow (wide), an anomalous anticyclonic (cyclonic) circulation appears to the south of Japan, and the precipitation rate increases (decreases) in the western part of Japan while it decreases (increases) in the eastern part. In other words, a local dipole with a node at 140°E appears around Japan in the baiu rainfall anomalies. The anomalous anticyclonic (cyclonic) circulation to the south of Japan, which controls the interannual variability of the meridional width of the baiu rainband, is induced by the strength of the Indian summer monsoon. When the convective activity of the Indian summer monsoon is strong (weak), the Tibetan high in the upper troposphere extends more (less) eastward. The induced stronger (weaker) descent leads to a stronger (weaker) Bonin high in the western

  18. Large-scale influences in near-wall turbulence.

    Science.gov (United States)

    Hutchins, Nicholas; Marusic, Ivan

    2007-03-15

    Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.
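The amplitude-modulation diagnostic described above (large scales modulating the small-scale energy) can be illustrated on a synthetic velocity signal: separate scales with a low-pass filter, form the small-scale envelope, and correlate it with the large-scale signal. The signal, the boxcar filter, and the window lengths are all assumptions for illustration, not the study's hot-wire processing.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8192
t = np.arange(n)
# Synthetic velocity signal: a slow large-scale motion whose amplitude
# modulates fast small-scale fluctuations (modulation strength 0.5).
large = np.sin(2.0 * np.pi * t / 1024.0)
small = (1.0 + 0.5 * large) * rng.standard_normal(n)
u = large + small

def moving_average(x, w):
    # simple boxcar low-pass used here as a crude scale separator
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

u_large = moving_average(u, 129)
u_small = u - u_large

# Envelope of the small scales via a smoothed RMS; its correlation with the
# large-scale signal quantifies the amplitude-modulation effect.
envelope = np.sqrt(moving_average(u_small**2, 129))
r_am = np.corrcoef(u_large, envelope)[0, 1]
print(f"amplitude-modulation correlation: {r_am:.2f}")
```

A clearly positive correlation recovers the imposed modulation; in the paper's data the analogous statistic reveals the footprint of log-region motions on the near-wall cycle.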

  19. PKI security in large-scale healthcare networks.

    Science.gov (United States)

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years many PKI (Public Key Infrastructure) schemes have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKI infrastructures face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI infrastructure addresses the trust issues that arise in a large-scale healthcare network comprising multi-domain PKI infrastructures.
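The multi-domain trust problem mentioned above boils down to finding a certification path from a trust anchor to a target CA across cross-certifications. The toy model below makes that concrete; the CA names and topology are invented and this is not the paper's architecture.

```python
# Multi-domain PKI trust as graph reachability (toy model, hypothetical CAs):
# each healthcare domain runs its own CA; cross-certifications are edges.
cross_certs = {
    "regional-root": ["hospitalA-CA", "hospitalB-CA"],
    "hospitalA-CA": ["clinicA1-CA"],
    "hospitalB-CA": [],
    "clinicA1-CA": [],
}

def trusted(anchor: str, target: str) -> bool:
    """Depth-first search for a certification path from anchor to target."""
    stack, seen = [anchor], set()
    while stack:
        ca = stack.pop()
        if ca == target:
            return True
        if ca not in seen:
            seen.add(ca)
            stack.extend(cross_certs.get(ca, []))
    return False

print(trusted("regional-root", "clinicA1-CA"))   # path exists via hospitalA-CA
print(trusted("hospitalB-CA", "clinicA1-CA"))    # no certification path
```

In a real deployment the edges would be signed cross-certificates and path validation would also check revocation and policy constraints, which is where the large-scale challenges arise.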

  20. Emerging large-scale solar heating applications

    International Nuclear Information System (INIS)

    Wong, W.P.; McClung, J.L.

    2009-01-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  1. Emerging large-scale solar heating applications

    Energy Technology Data Exchange (ETDEWEB)

    Wong, W.P.; McClung, J.L. [Science Applications International Corporation (SAIC Canada), Ottawa, Ontario (Canada)

    2009-07-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  2. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge amounts of training data to cover sufficient biological variability. Learning methods scaling badly with the number of training data points cannot be used in such scenarios. This may restrict the usage of many powerful classifiers having excellent generalization ability. We propose a cascaded classifier which... learns from data rather than having a predefined feature set. We explore the deep learning approach of convolutional neural networks (CNN) for segmenting three dimensional medical images. We propose a novel system integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of 3D...

  3. Large-scale runoff generation - parsimonious parameterisation using high-resolution topography

    Science.gov (United States)

    Gong, L.; Halldin, S.; Xu, C.-Y.

    2011-08-01

    World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. The recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the
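A minimal sketch of the storage-distribution idea in the TRG abstract: distribute a grid cell's storage capacity over topographic-index classes, with the maximum capacity proportional to the index range and scaled by one parameter. The index values and the calibration constant below are synthetic assumptions; the real algorithm derives indices from high-resolution DEM analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical topographic indices, ln(a / tan(beta)), for the cells of one
# large-scale grid box (synthetic values standing in for DEM-derived ones).
ti = rng.gamma(shape=4.0, scale=2.0, size=10_000)

# TRG-style idea (sketch): maximum storage capacity proportional to the
# topographic-index range, scaled by a single assumed calibration parameter.
c_scale = 0.05                                   # [m per index unit], invented
s_max = c_scale * (ti.max() - ti.min())

# Distribute storage over topographic-index classes: wetter (high-index)
# cells are assigned less remaining storage capacity.
n_classes = 20
edges = np.quantile(ti, np.linspace(0.0, 1.0, n_classes + 1))
class_mean = 0.5 * (edges[:-1] + edges[1:])
storage = s_max * (ti.max() - class_mean) / (ti.max() - ti.min())
print(f"S_max = {s_max:.3f} m; class storage range "
      f"{storage.min():.3f}-{storage.max():.3f} m")
```

The resulting per-class capacities play the role of the statistical storage-distribution curve in a VIC-like framework, but their shape now follows the cell's actual topography.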

  4. Seasonal Scale Convective-Stratiform Precipitation Variabilities at Tropics

    Science.gov (United States)

    S, Sreekanth T.

    Large Seasonal Scale Convective-Stratiform Precipitation Variabilities at Tropics. Sreekanth T S*, Suby Symon*, G. Mohan Kumar (1) and V Sasi Kumar (2). *Centre for Earth Science Studies, Akkulam, Thiruvananthapuram; (1) D-330, Swathi Nagar, West Fort, Thiruvananthapuram 695023; (2) 32 NCC Nagar, Peroorkada, Thiruvananthapuram. ABSTRACT: This study investigates the variability of convective and stratiform rainfall from 2011 to 2013 at a tropical coastal station in three seasons, viz. Pre-Monsoon (March-May), Monsoon (June-September) and Post-Monsoon (October-December). Understanding the climatological variability of these two dominant forms of precipitation and their implications for the total rainfall were the main objectives of this investigation. Variabilities in the frequency and duration of events, the rain rate and total number of rain drops in different events, and the accumulated amount of rain water were analysed. Based on ground and radar observations from optical and impact disdrometers, a Micro Rain Radar and an Atmospheric Electric Field Mill, precipitation events were classified into convective and stratiform in the three seasons. Classification followed the method of Testud et al. (2001), with the electrical behaviour of clouds from the Atmospheric Electric Field Mill used as additional information. Events which could not be assigned to either type were termed 'mixed precipitation' and were treated separately. Diurnal variability of the total rainfall in each season was also examined. For both convective and stratiform rainfall there exist distinct day-night differences. Convective rain was more prominent during nocturnal hours: in all seasons almost 70% of the rain duration and 60% of the rain events of convective origin were confined to nocturnal hours. Stratiform rain was less affected by diurnal variations, with night-time occurrences of stratiform duration and events below 50%. Also in Monsoon above 35% of

  5. Large-scale regions of antimatter

    International Nuclear Information System (INIS)

    Grobov, A. V.; Rubin, S. G.

    2015-01-01

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  6. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru [National Research Nuclear University MEPhI (Russian Federation)

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  7. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    Science.gov (United States)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention on networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm. Each cluster can be regarded as a subsystem. The inputs of each subsystem are then selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
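The two-step decomposition described in this abstract (cluster the controlled variables, then pick each subsystem's inputs) can be sketched on synthetic data. For a self-contained example, a simple correlation threshold stands in for affinity propagation, and pairwise input-output correlation stands in for the canonical correlation analysis; the plant data are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic plant: six controlled variables driven by four candidate inputs.
# y0-y2 depend on inputs u0, u1; y3-y5 depend on u2, u3 (two true subsystems).
n = 500
u = rng.standard_normal((n, 4))
w = np.array([[1.0, 0.9, 0.8],
              [0.3, 0.4, 0.5]])
y = np.empty((n, 6))
y[:, :3] = u[:, :2] @ w + 0.1 * rng.standard_normal((n, 3))
y[:, 3:] = u[:, 2:] @ w + 0.1 * rng.standard_normal((n, 3))

# Step 1 -- partition controlled variables into subsystems. The paper uses
# affinity propagation; a correlation threshold stands in for it here.
c = np.abs(np.corrcoef(y, rowvar=False))
cluster_of = np.full(6, -1)
n_clusters = 0
for i in range(6):
    if cluster_of[i] < 0:
        cluster_of[c[i] > 0.5] = n_clusters
        n_clusters += 1

# Step 2 -- select each subsystem's inputs by correlation with its outputs
# (a simplified proxy for the offline canonical correlation analysis).
subsystems = {}
for k in range(n_clusters):
    outputs = np.flatnonzero(cluster_of == k)
    corr_uy = np.abs(np.corrcoef(np.hstack([u, y[:, outputs]]),
                                 rowvar=False)[:4, 4:])
    inputs = np.flatnonzero(corr_uy.max(axis=1) > 0.3)
    subsystems[k] = (outputs.tolist(), inputs.tolist())
print(subsystems)
```

On this toy plant the procedure recovers the two true subsystems and their input sets, which is exactly the structure the distributed models are then fitted to.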

  8. Cosmological perturbations from quantum fluctuations to large scale structure

    International Nuclear Information System (INIS)

    Bardeen, J.M.

    1988-01-01

    Classical perturbation theory is developed from the 3 + 1 form of the Einstein equations. A somewhat unusual form of the perturbation equations in the synchronous gauge is recommended for carrying out computations, but interpretation is based on certain hypersurface-invariant combinations of the variables. The formalism is used to analyze the origin of density perturbations from quantum fluctuations during inflation, with particular emphasis on dealing with 'double inflation' and deviations from the Zel'dovich spectrum. The evolution of the density perturbation to the present gives the final density perturbation power spectrum, whose relationship to observed large scale structure is discussed in the context of simple cold-dark-matter biasing schemes. 86 refs

  9. Understanding uncertainties in non-linear population trajectories: a Bayesian semi-parametric hierarchical approach to large-scale surveys of coral cover.

    Directory of Open Access Journals (Sweden)

    Julie Vercelloni

    Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making.
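The benefit of a spatial hierarchy, as in the coral-cover model above, can be illustrated with a much simpler empirical-Bayes stand-in for the paper's Bayesian semi-parametric model: reef-level means are pulled toward the grand mean in proportion to their sampling noise. All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)
# Toy hierarchy: 12 reefs, 5 noisy coral-cover surveys (%) per reef.
true_site = rng.normal(30.0, 8.0, 12)                   # reef-level means
obs = true_site[:, None] + rng.normal(0.0, 10.0, (12, 5))

# Empirical-Bayes shrinkage (simplified stand-in for full Bayesian inference):
site_mean = obs.mean(axis=1)
sigma2 = obs.var(axis=1, ddof=1) / obs.shape[1]          # sampling variance
tau2 = max(site_mean.var(ddof=1) - sigma2.mean(), 1e-6)  # between-reef variance
weight = tau2 / (tau2 + sigma2)                          # per-reef pooling weight
shrunk = weight * site_mean + (1.0 - weight) * site_mean.mean()
print(f"pooling weights between {weight.min():.2f} and {weight.max():.2f}")
```

Noisier reefs receive smaller weights and are shrunk harder toward the regional mean; the full hierarchical model does the same across all four spatial tiers while also propagating structural uncertainty.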

  10. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo...

  11. The Expanded Large Scale Gap Test

    Science.gov (United States)

    1987-03-01

    NSWC TR 86-32. The Expanded Large Scale Gap Test. T. P. Liddiard and D. Price, Research and Technology Department, March 1987. Approved for public release. ...arises, to reduce the spread in the LSGT 50% gap value.) The worst charges, such as those with the highest or lowest densities, the largest re-pressed...

  12. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments. The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  13. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters at the scale of 1000s of processors, used by 100s to 1000s of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; and (4) to identify and connect groups with similar interests within HENP and the larger clustering community

  14. Application of bamboo laminates in large-scale wind turbine blade design?

    Institute of Scientific and Technical Information of China (English)

    Long WANG; Hui LI; Tongguang WANG

    2016-01-01

    From the viewpoint of material and structure in the design of bamboo blades for large-scale wind turbines, a series of mechanical property tests of bamboo laminates as the major enhancement materials for blades is presented. The basic mechanical characteristics needed in the design of bamboo blades are briefly introduced. Based on these data, the aerodynamic-structural integrated design of a 1.5 MW wind turbine bamboo blade relying on a conventional platform of upwind, variable speed, variable pitch, and doubly-fed generator is carried out. The process of the structural layer design of bamboo blades is documented in detail. The structural strength and fatigue life of the designed wind turbine blades are certified. The technical issues raised by the design are discussed. Key problems and directions for future study are also summarized.

  15. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    ...the impact of large-scale agriculture and outgrower schemes on productivity, household welfare and wages in developing countries is highly contentious. Chapter 1 of this thesis provides an introduction to the study, while also reviewing the key debate in the contemporary land ‘grabbing’ and historical large... ...sugarcane outgrower scheme on household income and asset stocks. Chapter 5 examines the wages and working conditions in ‘formal’ large-scale and ‘informal’ small-scale irrigated agriculture. The results in Chapter 2 show that moisture stress, the use of untested planting materials, and conflict over land... ...commands a higher wage than ‘formal’ large-scale agriculture, while rather different wage determination mechanisms exist in the two sectors. Human capital characteristics (education and experience) partly explain the differences in wages within the formal sector, but play no significant role...

  16. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  17. Large scale chromatographic separations using continuous displacement chromatography (CDC)

    International Nuclear Information System (INIS)

    Taniguchi, V.T.; Doty, A.W.; Byers, C.H.

    1988-01-01

    A process for large scale chromatographic separations using a continuous chromatography technique is described. The process combines the advantages of large scale batch fixed column displacement chromatography with conventional analytical or elution continuous annular chromatography (CAC) to enable large scale displacement chromatography to be performed on a continuous basis (CDC). Such large scale, continuous displacement chromatography separations have not been reported in the literature. The process is demonstrated with the ion exchange separation of a binary lanthanide (Nd/Pr) mixture. The process is, however, applicable to any displacement chromatography separation that can be performed using conventional batch, fixed column chromatography

  18. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in these regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convective activity. We investigate for individual sites the exceedance probability at which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large-scale).
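The dimension-reduction step in this abstract can be sketched with kernel PCA implemented directly in NumPy (the paper uses a supervised variant; an unsupervised RBF-kernel version is shown for illustration, on synthetic "moisture-flux fields" with two regimes).

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic daily moisture-flux fields (days x grid points) with two regimes,
# a crude stand-in for the reanalysis fields used in the study.
n_days, n_grid = 200, 50
regime = rng.integers(0, 2, n_days)
X = np.where(regime[:, None] == 1, 2.0, -2.0) + rng.standard_normal((n_days, n_grid))

# Kernel PCA with an RBF kernel: build the kernel matrix, double-centre it,
# and take its leading eigenvector as the first kernel principal component.
sq = np.sum(X**2, axis=1)
d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
gamma = 1.0 / (2.0 * np.median(d2))                  # heuristic bandwidth
K = np.exp(-gamma * d2)
H = np.eye(n_days) - np.ones((n_days, n_days)) / n_days
vals, vecs = np.linalg.eigh(H @ K @ H)
pc1 = vecs[:, -1] * np.sqrt(np.maximum(vals[-1], 0.0))

# The leading component should separate the two flux regimes (clusters).
sep = abs(pc1[regime == 1].mean() - pc1[regime == 0].mean()) / pc1.std()
print(f"regime separation in kPC1: {sep:.1f} std")
```

Clustering in this low-dimensional space is then what lets the study distinguish, say, teleconnected moisture-transport regimes from locally driven ones.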

  19. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  20. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylmethacrylate; (2) gas dynamic and heat transport through a large scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamic and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10 μm) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably due to the fire module of the code being a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  1. Gowdy phenomenology in scale-invariant variables

    International Nuclear Information System (INIS)

    Andersson, Lars; Elst, Henk van; Uggla, Claes

    2004-01-01

    The dynamics of Gowdy vacuum spacetimes is considered in terms of Hubble-normalized scale-invariant variables, using the timelike area temporal gauge. The resulting state space formulation provides for a simple mechanism for the formation of 'false' and 'true spikes' in the approach to the singularity, and a geometrical formulation for the local attractor

  2. Estimating Catchment-Scale Snowpack Variability in Complex Forested Terrain, Valles Caldera National Preserve, NM

    Science.gov (United States)

    Harpold, A. A.; Brooks, P. D.; Biederman, J. A.; Swetnam, T.

    2011-12-01

    Difficulty estimating snowpack variability across complex forested terrain currently hinders the prediction of water resources in the semi-arid Southwestern U.S. Catchment-scale estimates of snowpack variability are necessary for addressing ecological, hydrological, and water resources issues, but are often interpolated from a small number of point-scale observations. In this study, we used LiDAR-derived distributed datasets to investigate how elevation, aspect, topography, and vegetation interact to control catchment-scale snowpack variability. The study area is the Redondo massif in the Valles Caldera National Preserve, NM, a resurgent dome that varies from 2500 to 3430 m and drains from all aspects. Mean LiDAR-derived snow depths from four catchments (2.2 to 3.4 km^2) draining different aspects of the Redondo massif varied by 30%, despite similar mean elevations and mixed conifer forest cover. To better quantify this variability in snow depths we performed a multiple linear regression (MLR) over a 7.3 by 7.3 km study area (5 x 10^6 snow depth measurements) comprising the four catchments. The MLR showed that elevation explained 45% of the variability in snow depths across the study area, aspect explained 18% (dominated by N-S aspect), and vegetation 2% (canopy density and height). This linear relationship was not transferable to the catchment scale, however, where additional MLR analyses showed that the influence of aspect and elevation differed between the catchments. The strong influence of north-south aspect in most catchments indicated that solar radiation is an important control on snow depth variability. To explore the role of solar radiation, a model was used to generate winter solar forcing index (SFI) values based on the local and remote topography. The SFI was able to explain a large amount of snow depth variability in areas with similar elevation and aspect. Finally, the SFI was modified to include the effects of shading from vegetation (in and out of
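The MLR step in this abstract can be sketched directly: regress snow depth on elevation, "northness" (a common aspect transform, assumed here), and canopy density. The sample below is synthetic with invented coefficients, so only the workflow, not the 45%/18%/2% partition, is reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic LiDAR-style snow-depth sample: depth increases with elevation and
# is deeper on north-facing slopes (coefficients are illustrative only).
n = 20_000
elev = rng.uniform(2500.0, 3430.0, n)                       # m
northness = np.cos(np.deg2rad(rng.uniform(0.0, 360.0, n)))  # +1 north, -1 south
veg = rng.uniform(0.0, 1.0, n)                              # canopy density
depth = (0.0008 * (elev - 2500.0) + 0.15 * northness - 0.05 * veg
         + 0.1 * rng.standard_normal(n))

# Multiple linear regression of depth on the terrain predictors.
X = np.column_stack([np.ones(n), elev, northness, veg])
beta, *_ = np.linalg.lstsq(X, depth, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
print(f"coefficients (elev, northness, veg): {beta[1:]}, R^2 = {r2:.2f}")
```

Fitting the same model separately per catchment, as the study does, is what reveals that the elevation and aspect coefficients are not transferable between catchments.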

  3. United States Temperature and Precipitation Extremes: Phenomenology, Large-Scale Organization, Physical Mechanisms and Model Representation

    Science.gov (United States)

    Black, R. X.

    2017-12-01

    We summarize results from a project focusing on regional temperature and precipitation extremes over the continental United States. Our project introduces a new framework for evaluating these extremes emphasizing their (a) large-scale organization, (b) underlying physical sources (including remote-excitation and scale-interaction) and (c) representation in climate models. Results to be reported include the synoptic-dynamic behavior, seasonality and secular variability of cold waves, dry spells and heavy rainfall events in the observational record. We also study how the characteristics of such extremes are systematically related to Northern Hemisphere planetary wave structures and thus planetary- and hemispheric-scale forcing (e.g., those associated with major El Nino events and Arctic sea ice change). The underlying physics of event onset are diagnostically quantified for different categories of events. Finally, the representation of these extremes in historical coupled climate model simulations is studied and the origins of model biases are traced using new metrics designed to assess the large-scale atmospheric forcing of local extremes.

  4. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  5. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    12 pages, 2 figures This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  6. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed; Elsawy, Hesham; Gharbieh, Mohammad; Alouini, Mohamed-Slim; Adinoyi, Abdulkareem; Alshaalan, Furaih

    2017-01-01

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end

  7. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    Full Text Available Birds are highly mobile organisms and there is increasing evidence that studies at large spatial scales are needed if we are to properly understand their population dynamics. While classical metapopulation models have rarely proved useful for birds, more general metapopulation ideas involving collections of populations interacting within spatially structured landscapes are highly relevant (Harrison, 1994). There is increasing interest in understanding patterns of synchrony, or lack of synchrony, between populations and the environmental and dispersal mechanisms that bring about these patterns (Paradis et al., 2000). To investigate these processes we need to measure abundance, demographic rates and dispersal at large spatial scales, in addition to gathering data on relevant environmental variables. There is an increasing realisation that conservation needs to address rapid declines of common and widespread species (they will not remain so if such trends continue) as well as the management of small populations that are at risk of extinction. While the knowledge needed to support the management of small populations can often be obtained from intensive studies in a few restricted areas, conservation of widespread species often requires information on population trends and processes measured at regional, national and continental scales (Baillie, 2001). While management prescriptions for widespread populations may initially be developed from a small number of local studies or experiments, there is an increasing need to understand how such results will scale up when applied across wider areas. There is also a vital role for monitoring at large spatial scales, both in identifying such population declines and in assessing population recovery. Gathering data on avian abundance and demography at large spatial scales usually relies on the efforts of large numbers of skilled volunteers. Volunteer studies based on ringing (for example Constant Effort Sites [CES

  8. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In a future power system with additional wind power capacity there will be an increased need for large-scale power management as well as reliable balancing and reserve capabilities. Different technologies for large-scale electricity storage provide solutions to the different challenges arising w...

  9. A study to solve the variability of wind generation through integration of large-scale hydraulic generation; Um estudo para resolver a variabilidade da geracao eolica atraves da integracao em larga escala com geracao hidraulica

    Energy Technology Data Exchange (ETDEWEB)

    Emmerik, Emanuel Leonardus van; Steinberger, Johann Michael; Aredes, Mauricio [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEE/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Eletrica

    2010-07-01

    The optimal deployment of wind generation together with hydro generation is being investigated as a viable option to help resolve the constraints that lie ahead as a consequence of the tendency to expand generating facilities into the Brazilian Amazon basin. This work focuses on the validity of that research: it shows the value of feasibility studies on using hydropower generation to offset the variability of wind generation when the latter is deployed on a large scale. Preliminary results are presented for the variability of wind generation over various cycles and for the variability of hydropower availability. (author)

  10. Evolution of scaling emergence in large-scale spatial epidemic spreading.

    Science.gov (United States)

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Zipf's law and Heaps' law are two representatives of the scaling concepts which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has hardly been clarified. In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times before reaching a stable state, where Heaps' law still exists with the disappearance of strict Zipf's law. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results of pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model for simulating pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analyses of large-scale spatial epidemic spreading help understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies at the early time of a pandemic disease.
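The interplay of the two laws can be demonstrated numerically: drawing items from a fixed Zipf-distributed pool yields sublinear (Heaps-type) vocabulary growth. This is a generic sketch assuming a rank-frequency exponent of 1, not the article's metapopulation epidemic model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Zipf's law: rank-frequency distribution p(r) ~ 1/r over a large item pool
n_items = 50_000
ranks = np.arange(1, n_items + 1)
p = 1.0 / ranks
p /= p.sum()

# Draw a long sequence of items and track vocabulary growth (Heaps: V(t) ~ t^beta)
t_total = 200_000
draws = rng.choice(n_items, size=t_total, p=p)

seen = np.zeros(n_items, dtype=bool)
vocab = np.empty(t_total)
count = 0
for t, item in enumerate(draws):
    if not seen[item]:
        seen[item] = True
        count += 1
    vocab[t] = count

# Fit an effective Heaps exponent from log V vs log t (skipping the early transient)
ts = np.arange(1, t_total + 1)
beta = np.polyfit(np.log(ts[1000:]), np.log(vocab[1000:]), 1)[0]
print(round(beta, 2))
```

The fitted exponent drifts with time for this Zipf exponent, loosely mirroring the crossover behaviour the article describes.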

  11. Large-Scale Structure and Hyperuniformity of Amorphous Ices

    Science.gov (United States)

    Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto

    2017-09-01

    We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.

  12. Spatial scales of pollution from variable resolution satellite imaging

    International Nuclear Information System (INIS)

    Chudnovsky, Alexandra A.; Kostinski, Alex; Lyapustin, Alexei; Koutrakis, Petros

    2013-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) provides daily global coverage, but the 10 km resolution of its aerosol optical depth (AOD) product is not adequate for studying the spatial variability of aerosols in urban areas. Recently, a new Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm was developed for MODIS which provides AOD at 1 km resolution. Using MAIAC data, the relationship between MAIAC AOD and PM2.5 as measured by the EPA ground monitoring stations was investigated at varying spatial scales. Our analysis suggested that the correlation between PM2.5 and AOD decreased significantly as AOD resolution was degraded. This is so despite the intrinsic mismatch between ground-level PM2.5 measurements and vertically integrated AOD measurements. Furthermore, the fine resolution results indicated spatial variability in particle concentration at a sub-10 km scale. Finally, this spatial variability of AOD within the urban domain was shown to depend on PM2.5 levels and wind speed. - Highlights: ► The correlation between PM2.5 and AOD decreases as AOD resolution is degraded. ► The high-resolution MAIAC AOD 1 km retrieval can be used to investigate within-city PM2.5 variability. ► Low pollution days exhibit higher spatial variability of AOD and PM2.5 than moderate pollution days. ► AOD spatial variability within the urban area is higher under lower wind speed conditions. - The correlation between PM2.5 and AOD decreases as AOD resolution is degraded. The new high-resolution MAIAC AOD retrieval has the potential to capture PM2.5 variability at the intra-urban scale.
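The resolution effect reported here can be sketched with a toy simulation: a fine-scale field is averaged into a coarser pixel and the correlation with a co-located point measurement is recomputed. All fields, scales, and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_days, block = 2000, 10   # block^2 = number of 1 km cells in one 10 km pixel

# Daily regional background plus local 1 km-scale variability around the monitor
background = rng.normal(size=n_days)
local = 0.8 * rng.normal(size=(n_days, block * block))

aod_fine = background[:, None] + local          # 1 km AOD over a 10x10 km block
aod_at_monitor = aod_fine[:, 0]                 # the 1 km pixel covering the monitor
aod_coarse = aod_fine.mean(axis=1)              # degraded 10 km retrieval

# Ground PM2.5 tracks the 1 km AOD over the monitor (plus measurement noise)
pm25 = aod_at_monitor + 0.3 * rng.normal(size=n_days)

r_fine = np.corrcoef(pm25, aod_at_monitor)[0, 1]
r_coarse = np.corrcoef(pm25, aod_coarse)[0, 1]
print(round(r_fine, 2), round(r_coarse, 2))
```

Averaging away the sub-pixel variability that the monitor actually samples is what weakens the coarse-resolution correlation in this sketch.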

  13. Variability in warm-season atmospheric circulation and precipitation patterns over subtropical South America: relationships between the South Atlantic convergence zone and large-scale organized convection over the La Plata basin

    Science.gov (United States)

    Mattingly, Kyle S.; Mote, Thomas L.

    2017-01-01

    Warm-season precipitation variability over subtropical South America is characterized by an inverse relationship between the South Atlantic convergence zone (SACZ) and precipitation over the central and western La Plata basin of southeastern South America. This study extends the analysis of this "South American Seesaw" precipitation dipole to relationships between the SACZ and large, long-lived mesoscale convective systems (LLCSs) over the La Plata basin. By classifying SACZ events into distinct continental and oceanic categories and building a logistic regression model that relates LLCS activity across the region to continental and oceanic SACZ precipitation, a detailed account of spatial variability in the out-of-phase coupling between the SACZ and large-scale organized convection over the La Plata basin is provided. Enhanced precipitation in the continental SACZ is found to result in increased LLCS activity over northern, northeastern, and western sections of the La Plata basin, in association with poleward atmospheric moisture flux from the Amazon basin toward these regions, and a decrease in the probability of LLCS occurrence over the southeastern La Plata basin. Increased oceanic SACZ precipitation, however, was strongly related to reduced atmospheric moisture and decreased probability of LLCS occurrence over nearly the entire La Plata basin. These results suggest that continental SACZ activity and large-scale organized convection over the northern and eastern sections of the La Plata basin are closely tied to atmospheric moisture transport from the Amazon basin, while the warm coastal Brazil Current may also play an important role as an evaporative moisture source for LLCSs over the central and western La Plata basin.
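A logistic model of the kind described, relating the probability of LLCS occurrence to continental and oceanic SACZ precipitation, might look as follows on synthetic data; the predictors, sign convention, and coefficients are assumptions for illustration, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000

# Synthetic standardized predictors: continental and oceanic SACZ precipitation
cont_sacz = rng.normal(size=n)
ocean_sacz = rng.normal(size=n)

# Assumed out-of-phase coupling: continental SACZ raises LLCS odds, oceanic lowers them
logit = 0.8 * cont_sacz - 1.2 * ocean_sacz
llcs = rng.random(n) < 1 / (1 + np.exp(-logit))   # binary LLCS occurrence

# Fit logistic regression by plain gradient ascent on the mean log-likelihood
X = np.column_stack([np.ones(n), cont_sacz, ocean_sacz])
w = np.zeros(3)
for _ in range(2000):
    prob = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (llcs - prob) / n

print(np.round(w[1:], 1))
```

The fitted signs (positive for continental, negative for oceanic SACZ) reproduce the dipole behaviour described in the abstract, by construction of the synthetic data.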

  14. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Science.gov (United States)

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  15. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    Directory of Open Access Journals (Sweden)

    Xianlei Dong

    Full Text Available Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  16. Optimization of large-scale industrial systems : an emerging method

    Energy Technology Data Exchange (ETDEWEB)

    Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre

    2006-07-01

    This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology was based on a combined local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E³-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.
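A stripped-down illustration of the evolutionary-algorithm ingredients mentioned above (population, Gaussian mutation, Pareto-dominance selection) on a toy two-objective problem; this is a generic sketch, not the paper's methodology, and the objectives merely stand in for competing energy/economy/ecology criteria:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two toy objectives standing in for competing criteria (both minimized)
def objectives(x):
    return np.array([x[0] ** 2 + x[1] ** 2, (x[0] - 1) ** 2 + x[1] ** 2])

def dominates(f, g):
    # f Pareto-dominates g: no worse in all objectives, strictly better in one
    return np.all(f <= g) and np.any(f < g)

# Minimal (mu + lambda) evolutionary loop with nondominated-first survival
pop = [rng.uniform(-2, 2, size=2) for _ in range(30)]
for _ in range(120):
    children = [parent + 0.1 * rng.normal(size=2) for parent in pop]
    combined = pop + children
    scores = [objectives(x) for x in combined]
    front = [x for x, f in zip(combined, scores)
             if not any(dominates(g, f) for g in scores)]
    pop = (front + combined)[:30]

# The Pareto set of this toy problem is the segment x1 in [0, 1], x2 = 0
sols = np.array(pop)
print(np.round(np.abs(sols[:, 1]).mean(), 2))
```

Real multi-objective EAs add diversity preservation and constraint handling on top of this skeleton, as the abstract outlines.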

  17. Fossil fleet transition with fuel changes and large scale variable renewable integration

    Energy Technology Data Exchange (ETDEWEB)

    James, Revis [Electric Power Research Institute, Palo Alto, CA (United States); Hesler, Stephen [Electric Power Research Institute, Palo Alto, CA (United States); Bistline, John [Electric Power Research Institute, Palo Alto, CA (United States)

    2015-03-31

    Variability in demand as seen by grid-connected dispatchable generators can increase due to factors such as greater production from variable generation assets (for example, wind and solar), increased reliance on demand response or customer-driven automation, and aggregation of loads. This variability results in a need for these generators to operate in a range of different modes, collectively referred to as “flexible operations.” This study is designed to inform power companies, researchers, and policymakers of the scope and trends in increasing levels of flexible operations as well as reliability challenges and impacts for dispatchable assets. Background: Because there is rarely a direct monetization of the value of operational flexibility, the decision to provide such flexibility is typically dependent on unit- and region-specific decisions made by asset owners. It is very likely that much greater and more widespread flexible operations capabilities will be needed due to increased variability in demand seen by grid-connected generators, uncertainty regarding investment in new units to provide adequate operational flexibility, and the retirement of older, uncontrolled sub-critical pulverized coal units. Objective: To enhance understanding of the technical challenges and operational impacts associated with dispatchable assets needed to increase operational flexibility and support variable demand. Approach: The study approach consists of three elements: a literature review of relevant prior studies, analysis of detailed scenarios for evolution of the future fleet over the next 35 years, and an engineering assessment of the degree and scope of technical challenges associated with transformation to the future fleet. The study approach integrated two key elements rarely brought together in a single analysis: 1) long-term capacity planning, which enables modeling of unit retirements and new asset investments, and 2) unit commitment analysis, which permits examination of

  18. Double inflation: A possible resolution of the large-scale structure problem

    International Nuclear Information System (INIS)

    Turner, M.S.; Villumsen, J.V.; Vittorio, N.; Silk, J.; Juszkiewicz, R.

    1986-11-01

    A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Ω = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of ∼100 Mpc, while the small-scale structure over ≤ 10 Mpc resembles that in a low density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations. 38 refs., 6 figs

  19. Large-scale fracture mechanics testing -- requirements and possibilities

    International Nuclear Information System (INIS)

    Brumovsky, M.

    1993-01-01

    Application of fracture mechanics to very important and/or complicated structures, like reactor pressure vessels, also raises questions about the reliability and precision of such calculations. These problems become more pronounced under elastic-plastic loading conditions and/or in parts with non-homogeneous materials (base metal and austenitic cladding, property gradient changes through the material thickness) or with non-homogeneous stress fields (nozzles, bolt threads, residual stresses, etc.). For such special cases some verification by large-scale testing is necessary and valuable. This paper discusses problems connected with the planning of such experiments with respect to their limitations and the requirements for a good transfer of the results to an actual vessel. At the same time, an analysis of the possibilities of small-scale model experiments is also presented, mostly in connection with the transfer of results between standard, small-scale and large-scale experiments. Experience from 30 years of large-scale testing at SKODA is used as an example to support this analysis. 1 fig

  20. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    , which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  1. Large-scale runoff generation – parsimonious parameterisation using high-resolution topography

    Directory of Open Access Journals (Sweden)

    L. Gong

    2011-08-01

    Full Text Available World water resources have primarily been analysed by global-scale hydrological models in recent decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. The recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. 
The TRG algorithm
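A loose sketch of the storage-distribution idea described above: topographic-index values within a grid cell are binned into classes, the maximum storage capacity scales with the index range through a single parameter, and a saturated fraction follows from filling the wettest (high-index) classes first. The index distribution shape, the fill rule, and all parameter values are hypothetical, not TRG's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy topographic-index values ln(a / tan(beta)) for one large-scale grid cell;
# a gamma distribution is an assumed, roughly plausible shape
topo_index = rng.gamma(shape=3.0, scale=2.0, size=10_000)

# One scaling parameter ties maximum storage capacity to the topographic-index range
scale_param = 0.05            # hypothetical, in m per index unit
idx_range = topo_index.max() - topo_index.min()
s_max = scale_param * idx_range

# Distribute storage capacity across topographic-index classes:
# wetter classes (high index) have less free storage before saturating
n_classes = 20
edges = np.linspace(topo_index.min(), topo_index.max(), n_classes + 1)
class_frac, _ = np.histogram(topo_index, bins=edges)
class_frac = class_frac / class_frac.sum()
mids = 0.5 * (edges[:-1] + edges[1:])
capacity = s_max * (topo_index.max() - mids) / idx_range

# Saturated-area fraction for a given average storage (fill-from-the-wet-end rule)
storage = 0.3 * s_max
saturated_frac = class_frac[capacity <= storage].sum()
print(round(s_max, 3), round(saturated_frac, 2))
```

The point of the sketch is the mechanism: different cells get different capacity distributions purely from their own topographic-index histograms, with one global parameter.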

  2. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulics & Coastal Engineering Laboratory, Aalborg University, Denmark and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from...... small and large scale model tests show no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5m as the water sunk into the voids between the stones on the crest. For low overtopping scale effects...

  3. Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks.

    Science.gov (United States)

    Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk

    2015-01-01

    Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as parts of HetNets creates a key challenge for operators' careful network planning. In particular, massive and unplanned deployment of base stations can cause high interference, resulting in severely degraded network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small long term evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet, EA performance often deteriorates rapidly with the growth of search space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we attempt to decompose the problem and tackle its subcomponents individually. Particularly noting that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. Both the simulation and analytical results indicate that the proposed solution outperforms the random-grouping based EA as well as an EA that detects interacting variables by monitoring the changes in the objective function algorithm in terms of system
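The correlation-grouping step can be sketched as a connected-components pass over a mutual-interference matrix, so that each group becomes a lower-dimensional subproblem for the EA. The cell positions, the distance-based interference model, and the threshold below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
n_cells = 30

# Synthetic cell positions; inter-cell interference assumed to decay with distance
pos = rng.uniform(0, 10, size=(n_cells, 2))
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
interference = 1.0 / (1.0 + dist ** 2)

# Group cells whose mutual interference exceeds a threshold (union-find)
threshold = 0.15
parent = list(range(n_cells))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

for i in range(n_cells):
    for j in range(i + 1, n_cells):
        if interference[i, j] > threshold:
            parent[find(i)] = find(j)

groups = {}
for i in range(n_cells):
    groups.setdefault(find(i), []).append(i)

# Each group can then be optimized by its own low-dimensional EA subproblem
print(len(groups), max(len(g) for g in groups.values()))
```

Decomposing along strong-interference links keeps interacting variables inside one subproblem, which is the rationale the abstract gives for outperforming random grouping.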

  4. Large-scale simulations of plastic neural networks on neuromorphic hardware

    Directory of Open Access Journals (Sweden)

    James Courtney Knight

    2016-04-01

    Full Text Available SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning, since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
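The event-based strategy described here, updating synaptic state only when a spike arrives and using the closed-form exponential decay between events, can be sketched generically. The time constant and trace semantics below are illustrative, not the exact BCPNN equations:

```python
import math

TAU = 20.0  # decay time constant (ms), illustrative

class EventDrivenTrace:
    """A decaying trace evaluated lazily: state is only touched at spike events."""

    def __init__(self):
        self.value = 0.0
        self.last_t = 0.0

    def spike(self, t, increment=1.0):
        # Closed-form decay from the last event, then apply the spike increment
        self.value = self.value * math.exp(-(t - self.last_t) / TAU) + increment
        self.last_t = t

    def read(self, t):
        # Read without storing: decay analytically from the last recorded event
        return self.value * math.exp(-(t - self.last_t) / TAU)

trace = EventDrivenTrace()
for spike_time in [5.0, 10.0, 12.0]:
    trace.spike(spike_time)
print(round(trace.read(20.0), 3))
```

The trade-off the abstract mentions follows directly: per-time-step cost disappears, at the price of evaluating an exponential at every (irregular) spike event.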

  5. On the Phenomenology of an Accelerated Large-Scale Universe

    Directory of Open Access Journals (Sweden)

    Martiros Khurshudyan

    2016-10-01

    Full Text Available In this review paper, several new results towards the explanation of the accelerated expansion of the large-scale universe is discussed. On the other hand, inflation is the early-time accelerated era and the universe is symmetric in the sense of accelerated expansion. The accelerated expansion of is one of the long standing problems in modern cosmology, and physics in general. There are several well defined approaches to solve this problem. One of them is an assumption concerning the existence of dark energy in recent universe. It is believed that dark energy is responsible for antigravity, while dark matter has gravitational nature and is responsible, in general, for structure formation. A different approach is an appropriate modification of general relativity including, for instance, f ( R and f ( T theories of gravity. On the other hand, attempts to build theories of quantum gravity and assumptions about existence of extra dimensions, possible variability of the gravitational constant and the speed of the light (among others, provide interesting modifications of general relativity applicable to problems of modern cosmology, too. In particular, here two groups of cosmological models are discussed. In the first group the problem of the accelerated expansion of large-scale universe is discussed involving a new idea, named the varying ghost dark energy. On the other hand, the second group contains cosmological models addressed to the same problem involving either new parameterizations of the equation of state parameter of dark energy (like varying polytropic gas, or nonlinear interactions between dark energy and dark matter. Moreover, for cosmological models involving varying ghost dark energy, massless particle creation in appropriate radiation dominated universe (when the background dynamics is due to general relativity is demonstrated as well. 

  6. Large-scale weather dynamics during the 2015 haze event in Singapore

    Science.gov (United States)

    Djamil, Yudha; Lee, Wen-Chien; Tien Dat, Pham; Kuwata, Mikinori

    2017-04-01

    The 2015 haze event in South East Asia is widely considered the period of the worst air quality in the region in more than a decade. The source of the haze was forest and peatland fire on the Sumatra and Kalimantan Islands, Indonesia. The fires mostly resulted from the forest-clearance practice known as slash and burn, used to convert land to palm oil plantations. Although such clearance occurs seasonally, in 2015 it was aggravated by a strong El Niño. The long period of drier atmosphere over the region due to El Niño made fires easier to ignite and spread and more difficult to stop. Biomass emissions from the forest and peatland fires caused a large-scale haze pollution problem on both islands, which spread further into neighboring countries such as Singapore and Malaysia. In Singapore, for about two months (September-October 2015), the air quality was at an unhealthy level. This condition caused socioeconomic losses such as school closures, cancellation of outdoor events, health issues and more, with total losses estimated at S$700 million. The unhealthy level of Singapore's air quality is based on the increase of the pollutant standard index (PSI > 120) after the haze arrival; it even reached a hazardous level (PSI = 300) for several days. PSI is a metric of air quality in Singapore that aggregates six pollutants (SO2, PM10, PM2.5, NO2, CO and O3). In this study, we focused on PSI variability at weekly-biweekly time scales (periodicity < 30 days), since it is the least understood compared to the diurnal and seasonal scales. We identified three dominant time scales of PSI (~5, 10 and 20 days) using the wavelet method and investigated their large-scale atmospheric structures. The PSI-associated large-scale column-moisture horizontal structures over the Indo-Pacific basin are dominated by easterly propagating gyres at synoptic (macro) scale for the ~5 day (~10 and 20 day) time scales. The propagating gyres manifest as cyclical
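The study identifies its dominant time scales with wavelet analysis; as a hedged illustration of the same idea, a plain FFT periodogram (NumPy only, on a synthetic series, not the study's PSI record) can recover oscillations of roughly 5, 10 and 20 days:

```python
import numpy as np

def dominant_periods(x, dt=1.0, n_peaks=3):
    """Return the periods (in days) of the strongest spectral peaks
    of a daily time series x, via a simple FFT periodogram."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                         # remove the mean (zero-frequency bin)
    power = np.abs(np.fft.rfft(x)) ** 2      # one-sided power spectrum
    freqs = np.fft.rfftfreq(x.size, d=dt)
    order = np.argsort(power[1:])[::-1] + 1  # rank bins by power, skipping DC
    return 1.0 / freqs[order[:n_peaks]]

# Synthetic PSI-like series with 5-, 10- and 20-day oscillations plus noise
rng = np.random.default_rng(0)
t = np.arange(120)                           # ~4 months of daily values
psi = (np.sin(2 * np.pi * t / 5) + np.sin(2 * np.pi * t / 10)
       + np.sin(2 * np.pi * t / 20) + 0.2 * rng.standard_normal(t.size))
print(sorted(dominant_periods(psi)))
```

A wavelet transform additionally localizes these periodicities in time, which is why the study uses it; the periodogram only shows which time scales dominate overall.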

  7. The sea-level budget along the Northwest Atlantic coast : GIA, mass changes, and large-scale ocean dynamics

    NARCIS (Netherlands)

    Frederikse, T.; Simon, K.M.; Katsman, C.A.; Riva, R.E.M.

    2017-01-01

    Sea-level rise and decadal variability along the northwestern coast of the North Atlantic Ocean are studied in a self-consistent framework that takes into account the effects of solid-earth deformation and geoid changes due to large-scale mass redistribution processes. Observations of sea and

  8. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.

    2012-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within Sci
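The abstract does not show SciReduce's interface; the following is a rough sketch of the map/reduce-over-named-arrays idea it describes, using dicts of NumPy arrays as the "bundles" (all field names here are hypothetical, not the SciReduce API):

```python
import numpy as np
from functools import reduce

def map_granule(granule):
    """Map step: per-granule matchup statistics for one variable
    (instrument retrieval minus model value at matched points)."""
    diff = granule["airs_temp"] - granule["model_temp"]
    return {"sum": diff.sum(), "sumsq": (diff ** 2).sum(), "n": diff.size}

def reduce_bundles(a, b):
    """Reduce step: merge partial sums from two map outputs."""
    return {k: a[k] + b[k] for k in a}

# Two toy granules standing in for swath/model matchups
granules = [
    {"airs_temp": np.array([250.0, 251.0]), "model_temp": np.array([249.5, 251.5])},
    {"airs_temp": np.array([252.0]), "model_temp": np.array([252.0])},
]
totals = reduce(reduce_bundles, map(map_granule, granules))
print(totals["sum"] / totals["n"])  # mean instrument-minus-model bias
```

Because the reduce step only merges commutative partial sums, granules can be mapped on any number of nodes and reduced in any order, which is what makes the decade-scale aggregation parallelizable.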

  9. Scaling laws for perturbations in the ocean–atmosphere system following large CO2 emissions

    OpenAIRE

    Towles, N.; Olson, P.; Gnanadesikan, A.

    2015-01-01

    Scaling relationships are derived for the perturbations to atmosphere and ocean variables from large transient CO2 emissions. Using the carbon cycle model LOSCAR (Zeebe et al., 2009; Zeebe, 2012b) we calculate perturbations to atmosphere temperature and total carbon, ocean temperature, total ocean carbon, pH, and alkalinity, marine sediment carbon, plus carbon-13 isotope anomalies in the ocean and atmosphere resulting from idealized CO2 emission events. The...

  10. Regional and landscape-scale variability of Landsat-observed vegetation dynamics in northwest Siberian tundra

    International Nuclear Information System (INIS)

    Frost, Gerald V; Epstein, Howard E; Walker, Donald A

    2014-01-01

    Widespread increases in Arctic tundra productivity have been documented for decades using coarse-scale satellite observations, but finer-scale observations indicate that changes have been very uneven, with a high degree of landscape- and regional-scale heterogeneity. Here we analyze time-series of the Normalized Difference Vegetation Index (NDVI) observed by Landsat (1984–2012), to assess landscape- and regional-scale variability of tundra vegetation dynamics in the northwest Siberian Low Arctic, a little-studied region with varied soils, landscape histories, and permafrost attributes. We also estimate spatio-temporal rates of land-cover change associated with expansion of tall alder (Alnus) shrublands, by integrating Landsat time-series with very-high-resolution imagery dating to the mid-1960s. We compiled Landsat time-series for eleven widely-distributed landscapes, and performed linear regression of NDVI values on a per-pixel basis. We found positive net NDVI trends (‘greening’) in nine of eleven landscapes. Net greening occurred in alder shrublands in all landscapes, and strong greening tended to correspond to shrublands that developed since the 1960s. Much of the spatial variability of greening within landscapes was linked to landscape physiography and permafrost attributes, while between-landscape variability largely corresponded to differences in surficial geology. We conclude that continued increases in tundra productivity in the region are likely in upland tundra landscapes with fine-textured, cryoturbated soils; these areas currently tend to support discontinuous vegetation cover, but are highly susceptible to rapid increases in vegetation cover, as well as land-cover changes associated with the development of tall shrublands. (paper)
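The per-pixel linear regression of NDVI time-series described above can be sketched with NumPy's vectorized least-squares fit (a minimal sketch on a synthetic stack with illustrative values, not the Landsat data):

```python
import numpy as np

def ndvi_trends(stack, years):
    """Per-pixel linear NDVI trend (slope per year) for a
    (time, rows, cols) image stack, via least-squares regression."""
    t, r, c = stack.shape
    flat = stack.reshape(t, -1)                  # (time, pixels)
    slope, intercept = np.polyfit(years, flat, deg=1)
    return slope.reshape(r, c)

# Tiny synthetic example: 29 annual composites, 2x2 pixels
years = np.arange(1984, 2013)
base = np.array([[0.4, 0.5], [0.6, 0.3]])
trend = np.array([[0.002, 0.0], [-0.001, 0.004]])  # 'greening' and 'browning'
stack = base + trend * (years - years[0])[:, None, None]
print(np.round(ndvi_trends(stack, years - years[0]), 4))
```

Positive slopes correspond to the 'greening' pixels discussed in the abstract; in practice one would also mask clouds and test slope significance before mapping trends.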

  11. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  12. Large-scale structure of the Universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1978-01-01

    The problems discussed at the ''Large-scale Structure of the Universe'' symposium are considered at a popular level. Described are the cell structure of the galaxy distribution in the Universe and the principles of mathematical modelling of the galaxy distribution. Images of cell structures obtained after processing with a computer are given. Three hypotheses are discussed - vortical, entropic and adiabatic - suggesting various processes for the origin of galaxies and galaxy clusters. A considerable advantage of the adiabatic hypothesis is recognized. The relict radiation, as a method of directly studying the processes taking place in the Universe, is considered. The large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the disturbance properties at the pre-galaxy stage. The discussion of problems pertaining to the study of the hot gas contained in galaxy clusters, and of interactions within galaxy clusters and with the inter-galaxy medium, is recognized to be a notable contribution to the development of theoretical and observational cosmology

  13. A modular approach to large-scale design optimization of aerospace systems

    Science.gov (United States)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft

  14. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling-and-blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effect of large-scale blasts increases. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expertise for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-digital converter recording to a laptop. The registration results of surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The safety evaluation of the seismic effect was carried out according to the permissible value of vibration velocity. For cases exceeding permissible values, recommendations were developed to reduce the level of seismic impact.

  15. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight to large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathlines data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathline segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
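The per-pixel linked list the thesis builds (on the GPU) can be sketched in serial form: a head-pointer image plus a shared node pool, with each pixel's segments chained by index. This is an illustrative sketch of the data structure, not the thesis implementation:

```python
import numpy as np

class PixelSegmentLists:
    """A-buffer-style per-pixel linked lists: each pixel stores the head
    index of a singly linked list of pathline segments."""

    def __init__(self, width, height):
        self.head = np.full((height, width), -1, dtype=np.int64)  # -1 = empty list
        self.nodes = []  # shared pool; each node is (segment_data, next_index)

    def insert(self, x, y, segment):
        """Prepend a segment to pixel (x, y)'s list, as a fragment shader
        would with an atomic exchange on the head pointer."""
        self.nodes.append((segment, self.head[y, x]))
        self.head[y, x] = len(self.nodes) - 1

    def segments_at(self, x, y):
        """Walk pixel (x, y)'s list, e.g. to filter or color-code pathlines."""
        out, i = [], self.head[y, x]
        while i != -1:
            seg, i = self.nodes[i]
            out.append(seg)
        return out

buf = PixelSegmentLists(4, 4)
buf.insert(1, 2, {"line_id": 7, "depth": 0.3})
buf.insert(1, 2, {"line_id": 9, "depth": 0.1})
print([s["line_id"] for s in buf.segments_at(1, 2)])  # most recent first
```

Because filtering and color-coding only walk these lists, they never touch the original flow field, which is what makes the exploration view-dependent but data-free.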

  16. Large curvature and background scale independence in single-metric approximations to asymptotic safety

    Energy Technology Data Exchange (ETDEWEB)

    Morris, Tim R. [STAG Research Centre & Department of Physics and Astronomy, University of Southampton,Highfield, Southampton, SO17 1BJ (United Kingdom)

    2016-11-25

    In single-metric approximations to the exact renormalization group (RG) for quantum gravity, it has not been clear how to treat the large curvature domain beyond the point where the effective cutoff scale k is less than the lowest eigenvalue of the appropriate modified Laplacian. We explain why this puzzle arises from background dependence, resulting in Wilsonian RG concepts being inapplicable. We show that when properly formulated over an ensemble of backgrounds, the Wilsonian RG can be restored. This in turn implies that solutions should be smooth and well defined no matter how large the curvature is taken. Even for the standard single-metric type approximation schemes, this construction can be rigorously derived by imposing a modified Ward identity (mWI) corresponding to rescaling the background metric by a constant factor. However, compatibility in this approximation requires the space-time dimension to be six. Solving the mWI and flow equation simultaneously, new variables are then derived that are independent of the overall background scale.

  17. The causality analysis of climate change and large-scale human crisis.

    Science.gov (United States)

    Zhang, David D; Lee, Harry F; Wang, Cong; Li, Baosheng; Pei, Qing; Zhang, Jane; An, Yulun

    2011-10-18

    Recent studies have shown strong temporal correlations between past climate changes and societal crises. However, the specific causal mechanisms underlying this relation have not been addressed. We explored quantitative responses of 14 fine-grained agro-ecological, socioeconomic, and demographic variables to climate fluctuations from A.D. 1500-1800 in Europe. Results show that cooling from A.D. 1560-1660 caused successive agro-ecological, socioeconomic, and demographic catastrophes, leading to the General Crisis of the Seventeenth Century. We identified a set of causal linkages between climate change and human crisis. Using temperature data and climate-driven economic variables, we simulated the alternation of defined "golden" and "dark" ages in Europe and the Northern Hemisphere during the past millennium. Our findings indicate that climate change was the ultimate cause, and climate-driven economic downturn was the direct cause, of large-scale human crises in preindustrial Europe and the Northern Hemisphere.

  18. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies, however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection...... can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate...... the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...

  19. The role of large-scale, extratropical dynamics in climate change

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, T.G. [ed.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  20. The role of large-scale, extratropical dynamics in climate change

    International Nuclear Information System (INIS)

    Shepherd, T.G.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database

  1. Cross-scale intercomparison of climate change impacts simulated by regional and global hydrological models in eleven large river basins

    Energy Technology Data Exchange (ETDEWEB)

    Hattermann, F. F.; Krysanova, V.; Gosling, S. N.; Dankers, R.; Daggupati, P.; Donnelly, C.; Flörke, M.; Huang, S.; Motovilov, Y.; Buda, S.; Yang, T.; Müller, C.; Leng, G.; Tang, Q.; Portmann, F. T.; Hagemann, S.; Gerten, D.; Wada, Y.; Masaki, Y.; Alemayehu, T.; Satoh, Y.; Samaniego, L.

    2017-01-04

    Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity of impact models designed for either scale to climate variability and change is comparable. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a much better reproduction of reference conditions. However, the sensitivity of two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases with distinct differences in others, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability, but whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models validated against observed discharge should be used.
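The ensemble medians and min-max spreads the study evaluates reduce to simple per-month statistics across the model axis; a minimal sketch with hypothetical discharge values (not the study's data):

```python
import numpy as np

# Long-term mean monthly discharge for one basin (hypothetical values, m^3/s).
# Rows = ensemble members (models), columns = calendar months Jan..Dec.
global_hms = np.array([
    [320, 300, 410, 520, 480, 390, 310, 280, 290, 300, 310, 330],
    [250, 240, 380, 610, 560, 420, 300, 260, 270, 280, 290, 300],
    [400, 380, 450, 500, 470, 400, 350, 320, 330, 340, 350, 370],
], dtype=float)

# Ensemble median seasonal dynamics and min-max spread, month by month
median = np.median(global_hms, axis=0)
spread = global_hms.max(axis=0) - global_hms.min(axis=0)
print(median[3], spread[3])  # April: ensemble median and range
```

Computing the same two arrays for the regional-model ensemble and comparing the spreads month by month is how one would quantify the study's finding that global-model spreads are "mostly notably larger".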

  2. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in the smart grids is increasing. These energy sources bring uncertainty to the production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. We apply the Economic Model Predictive Control (EMPC) strategy to optimise the economic performances of the energy systems and to balance the power production and consumption. In the case of large-scale energy systems, the electrical grid connects a high number of power units. Because of this, the related control problem involves a high number of variables...

  3. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960's and early 1970's, an interest in testing and operating RF cavities at 1.8K motivated the development and construction of four large (300 Watt) 1.8K refrigeration systems. In the past decade, development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved Niobium-Titanium superconductors has once again created interest in large-scale 1.8K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 Watt 1.8K system which incorporates new technology, cold compressors, to obtain the low vapor pressure for low temperature cooling. CEBAF proposes to use cold compressors to obtain 5 kW at 2.0K. Magnetic refrigerators of 10 Watt capacity or higher at 1.8K are now being developed. The state of the art of large-scale refrigeration in the range under 4K will be reviewed. 28 refs., 4 figs., 7 tabs

  4. Large-scale weakly supervised object localization via latent category learning.

    Science.gov (United States)

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) in large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the large-scale ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision which outperforms previous results by a large margin and can be competitive with the supervised deformable part model 5.0 baseline on both data sets.

  5. Improved flexibility with large-scale variable renewable power in cities through optimal demand side management and power-to-heat conversion

    International Nuclear Information System (INIS)

    Salpakari, Jyri; Mikkola, Jani; Lund, Peter D.

    2016-01-01

    Highlights: • New models for optimal control of shiftable loads and power-to-heat conversion. • Full technical and economic potential with optimal controls. • Detailed time series of shiftable loads based on empirical data. • Case study of Helsinki (Finland) with over 90% share of district heating. • Positive net present values in cost-optimal operation. - Abstract: Solar and wind power are potential carbon-free energy solutions for urban areas, but they are also subject to large variability. At the same time, urban areas offer promising flexibility solutions for balancing variable renewable power. This paper presents models for optimal control of power-to-heat conversion to heating systems and shiftable loads in cities to incorporate large variable renewable power schemes. The power-to-heat systems comprise heat pumps, electric boilers, and thermal storage. The control strategies comprise optimal matching of load and production, and cost-optimal market participation with investment analysis. All analyses are based on hourly data. The models are applied to a case study in Helsinki, Finland. For a scheme providing ca. 50% of all electricity in the city through self-consumption of variable renewables, power-to-heat with thermal storage could absorb all the surplus production. A significant reduction in the net load magnitude was obtained with shiftable loads. Investments to both power-to-heat and load shifting with electric heating and commercial refrigeration have a positive net present value if the resources are controlled cost-optimally.
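The cost-optimal market-participation control described above is, at its core, a linear program over hourly dispatch and storage variables. A minimal sketch (hypothetical prices, demand, and technology parameters, not the paper's Helsinki model): dispatch a heat pump with thermal storage to meet an hourly heat demand at minimum electricity cost.

```python
import numpy as np
from scipy.optimize import linprog

# Hourly electricity price and heat demand (illustrative values only)
price = np.array([50.0, 10.0, 60.0, 20.0])   # EUR per MWh_e
demand = np.array([2.0, 2.0, 2.0, 2.0])      # MWh_th per hour
cop, p_max, s_max, s0 = 3.0, 2.0, 6.0, 0.0   # heat pump COP, limits, initial storage
T = len(price)

# Decision vector x = [p_0..p_{T-1}, s_0..s_{T-1}]; minimise electricity cost
c = np.concatenate([price, np.zeros(T)])

# Storage balance each hour: s_t = s_{t-1} + cop * p_t - demand_t
A_eq = np.zeros((T, 2 * T))
b_eq = -demand.copy()
for t in range(T):
    A_eq[t, t] = -cop            # heat produced this hour
    A_eq[t, T + t] = 1.0         # storage level after hour t
    if t > 0:
        A_eq[t, T + t - 1] = -1.0
b_eq[0] += s0                    # initial storage enters the first balance

bounds = [(0.0, p_max)] * T + [(0.0, s_max)] * T
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.fun)  # total cost; production shifts into the cheap hours
```

In the optimum, the pump runs hard in the cheap hour and the thermal store carries that heat into the expensive hours, which is exactly the flexibility mechanism the paper exploits at city scale.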

  6. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  7. Modelling temporal and large-scale spatial variability of soil respiration from soil water availability, temperature and vegetation productivity indices

    Science.gov (United States)

    Reichstein, M.; Rey, A.; Freibauer, A.; Tenhunen, J.; Valentini, R.; Soil Respiration Synthesis Team

    2003-04-01

    Field-chamber measurements of soil respiration from 17 different forest and shrubland sites in Europe and North America were summarized and analyzed with the goal of developing a model describing the seasonal, inter-annual and spatial variability of soil respiration as affected by water availability, temperature and site properties. The analysis was performed at a daily and at a monthly time step. With the daily time step, the relative soil water content in the upper soil layer, expressed as a fraction of field capacity, was a good predictor of soil respiration at all sites. Among the site variables tested, those related to site productivity (e.g. leaf area index) correlated significantly with soil respiration, while carbon pool variables like standing biomass or the litter and soil carbon stocks did not show a clear relationship with soil respiration. Furthermore, there was evidence that the effect of precipitation on soil respiration stretched beyond its direct effect via soil moisture. A general statistical non-linear regression model was developed to describe soil respiration as dependent on soil temperature, soil water content and site-specific maximum leaf area index. The model explained nearly two thirds of the temporal and inter-site variability of soil respiration, with a mean absolute error of 0.82 µmol m⁻² s⁻¹. The parameterised model exhibits the following principal properties: 1) Half-maximal soil respiration rates are reached at a relative upper-layer soil water content of 16% of field capacity. 2) The apparent temperature sensitivity of soil respiration, measured as Q10, varies between 1 and 5 depending on soil temperature and water content. 3) Soil respiration under reference moisture and temperature conditions is linearly related to the maximum site leaf area index. At a monthly time-scale we employed the approach of Raich et al. (2002, Global Change Biol. 8, 800-812), which used monthly precipitation and air temperature to globally predict soil respiration (T
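The structure the abstract describes (a productivity term linear in maximum LAI, a temperature response, and a moisture response with half-maximal rates at 16% of field capacity) can be sketched as follows. The functional forms and coefficients below are assumptions for illustration, not the paper's fitted model:

```python
import numpy as np

def soil_respiration(t_soil, rel_swc, lai_max, r_ref=1.0):
    """Hedged sketch of the regression model's structure: respiration =
    productivity scaling (linear in maximum site LAI) x temperature
    response x soil-moisture response.

    The exponential temperature term and r_ref are illustrative
    assumptions; the half-saturation at 16% of field capacity follows
    the abstract."""
    f_lai = r_ref * lai_max                   # site productivity scaling
    f_temp = np.exp(0.08 * (t_soil - 15.0))   # assumed exponential T response
    f_swc = rel_swc / (rel_swc + 0.16)        # half-maximal at 16% of field cap.
    return f_lai * f_temp * f_swc

# At 15 °C, 16% of field capacity, LAI_max = 3: exactly half of 3.0
print(soil_respiration(15.0, 0.16, 3.0))
```

Note how the multiplicative moisture term makes the apparent Q10 depend on water content, consistent with the paper's observation that Q10 is not a single constant.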

  8. A Novel Architecture of Large-scale Communication in IOT

    Science.gov (United States)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have described the large-scale communication architecture of the IOT in detail. In fact, the non-uniform technologies between IPv6 and access points have led to a lack of broad principles for large-scale communication architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communication in the IOT.

  9. Benefits of transactive memory systems in large-scale development

    OpenAIRE

    Aivars, Sablis

    2016-01-01

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise...

  10. Large Scale EOF Analysis of Climate Data

    Science.gov (United States)

    Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.

    2016-12-01

    We present a distributed approach towards extracting EOFs from 3D climate data. We implement the method in Apache Spark, and process multi-TB sized datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CFSR, a 2.2 terabyte-sized data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean, at 6 hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare to the EOFs computed simply on the surface temperature field. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation including the ENSO and PDO that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that exist below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to the analysis of 3D atmospheric climate data sets, including multiple variables. Because the atmosphere changes on a quicker time-scale than the ocean, we expect that the results will demonstrate an even greater advantage to computing 3D EOFs in lieu of 2D EOFs.
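
    The core computation (latitude-weighted anomalies followed by a truncated SVD, whose right singular vectors are the EOFs) can be sketched in a few lines. This single-node NumPy version on random toy data only illustrates the linear algebra; the paper's contribution is performing the same decomposition distributed in Apache Spark on multi-TB inputs.

```python
import numpy as np

# Toy stand-in for an ocean-temperature field: n_time samples of a field
# on an n_lat x n_lon grid (the 3D, multi-TB case is the same
# computation, distributed across cores).
rng = np.random.default_rng(0)
n_time, n_lat, n_lon = 120, 18, 36
data = rng.standard_normal((n_time, n_lat, n_lon))

# Latitude weighting: each grid cell weighted by sqrt(cos(latitude)).
lats = np.linspace(-85, 85, n_lat)
w = np.sqrt(np.cos(np.deg2rad(lats)))[:, None]          # (n_lat, 1)
weighted = (data * w).reshape(n_time, -1)               # flatten space

# Remove the time mean, then take a truncated SVD: right singular
# vectors are the EOFs (spatial patterns), left ones the PCs in time.
anom = weighted - weighted.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
n_eofs = 10
eofs = vt[:n_eofs].reshape(n_eofs, n_lat, n_lon)
explained = s[:n_eofs] ** 2 / np.sum(s ** 2)            # variance fractions
```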

  11. Study of a large scale neutron measurement channel

    International Nuclear Information System (INIS)

    Amarouayache, Anissa; Ben Hadid, Hayet.

    1982-12-01

    A large scale measurement channel allows the processing of the signal coming from a single neutron sensor in three different running modes: pulse, fluctuation and current. The study described in this note includes three parts: - A theoretical study of the large scale channel and a brief description of it are given, together with the results obtained so far in that domain. - The fluctuation mode is thoroughly studied and the improvements to be made are defined. The study of a linear fluctuation channel with automatic commutation of scales is described and the results of the tests are given. In this large scale channel, the data processing is analogue. - To become independent of the problems generated by analogue processing of the fluctuation signal, a digital data-processing method is tested and its validity demonstrated. The results obtained on a test system realized according to this method are given and a preliminary plan for further research is defined [fr

  12. Spatial variability and macro‐scale drivers of growth for native and introduced Flathead Catfish populations

    Science.gov (United States)

    Massie, Danielle L.; Smith, Geoffrey; Bonvechio, Timothy F.; Bunch, Aaron J.; Lucchesi, David O.; Wagner, Tyler

    2018-01-01

    Quantifying spatial variability in fish growth and identifying large‐scale drivers of growth are fundamental to many conservation and management decisions. Although fish growth studies often focus on a single population, it is becoming increasingly clear that large‐scale studies are likely needed for addressing transboundary management needs. This is particularly true for species with high recreational value and for those with negative ecological consequences when introduced outside of their native range, such as the Flathead Catfish Pylodictis olivaris. This study quantified growth variability of the Flathead Catfish across a large portion of its contemporary range to determine whether growth differences existed between habitat types (i.e., reservoirs and rivers) and between native and introduced populations. Additionally, we investigated whether growth parameters varied as a function of latitude and time since introduction (for introduced populations). Length‐at‐age data from 26 populations across 11 states in the USA were modeled using a Bayesian hierarchical von Bertalanffy growth model. Population‐specific growth trajectories revealed large variation in Flathead Catfish growth and relatively high uncertainty in growth parameters for some populations. Relatively high uncertainty was also evident when comparing populations and when quantifying large‐scale patterns. Growth parameters (Brody growth coefficient [K] and theoretical maximum average length [L∞]) were not different (based on overlapping 90% credible intervals) between habitat types or between native and introduced populations. For populations within the introduced range of Flathead Catfish, latitude was negatively correlated with K. For native populations, we estimated an 85% probability that L∞ estimates were negatively correlated with latitude. Contrary to predictions, time since introduction was not correlated with growth parameters in introduced populations of Flathead Catfish
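
    The von Bertalanffy model underlying the analysis predicts mean length at age from the two parameters compared across populations: L-infinity (theoretical maximum average length) and K (Brody growth coefficient). A minimal sketch, with parameter values that are hypothetical illustrations rather than the study's Bayesian hierarchical estimates:

```python
import math

def von_bertalanffy(age, l_inf, k, t0=0.0):
    """Mean length at age: L(t) = L_inf * (1 - exp(-K * (t - t0)))."""
    return l_inf * (1.0 - math.exp(-k * (age - t0)))

# Two hypothetical populations: one growing fast toward a smaller L_inf,
# one growing slowly toward a larger L_inf.
fast = [von_bertalanffy(a, l_inf=1000.0, k=0.30) for a in range(21)]
slow = [von_bertalanffy(a, l_inf=1200.0, k=0.12) for a in range(21)]
```

    Under this parameterization the fast-growing population is longer at young ages even though its asymptotic length is smaller, which is one reason K and L-infinity have to be compared jointly (e.g. via overlapping credible intervals, as in the study).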

  13. Capabilities of the Large-Scale Sediment Transport Facility

    Science.gov (United States)

    2016-04-01

    This technical note (ERDC/CHL CHETN-I-88, April 2016; approved for public release, distribution unlimited) describes the Large-Scale Sediment Transport Facility (LSTF) and recent upgrades to its measurement systems, including pump flow meters, sediment trap weigh tanks, and beach-profiling lidar. A detailed discussion of the original LSTF features and capabilities can be ... The purpose of these upgrades was to increase ...

  14. Spatiotemporal property and predictability of large-scale human mobility

    Science.gov (United States)

    Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin

    2018-04-01

    Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied, owing to their application potential in human behavior prediction and recommendation and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile-tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and a high possibility for prediction. Furthermore, a scale-free featured mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.

  15. Optimization of Large-Scale Structural Systems

    DEFF Research Database (Denmark)

    Jensen, F. M.

    solutions to small problems with one or two variables to the optimization of large structures such as bridges, ships and offshore structures. The methods used for solving these problems have evolved from classical differential calculus and the calculus of variations to very advanced numerical techniques...

  16. Long time scale hard X-ray variability in Seyfert 1 galaxies

    Science.gov (United States)

    Markowitz, Alex Gary

    This dissertation examines the relationship between long-term X-ray variability characteristics, black hole mass, and luminosity of Seyfert 1 Active Galactic Nuclei. High dynamic range power spectral density functions (PSDs) have been constructed for six Seyfert 1 galaxies. These PSDs show "breaks" or characteristic time scales, typically on the order of a few days. There is resemblance to the PSDs of lower-mass Galactic X-ray binaries (XRBs), with the ratios of putative black hole masses and variability time scales approximately the same (about 10^6-7) between the two classes of objects. The data are consistent with a linear correlation between Seyfert PSD break time scale and black hole mass estimate; the relation extrapolates reasonably well over 6-7 orders of magnitude to XRBs. All of this strengthens the case for a physical similarity between Seyfert galaxies and XRBs. The first six years of RXTE monitoring of Seyfert 1s have been systematically analyzed to probe hard X-ray variability on multiple time scales in a total of 19 Seyfert 1s, in an expansion of the survey of Markowitz & Edelson (2001). Correlations between variability amplitude, luminosity, and black hole mass are explored; the data support the model of PSD movement with black hole mass suggested by the PSD survey. All of the continuum variability results are consistent with relatively more massive black holes hosting larger X-ray emission regions, resulting in 'slower' observed variability. Nearly all sources in the sample exhibit stronger variability towards softer energies, consistent with spectral softening as they brighten. Direct time-resolved spectral fitting has been performed on continuous RXTE monitoring of seven Seyfert 1s to study long-term spectral variability and Fe Kalpha variability characteristics. The Fe Kalpha line displays a wide range of behavior but varies less strongly than the broadband continuum. Overall, however, there is no strong evidence for correlated variability between the line and

  17. Problems of large-scale vertically-integrated aquaculture

    Energy Technology Data Exchange (ETDEWEB)

    Webber, H H; Riordan, P F

    1976-01-01

    The problems of vertically-integrated aquaculture are outlined; they concern species limitations (in the market, biological and technological), site selection, feed, manpower needs, and legal, institutional and financial requirements. The gaps in understanding of, and the constraints limiting, large-scale aquaculture are listed. Future action is recommended with respect to: types and diversity of species to be cultivated, marketing, biotechnology (seed supply, disease control, water quality and concerted effort), siting, feed, manpower, legal and institutional aids (granting of water rights, grants, tax breaks, duty-free imports, etc.), and adequate financing. The lack of hard data based on experience suggests that large-scale vertically-integrated aquaculture is a high-risk enterprise, and with the high capital investment required, banks and funding institutions are wary of supporting it. Investment in pilot projects is suggested to demonstrate that large-scale aquaculture can be a fully functional and successful business. Construction and operation of such pilot farms is judged to be in the interests of both the public and private sectors.

  18. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso: a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, materials science, and to its usage in large-scale parallel computing.

  19. A large-scale study of the random variability of a coding sequence: a study on the CFTR gene.

    Science.gov (United States)

    Modiano, Guido; Bombieri, Cristina; Ciminelli, Bianca Maria; Belpinati, Francesca; Giorgi, Silvia; Georges, Marie des; Scotet, Virginie; Pompei, Fiorenza; Ciccacci, Cinzia; Guittard, Caroline; Audrézet, Marie Pierre; Begnini, Angela; Toepfer, Michael; Macek, Milan; Ferec, Claude; Claustres, Mireille; Pignatti, Pier Franco

    2005-02-01

    Coding single nucleotide substitutions (cSNSs) have been studied on hundreds of genes using small samples (n(g) approximately 100-150 genes). In the present investigation, a large random European population sample (average n(g) approximately 1500) was studied for a single gene, the CFTR (Cystic Fibrosis Transmembrane conductance Regulator). The nonsynonymous (NS) substitutions exhibited, in accordance with previous reports, a mean probability of being polymorphic (q > 0.005), much lower than that of the synonymous (S) substitutions, but they showed a similar rate of subpolymorphic (q < 0.005) variability. This indicates that, in autosomal genes that may have harmful recessive alleles (nonduplicated genes with important functions), genetic drift overwhelms selection in the subpolymorphic range of variability, making disadvantageous alleles behave as neutral. These results imply that the majority of the subpolymorphic nonsynonymous alleles of these genes are selectively negative or even pathogenic.

  20. Contribution to the discussion of P.M. Fayers and David J. Hand: Causal variables, indicator variables and measurement scales: an example from quality of life

    DEFF Research Database (Denmark)

    Keiding, Niels

    2002-01-01

    Causal variables; Clinimetric scales; Composite scales; Construct validity; Measurement scales; Multi-item scales; Quality-of-life instruments

  1. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Background: Large-scale molecular evolutionary analyses of protein coding sequences require a number of inter-related preparatory steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can represent a significant challenge, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods: We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results: We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion: Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  2. SMES-UPS for large-scaled SC magnet system of LHD

    International Nuclear Information System (INIS)

    Yamada, Shuichi; Mito, T.; Chikaraishi, H.; Nishimura, A.; Kojima, H.; Nakanishi, Y.; Uede, T.; Satow, T.; Motojima, O.

    2003-01-01

    The LHD is a superconducting (SC) experimental fusion device of the heliotron type. Eight sets of helium compressors with a total electric power of 3.5 MW are installed in the cryogenic system. Analytical studies of the SMES-UPS for the compressors under deep voltage sag are reported in this paper. The amplitude and frequency of the voltage decrease gradually owing to the regenerating effect of the induction motors. The SMES-UPS system proposed in this report has the following functions: (1) variable frequency control, (2) regulation by ACR and AVR, and (3) rapid isolation of the loads from the grid line and synchronous reconnection to it. We have demonstrated that SMES is useful for the large-scale cryogenic system of the experimental fusion device.

  3. Bonus algorithm for large scale stochastic nonlinear programming problems

    CERN Document Server

    Diwekar, Urmila

    2015-01-01

    This book presents the details of the BONUS algorithm and its real-world applications in areas like sensor placement in large-scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming, based on a sampling-based approach for uncertainty analysis and statistical reweighting to obtain probability information, is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve nested optimization and uncertainty loops. There are two fundamental approaches used to solve such problems: the first is decomposition techniques, and the second identifies problem-specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions for the uncertain variables. Moreover, these ...

  4. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    Science.gov (United States)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to a new class of local modeling techniques called “Just-In-Time (JIT) modeling”. To apply JIT modeling online to a large database, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both stepwise selection and quantization. In order to predict the long-term state of the plant without using future data of the manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
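
    The idea behind JIT modeling (and the retrieval step that LOM accelerates) can be sketched as a nearest-neighbor local model: for each query, retrieve neighboring samples from the database and build a small model from them on the fly. The inverse-distance average below is a deliberately simple stand-in for the local model, and the plant database is synthetic; LOM's stepwise selection and quantization, which make this retrieval efficient on a large database, are not shown.

```python
import math

def jit_predict(query, database, k=5):
    """Just-In-Time (local) modeling sketch: retrieve the k nearest
    stored samples and combine their outputs with inverse-distance
    weighting (a simple stand-in for fitting a local model)."""
    # database: list of (input_vector, output) pairs
    nearest = sorted(
        (math.dist(query, x), y) for x, y in database
    )[:k]
    num = den = 0.0
    for d, y in nearest:
        w = 1.0 / (d + 1e-9)   # small offset avoids division by zero
        num += w * y
        den += w
    return num / den

# Hypothetical plant database: the output is the sum of two input signals.
db = [((x * 0.1, x * 0.2), x * 0.1 + x * 0.2) for x in range(100)]
pred = jit_predict((1.05, 2.1), db)   # interpolates between stored states
```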

  5. Five hundred years of gridded high-resolution precipitation reconstructions over Europe and the connection to large-scale circulation

    Energy Technology Data Exchange (ETDEWEB)

    Pauling, Andreas [University of Bern, Institute of Geography, Bern (Switzerland); Luterbacher, Juerg; Wanner, Heinz [University of Bern, Institute of Geography, Bern (Switzerland); National Center of Competence in Research (NCCR) in Climate, Bern (Switzerland); Casty, Carlo [University of Bern, Climate and Environmental Physics Institute, Bern (Switzerland)

    2006-03-15

    We present seasonal precipitation reconstructions for European land areas (30°W to 40°E, 30-71°N; given on a 0.5° x 0.5° grid) covering the period 1500-1900, together with gridded reanalysis from 1901 to 2000 (Mitchell and Jones 2005). Principal component regression techniques were applied to develop this dataset. A large variety of long instrumental precipitation series, precipitation indices based on documentary evidence, and natural proxies (tree-ring chronologies, ice cores, corals and a speleothem) that are sensitive to precipitation signals were used as predictors. Transfer functions were derived over the 1901-1983 calibration period and applied to 1500-1900 in order to reconstruct the large-scale precipitation fields over Europe. The performance (quality estimation based on unresolved variance within the calibration period) of the reconstructions varies over centuries, seasons and space. The highest reconstructive skill was found for winter over central Europe and the Iberian Peninsula. Precipitation variability over the last half millennium reveals both large interannual and decadal fluctuations. Applying running correlations, we found major non-stationarities in the relation between large-scale circulation and regional precipitation. For several periods during the last 500 years, we identified key atmospheric modes for southern Spain/northern Morocco and central Europe as representations of two precipitation regimes. Using scaled composite analysis, we show that precipitation extremes over central Europe and southern Spain are linked to distinct pressure patterns. Due to its high spatial and temporal resolution, this dataset allows detailed studies of regional precipitation variability for all seasons, impact studies on different time and space scales, comparisons with high-resolution climate models, as well as analysis of connections with regional temperature reconstructions. (orig.)
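
    Principal component regression of the kind used to derive such transfer functions can be sketched as: take the leading principal components of the predictor field over the calibration period, regress the predictand on them by least squares, and apply the fitted coefficients elsewhere. The data below are synthetic (one planted large-scale mode plus noise), standing in for the proxy network and the precipitation field; nothing here reproduces the paper's actual predictors or skill.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_grid = 83, 200            # e.g. a 1901-1983 calibration period

# Synthetic predictor field with one planted large-scale mode plus noise,
# and a predictand (e.g. seasonal precipitation) driven by that mode.
mode = rng.standard_normal(n_years)
pattern = rng.standard_normal(n_grid)
predictors = np.outer(mode, pattern) + 0.3 * rng.standard_normal((n_years, n_grid))
predictand = 2.0 * mode + 0.1 * rng.standard_normal(n_years)

# 1) Principal components of the centered predictor field (via SVD).
x = predictors - predictors.mean(axis=0)
u, s, vt = np.linalg.svd(x, full_matrices=False)
n_pc = 10
pcs = u[:, :n_pc] * s[:n_pc]

# 2) Least-squares transfer function from the PCs to the predictand.
design = np.column_stack([np.ones(n_years), pcs])
coef, *_ = np.linalg.lstsq(design, predictand, rcond=None)

# 3) Reconstruction: project predictor data onto the same EOFs and apply
# the fitted coefficients; here we verify on the calibration period.
recon = design @ coef
```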

  6. A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures

    Science.gov (United States)

    Kaveh, A.; Ilchi Ghazaan, M.

    2018-02-01

    In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.

  7. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    One of the best ways for agriculture to become independent of precipitation shortages is irrigation. In the 1970s and 1980s a number of large-scale sprinkler systems were built in Wielkopolska. At the end of the 1970s, 67 sprinkler systems with a total area of 6400 ha were installed in the Poznan province; the average size of a system was 95 ha. In 1989 there were 98 systems, covering more than 10,130 ha. The study was conducted on 7 large sprinkler systems with areas ranging from 230 to 520 hectares in 1986-1998. After the introduction of the market economy in the early 1990s and the ownership changes in agriculture, the large-scale sprinkler systems underwent significant or total devastation. Land of the State Farms was leased or sold by the State Agricultural Property Agency, and the new owners used the existing sprinklers to a very small extent. This was accompanied by changes in crop structure and demand structure and by an increase in operating costs; there was also a threefold increase in electricity prices. In practice, the operation of large-scale irrigation encountered all kinds of barriers and limitations: system-design constraints, supply difficulties, and high levels of equipment failure, none of which encouraged rational use of the available sprinklers. A survey of the local area was carried out to show the current status of the remaining irrigation infrastructure. The scheme adopted for the restructuring of Polish agriculture was not the best solution, causing massive destruction of assets previously invested in the sprinkler systems.

  8. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Administrator

    ...structure and chemical purity of 99.1% (by inductively coupled plasma optical emission spectroscopy) on a large scale. Keywords: sol-gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method.

  9. The Phoenix series large scale LNG pool fire experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about potential hazards to the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates, for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.

  10. Variability of the raindrop size distribution at small spatial scales

    Science.gov (United States)

    Berne, A.; Jaffrain, J.

    2010-12-01

    Because of the interactions between atmospheric turbulence and cloud microphysics, the raindrop size distribution (DSD) is strongly variable in space and time. The spatial variability of the DSD at small spatial scales (below a few km) is not well documented and not well understood, mainly because of a lack of adequate measurements at the appropriate resolutions. A network of 16 disdrometers (Parsivels) has been designed and set up over EPFL campus in Lausanne, Switzerland. This network covers a typical operational weather radar pixel of 1x1 km2. The question of the significance of the variability of the DSD at such small scales is relevant for radar remote sensing of rainfall because the DSD is often assumed to be uniform within a radar sample volume and because the Z-R relationships used to convert the measured radar reflectivity Z into rain rate R are usually derived from point measurements. Thanks to the number of disdrometers, it was possible to quantify the spatial variability of the DSD at the radar pixel scale and to show that it can be significant. In this contribution, we show that the variability of the total drop concentration, of the median volume diameter and of the rain rate are significant, taking into account the sampling uncertainty associated with disdrometer measurements. The influence of this variability on the Z-R relationship can be non-negligible. Finally, the spatial structure of the DSD is quantified using a geostatistical tool, the variogram, and indicates high spatial correlation within a radar pixel.
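
    The geostatistical tool mentioned, the (semi-)variogram, can be computed from station data in a few lines. The sketch below uses the classical Matheron estimator on a hypothetical 4 x 4 grid of stations spanning a 1 x 1 km2 pixel; the coordinates and rain-rate values are illustrative, not the EPFL network's data.

```python
import math

def empirical_variogram(points, values, bin_edges):
    """Matheron estimator: gamma(h) = mean of 0.5 * (z_i - z_j)^2 over
    all station pairs whose separation falls in each distance bin."""
    sums = [0.0] * (len(bin_edges) - 1)
    counts = [0] * (len(bin_edges) - 1)
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            h = math.dist(points[i], points[j])
            for b in range(len(bin_edges) - 1):
                if bin_edges[b] <= h < bin_edges[b + 1]:
                    sums[b] += 0.5 * (values[i] - values[j]) ** 2
                    counts[b] += 1
                    break
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]

# Hypothetical 4x4 grid of disdrometers over a 1x1 km2 radar pixel, with
# a smooth spatial trend in rain rate (mm/h) so that semivariance grows
# with separation distance.
pts = [(250.0 * i, 250.0 * j) for i in range(4) for j in range(4)]
vals = [2.0 + 0.002 * (x + y) for x, y in pts]
gamma = empirical_variogram(pts, vals, [0, 300, 600, 900, 1200])
```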

  11. Comparison of Multi-Scale Digital Elevation Models for Defining Waterways and Catchments Over Large Areas

    Science.gov (United States)

    Harris, B.; McDougall, K.; Barry, M.

    2012-07-01

    Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, rarely are they developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (Wivenhoe catchment, 543 km2 and a detailed 13 km2 within the Wivenhoe catchment) including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data was compared to high resolution Lidar based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
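
    Waterway delineation of the kind compared here typically starts from D8 flow directions (each cell drains to its steepest-descent neighbor) followed by flow accumulation; cells whose accumulated upstream area exceeds a threshold are classed as streams, which is why DEM cell size and quality matter. A minimal sketch on a tiny synthetic DEM, without the pit-filling and edge handling a production tool would need:

```python
def d8_flow_accumulation(dem):
    """Accumulated upstream cell count using the D8 steepest-descent rule."""
    rows, cols = len(dem), len(dem[0])
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]

    def receiver(r, c):
        # Steepest-descent neighbor (slope = drop / distance), or None
        # if the cell has no lower neighbor (an outlet or pit).
        best, steepest = None, 0.0
        for dr, dc in nbrs:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                dist = (dr * dr + dc * dc) ** 0.5
                slope = (dem[r][c] - dem[rr][cc]) / dist
                if slope > steepest:
                    best, steepest = (rr, cc), slope
        return best

    acc = [[1] * cols for _ in range(rows)]
    # Visit cells from highest to lowest so donors are processed first.
    order = sorted(((dem[r][c], r, c) for r in range(rows)
                    for c in range(cols)), reverse=True)
    for _, r, c in order:
        rec = receiver(r, c)
        if rec:
            acc[rec[0]][rec[1]] += acc[r][c]
    return acc

# Tiny synthetic valley: elevations fall toward the bottom-center cell,
# so accumulation grows downslope like a stream network.
dem = [[9, 8, 9],
       [6, 5, 6],
       [3, 2, 3]]
flow = d8_flow_accumulation(dem)
```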

  12. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Quinby, Ted [National Renewable Energy Lab. (NREL), Golden, CO (United States); Caulfield, Emmet [Stanford Univ., CA (United States); Gerritsen, Margot [Stanford Univ., CA (United States); Diffendorfer, Jay [U.S. Geological Survey, Boulder, CO (United States); Haines, Seth [U.S. Geological Survey, Boulder, CO (United States)

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  13. Large-scale Agricultural Land Acquisitions in West Africa | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    This project will examine large-scale agricultural land acquisitions in nine West African countries - Burkina Faso, Guinea-Bissau, Guinea, Benin, Mali, Togo, Senegal, Niger, and Côte d'Ivoire. ... They will use the results to increase public awareness and knowledge about the consequences of large-scale land acquisitions.

  14. Moditored unsaturated soil transport processes as a support for large scale soil and water management

    Science.gov (United States)

    Vanclooster, Marnik

    2010-05-01

    The current societal demand for sustainable soil and water management is very large. The drivers of global and climate change exert many pressures on soil and water ecosystems, endangering appropriate ecosystem functioning. Unsaturated soil transport processes play a key role in soil-water system functioning, as they control the fluxes of water and nutrients from the soil to plants (the pedo-biosphere link), the infiltration flux of precipitated water to groundwater, and the evaporative flux, and hence the feedback from the soil to the climate system. Yet unsaturated soil transport processes are difficult to quantify, since they are affected by huge variability of the governing properties at different space-time scales and by the intrinsic non-linearity of the transport processes. The mismatch between the scale at which processes can reasonably be characterized, the scale at which the theoretical processes can correctly be described, and the scale at which the soil and water system needs to be managed calls for further development of scaling procedures in unsaturated zone science. It also calls for a better integration of theoretical and modelling approaches to elucidate transport processes at the appropriate scales, compatible with the objective of sustainable soil and water management. Moditoring science, i.e. the interdisciplinary research domain where modelling and monitoring science are linked, is currently evolving significantly in the unsaturated zone hydrology area. In this presentation, a review of current moditoring strategies/techniques will be given and illustrated for solving large-scale soil and water management problems. This will also allow identifying research needs in the interdisciplinary domain of modelling and monitoring, and improving the integration of unsaturated zone science in solving soil and water management issues. A focus will be given to examples of large-scale soil and water management problems in Europe.

  15. Large-scale motions in the universe: a review

    International Nuclear Information System (INIS)

    Burstein, D.

    1990-01-01

    The expansion of the universe can be retarded in localised regions within the universe both by the presence of gravity and by non-gravitational motions generated in the post-recombination universe. The motions of galaxies thus generated are called 'peculiar motions', and the amplitudes, size scales and coherence of these peculiar motions are among the most direct records of the structure of the universe. As such, measurements of these properties of the present-day universe provide some of the severest tests of cosmological theories. This is a review of the current evidence for large-scale motions of galaxies out to a distance of ~5000 km s^−1 (in an expanding universe, distance is proportional to radial velocity). 'Large-scale' in this context refers to motions that are correlated over size scales larger than the typical sizes of groups of galaxies, up to and including the size of the volume surveyed. To orient the reader to this relatively new field of study, a short modern history is given together with an explanation of the terminology. Careful consideration is given to the data used to measure the distances, and hence the peculiar motions, of galaxies. The evidence for large-scale motions is presented in a graphical fashion, using only the most reliable data for galaxies spanning a wide range in optical properties and over the complete range of galactic environments. The kinds of systematic errors that can affect this analysis are discussed, and the reliability of these motions is assessed. The predictions of two models of large-scale motion are compared to the observations, and special emphasis is placed on those motions in which our own Galaxy directly partakes. (author)

  16. Predictability of the recent slowdown and subsequent recovery of large-scale surface warming using statistical methods

    Science.gov (United States)

    Mann, Michael E.; Steinman, Byron A.; Miller, Sonya K.; Frankcombe, Leela M.; England, Matthew H.; Cheung, Anson H.

    2016-04-01

    The temporary slowdown in large-scale surface warming during the early 2000s has been attributed to both external and internal sources of climate variability. Using semiempirical estimates of the internal low-frequency variability component in Northern Hemisphere, Atlantic, and Pacific surface temperatures in concert with statistical hindcast experiments, we investigate whether the slowdown and its recent recovery were predictable. We conclude that the internal variability of the North Pacific, which played a critical role in the slowdown, does not appear to have been predictable using statistical forecast methods. An additional minor contribution from the North Atlantic, by contrast, appears to exhibit some predictability. While our analyses focus on combining semiempirical estimates of internal climatic variability with statistical hindcast experiments, possible implications for initialized model predictions are also discussed.

  17. Inter- and intra-storm variability of the isotope composition of precipitation in Southern Israel: Are local or large-scale factors responsible?

    International Nuclear Information System (INIS)

    Gat, J.R.; Adar, E.; Alpert, P.

    2002-01-01

    A detailed sequential rain sampling of rainstorms was carried out during the 1989/90 and 1990/91 rainy season in the coastal plain of Israel with an annual average of 530 mm of rain and in the western Negev where the average annual rainfall is 93 mm. On four occasions, rain was concurrently available at both stations. The variability of the isotope composition within a rainy spell is quite considerable but falls short of the range of isotopic values encountered during the total season. Different rainy episodes show distinguishable isotope compositions, which evidently are characteristic of a larger time/space niche than that of the momentary, local, rain event. This is confirmed by the good correlation between the mean isotope composition of concurrently sampled events at both stations. A 'rain amount effect' is not apparent when the amount-weighted data for each complete rain episode are compared, because any possible effect is masked by the inter-storm variability. However by singling out the data within each storm sequence separately, a moderate effect is seen. On the whole, the results seem to support the notion that the isotope data are determined by the large, synoptic scale, situation. However within the range of values characteristic of the origin of the air masses there is a pronounced dependence of the isotope composition on the extent of the cloud field associated with each event, which is interpreted as a measure of the degree of rainout from the air mass, i.e. a typical Rayleigh effect. Local effects related to momentary rain intensity contribute only to a residual modulation of the above-mentioned effects. (author)
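The Rayleigh rainout effect invoked above can be made concrete with the standard closed-system distillation formula, which gives the isotope composition of the remaining vapor as an air mass rains out. The δ₀ and α values below are illustrative, not the paper's data.

```python
# Minimal Rayleigh rainout sketch (illustrative numbers): the isotopic
# composition of remaining vapor after a fraction (1 - f) of the
# moisture has rained out, delta = (delta0 + 1000) * f**(alpha - 1) - 1000.
def rayleigh_delta(delta0, f, alpha):
    """delta0: initial vapor composition (permil); f: remaining vapor
    fraction; alpha: equilibrium liquid-vapor fractionation factor."""
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

# delta-18O of vapor; alpha ~ 1.0094 near 25 C (approximate value)
d0 = -12.0
for f in (1.0, 0.8, 0.5, 0.2):
    print(f, round(rayleigh_delta(d0, f, 1.0094), 2))
```

The vapor becomes progressively depleted as f decreases, which is exactly the dependence on the degree of rainout that the authors read from the extent of the cloud field associated with each event.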

  18. A route to explosive large-scale magnetic reconnection in a super-ion-scale current sheet

    Directory of Open Access Journals (Sweden)

    K. G. Tanaka

    2009-01-01

    How to trigger magnetic reconnection is one of the most interesting and important problems in space plasma physics. Recently, electron temperature anisotropy (α_eo = T_e⊥/T_e||) at the center of a current sheet and the non-local effect of the lower-hybrid drift instability (LHDI) that develops at the current sheet edges have attracted attention in this context. In addition to these effects, here we also study the effects of ion temperature anisotropy (α_io = T_i⊥/T_i||). Electron anisotropy effects are known to be ineffective in a current sheet whose thickness is of ion scale. In this range of current sheet thickness, the LHDI effects are shown to weaken substantially with a small increase in thickness, and the obtained saturation level is too low for a large-scale reconnection to be achieved. We then investigate whether introducing electron and ion temperature anisotropies in the initial stage would couple with the LHDI effects to revive quick triggering of large-scale reconnection in a super-ion-scale current sheet. The results are as follows. (1) The initial electron temperature anisotropy is consumed very quickly when a number of minuscule magnetic islands (each lateral length is 1.5~3 times the ion inertial length) form. These minuscule islands do not coalesce into a large-scale island that would enable large-scale reconnection. (2) The subsequent LHDI effects disturb the current sheet filled with the small islands. This substantially accelerates the triggering but does not enhance the saturation level of reconnected flux. (3) When the ion temperature anisotropy is added, it survives through the small-island formation stage and leads to even quicker triggering when the LHDI effects set in. Furthermore, the saturation level is elevated by a factor of ~2 and large-scale reconnection is achieved only in this case. Comparison with two-dimensional simulations that exclude the LHDI effects confirms that the saturation level

  19. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    Science.gov (United States)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.

  20. Valuation of large variable annuity portfolios: Monte Carlo simulation and synthetic datasets

    Directory of Open Access Journals (Sweden)

    Gan Guojun

    2017-12-01

    Metamodeling techniques have recently been proposed to address the computational issues related to the valuation of large portfolios of variable annuity contracts. However, it is extremely difficult, if not impossible, for researchers to obtain real datasets from insurance companies in order to test their metamodeling techniques on such real datasets and publish the results in academic journals. To facilitate the development and dissemination of research related to the efficient valuation of large variable annuity portfolios, this paper creates a large synthetic portfolio of variable annuity contracts based on the properties of real portfolios of variable annuities and implements a simple Monte Carlo simulation engine for valuing the synthetic portfolio. In addition, this paper presents fair market values and Greeks for the synthetic portfolio of variable annuity contracts, which are important quantities for managing the financial risks associated with variable annuities. The resulting datasets can be used by researchers to test and compare the performance of various metamodeling techniques.
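The kind of Monte Carlo engine described can be sketched for a single contract: valuing a guaranteed minimum maturity benefit (GMMB) as the discounted expected shortfall of the account value below the guarantee under geometric Brownian motion. All parameters are illustrative, not taken from the paper's synthetic portfolio.

```python
import numpy as np

def gmmb_value(s0, guarantee, r, sigma, T, n_paths=100_000, seed=42):
    """Monte Carlo value of a guaranteed minimum maturity benefit:
    the insurer pays max(guarantee - account value, 0) at maturity."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # risk-neutral terminal account value under geometric Brownian motion
    sT = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(guarantee - sT, 0.0)   # shortfall below guarantee
    return np.exp(-r * T) * payoff.mean()

v = gmmb_value(s0=100.0, guarantee=100.0, r=0.02, sigma=0.2, T=10.0)
print(round(v, 2))
```

Valuing a realistic portfolio repeats this over hundreds of thousands of contracts and thousands of economic scenarios, which is precisely the computational burden that motivates the metamodeling techniques the paper supports.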

  1. Spatial Downscaling of TRMM Precipitation Using Geostatistics and Fine Scale Environmental Variables

    Directory of Open Access Journals (Sweden)

    No-Wook Park

    2013-01-01

    A geostatistical downscaling scheme is presented that can generate fine-scale precipitation information from coarse-scale Tropical Rainfall Measuring Mission (TRMM) data by incorporating auxiliary fine-scale environmental variables. Within the geostatistical framework, the TRMM precipitation data are first decomposed into trend and residual components. Quantitative relationships between the coarse-scale TRMM data and the environmental variables are estimated via regression analysis and used to derive trend components at a fine scale. The residual components, which are the differences between the original TRMM data and the trend components, are then downscaled to the target fine scale via area-to-point kriging. The trend and residual components are finally added to generate fine-scale precipitation estimates. Stochastic simulation is also applied to the residual components in order to generate multiple alternative realizations and to compute uncertainty measures. In an experiment using a digital elevation model (DEM) and the normalized difference vegetation index (NDVI), the geostatistical downscaling scheme generated results that reflected detailed characteristics with better predictive performance than downscaling without the environmental variables. Multiple realizations and uncertainty measures from the simulation also provided useful information for interpretation and further environmental modeling.
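The trend + residual decomposition can be sketched as follows, with ordinary least squares supplying the trend and inverse-distance weighting standing in for area-to-point kriging. The data are synthetic, not TRMM, and the auxiliary values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# coarse-cell centers, with elevation (DEM) and NDVI as auxiliaries
xy_c = rng.uniform(0, 100, (25, 2))
dem_c = rng.uniform(0, 2000, 25)
ndvi_c = rng.uniform(0.1, 0.8, 25)
precip_c = 50 + 0.02 * dem_c + 30 * ndvi_c + rng.normal(0, 2, 25)

# 1) trend: regress coarse precipitation on the environmental variables
X = np.column_stack([np.ones(25), dem_c, ndvi_c])
beta, *_ = np.linalg.lstsq(X, precip_c, rcond=None)
resid_c = precip_c - X @ beta

# 2) residual interpolation to a fine-scale point (IDW here, in place of
#    the area-to-point kriging used in the paper)
def idw(xy, values, p, power=2.0):
    d = np.linalg.norm(xy - p, axis=1)
    w = 1.0 / (d + 1e-9) ** power
    return np.sum(w * values) / np.sum(w)

# 3) fine-scale estimate = fine-scale trend + interpolated residual
p_fine = np.array([40.0, 60.0])
dem_f, ndvi_f = 850.0, 0.45          # fine-scale auxiliary values
trend_f = beta @ np.array([1.0, dem_f, ndvi_f])
estimate = trend_f + idw(xy_c, resid_c, p_fine)
```

Adding the interpolated residual back is what keeps the downscaled field consistent with the coarse observations; the paper's kriging variant additionally guarantees that fine-scale values average back to the coarse cell value.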

  2. Large-scale structure observables in general relativity

    International Nuclear Information System (INIS)

    Jeong, Donghui; Schmidt, Fabian

    2015-01-01

    We review recent studies that rigorously define several key observables of the large-scale structure of the Universe in a general relativistic context. Specifically, we consider (i) redshift perturbation of cosmic clock events; (ii) distortion of cosmic rulers, including weak lensing shear and magnification; and (iii) observed number density of tracers of the large-scale structure. We provide covariant and gauge-invariant expressions of these observables. Our expressions are given for a linearly perturbed flat Friedmann–Robertson–Walker metric including scalar, vector, and tensor metric perturbations. While we restrict ourselves to linear order in perturbation theory, the approach can be straightforwardly generalized to higher order. (paper)

  3. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    This paper investigates top-flange fatigue damage in a large-scale wind turbine generator. A finite element model of the top-flange connection system is established with the finite element analysis software MSC.Marc/Mentat and its fatigue strain is analyzed; the flange fatigue load cases are simulated with the Bladed software; the flange fatigue load spectrum is obtained with the rain-flow counting method; and, finally, the fatigue analysis of the top flange is carried out with the fatigue analysis software MSC.Fatigue and Palmgren-Miner linear cumulative damage theory. The results provide a new approach to flange fatigue analysis for large-scale wind turbine generators and have practical engineering value.
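The final step, Palmgren-Miner linear damage accumulation over a rain-flow load spectrum, is simple to state: each stress-range bin contributes its cycle count divided by the allowable cycles from an S-N curve. The S-N constants and the spectrum below are illustrative, not the paper's data.

```python
# Minimal Palmgren-Miner sketch: cumulative damage from a rain-flow
# load spectrum using an S-N curve N = C / S^m (illustrative constants).
def miner_damage(spectrum, C=1e12, m=3.0):
    """spectrum: list of (stress_range_MPa, cycle_count) bins, e.g. from
    rain-flow counting; returns the damage sum (failure predicted at 1)."""
    damage = 0.0
    for s, n in spectrum:
        n_allow = C / s ** m          # cycles to failure at this range
        damage += n / n_allow
    return damage

spectrum = [(80.0, 2e5), (120.0, 5e4), (200.0, 1e3)]
d = miner_damage(spectrum)
print(d)  # damage sum 0.1968 < 1: the component survives this spectrum
```

The linearity assumption means load ordering is ignored; that is exactly the simplification the Palmgren-Miner rule makes, and it is why the rain-flow spectrum alone suffices as input.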

  4. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water situations, real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
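The paper does not detail its adaptive method, but a standard ingredient of any explicit Godunov-type shallow-water solver is the CFL time-step constraint, sketched below for illustration only: the step is bounded by the grid spacing over the fastest gravity-wave speed among wet cells.

```python
import numpy as np

def cfl_timestep(h, u, v, dx, g=9.81, cfl=0.9, h_dry=1e-6):
    """Largest stable explicit time step: dx over the fastest wave speed
    |u| + sqrt(g h), taken over wet cells only (dry cells are masked,
    mirroring a wet/dry front treatment)."""
    wet = h > h_dry
    if not np.any(wet):
        return np.inf                 # nothing to advance
    c = np.sqrt(g * h[wet])
    speed = np.maximum(np.abs(u[wet]) + c, np.abs(v[wet]) + c)
    return cfl * dx / speed.max()

h = np.array([[2.0, 1.0], [0.0, 0.5]])    # water depths; one dry cell
u = np.array([[1.0, 0.0], [0.0, 0.2]])    # velocity components
v = np.zeros_like(u)
dt = cfl_timestep(h, u, v, dx=10.0)
```

Recomputing dt every step like this keeps the solver stable during flood-wave arrival while letting it take long steps in quiescent regions, which is one common route to the running efficiency the paper targets.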

  5. Evaluation of Large-Scale Wing Vortex Wakes from Multi-Camera PIV Measurements in Free-Flight Laboratory

    Science.gov (United States)

    Carmer, Carl F. v.; Heider, André; Schröder, Andreas; Konrath, Robert; Agocs, Janos; Gilliot, Anne; Monnier, Jean-Claude

    Multiple-vortex systems of aircraft wakes have been investigated experimentally in a unique large-scale laboratory facility, the free-flight B20 catapult bench at ONERA Lille. 2D/2C PIV measurements have been performed in a translating reference frame, which provided time-resolved cross-velocity observations of the vortex systems in a Lagrangian frame normal to the wake axis. A PIV setup using a moving multiple-camera array and a variable double-frame time delay has been employed successfully. The large-scale quasi-2D structures of the wake-vortex system have been identified using the Q_W criterion based on the 2D velocity gradient tensor ∇_H u, thus illustrating the temporal development of unequal-strength corotating vortex pairs in aircraft wakes for nondimensional times tU_0/b ≲ 45.
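Vortex identification from an in-plane velocity gradient tensor, as with the Q_W criterion used here, can be sketched with the plain 2D Q criterion (rotation rate exceeding strain rate) on a synthetic flow; this is a simplified stand-in, not the authors' implementation.

```python
import numpy as np

# Q criterion on a synthetic solid-body vortex: Q = 0.5*(|Omega|^2 - |S|^2),
# positive where rotation dominates strain.
n = 64
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x, indexing="xy")
u, v = -Y, X                      # solid-body rotation about the origin

dx = x[1] - x[0]
du_dy, du_dx = np.gradient(u, dx, dx)   # axis 0 is y, axis 1 is x here
dv_dy, dv_dx = np.gradient(v, dx, dx)

# symmetric strain-rate S and antisymmetric rotation Omega components
s11, s22 = du_dx, dv_dy
s12 = 0.5 * (du_dy + dv_dx)
omega = 0.5 * (dv_dx - du_dy)
Q = 0.5 * (2 * omega**2) - 0.5 * (s11**2 + s22**2 + 2 * s12**2)

print(Q[n // 2, n // 2] > 0)  # True: rotation dominates at the vortex core
```

On PIV data the same computation runs on the measured in-plane velocity fields, and thresholding Q > 0 segments the coherent vortex cores from the strained background flow.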

  6. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    The recent trend toward large-scale simulations of fusion plasmas and processing plasmas is briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to analyses of processing plasmas. (author)

  7. Alignment between Satellite and Central Galaxies in the SDSS DR7: Dependence on Large-scale Environment

    Science.gov (United States)

    Wang, Peng; Luo, Yu; Kang, Xi; Libeskind, Noam I.; Wang, Lei; Zhang, Youcai; Tempel, Elmo; Guo, Quan

    2018-06-01

    The alignment between satellites and central galaxies has been studied in detail both in observational and theoretical works. The widely accepted fact is that satellites preferentially reside along the major axis of their central galaxy. However, the origin and large-scale environmental dependence of this alignment are still unknown. In an attempt to determine these variables, we use data constructed from Sloan Digital Sky Survey DR7 to investigate the large-scale environmental dependence of this alignment with emphasis on examining the alignment’s dependence on the color of the central galaxy. We find a very strong large-scale environmental dependence of the satellite–central alignment (SCA) in groups with blue centrals. Satellites of blue centrals in knots are preferentially located perpendicular to the major axes of the centrals, and the alignment angle decreases with environment, namely, when going from knots to voids. The alignment angle strongly depends on the ^{0.1}(g−r) color of centrals. We suggest that the SCA is the result of a competition between satellite accretion within large-scale structure (LSS) and galaxy evolution inside host halos. For groups containing red central galaxies, the SCA is mainly determined by the evolution effect, while for blue central dominated groups, the effect of the LSS plays a more important role, especially in knots. Our results provide an explanation for how the SCA forms within different large-scale environments. The perpendicular case in groups and knots with blue centrals may also provide insight into understanding similar polar arrangements, such as the formation of the Milky Way and Centaurus A’s satellite system.

  8. Performance Health Monitoring of Large-Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rajamony, Ram [IBM Research, Austin, TX (United States)

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  9. Variability of sea ice deformation rates in the Arctic and their relationship with basin-scale wind forcing

    Directory of Open Access Journals (Sweden)

    A. Herman

    2012-12-01

    The temporal variability of the moments of probability distribution functions (pdfs) of total sea ice deformation rates in the Arctic is analyzed in the context of the basin-scale wind forcing acting on the ice. The pdfs are estimated for 594 satellite-derived sea ice deformation maps from 11 winter seasons between 1996/1997 and 2007/2008, provided by the RADARSAT Geophysical Processor System. The temporal scale analyzed equals 3 days. The moments of the pdfs, calculated for a range of spatial scales (12.5–900 km, have two dominating components of variability: a seasonal cycle, with deformation rates decreasing throughout winter towards a minimum in March; and a short-term, synoptic variability, strongly correlated with the area-averaged magnitude of the wind stress over the Arctic, estimated based on the NCEP-DOE Reanalysis-2 data (correlation coefficient of 0.71 for the mean deformation rate. Due to scaling properties of the moments, logarithms of higher moments are strongly correlated with the wind stress as well. Exceptions are observed only at small spatial scales, as a result of extreme deformation events, not directly associated with large-scale wind forcing. By repeating the analysis within regions of different sizes and locations, we show that the wind–ice deformation correlation is largest at the basin scale and decreases with decreasing size of the area of study. Finally, we suggest that a positive trend in seasonally averaged correlation between sea ice deformation rates and the wind forcing, present in the analyzed data, may be related to an observed decrease in the multi-year ice area in the Arctic, indicating possibly even stronger correlations in the future.
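The two diagnostics used here, moments of the deformation-rate pdf per map and their correlation with area-averaged wind stress, can be sketched on synthetic data standing in for the satellite-derived maps and reanalysis wind stress.

```python
import numpy as np

rng = np.random.default_rng(1)
n_maps = 100

# synthetic area-averaged wind stress per map (N m^-2, illustrative)
wind_stress = rng.uniform(0.05, 0.25, n_maps)
moments = np.empty((n_maps, 3))               # mean, 2nd, 3rd raw moments
for i, tau in enumerate(wind_stress):
    # deformation rates are roughly lognormal; here the scale is driven
    # by the wind forcing by construction
    defo = rng.lognormal(mean=np.log(tau), sigma=0.8, size=5000)
    moments[i] = [defo.mean(), np.mean(defo**2), np.mean(defo**3)]

# correlation of the mean deformation rate with the wind stress
r = np.corrcoef(wind_stress, moments[:, 0])[0, 1]
print(round(r, 3))
```

With real data the correlation is of course weaker (0.71 in the paper for the mean); the synthetic setup only illustrates the mechanics of estimating per-map moments and regressing them on a basin-scale forcing index.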

  10. Contribution of large-scale coherent structures towards the cross flow in two interconnected channels

    International Nuclear Information System (INIS)

    Mahmood, A.; Rohde, M.; Hagen, T.H.J.J. van der; Mudde, R.F.

    2009-01-01

    Single-phase cross flow through a gap region joining two vertical channels has been investigated experimentally for Reynolds numbers, based on the channels' hydraulic diameter, ranging from 850 to 21000. The flow field in the gap region is investigated by 2D-PIV and the inter-channel mass transfer is quantified by the tracer injection method. Experiments carried out for variable gap heights and shapes show the existence of a street of large-scale counter-rotating vortices on either side of the channel-gap interface, resulting from the mean velocity gradient in the gap and the main channel region. The appearance of the coherent vortices is subject to a threshold associated with the difference between the maximum and the minimum average streamwise velocities in the channel and the gap region, respectively. The auto power spectral density of the cross velocity component in the gap region exhibits a slope of −3 in the inertial range, indicating the 2D nature of these vortices. The presence of the large-scale vortices enhances the mass transfer through the gap region by approximately 63% over the mass transferred by turbulent mixing alone. The inter-channel mass transfer due to cross flow is found to depend not only on the characteristics of the large-scale vortices, but also on the gap geometry. (author)

  11. The Plant Phenology Ontology: A New Informatics Resource for Large-Scale Integration of Plant Phenology Data.

    Science.gov (United States)

    Stucky, Brian J; Guralnick, Rob; Deck, John; Denny, Ellen G; Bolmgren, Kjell; Walls, Ramona

    2018-01-01

    Plant phenology - the timing of plant life-cycle events, such as flowering or leafing out - plays a fundamental role in the functioning of terrestrial ecosystems, including human agricultural systems. Because plant phenology is often linked with climatic variables, there is widespread interest in developing a deeper understanding of global plant phenology patterns and trends. Although phenology data from around the world are currently available, truly global analyses of plant phenology have so far been difficult because the organizations producing large-scale phenology data are using non-standardized terminologies and metrics during data collection and data processing. To address this problem, we have developed the Plant Phenology Ontology (PPO). The PPO provides the standardized vocabulary and semantic framework that is needed for large-scale integration of heterogeneous plant phenology data. Here, we describe the PPO, and we also report preliminary results of using the PPO and a new data processing pipeline to build a large dataset of phenology information from North America and Europe.

  12. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  13. Improving plot- and regional-scale crop models for simulating impacts of climate variability and extremes

    Science.gov (United States)

    Tao, F.; Rötter, R.

    2013-12-01

    Many studies on global climate report that climate variability is increasing with more frequent and intense extreme events. There are quite large uncertainties from both the plot- and regional-scale models in simulating impacts of climate variability and extremes on crop development, growth and productivity. One key to reducing the uncertainties is better exploitation of experimental data to eliminate crop model deficiencies and develop better algorithms that more adequately capture the impacts of extreme events, such as high temperature and drought, on crop performance. In the present study, in a first step, the inter-annual variability in wheat yield and climate from 1971 to 2012 in Finland was investigated. Using statistical approaches the impacts of climate variability and extremes on wheat growth and productivity were quantified. In a second step, a plot-scale model, WOFOST, and a regional-scale crop model, MCWLA, were calibrated and validated, and applied to simulate wheat growth and yield variability from 1971-2012. Next, the estimated impacts of high temperature stress, cold damage, and drought stress on crop growth and productivity based on the statistical approaches, and on the crop simulation models WOFOST and MCWLA, were compared. Then, the impact mechanisms of climate extremes on crop growth and productivity in the WOFOST model and MCWLA model were identified, and subsequently, the various algorithms and impact functions were fitted against the long-term crop trial data. Finally, the impact mechanisms, algorithms and functions in the WOFOST model and MCWLA model were improved to better simulate the impacts of climate variability and extremes, particularly high temperature stress, cold damage and drought stress, for location-specific and large-area climate impact assessments. Our studies provide a good example of how to improve, in parallel, the plot- and regional-scale models for simulating impacts of climate variability and extremes, as needed for

  14. Phenomenology of two-dimensional stably stratified turbulence under large-scale forcing

    KAUST Repository

    Kumar, Abhishek; Verma, Mahendra K.; Sukhatme, Jai

    2017-01-01

    In this paper, we characterise the scaling of energy spectra, and the interscale transfer of energy and enstrophy, for strongly, moderately and weakly stably stratified two-dimensional (2D) turbulence, restricted in a vertical plane, under large-scale random forcing. In the strongly stratified case, a large-scale vertically sheared horizontal flow (VSHF) coexists with small scale turbulence. The VSHF consists of internal gravity waves and the turbulent flow has a kinetic energy (KE) spectrum that follows an approximate k−3 scaling with zero KE flux and a robust positive enstrophy flux. The spectrum of the turbulent potential energy (PE) also approximately follows a k−3 power-law and its flux is directed to small scales. For moderate stratification, there is no VSHF and the KE of the turbulent flow exhibits Bolgiano–Obukhov scaling that transitions from a shallow k−11/5 form at large scales, to a steeper approximate k−3 scaling at small scales. The entire range of scales shows a strong forward enstrophy flux, and interestingly, large (small) scales show an inverse (forward) KE flux. The PE flux in this regime is directed to small scales, and the PE spectrum is characterised by an approximate k−1.64 scaling. Finally, for weak stratification, KE is transferred upscale and its spectrum closely follows a k−2.5 scaling, while PE exhibits a forward transfer and its spectrum shows an approximate k−1.6 power-law. For all stratification strengths, the total energy always flows from large to small scales and almost all the spectral indices are well explained by accounting for the scale-dependent nature of the corresponding flux.

  16. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    Science.gov (United States)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large-eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two coaxial, independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 × 10⁴, and the radius ratio η = ri/ro is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Reτ ≈ 500. Four rotation ratios from Rot = −0.0909 to Rot = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of cs = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, “over-damped” LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
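
    The static Smagorinsky closure referenced above computes an eddy viscosity from the resolved strain rate, ν_t = (c_s Δ)² |S̄| with |S̄| = √(2 S̄_ij S̄_ij). A minimal 2D finite-difference sketch on a uniform grid (illustrative only, not the solver used in the study):

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.1):
    """Static Smagorinsky eddy viscosity nu_t = (cs*dx)**2 * |S| on a uniform
    2D grid (u, v indexed [y, x]), with |S| = sqrt(2 * S_ij * S_ij)."""
    dudy, dudx = np.gradient(u, dx)
    dvdy, dvdx = np.gradient(v, dx)
    s11 = dudx                       # resolved strain-rate components
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx)**2 * s_mag
```

    For a pure shear u = γy the magnitude |S̄| reduces to γ, so ν_t = (c_s Δ)²γ; raising cs uniformly is the essence of the "over-damped" variant explored in the paper.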

  17. Large-scale preparation of hollow graphitic carbon nanospheres

    International Nuclear Information System (INIS)

    Feng, Jun; Li, Fu; Bai, Yu-Jun; Han, Fu-Dong; Qi, Yong-Xin; Lun, Ning; Lu, Xi-Feng

    2013-01-01

    Hollow graphitic carbon nanospheres (HGCNSs) were synthesized on a large scale by a simple reaction between glucose and Mg at 550 °C in an autoclave. Characterization by X-ray diffraction, Raman spectroscopy and transmission electron microscopy demonstrates the formation of HGCNSs with an average diameter of about 10 nm and a wall thickness of a few graphene layers. The HGCNSs exhibit a reversible capacity of 391 mAh g⁻¹ after 60 cycles when used as anode materials for Li-ion batteries. -- Graphical abstract: Hollow graphitic carbon nanospheres can be prepared on a large scale by the simple reaction between glucose and Mg at 550 °C, and exhibit electrochemical performance superior to graphite. Highlights: ► Hollow graphitic carbon nanospheres (HGCNSs) were prepared on a large scale at 550 °C. ► The preparation is simple, effective and eco-friendly. ► The in situ yielded MgO nanocrystals promote graphitization. ► The HGCNSs exhibit electrochemical performance superior to graphite.

  18. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    Full Text Available A new package for accelerating large-scale phase-field simulations was developed using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using a dedicated algorithm in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interactions were each solved with the GPU implementation to test the performance of the package. Comparing the results between the single-CPU solver and the GPU solver, the GPU version ran up to 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
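
    The semi-implicit Fourier method mentioned above treats the stiff gradient term implicitly in spectral space and the nonlinear bulk term explicitly, so each time step costs a few FFTs plus pointwise arithmetic. A minimal CPU/NumPy sketch for the Allen-Cahn equation ∂φ/∂t = −M(φ³ − φ − κ∇²φ) (illustrative; the paper's point is that exactly this FFT-plus-pointwise structure maps well onto a GPU):

```python
import numpy as np

def allen_cahn_step(phi, dt, M=1.0, kappa=1.0):
    """One semi-implicit Fourier step of d(phi)/dt = -M*(phi**3 - phi
    - kappa*lap(phi)): nonlinear bulk term explicit, stiff gradient
    term implicit in k-space (unit grid spacing assumed)."""
    n = phi.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)
    k2 = k[:, None]**2 + k[None, :]**2
    f_hat = np.fft.fft2(phi**3 - phi)
    phi_hat = (np.fft.fft2(phi) - dt * M * f_hat) / (1.0 + dt * M * kappa * k2)
    return np.fft.ifft2(phi_hat).real

# Small random fluctuations around phi = 0 coarsen into +/-1 domains.
rng = np.random.default_rng(1)
phi = 0.01 * rng.standard_normal((64, 64))
for _ in range(200):
    phi = allen_cahn_step(phi, dt=0.1)
```

    The implicit treatment of the κk² term is what allows the comparatively large time step here; a fully explicit scheme would be limited by the stiffest resolved wavenumber.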

  19. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed

    2017-03-16

    The Internet of Things is large-scale by nature. This is manifested not only by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end, cellular networks are indeed a strong first-mile candidate to accommodate the data tsunami to be generated by the IoT. However, in the cellular paradigm IoT devices are required to undergo random access procedures as a precursor to resource allocation. Such procedures impose a major bottleneck that hinders cellular networks' ability to support large-scale IoT. In this article, we shed light on the random access dilemma and present a case study based on experimental data as well as system-level simulations. Accordingly, a case is built for the latent need to revisit random access procedures. A call for action is motivated by listing a few potential remedies and recommendations.
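
    The random-access bottleneck can be illustrated with a toy slotted-ALOHA model (a textbook abstraction, not the article's experimental setup): each of N devices transmits in a slot with probability p, and a slot succeeds only when exactly one device transmits, which caps throughput near 1/e ≈ 0.37 at the optimum p = 1/N.

```python
import numpy as np

def aloha_success_rate(n_devices, p, n_slots=100_000, seed=0):
    """Fraction of slots in which exactly one device transmits."""
    rng = np.random.default_rng(seed)
    tx = rng.random((n_slots, n_devices)) < p
    return np.mean(tx.sum(axis=1) == 1)

# At the optimal access probability p = 1/N the success rate is near 1/e;
# the remaining ~63% of slots are wasted on collisions or silence.
rate = aloha_success_rate(n_devices=50, p=1 / 50)
```

    Even at its optimum the contention-based channel wastes most slots, which is one way to see why massive-IoT uplink traffic strains cellular random access.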

  20. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2015-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration studies presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VMs) and executed in an on-premise Cloud (Eucalyptus or OpenStack) or at Amazon in the public Cloud or govCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEaSUREs grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on-demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be

  1. A large-scale investigation of microplastic contamination: Abundance and characteristics of microplastics in European beach sediment.

    Science.gov (United States)

    Lots, Froukje A E; Behrens, Paul; Vijver, Martina G; Horton, Alice A; Bosker, Thijs

    2017-10-15

    Here we present the large-scale distribution of microplastic contamination in beach sediment across Europe. Sediment samples were collected from 23 locations across 13 countries by citizen scientists, and analysed using a standard operating procedure. We found significant variability in the concentrations of microplastics, ranging from 72±24 to 1512±187 microplastics per kg of dry sediment, with high variability within sampling locations. Three hotspots of microplastic accumulation (>700 microplastics per kg of dry sediment) were found. There was limited variability in the physico-chemical characteristics of the plastics across sampling locations. The majority of the microplastics were fibrous. This large-scale assessment of microplastics on European beaches gives insights into the nature and extent of the microplastic challenge. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Large-scale and synoptic meteorology in the south-east Pacific during the observations campaign VOCALS-REx in austral Spring 2008

    Directory of Open Access Journals (Sweden)

    T. Toniazzo

    2011-05-01

    Full Text Available We present a descriptive overview of the meteorology in the south-eastern subtropical Pacific (SEP) during the VOCALS-REx intensive observations campaign, which was carried out between October and November 2008. Based mainly on data from operational analyses, forecasts, reanalyses, and satellite observations, we focus on spatio-temporal scales from synoptic to planetary. A climatological context is given within which the specific conditions observed during the campaign are placed, with particular reference to the relationships between the large-scale and the regional circulations. The mean circulations associated with the diurnal breeze systems are also discussed. We then provide a summary of the day-to-day synoptic-scale circulation, air-parcel trajectories, and cloud cover in the SEP during VOCALS-REx. Three meteorologically distinct periods of time are identified and the large-scale causes of their different character are discussed. The first period was characterised by significant variability associated with synoptic-scale systems affecting the SEP, while the two subsequent phases were affected by planetary-scale disturbances with a slower evolution. The changes between the initial and later periods can be partly explained by the regular march of the annual cycle, but contributions from subseasonal variability and its teleconnections were important. Across the whole of the two months under consideration we find a significant correlation between the depth of the inversion-capped marine boundary layer (MBL) and the amount of low cloud in the area of study. We discuss this correlation and argue that, at least as a crude approximation, a typical scaling may be applied relating MBL and cloud properties to the large-scale parameters of SSTs and tropospheric temperatures. These results are consistent with previously found empirical relationships involving lower-tropospheric stability.

  3. Thermal power generation projects "Large Scale Solar Heating"; EU Thermie projects "Large Scale Solar Heating"

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the "Large-Scale Solar Heating" programme for a Europe-wide development of this technology. The demonstration programme developed from it was judged favourably by the experts but was not immediately (1996) accepted for financial support. In November 1997 the EU Commission provided 1.5 million ECU, which allowed the realisation of an updated project proposal. By mid-1997 a smaller project had already been approved, which had been requested under the lead of Chalmers Industriteknik (CIT) in Sweden and mainly serves technology transfer. (orig.)

  4. Large-scale retrieval for medical image analytics: A comprehensive review.

    Science.gov (United States)

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they cannot cope with the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
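
    In its simplest form, the searching stage of such a retrieval pipeline reduces to ranking database feature vectors by similarity to a query vector. A hypothetical brute-force sketch (real large-scale systems use approximate indexing, as the review discusses; all names and dimensions here are invented for illustration):

```python
import numpy as np

def retrieve_top_k(query, index_features, k=5):
    """Rank database feature vectors by cosine similarity to the query."""
    q = query / np.linalg.norm(query)
    F = index_features / np.linalg.norm(index_features, axis=1, keepdims=True)
    sims = F @ q
    order = np.argsort(-sims)[:k]
    return order, sims[order]

# Hypothetical database of 512 images described by 32-dimensional features.
rng = np.random.default_rng(7)
features = rng.standard_normal((512, 32))
top, sims = retrieve_top_k(features[42], features, k=5)   # query = image 42
```

    Querying with a vector that is itself in the index returns that entry first with similarity 1, which is a convenient sanity check for any retrieval backend.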

  5. COMPARISON OF MULTI-SCALE DIGITAL ELEVATION MODELS FOR DEFINING WATERWAYS AND CATCHMENTS OVER LARGE AREAS

    Directory of Open Access Journals (Sweden)

    B. Harris

    2012-07-01

    Full Text Available Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single-catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also provide a consistent tool for the creation and analysis of waterways over extensive areas. However, such analyses are rarely developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km² to establish the optimal DEM scale required for waterway delineation in large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the 543 km² Wivenhoe catchment and a detailed 13 km² area within it), including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high-resolution lidar-based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad-scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
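
    Waterway delineation from a DEM typically starts with a flow-direction and flow-accumulation pass; cells whose accumulation exceeds a threshold form the stream network. A minimal D8 sketch (illustrative only; production GIS tools additionally fill sinks and resolve flat areas, which this toy ignores):

```python
import numpy as np

def d8_flow_accumulation(dem):
    """D8 flow accumulation: each cell drains to its steepest-descent
    neighbour; cells are processed from highest to lowest elevation."""
    rows, cols = dem.shape
    acc = np.ones(dem.shape)
    for idx in np.argsort(dem, axis=None)[::-1]:          # highest first
        r, c = divmod(idx, cols)
        best_drop, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best_drop:
                        best_drop, target = drop, (rr, cc)
        if target is not None:
            acc[target] += acc[r, c]
    return acc

# An eastward-tilted plane: every cell drains east, so accumulation
# grows linearly along each row.
dem = np.tile(np.arange(5, 0, -1.0), (3, 1))
acc = d8_flow_accumulation(dem)
```

    On real DEMs the cell size matters exactly as the study reports: coarser grids merge adjacent drainage paths, shifting where the accumulation threshold is crossed and thus where streams begin.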

  6. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of large-scale environments is therefore imperative for the success of such applications, since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques can recover missing or occluded texture information by integrating multiple observations captured from different optical sensors (ground, aerial, and satellite).

  7. A cross-scale approach to understand drought-induced variability of sagebrush ecosystem productivity

    Science.gov (United States)

    Assal, T.; Anderson, P. J.

    2016-12-01

    Sagebrush (Artemisia spp.) mortality has recently been reported in the Upper Green River Basin (Wyoming, USA) of the sagebrush steppe of western North America. Numerous causes have been suggested, but recent drought (2012-13) is the likely mechanism of mortality in this water-limited ecosystem which provides critical habitat for many species of wildlife. An understanding of the variability in patterns of productivity with respect to climate is essential to exploit landscape scale remote sensing for detection of subtle changes associated with mortality in this sparse, uniformly vegetated ecosystem. We used the standardized precipitation index to characterize drought conditions and Moderate Resolution Imaging Spectroradiometer (MODIS) satellite imagery (250-m resolution) to characterize broad characteristics of growing season productivity. We calculated per-pixel growing season anomalies over a 16-year period (2000-2015) to identify the spatial and temporal variability in productivity. Metrics derived from Landsat satellite imagery (30-m resolution) were used to further investigate trends within anomalous areas at local scales. We found evidence to support an initial hypothesis that antecedent winter drought was most important in explaining reduced productivity. The results indicate drought effects were inconsistent over space and time. MODIS derived productivity deviated by more than four standard deviations in heavily impacted areas, but was well within the interannual variability in other areas. Growing season anomalies highlighted dramatic declines in productivity during the 2012 and 2013 growing seasons. However, large negative anomalies persisted in other areas during the 2014 growing season, indicating lag effects of drought. We are further investigating if the reduction in productivity is mediated by local biophysical properties. Our analysis identified spatially explicit patterns of ecosystem properties altered by severe drought which are consistent with
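
    The per-pixel growing-season anomalies described above can be computed as z-scores against each pixel's own record, so departures are expressed in standard deviations, matching the four-standard-deviation declines reported for heavily impacted areas. A generic sketch with an illustrative array shape (not the MODIS data themselves):

```python
import numpy as np

def per_pixel_anomalies(series):
    """series: (years, rows, cols) of growing-season productivity values.
    Returns z-score anomalies relative to each pixel's own record."""
    mean = series.mean(axis=0)
    std = series.std(axis=0, ddof=1)
    return (series - mean) / np.where(std > 0, std, np.nan)
```

    A pixel-year with an anomaly below −4 would correspond to the heavily impacted areas in the study, while values within ±1 stay inside ordinary interannual variability.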

  8. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering minimal information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
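
    The prototype-based low-rank kernel approximation at the heart of this approach can be sketched in Nyström form, K ≈ C W⁺ Cᵀ, where C holds kernel values between all n points and the m prototypes and W is the m×m prototype-prototype block (illustrative; the paper additionally selects prototypes to limit model information loss, which this sketch omits):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def prototype_kernel_approx(X, prototypes, gamma=1.0):
    """Nystrom-style low-rank approximation K ~= C @ pinv(W) @ C.T,
    costing O(n*m) kernel evaluations instead of O(n**2)."""
    C = rbf_kernel(X, prototypes, gamma)            # n x m
    W = rbf_kernel(prototypes, prototypes, gamma)   # m x m
    return C @ np.linalg.pinv(W) @ C.T
```

    When the prototype set is the full data set the approximation is exact (K K⁺ K = K), and shrinking it trades accuracy for the O(n·m) scaling that makes graph-based SSL tractable at large n.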

  9. Spatial scales of pollution from variable resolution satellite imaging.

    Science.gov (United States)

    Chudnovsky, Alexandra A; Kostinski, Alex; Lyapustin, Alexei; Koutrakis, Petros

    2013-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) provides daily global coverage, but the 10 km resolution of its aerosol optical depth (AOD) product is not adequate for studying the spatial variability of aerosols in urban areas. Recently, a new Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm was developed for MODIS which provides AOD at 1 km resolution. Using MAIAC data, the relationship between MAIAC AOD and PM2.5 as measured by the EPA ground monitoring stations was investigated at varying spatial scales. Our analysis suggested that the correlation between PM2.5 and AOD decreased significantly as the AOD resolution was degraded. This is so despite the intrinsic mismatch between ground-level PM2.5 measurements and vertically integrated AOD measurements. Furthermore, the fine-resolution results indicated spatial variability in particle concentration at a sub-10 km scale. Finally, this spatial variability of AOD within the urban domain was shown to depend on PM2.5 levels and wind speed. Copyright © 2012 Elsevier Ltd. All rights reserved.
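
    The resolution effect reported above can be mimicked by block-averaging a fine field and re-correlating it with point observations: coarse cells discard sub-grid variance, so the correlation drops. A purely synthetic sketch (invented fields, not the MAIAC or EPA data):

```python
import numpy as np

def block_average(field, f):
    """Degrade a 2D field's resolution by averaging f x f blocks."""
    r, c = field.shape
    return (field[:r - r % f, :c - c % f]
            .reshape(r // f, f, c // f, f).mean(axis=(1, 3)))

rng = np.random.default_rng(2)
pm = rng.standard_normal((64, 64))               # fine-scale "ground truth"
aod = pm + 0.2 * rng.standard_normal((64, 64))   # collocated fine-scale proxy

corr_fine = np.corrcoef(pm.ravel(), aod.ravel())[0, 1]
aod_coarse = np.repeat(np.repeat(block_average(aod, 8), 8, axis=0), 8, axis=1)
corr_coarse = np.corrcoef(pm.ravel(), aod_coarse.ravel())[0, 1]
# corr_fine stays high; corr_coarse collapses as sub-grid variance is lost
```

    The toy exaggerates the effect because all of its variance is sub-grid, but the direction of the change matches the study's finding at 1 km versus 10 km.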

  10. How well do the GCMs/RCMs capture the multi-scale temporal variability of precipitation in the Southwestern United States?

    Science.gov (United States)

    Jiang, Peng; Gautam, Mahesh R.; Zhu, Jianting; Yu, Zhongbo

    2013-02-01

    Multi-scale temporal variability of precipitation has an established relationship with floods and droughts. In this paper, we present diagnostics of the ability of 16 General Circulation Models (GCMs) from the Bias Corrected and Downscaled (BCSD) World Climate Research Programme (WCRP) Coupled Model Intercomparison Project Phase 3 (CMIP3) projections and 10 Regional Climate Models (RCMs) that participated in the North American Regional Climate Change Assessment Program (NARCCAP) to represent the multi-scale temporal variability determined from observed station data. Four regions (Los Angeles, Las Vegas, Tucson, and Cimarron) in the Southwestern United States are selected, as they represent four different precipitation regions identified by a clustering method. We investigate how storm properties and seasonal, inter-annual, and decadal precipitation variability differ between GCMs/RCMs and observed records in these regions. We find that current GCMs/RCMs tend to simulate longer storm durations and lower storm intensities than observed. Most GCMs/RCMs fail to produce the high-intensity summer storms caused by local convective heat transport associated with the summer monsoon. Both inter-annual and decadal bands are present in the GCM/RCM-simulated precipitation time series; however, these do not align with the patterns of large-scale ocean oscillations such as the El Niño/La Niña Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO). Our results show that the studied GCMs/RCMs can capture long-term monthly means, as the examined data are bias-corrected and downscaled, but fail to simulate the multi-scale precipitation variability, including flood-generating extreme events, which suggests their inadequacy for studies on floods and droughts that are strongly associated with multi-scale temporal precipitation variability.

  11. Antipersistent dynamics in short time scale variability of self-potential signals

    Directory of Open Access Journals (Sweden)

    M. Ragosta

    2000-06-01

    Full Text Available Time-scale properties of self-potential signals are investigated through analysis of the second-order structure function (variogram), a powerful tool for investigating the spatial and temporal variability of observational data. In this work we analyse two sequences of self-potential values measured by a geophysical monitoring array located in a seismically active area of Southern Italy. The range of scales investigated goes from a few minutes to several days. It is shown that signal fluctuations are characterised by two time-scale ranges in which self-potential variability appears to follow slightly different dynamical behaviours. Results point to the presence of fractal, non-stationary features expressing a long-term correlation, with scaling coefficients that point to stabilising mechanisms. In the scale ranges in which the series show scale-invariant behaviour, self-potentials evolve like fractional Brownian motions with anticorrelated increments, typical of processes regulated by negative feedback mechanisms (antipersistence). On scales below about 6 h the strength of this antipersistence appears to be slightly greater than that observed on larger time scales, where the fluctuations are less efficiently stabilised.
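
    The analysis above rests on the second-order structure function S₂(τ) = ⟨(x(t+τ) − x(t))²⟩; for fractional Brownian motion S₂(τ) ∝ τ^{2H}, and H < 0.5 signals the antipersistence reported. A generic sketch (synthetic series, not the self-potential records) that recovers H ≈ 0.5 for an ordinary random walk with uncorrelated increments:

```python
import numpy as np

def structure_function(x, lags):
    """Second-order structure function S2(tau) = mean((x[t+tau] - x[t])**2)."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(100_000))   # Brownian motion, H = 0.5
lags = np.array([1, 2, 4, 8, 16, 32, 64])
s2 = structure_function(x, lags)
H = np.polyfit(np.log(lags), np.log(s2), 1)[0] / 2.0
```

    An antipersistent series would yield H below 0.5 on this fit, and a break in the log-log slope at some τ would mark the kind of two-regime behaviour the study finds near 6 h.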

  12. Global Wildfire Forecasts Using Large Scale Climate Indices

    Science.gov (United States)

    Shen, Huizhong; Tao, Shu

    2016-04-01

    Using weather readings, fire early warning can provide forecasts 4-6 hours in advance to minimize fire losses. The benefit would be dramatically enhanced if relatively accurate long-term projections could also be provided. Here we present a novel method for predicting global fire season severity (FSS) at least three months in advance using multiple large-scale climate indices (CIs). The predictive ability proves effective across geographic locations and resolutions. Globally, as well as in most continents, the El Niño Southern Oscillation (ENSO) is the dominant driving force controlling interannual FSS variability, whereas other CIs also play indispensable roles. We found that a moderate El Niño event is responsible for the release of 465 (interquartile range 272-658) Tg of carbon and an annual increase of 29,500 (24,500-34,800) deaths from inhalation exposure to air pollutants. Southeast Asia accounts for half of the deaths. Both the intercorrelation and the interaction of WPs and CIs are revealed, suggesting possible climate-induced modification of fire responses to weather conditions. Our models can benefit fire management in response to climate change.

  13. Local-scale changes in mean and heavy precipitation in Western Europe, climate change or internal variability?

    Science.gov (United States)

    Aalbers, Emma E.; Lenderink, Geert; van Meijgaard, Erik; van den Hurk, Bart J. J. M.

    2017-09-01

    High-resolution climate information provided by e.g. regional climate models (RCMs) is valuable for exploring the changing weather under global warming, and assessing the local impact of climate change. While there is generally more confidence in the representativeness of simulated processes at higher resolutions, internal variability of the climate system—`noise', intrinsic to the chaotic nature of atmospheric and oceanic processes—is larger at smaller spatial scales as well, limiting the predictability of the climate signal. To quantify the internal variability and robustly estimate the climate signal, large initial-condition ensembles of climate simulations conducted with a single model provide essential information. We analyze a regional downscaling of a 16-member initial-condition ensemble over western Europe and the Alps at 0.11° resolution, similar to the highest resolution EURO-CORDEX simulations. We examine the strength of the forced climate response (signal) in mean and extreme daily precipitation with respect to noise due to internal variability, and find robust small-scale geographical features in the forced response, indicating regional differences in changes in the probability of events. However, individual ensemble members provide only limited information on the forced climate response, even for high levels of global warming. Although the results are based on a single RCM-GCM chain, we believe that they have general value in providing insight in the fraction of the uncertainty in high-resolution climate information that is irreducible, and can assist in the correct interpretation of fine-scale information in multi-model ensembles in terms of a forced response and noise due to internal variability.

  14. Local-scale changes in mean and heavy precipitation in Western Europe, climate change or internal variability?

    Science.gov (United States)

    Aalbers, Emma E.; Lenderink, Geert; van Meijgaard, Erik; van den Hurk, Bart J. J. M.

    2018-06-01

    High-resolution climate information provided by e.g. regional climate models (RCMs) is valuable for exploring the changing weather under global warming, and assessing the local impact of climate change. While there is generally more confidence in the representativeness of simulated processes at higher resolutions, internal variability of the climate system—`noise', intrinsic to the chaotic nature of atmospheric and oceanic processes—is larger at smaller spatial scales as well, limiting the predictability of the climate signal. To quantify the internal variability and robustly estimate the climate signal, large initial-condition ensembles of climate simulations conducted with a single model provide essential information. We analyze a regional downscaling of a 16-member initial-condition ensemble over western Europe and the Alps at 0.11° resolution, similar to the highest resolution EURO-CORDEX simulations. We examine the strength of the forced climate response (signal) in mean and extreme daily precipitation with respect to noise due to internal variability, and find robust small-scale geographical features in the forced response, indicating regional differences in changes in the probability of events. However, individual ensemble members provide only limited information on the forced climate response, even for high levels of global warming. Although the results are based on a single RCM-GCM chain, we believe that they have general value in providing insight in the fraction of the uncertainty in high-resolution climate information that is irreducible, and can assist in the correct interpretation of fine-scale information in multi-model ensembles in terms of a forced response and noise due to internal variability.
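
    In single-model initial-condition ensembles like the one above, the forced response is commonly estimated as the ensemble mean and internal variability as the across-member spread. A minimal sketch with a synthetic 16-member ensemble (illustrative shapes and magnitudes, not the EURO-CORDEX data):

```python
import numpy as np

def forced_response_and_noise(ens):
    """ens: (members, time). The ensemble mean approximates the forced
    signal; the across-member standard deviation measures internal
    variability (noise)."""
    return ens.mean(axis=0), ens.std(axis=0, ddof=1)

rng = np.random.default_rng(4)
trend = np.linspace(0.0, 2.0, 100)              # common forced signal
ens = trend + rng.standard_normal((16, 100))    # + member-specific noise
signal, noise = forced_response_and_noise(ens)
```

    Averaging 16 members shrinks the noise in the signal estimate by a factor of 4 (√16), which is why a single member, as the abstract notes, carries only limited information about the forced response.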

  15. Directed partial correlation: inferring large-scale gene regulatory network through induced topology disruptions.

    Directory of Open Access Journals (Sweden)

    Yinyin Yuan

    Full Text Available Inferring regulatory relationships among many genes based on their temporal variation in transcript abundance has been a popular research topic. Due to the nature of microarray experiments, classical tools for time series analysis lose power, since the number of variables far exceeds the number of samples. In this paper, we describe some of the existing multivariate inference techniques that are applicable to hundreds of variables and show the potential challenges for small-sample, large-scale data. We propose a directed partial correlation (DPC) method as an efficient and effective solution to regulatory network inference using these data. Specifically for genomic data, the proposed method is designed to deal with large-scale datasets. It combines the efficiency of partial correlation for setting up network topology by testing conditional independence, and the concept of Granger causality to assess topology change with induced interruptions. The idea is that when a transcription factor is induced artificially within a gene network, the disruption of the network by the induction signifies a gene's role in transcriptional regulation. The benchmarking results using GeneNetWeaver, the simulator for the DREAM challenges, provide strong evidence of the outstanding performance of the proposed DPC method. When applied to real biological data, the inferred starch metabolism network in Arabidopsis reveals many biologically meaningful network modules worthy of further investigation. These results collectively suggest that DPC is a versatile tool for genomics research. The R package DPC is available for download (http://code.google.com/p/dpcnet/).
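
    The partial-correlation step that sets up the network topology can be sketched from the precision (inverse covariance) matrix: ρ_ij|rest = −P_ij/√(P_ii P_jj), which vanishes exactly when variables i and j are conditionally independent given the rest, as in a regulatory chain x → y → z. An illustrative Gaussian toy (not the GeneNetWeaver benchmark or the DPC package itself):

```python
import numpy as np

def partial_correlation(X):
    """X: (samples, variables). Partial correlation of each pair of
    variables given all the others, via the precision matrix."""
    P = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(P))
    pc = -P / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

rng = np.random.default_rng(5)
n = 20_000
x = rng.standard_normal(n)
y = x + 0.5 * rng.standard_normal(n)      # chain: x -> y -> z
z = y + 0.5 * rng.standard_normal(n)
pc = partial_correlation(np.column_stack([x, y, z]))
# pc[0, 2] ~ 0: x and z are conditionally independent given y,
# even though their ordinary correlation is strong
```

    Plain correlation would link x and z directly; the near-zero partial correlation correctly prunes that indirect edge, which is the property DPC builds on before adding the Granger-causal, induction-based direction tests.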

  16. Millennial- to century-scale variability in Gulf of Mexico Holocene climate records

    Science.gov (United States)

    Poore, R.Z.; Dowsett, H.J.; Verardo, S.; Quinn, T.M.

    2003-01-01

    Proxy records from two piston cores in the Gulf of Mexico (GOM) provide a detailed (50-100 year resolution) record of climate variability over the last 14,000 years. Long-term (millennial-scale) trends and changes are related to the transition from glacial to interglacial conditions and movement of the average position of the Intertropical Convergence Zone (ITCZ) related to orbital forcing. The δ18O of the surface-dwelling planktic foraminifer Globigerinoides ruber shows negative excursions between 14 and 10.2 ka (radiocarbon years) that reflect the influx of meltwater into the western GOM during melting of the Laurentide Ice Sheet. The relative abundance of the planktic foraminifer Globigerinoides sacculifer is related to transport of Caribbean water into the GOM. Maximum transport of Caribbean surface waters and moisture into the GOM associated with a northward migration of the average position of the ITCZ occurs between about 6.5 and 4.5 ka. In addition, abundance variations of G. sacculifer show century-scale variability throughout most of the Holocene. The GOM record is consistent with records from other areas, suggesting that century-scale variability is a pervasive feature of Holocene climate. The frequency of several cycles in the climate records is similar to cycles identified in proxy records of solar variability, indicating that at least some of the century-scale climate variability during the Holocene is due to external (solar) forcing.

  17. Disclosure Control using Partially Synthetic Data for Large-Scale Health Surveys, with Applications to CanCORS

    OpenAIRE

    Loong, Bronwyn; Zaslavsky, Alan M.; He, Yulei; Harrington, David P.

    2013-01-01

    Statistical agencies have begun to partially synthesize public-use data for major surveys to protect the confidentiality of respondents’ identities and sensitive attributes, by replacing high disclosure risk and sensitive variables with multiple imputations. To date, there are few applications of synthetic data techniques to large-scale healthcare survey data. Here, we describe partial synthesis of survey data collected by CanCORS, a comprehensive observational study of the experiences, treat...

  18. Accelerating Relevance Vector Machine for Large-Scale Data on Spark

    Directory of Open Access Journals (Sweden)

    Liu Fang

    2017-01-01

    Full Text Available Relevance vector machine (RVM) is a machine learning algorithm based on a sparse Bayesian framework, which performs well when running classification and regression tasks on small-scale datasets. However, RVM also has certain drawbacks which restrict its practical applications, such as (1) a slow training process, and (2) poor performance when training on large-scale datasets. In order to solve these problems, we first propose Discrete AdaBoost RVM (DAB-RVM), which incorporates ensemble learning into RVM. This method performs well with large-scale low-dimensional datasets. However, as the number of features increases, the training time of DAB-RVM increases as well. To avoid this, we exploit the abundance of training samples in large-scale datasets and propose all-features boosting RVM (AFB-RVM), which modifies the way weak classifiers are obtained. In our experiments we study the differences between various boosting techniques with RVM, demonstrating the performance of the proposed approaches on Spark. As a result of this paper, two proposed approaches on Spark for different types of large-scale datasets are available.
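
The discrete AdaBoost wrapper described above can be illustrated generically. The sketch below uses decision stumps as weak learners in place of RVMs, and runs on a single machine rather than Spark; it shows only the boosting loop (sample reweighting and weighted voting), not the paper's DAB-RVM or AFB-RVM. The toy dataset is made up.

```python
import numpy as np

def stump_train(X, y, w):
    """Best threshold stump under sample weights w; labels y in {-1, +1}."""
    best = (0, 0.0, 1, np.inf)  # (feature, threshold, polarity, weighted error)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(s * (X[:, j] - t) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, s, err)
    return best

def adaboost(X, y, rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        j, t, s, err = stump_train(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weak learner's vote weight
        pred = np.where(s * (X[:, j] - t) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)          # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(s * (X[:, j] - t) > 0, 1, -1)
                for a, j, t, s in ensemble)
    return np.where(score > 0, 1, -1)

# Toy data with a diagonal boundary no single axis-aligned stump can match
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
model = adaboost(X, y, rounds=40)
acc = (predict(model, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The boosted staircase of stumps approximates the diagonal boundary far better than any individual weak learner, which is the effect the paper relies on when combining weak RVM classifiers.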

  19. FFTLasso: Large-Scale LASSO in the Fourier Domain

    KAUST Repository

    Bibi, Adel Aamer

    2017-11-09

    In this paper, we revisit the LASSO sparse representation problem, which has been studied and used in a variety of different areas, ranging from signal processing and information theory to computer vision and machine learning. In the vision community, it found its way into many important applications, including face recognition, tracking, super resolution, image denoising, to name a few. Despite advances in efficient sparse algorithms, solving large-scale LASSO problems remains a challenge. To circumvent this difficulty, people tend to downsample and subsample the problem (e.g. via dimensionality reduction) to maintain a manageable sized LASSO, which usually comes at the cost of losing solution accuracy. This paper proposes a novel circulant reformulation of the LASSO that lifts the problem to a higher dimension, where ADMM can be efficiently applied to its dual form. Because of this lifting, all optimization variables are updated using only basic element-wise operations, the most computationally expensive of which is a 1D FFT. In this way, there is no need for a linear system solver nor matrix-vector multiplication. Since all operations in our FFTLasso method are element-wise, the subproblems are completely independent and can be trivially parallelized (e.g. on a GPU). The attractive computational properties of FFTLasso are verified by extensive experiments on synthetic and real data and on the face recognition task. They demonstrate that FFTLasso scales much more effectively than a state-of-the-art solver.
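
The core trick that makes FFTLasso's updates element-wise is that circulant matrices are diagonalized by the DFT: a circulant matrix-vector product is a circular convolution, computable with two FFTs and an inverse FFT. A minimal numpy check of this identity (not the FFTLasso solver itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
c = rng.normal(size=n)          # first column of the circulant matrix C
x = rng.normal(size=n)

# Explicit circulant matrix: column j is c cyclically shifted down by j
C = np.column_stack([np.roll(c, j) for j in range(n)])

# Since the DFT diagonalizes C, the O(n^2) product C @ x reduces to
# element-wise products of FFTs -- the operation behind FFTLasso's
# element-wise ADMM updates.
y_fft = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

print("max abs difference:", np.max(np.abs(C @ x - y_fft)))
```

The FFT route costs O(n log n) instead of O(n²), and because every frequency component is updated independently, the work parallelizes trivially, e.g. on a GPU as the paper notes.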

  1. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over the traditional approaches using simulations and OMICS data analysis.
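
The hierarchical model itself is not reproduced here, but the overfitting problem it addresses is easy to demonstrate: when variables outnumber samples, the sample covariance is noisy and rank-deficient, and even a crude linear shrinkage toward a scaled identity (a simple non-Bayesian stand-in for the regularizing effect of a prior) reduces estimation error. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 50, 20                     # more variables than samples, as in omics data
true_cov = np.eye(p)
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

S = np.cov(X, rowvar=False)       # sample covariance: rank-deficient, high variance

# Linear shrinkage toward a scaled identity target
lam = 0.5
target = np.trace(S) / p * np.eye(p)
S_shrunk = (1 - lam) * S + lam * target

def frob_error(A):
    """Frobenius-norm distance to the known true covariance."""
    return np.linalg.norm(A - true_cov)

print("sample cov error:", frob_error(S))
print("shrunk cov error:", frob_error(S_shrunk))
```

The shrinkage estimator trades a little bias for a large reduction in variance, which is the same trade the hierarchical prior makes in a principled, data-driven way.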

  2. Spatiotemporal dynamics of large-scale brain activity

    Science.gov (United States)

    Neuman, Jeremy

    Understanding the dynamics of large-scale brain activity is a tough challenge. One reason for this is the presence of an incredible amount of complexity arising from having roughly 100 billion neurons connected via 100 trillion synapses. Because of the extremely high number of degrees of freedom in the nervous system, the question of how the brain manages to properly function and remain stable, yet also be adaptable, must be posed. Neuroscientists have identified many ways the nervous system makes this possible, of which synaptic plasticity is possibly the most notable one. On the other hand, it is vital to understand how the nervous system also loses stability, resulting in neuropathological diseases such as epilepsy, a disease which affects 1% of the population. In the following work, we seek to answer some of these questions from two different perspectives. The first uses mean-field theory applied to neuronal populations, where the variables of interest are the percentages of active excitatory and inhibitory neurons in a network, to consider how the nervous system responds to external stimuli, self-organizes and generates epileptiform activity. The second method uses statistical field theory, in the framework of single neurons on a lattice, to study the concept of criticality, an idea borrowed from physics which posits that in some regime the brain operates in a collectively stable or marginally stable manner. This will be examined in two different neuronal networks with self-organized criticality serving as the overarching theme for the union of both perspectives. One of the biggest problems in neuroscience is the question of to what extent certain details are significant to the functioning of the brain. These details give rise to various spatiotemporal properties that at the smallest of scales explain the interaction of single neurons and synapses and at the largest of scales describe, for example, behaviors and sensations. In what follows, we will shed some
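
The first, mean-field perspective can be illustrated with Wilson-Cowan-type population equations, in which the state variables are the active fractions of excitatory and inhibitory neurons. The sketch below integrates one such pair of equations with illustrative parameter values; it is a generic example of the approach, not the specific model used in this work.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Wilson-Cowan-style mean-field equations for the active fractions E, I of
# excitatory and inhibitory populations (coupling weights and thresholds
# below are illustrative, not fitted values)
w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0
theta_e, theta_i = 3.0, 4.0
tau, dt, steps = 10.0, 0.1, 5000

E, I = 0.1, 0.1
trace = []
for _ in range(steps):
    # E is driven by recurrent excitation minus inhibition; I by excitation
    dE = (-E + sigmoid(w_ee * E - w_ei * I - theta_e)) / tau
    dI = (-I + sigmoid(w_ie * E - w_ii * I - theta_i)) / tau
    E += dt * dE
    I += dt * dI
    trace.append(E)

print(f"final excitatory fraction: {E:.3f}")
```

Depending on the coupling weights, such a system settles to a fixed point or oscillates; runaway excitation in this reduced picture is one caricature of the transition to epileptiform activity studied in the thesis.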

  3. Model-based plant-wide optimization of large-scale lignocellulosic bioethanol plants

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail; Blanke, Mogens; Jakobsen, Jon Geest

    2017-01-01

    Second generation biorefineries transform lignocellulosic biomass into chemicals with higher added value following a conversion mechanism that consists of: pretreatment, enzymatic hydrolysis, fermentation and purification. The objective of this study is to identify the optimal operational point… with respect to maximum economic profit of a large scale biorefinery plant using a systematic model-based plantwide optimization methodology. The following key process parameters are identified as decision variables: pretreatment temperature, enzyme dosage in enzymatic hydrolysis, and yeast loading per batch… in fermentation. The plant is treated in an integrated manner taking into account the interactions and trade-offs between the conversion steps. A sensitivity and uncertainty analysis follows at the optimal solution considering both model and feed parameters. It is found that the optimal point is more sensitive…

  4. Performance of automatic generation control mechanisms with large-scale wind power

    Energy Technology Data Exchange (ETDEWEB)

    Ummels, B.C.; Gibescu, M.; Paap, G.C. [Delft Univ. of Technology (Netherlands); Kling, W.L. [Transmission Operations Department of TenneT bv (Netherlands)

    2007-11-15

    The unpredictability and variability of wind power increasingly challenge real-time balancing of supply and demand in electric power systems. In liberalised markets, balancing is a responsibility jointly held by the transmission system operator (TSO, real-time power balancing) and programme-responsible parties (PRPs, energy programmes). In this paper, a procedure is developed for the simulation of power system balancing and the assessment of automatic generation control (AGC) performance in the presence of large-scale wind power, using the Dutch control zone as a case study. The simulation results show that the performance of existing AGC mechanisms is adequate for keeping the area control error (ACE) within acceptable bounds. At higher wind power penetrations, however, the capabilities of the generation mix are increasingly challenged and additional reserves are required at the same level.
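
The quantity AGC regulates, the area control error, combines the tie-line power deviation with a frequency-bias term. Sign conventions differ between synchronous areas; the sketch below uses the UCTE/ENTSO-E form ACE = ΔP_tie + K·Δf, and all numbers are illustrative rather than taken from the paper.

```python
def area_control_error(p_tie_actual_mw, p_tie_sched_mw,
                       f_actual_hz, f_sched_hz, k_mw_per_hz):
    """ACE in the UCTE/ENTSO-E convention: tie-line deviation plus
    frequency bias K (MW/Hz) times the frequency deviation."""
    return ((p_tie_actual_mw - p_tie_sched_mw)
            + k_mw_per_hz * (f_actual_hz - f_sched_hz))

# Example: the area exports 30 MW more than scheduled while system
# frequency is 20 mHz below nominal. With K = 1500 MW/Hz the surplus
# export exactly covers the area's share of frequency support.
ace = area_control_error(530.0, 500.0, 49.98, 50.0, 1500.0)
print(f"ACE = {ace:.1f} MW")  # -> ACE = 0.0 MW
```

An AGC controller drives this signal toward zero, typically through a PI loop acting on the setpoints of participating generators; the paper assesses how well that works as wind power penetration grows.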

  5. How spatial and temporal rainfall variability affect runoff across basin scales: insights from field observations in the (semi-)urbanised Charlotte watershed

    Science.gov (United States)

    Ten Veldhuis, M. C.; Smith, J. A.; Zhou, Z.

    2017-12-01

    Impacts of rainfall variability on runoff response are highly scale-dependent. Sensitivity analyses based on hydrological model simulations have shown that impacts are likely to depend on combinations of storm type, basin versus storm scale, and temporal versus spatial rainfall variability. So far, few of these conclusions have been confirmed on observational grounds, since high-quality datasets of spatially variable rainfall and runoff over prolonged periods are rare. Here we investigate relationships between rainfall variability and runoff response based on 30 years of radar-rainfall datasets and flow measurements for 16 hydrological basins ranging from 7 to 111 km². Basins vary not only in scale, but also in their degree of urbanisation. We investigated temporal and spatial variability characteristics of rainfall fields across a range of spatial and temporal scales to identify the main drivers of variability in runoff response. We identified three ranges of basin size with different temporal versus spatial rainfall variability characteristics. Total rainfall volume proved to be the dominant agent determining runoff response at all basin scales, independent of the degree of urbanisation. Peak rainfall intensity and storm core volume are of secondary importance. This applies to all runoff parameters, including runoff volume, runoff peak, volume-to-peak and lag time. Position and movement of the storm with respect to the basin have a negligible influence on runoff response, with the exception of lag times in some of the larger basins. This highlights the importance of accuracy in rainfall estimation: getting the position right but the volume wrong will inevitably lead to large errors in runoff prediction. Our study helps to identify conditions where rainfall variability matters for correct estimation of the rainfall volume as well as the associated runoff response.

  6. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  7. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  8. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. While there are simplifying factors in these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly on the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities which lead to the formation of large eddies is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively parallel computational resources are likely to be necessary in order to adequately address the complex coupled phenomena to the level of detail that is necessary.

  9. Closing the sea level budget on a regional scale: Trends and variability on the Northwestern European continental shelf.

    Science.gov (United States)

    Frederikse, Thomas; Riva, Riccardo; Kleinherenbrink, Marcel; Wada, Yoshihide; van den Broeke, Michiel; Marzeion, Ben

    2016-10-28

    Long-term trends and decadal variability of sea level in the North Sea and along the Norwegian coast have been studied over the period 1958-2014. We model the spatially nonuniform sea level and solid earth response to large-scale ice melt and terrestrial water storage changes. GPS observations, corrected for the solid earth deformation, are used to estimate vertical land motion. We find a clear correlation between sea level in the North Sea and along the Norwegian coast and open ocean steric variability in the Bay of Biscay and west of Portugal, which is consistent with the presence of wind-driven coastally trapped waves. The observed nodal cycle is consistent with tidal equilibrium. We are able to explain the observed sea level trend over the period 1958-2014 well within the standard error of the sum of all contributing processes, as well as the large majority of the observed decadal sea level variability.

  10. Large-Scale Mapping and Predictive Modeling of Submerged Aquatic Vegetation in a Shallow Eutrophic Lake

    Directory of Open Access Journals (Sweden)

    Karl E. Havens

    2002-01-01

    Full Text Available A spatially intensive sampling program was developed for mapping the submerged aquatic vegetation (SAV) over an area of approximately 20,000 ha in a large, shallow lake in Florida, U.S. The sampling program integrates Geographic Information System (GIS) technology with traditional field sampling of SAV and has the capability of producing robust vegetation maps under a wide range of conditions, including high turbidity, variable depth (0 to 2 m), and variable sediment types. Based on sampling carried out in August–September 2000, we measured 1,050 to 4,300 ha of vascular SAV species and approximately 14,000 ha of the macroalga Chara spp. The results were similar to those reported in the early 1990s, when the last large-scale SAV sampling occurred. Occurrence of Chara was strongly associated with peat sediments, and maximal depths of occurrence varied between sediment types (mud, sand, rock, and peat). A simple model of Chara occurrence, based only on water depth, had an accuracy of 55%. It predicted occurrence of Chara over large areas where the plant actually was not found. A model based on sediment type and depth had an accuracy of 75% and produced a spatial map very similar to that based on observations. While this approach needs to be validated with independent data in order to test its general utility, we believe it may have application elsewhere. The simple modeling approach could serve as a coarse-scale tool for evaluating effects of water level management on Chara populations.

  11. Large-scale mapping and predictive modeling of submerged aquatic vegetation in a shallow eutrophic lake.

    Science.gov (United States)

    Havens, Karl E; Harwell, Matthew C; Brady, Mark A; Sharfstein, Bruce; East, Therese L; Rodusky, Andrew J; Anson, Daniel; Maki, Ryan P

    2002-04-09

    A spatially intensive sampling program was developed for mapping the submerged aquatic vegetation (SAV) over an area of approximately 20,000 ha in a large, shallow lake in Florida, U.S. The sampling program integrates Geographic Information System (GIS) technology with traditional field sampling of SAV and has the capability of producing robust vegetation maps under a wide range of conditions, including high turbidity, variable depth (0 to 2 m), and variable sediment types. Based on sampling carried out in August-September 2000, we measured 1,050 to 4,300 ha of vascular SAV species and approximately 14,000 ha of the macroalga Chara spp. The results were similar to those reported in the early 1990s, when the last large-scale SAV sampling occurred. Occurrence of Chara was strongly associated with peat sediments, and maximal depths of occurrence varied between sediment types (mud, sand, rock, and peat). A simple model of Chara occurrence, based only on water depth, had an accuracy of 55%. It predicted occurrence of Chara over large areas where the plant actually was not found. A model based on sediment type and depth had an accuracy of 75% and produced a spatial map very similar to that based on observations. While this approach needs to be validated with independent data in order to test its general utility, we believe it may have application elsewhere. The simple modeling approach could serve as a coarse-scale tool for evaluating effects of water level management on Chara populations.
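
The kind of rule-based presence model and accuracy comparison described here is straightforward to sketch. The code below uses made-up synthetic data, not the lake survey data, merely to show how a depth-only rule can be outscored by a depth-plus-sediment rule when presence is in fact tied to sediment type; all thresholds and probabilities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic survey: depth (m) and sediment type per sampling site; Chara
# presence is generated to favor peat sediment in shallow water.
n = 1000
depth = rng.uniform(0.0, 2.0, size=n)
sediment = rng.choice(["mud", "sand", "rock", "peat"], size=n)
p_present = np.where((sediment == "peat") & (depth < 1.5), 0.9, 0.15)
present = rng.random(n) < p_present

# Model 1: depth only
pred_depth = depth < 1.5
# Model 2: depth plus sediment type
pred_both = (depth < 1.5) & (sediment == "peat")

for name, pred in [("depth only", pred_depth), ("depth + sediment", pred_both)]:
    acc = (pred == present).mean()
    print(f"{name}: accuracy {acc:.2f}")
```

As in the paper, the depth-only rule over-predicts presence across large areas where the plant is absent, so adding the sediment covariate raises accuracy substantially.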

  12. On unravelling mechanism of interplay between cloud and large scale circulation: a grey area in climate science

    Science.gov (United States)

    De, S.; Agarwal, N. K.; Hazra, Anupam; Chaudhari, Hemantkumar S.; Sahai, A. K.

    2018-04-01

    The interaction between cloud and large-scale circulation is a much less explored area of climate science. Unfolding the mechanism of coupling between these two parameters is imperative for improved simulation of the Indian summer monsoon (ISM) and for reducing imprecision in the climate sensitivity of global climate models. This work explores this mechanism with CFSv2 climate model experiments in which cloud has been modified by changing the critical relative humidity (CRH) profile of the model during the ISM. The study reveals that the variable CRH in CFSv2 improves the nonlinear interactions between high- and low-frequency oscillations in the wind field (revealed as internal dynamics of the monsoon) and realistically modulates the spatial distribution of interactions over the Indian landmass during contrasting monsoon seasons, compared with the existing CRH profile of CFSv2. The lower-tropospheric wind error energy in the variable-CRH simulation of CFSv2 appears to be minimal due to the reduced nonlinear convergence of error into the planetary-scale range from the long and synoptic scales (another facet of internal dynamics), compared with the other CRH experiments in normal and deficient monsoons. Hence, the interplay between cloud and large-scale circulation through CRH may be manifested as a change in the internal dynamics of the ISM, revealed through scale-interactive quasi-linear and nonlinear kinetic energy exchanges in the frequency as well as the wavenumber domain during the monsoon period, which eventually modify the internal variance of the CFSv2 model. Conversely, the reduced wind bias and proper modulation of the spatial distribution of scale interaction between the synoptic and low-frequency oscillations improve the eastward and northward extent of water vapour flux over the Indian landmass, which in turn feeds back into realistic simulation of cloud condensates, improving ISM rainfall in CFSv2.

  13. Decentralised stabilising controllers for a class of large-scale linear ...

    Indian Academy of Sciences (India)

    subsystems resulting from a new aggregation-decomposition technique. The method has been illustrated through a numerical example of a large-scale linear system consisting of three subsystems each of the fourth order. Keywords. Decentralised stabilisation; large-scale linear systems; optimal feedback control; algebraic ...

  14. Large Scale Survey Data in Career Development Research

    Science.gov (United States)

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  15. The scales of variability of stream fish assemblage at tributary confluences

    Directory of Open Access Journals (Sweden)

    István Czeglédi

    2015-12-01

    Full Text Available Tributary confluences play an important role in the dispersal of organisms and, consequently, in shaping regional-scale diversity in stream networks. Despite their importance in dispersal processes, little is known about how ecological assemblages are organized in these habitats. We studied the scales of variability of stream fish assemblages over three seasons using a hierarchical sampling design, which incorporated three tributaries, three sites at the mouth of each tributary, and four sampling units at each site. We found strong scale-dependent variability in species richness, composition and relative abundance. Most of the variation was accounted for by the interactive effect of season, between-stream and between-site effects, while habitat structure of the sampling units had a relatively minor role. Species richness showed a continuous decrease from the mainstem river in most cases, while species composition and relative abundance changed less consistently along the longitudinal profile. Consequently, we found not only that the junctions presented a strong filter on the species pool, but also that some species were filtered out even after passing this critical habitat bottleneck. Spatial position of the tributaries along the river also contributed to assemblage variability at the confluences. Overall, our results suggest high variability in fish assemblages across multiple scales at tributary confluences. Environmental management should take more careful account of the filtering role of tributary confluences in species dispersal, for a better understanding of patterns and processes in the branches of dendritic stream networks.

  16. Similitude and scaling of large structural elements: Case study

    Directory of Open Access Journals (Sweden)

    M. Shehadeh

    2015-06-01

    Full Text Available Scaled-down models are widely used for experimental investigations of large structures due to the limited capacities of testing facilities and the expense of experimentation. The modeling accuracy depends upon the model material properties, fabrication accuracy and loading techniques. In the present work the Buckingham π theorem is used to develop the relations (i.e. geometry, loading and properties) between the model and a large structural element such as those present in huge existing petroleum oil drilling rigs. The model is to be designed, loaded and treated according to a set of similitude requirements that relate the model to the large structural element. Three independent scale factors, which represent the three fundamental dimensions of mass, length and time, need to be selected for designing the scaled-down model. Numerical prediction of the stress distribution within the model and its elastic deformation under steady loading is made, and the results are compared with those obtained from numerical computations on the full-scale structure. The effect of scaled-down model size and material on the accuracy of the modeling technique is thoroughly examined.
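
For the same-material, elastic-similitude case discussed above, the Buckingham π analysis reduces to a small set of scale factors. The sketch below encodes the standard result that applied loads must scale with the square of the length ratio to preserve stress; it is a generic illustration of the bookkeeping, not the paper's derivation, and the 1:10 ratio and 500 kN load are hypothetical numbers.

```python
def scale_factors(length_ratio):
    """Similitude scale factors for an elastically scaled model made of the
    same material as the prototype.

    length_ratio = L_model / L_prototype (< 1 for a scaled-down model).
    """
    lam = length_ratio
    return {
        "length": lam,
        "area": lam ** 2,
        "force": lam ** 2,   # load scaling that keeps stress equal
        "stress": 1.0,       # same material => same stress levels
        "deflection": lam,   # elastic deflection scales with length
    }

# A 1:10 scaled-down model of a rig member carrying 500 kN in the prototype:
f = scale_factors(1 / 10)
model_load_kn = 500.0 * f["force"]
print(f"model load: {model_load_kn:.1f} kN")  # -> model load: 5.0 kN
```

Because stress is preserved, strains and safety margins measured on the model transfer directly to the prototype, which is the point of designing the loading to these similitude requirements.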

  17. Large-scale preparation of hollow graphitic carbon nanospheres

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Jun; Li, Fu [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); Bai, Yu-Jun, E-mail: byj97@126.com [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); State Key laboratory of Crystal Materials, Shandong University, Jinan 250100 (China); Han, Fu-Dong; Qi, Yong-Xin; Lun, Ning [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); Lu, Xi-Feng [Lunan Institute of Coal Chemical Engineering, Jining 272000 (China)

    2013-01-15

    Hollow graphitic carbon nanospheres (HGCNSs) were synthesized on a large scale by a simple reaction between glucose and Mg at 550 °C in an autoclave. Characterization by X-ray diffraction, Raman spectroscopy and transmission electron microscopy demonstrates the formation of HGCNSs with an average diameter of about 10 nm and a wall thickness of a few graphene layers. The HGCNSs exhibit a reversible capacity of 391 mAh g⁻¹ after 60 cycles when used as anode materials for Li-ion batteries. Graphical abstract: Hollow graphitic carbon nanospheres could be prepared on a large scale by the simple reaction between glucose and Mg at 550 °C, and exhibit superior electrochemical performance to graphite. Highlights: • Hollow graphitic carbon nanospheres (HGCNSs) were prepared on a large scale at 550 °C. • The preparation is simple, effective and eco-friendly. • The in situ yielded MgO nanocrystals promote the graphitization. • The HGCNSs exhibit superior electrochemical performance to graphite.

  18. Large-scale impact cratering on the terrestrial planets

    International Nuclear Information System (INIS)

    Grieve, R.A.F.

    1982-01-01

    The crater densities on the earth and moon form the basis for a standard flux-time curve that can be used in dating unsampled planetary surfaces and constraining the temporal history of endogenic geologic processes. Abundant evidence shows not only that impact cratering was an important surface process in planetary history but also that large impact events produced effects that were crustal in scale. By way of example, it is noted that the formation of multiring basins on the early moon was as important in defining the planetary tectonic framework as plate tectonics is on the earth. Evidence from several planets suggests that the effects of very-large-scale impacts go beyond the simple formation of an impact structure and serve to localize increased endogenic activity over an extended period of geologic time. Even though no longer occurring with the frequency and magnitude of early solar system history, large-scale impact events continue to affect the local geology of the planets. 92 references

  19. Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme

    Science.gov (United States)

    Veljović, K.; Rajković, B.; Mesinger, F.

    2009-04-01

    Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large scale skill of the driver global model that is supplying its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help via the fairly popular large scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for a desirable RCM performance? Experiments are made to explore these questions running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme, and the other the Eta model scheme in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, skill of the two RCM forecasts can be and is compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification in that forecast spatial wind speed distribution is verified against analyses by calculating bias adjusted equitable threat scores and bias scores for wind speeds greater than chosen wind speed thresholds. In this way, focusing on a high wind speed value in the upper troposphere, we suggest that verification of large scale features can be done in a manner that may be more physically meaningful than verification via spectral decomposition, which is a standard RCM verification method. The results we have at this point are somewhat
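    The contingency-table verification described above can be sketched in a few lines. Note this shows the standard equitable threat score (ETS) and frequency bias for threshold exceedance; the study itself uses a bias-adjusted ETS, and the function names here are illustrative, not from the paper:

```python
import numpy as np

def threshold_verification(forecast, analysis, threshold):
    """Standard ETS and frequency bias for exceedance of a wind-speed threshold.

    The study above applies a bias-*adjusted* ETS, which additionally corrects
    the hit count for bias; this plain version omits that correction."""
    f = forecast >= threshold
    o = analysis >= threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    total = f.size
    # Hits expected from a random forecast with the same marginal frequencies
    hits_random = (hits + misses) * (hits + false_alarms) / total
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    bias = (hits + false_alarms) / (hits + misses)
    return ets, bias
```

A perfect forecast gives ETS = 1 and bias = 1; over-forecasting the exceedance area pushes bias above 1.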

  20. Optical interconnect for large-scale systems

    Science.gov (United States)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  1. Towards a More Biologically-meaningful Climate Characterization: Variability in Space and Time at Multiple Scales

    Science.gov (United States)

    Christianson, D. S.; Kaufman, C. G.; Kueppers, L. M.; Harte, J.

    2013-12-01

    Sampling limitations and current modeling capacity justify the common use of mean temperature values in summaries of historical climate and future projections. However, a monthly mean temperature representing a 1-km² area on the landscape is often unable to capture the climate complexity driving organismal and ecological processes. Estimates of variability in addition to mean values are more biologically meaningful and have been shown to improve projections of range shifts for certain species. Historical analyses of variance and extreme events at coarse spatial scales, as well as coarse-scale projections, show increasing temporal variability in temperature with warmer means. Few studies have considered how spatial variance changes with warming, and analysis for both temporal and spatial variability across scales is lacking. It is unclear how the spatial variability of fine-scale conditions relevant to plant and animal individuals may change given warmer coarse-scale mean values. A change in spatial variability will affect the availability of suitable habitat on the landscape and thus, will influence future species ranges. By characterizing variability across both temporal and spatial scales, we can account for potential bias in species range projections that use coarse climate data and enable improvements to current models. In this study, we use temperature data at multiple spatial and temporal scales to characterize spatial and temporal variability under a warmer climate, i.e., increased mean temperatures. Observational data from the Sierra Nevada (California, USA), experimental climate manipulation data from the eastern and western slopes of the Rocky Mountains (Colorado, USA), projected CMIP5 data for California (USA) and observed PRISM data (USA) allow us to compare characteristics of a mean-variance relationship across spatial scales ranging from sub-m² to 10,000 km² and across temporal scales ranging from hours to decades.
Preliminary spatial analysis at

  2. A Statistical Model for Hourly Large-Scale Wind and Photovoltaic Generation in New Locations

    DEFF Research Database (Denmark)

    Ekstrom, Jussi; Koivisto, Matti Juhani; Mellin, Ilkka

    2017-01-01

    The analysis of large-scale wind and photovoltaic (PV) energy generation is of vital importance in power systems where their penetration is high. This paper presents a modular methodology to assess the power generation and volatility of a system consisting of both PV plants (PVPs) and wind power...... of new PVPs and WPPs in system planning. The model is verified against hourly measured wind speed and solar irradiance data from Finland. A case study assessing the impact of the geographical distribution of the PVPs and WPPs on aggregate power generation and its variability is presented....

  3. [A large-scale accident in Alpine terrain].

    Science.gov (United States)

    Wildner, M; Paal, P

    2015-02-01

    Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication to the control room, which is required to dispatch the necessary additional support. Only with a clear "concept", to which all have to adhere, can the subsequent chaos phase be limited. In this respect, a time factor confounded by adverse weather conditions or darkness represents enormous pressure. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, then treatment and procedure algorithms have proven successful. For the evacuation of casualties, the use of a helicopter should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute the patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to optimization of rescue measures, regular training and exercises are rational, as is the analysis of previous large-scale Alpine accidents.

  4. Hierarchical Cantor set in the large scale structure with torus geometry

    Energy Technology Data Exchange (ETDEWEB)

    Murdzek, R. [Physics Department, ' Al. I. Cuza' University, Blvd. Carol I, Nr. 11, Iassy 700506 (Romania)], E-mail: rmurdzek@yahoo.com

    2008-12-15

    The formation of large scale structures is considered within a model with a string on a toroidal space-time. Firstly, the space-time geometry is presented. In this geometry, the Universe is represented by a string describing a torus surface. Thereafter, the large scale structure of the Universe is derived from the string oscillations. The results are in agreement with the cellular structure of the large scale distribution and with the theory of a Cantorian space-time.

  5. Important aspects of Eastern Mediterranean large-scale variability revealed from data of three fixed observatories

    Science.gov (United States)

    Bensi, Manuel; Velaoras, Dimitris; Cardin, Vanessa; Perivoliotis, Leonidas; Pethiakis, George

    2015-04-01

    Long-term variations of temperature and salinity observed in the Adriatic and Aegean Seas seem to be regulated by larger-scale circulation modes of the Eastern Mediterranean (EMed) Sea, such as the recently discovered feedback mechanisms, namely the BiOS (Bimodal Oscillating System) and the internal thermohaline pump theories. These theories are the results of interpretation of many years' observations, highlighting possible interactions between two key regions of the EMed. Although repeated oceanographic cruises carried out in the past or planned for the future are a very useful tool for understanding the interaction between the two basins (e.g. alternating dense water formation, salt ingressions), recent long time-series of high frequency (up to 1h) sampling have added valuable information to the interpretation of internal mechanisms for both areas (i.e. mesoscale eddies, evolution of fast internal processes, etc.). During the last 10 years, three deep observatories were deployed and maintained in the Adriatic, Ionian, and Aegean Seas: they are, respectively, the E2-M3A, the Pylos, and the E1-M3A. All are part of the largest European network of Fixed Point Open Ocean Observatories (FixO3, http://www.fixo3.eu/). Herein, from the analysis of temperature, salinity, and potential density time series collected at the three sites from the surface down to the intermediate and deep layers, we will discuss the almost perfectly anti-correlated behavior of the Adriatic and the Aegean Seas. Our data, collected almost continuously since 2006, reveal that these observatories well represent the thermohaline variability of their own areas. Interestingly, temperature and salinity in the intermediate layer suddenly increased in the South Adriatic from the end of 2011, exactly when they started decreasing in the Aegean Sea. Moreover, Pylos data used together with additional ones (e.g. absolute dynamic topography, temperature and salinity data from other platforms) collected

  6. Large-scale Motion of Solar Filaments

    Indian Academy of Sciences (India)

    tribpo

    Large-scale Motion of Solar Filaments. Pavel Ambrož, Astronomical Institute of the Acad. Sci. of the Czech Republic, CZ-25165. Ondrejov, The Czech Republic. e-mail: pambroz@asu.cas.cz. Alfred Schroll, Kanzelhöehe Solar Observatory of the University of Graz, A-9521 Treffen,. Austria. e-mail: schroll@solobskh.ac.at.

  7. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  8. Topology Optimization of Large Scale Stokes Flow Problems

    DEFF Research Database (Denmark)

    Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan

    2008-01-01

    This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1,125,000 elements in 2D and 128,000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.

  9. The Cosmology Large Angular Scale Surveyor

    Science.gov (United States)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John; Bennett, Charles; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; hide

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic-variance-limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  10. Prehospital Acute Stroke Severity Scale to Predict Large Artery Occlusion: Design and Comparison With Other Scales.

    Science.gov (United States)

    Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe

    2016-07-01

    We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value of occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, 2/3 of the test cohort was used, showing an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on 1/3 of the test cohort showed similar performance. Patients with a large artery occlusion on angiography with PASS ≥2 had a median NIHSS score of 17 (interquartile range=6) as opposed to PASS <2 with a median NIHSS score of 6 (interquartile range=5). The PASS scale showed performance equal to the other scales predicting ELVO while being simpler. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
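    As described, PASS reduces to counting abnormal findings among the three NIHSS-derived items, with a cut point of ≥2 flagging possible ELVO. A minimal illustrative sketch (function names are ours, not from the paper):

```python
def pass_score(loc_abnormal: bool, gaze_abnormal: bool, arm_weak: bool) -> int:
    """Prehospital Acute Stroke Severity (PASS) score: one point per abnormal
    item (level of consciousness month/age, gaze palsy/deviation, arm weakness)."""
    return int(loc_abnormal) + int(gaze_abnormal) + int(arm_weak)

def suspect_elvo(score: int, cutoff: int = 2) -> bool:
    """Flag possible emergent large vessel occlusion at the published cut point (>= 2)."""
    return score >= cutoff
```

For example, a patient with gaze deviation and arm weakness but normal consciousness scores 2 and is flagged.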

  11. Backup flexibility classes in emerging large-scale renewable electricity systems

    International Nuclear Information System (INIS)

    Schlachtberger, D.P.; Becker, S.; Schramm, S.; Greiner, M.

    2016-01-01

    Highlights: • Flexible backup demand in a European wind and solar based power system is modelled. • Three flexibility classes are defined based on production and consumption timescales. • Seasonal backup capacities are shown to be only used below 50% renewable penetration. • Large-scale transmission between countries can reduce fast flexible capacities. - Abstract: High shares of intermittent renewable power generation in a European electricity system will require flexible backup power generation on the dominant diurnal, synoptic, and seasonal weather timescales. The same three timescales are already covered by today’s dispatchable electricity generation facilities, which are able to follow the typical load variations on the intra-day, intra-week, and seasonal timescales. This work aims to quantify the changing demand for those three backup flexibility classes in emerging large-scale electricity systems, as they transform from low to high shares of variable renewable power generation. A weather-driven modelling is used, which aggregates eight years of wind and solar power generation data as well as load data over Germany and Europe, and splits the backup system required to cover the residual load into three flexibility classes distinguished by their respective maximum rates of change of power output. This modelling shows that the slowly flexible backup system is dominant at low renewable shares, but its optimized capacity decreases and drops close to zero once the average renewable power generation exceeds 50% of the mean load. The medium flexible backup capacities increase for modest renewable shares, peak at around a 40% renewable share, and then continuously decrease to almost zero once the average renewable power generation becomes larger than 100% of the mean load. The dispatch capacity of the highly flexible backup system becomes dominant for renewable shares beyond 50%, and reaches its maximum around a 70% renewable share. For renewable shares
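    The timescale split described above can be illustrated with a toy decomposition: subtract renewable generation from load, then separate the residual into seasonal, synoptic and diurnal components with nested rolling means. The window lengths and data below are our own synthetic assumptions, not the paper's model:

```python
import numpy as np

def rolling_mean(x, w):
    """Centered moving average with edge padding (avoids zero contamination)."""
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.convolve(xp, np.ones(w) / w, mode="same")
    return out[pad:pad + len(x)]

def flexibility_split(load, renewables, hours_per_day=24):
    """Split hourly residual load into slow / medium / fast components.

    Illustrative window choices: a monthly mean for the seasonal (slowly
    flexible) part, a weekly mean for the synoptic (medium) part, and the
    remainder for the diurnal (highly flexible) part."""
    residual = load - renewables
    seasonal = rolling_mean(residual, 30 * hours_per_day)            # monthly scale
    synoptic = rolling_mean(residual, 7 * hours_per_day) - seasonal  # weekly scale
    diurnal = residual - seasonal - synoptic                         # fast remainder
    return seasonal, synoptic, diurnal
```

By construction the three components sum back to the residual load, and the seasonal component varies far more slowly than the diurnal one.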

  12. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    OpenAIRE

    Qiang Liu; Yi Qin; Guodong Li

    2018-01-01

    Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...

  13. Managing Risk and Uncertainty in Large-Scale University Research Projects

    Science.gov (United States)

    Moore, Sharlissa; Shangraw, R. F., Jr.

    2011-01-01

    Both publicly and privately funded research projects managed by universities are growing in size and scope. Complex, large-scale projects (over $50 million) pose new management challenges and risks for universities. This paper explores the relationship between project success and a variety of factors in large-scale university projects. First, we…

  14. Variability of Power from Large-Scale Solar Photovoltaic Scenarios in the State of Gujarat: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, B.; Hummon, M.; Cochran, J.; Stoltenberg, B.; Batra, P.; Mehta, B.; Patel, D.

    2014-04-01

    India has ambitious goals for high utilization of variable renewable power from wind and solar, and deployment has been proceeding at a rapid pace. The western state of Gujarat currently has the largest amount of solar generation of any Indian state, with over 855 Megawatts direct current (MWDC). Combined with over 3,240 MW of wind, variable generation renewables comprise nearly 18% of the electric-generating capacity in the state. A new historic 10-kilometer (km) gridded solar radiation data set capturing hourly insolation values for 2002-2011 is available for India. We apply an established method for downscaling hourly irradiance data to one-minute irradiance data at potential PV power production locations for one year, 2006. The objective of this report is to characterize the intra-hour variability of existing and planned photovoltaic solar power generation in the state of Gujarat (a total of 1.9 gigawatts direct current (GWDC)), and of five possible expansion scenarios of solar generation that reflect a range of geographic diversity (each scenario totals 500-1,000 MW of additional solar capacity). The report statistically analyzes one year's worth of power variability data, applied to both the baseline and expansion scenarios, to evaluate diurnal and seasonal power fluctuations, different timescales of variability (e.g., from one to 15 minutes), the magnitude of variability (both total megawatts and relative to installed solar capacity), and the extent to which the variability can be anticipated in advance. The paper also examines how Gujarat Energy Transmission Corporation (GETCO) and the Gujarat State Load Dispatch Centre (SLDC) could make use of the solar variability profiles in grid operations and planning.
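    The intra-hour variability statistics described (ramp magnitudes over one- to 15-minute windows, in megawatts and relative to installed capacity) can be sketched as follows. This is an illustrative computation, not the report's exact method:

```python
import numpy as np

def ramp_magnitudes(power_mw, window_minutes, capacity_mw):
    """Ramp statistics of a 1-minute power series over a given timescale.

    Returns the 50th/99th percentiles of absolute ramp magnitude, both in MW
    and as a fraction of installed capacity (illustrative percentile choices)."""
    ramps = np.abs(power_mw[window_minutes:] - power_mw[:-window_minutes])
    pct_mw = np.percentile(ramps, [50, 99])
    return pct_mw, pct_mw / capacity_mw
```

For smooth diurnal production, 15-minute ramps are larger than 1-minute ramps; cloud transients narrow that gap.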

  15. Parallel clustering algorithm for large-scale biological data sets.

    Science.gov (United States)

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtime are required for cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes considerable runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction and the affinity propagation algorithm. The shared-memory architecture is used to construct the similarity matrix, and the distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate scheme for data partition and reduction is designed in our method in order to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies.
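    The message-passing updates at the core of affinity propagation, responsibilities and availabilities exchanged over a precomputed similarity matrix, can be sketched serially in plain NumPy. This single-process illustration is our own sketch of the standard algorithm (Frey and Dueck), not the authors' MPI implementation:

```python
import numpy as np

def affinity_propagation(S, damping=0.9, iters=200):
    """Serial affinity propagation on a similarity matrix S whose diagonal
    holds the preferences. Returns exemplar indices and per-point labels."""
    n = S.shape[0]
    A = np.zeros((n, n))  # availabilities
    R = np.zeros((n, n))  # responsibilities
    I = np.arange(n)
    for _ in range(iters):
        # Responsibilities: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        best = np.argmax(AS, axis=1)
        first = AS[I, best]
        AS[I, best] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[I, best] = S[I, best] - second
        R = damping * R + (1.0 - damping) * Rnew
        # Availabilities: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0.0)
        np.fill_diagonal(Rp, np.diagonal(R))
        col = Rp.sum(axis=0)
        Anew = col[None, :] - Rp
        dA = np.diagonal(Anew).copy()   # a(k,k) = sum_{i' != k} max(0, r(i',k))
        Anew = np.minimum(Anew, 0.0)
        np.fill_diagonal(Anew, dA)
        A = damping * A + (1.0 - damping) * Anew
    exemplars = np.flatnonzero(np.diagonal(A + R) > 0.0)
    if exemplars.size == 0:
        return exemplars, np.full(n, -1)
    labels = exemplars[np.argmax(S[:, exemplars], axis=1)]
    labels[exemplars] = exemplars
    return exemplars, labels
```

A typical similarity choice is the negative squared distance between points, with the diagonal set to the median off-diagonal similarity; damping stabilizes the message oscillations.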

  16. Large Scale Relationship between Aquatic Insect Traits and Climate.

    Science.gov (United States)

    Bhowmik, Avit Kumar; Schäfer, Ralf B

    2015-01-01

    Climate is the predominant environmental driver of freshwater assemblage patterns on large spatial scales, and traits of freshwater organisms have shown considerable potential to identify impacts of climate change. Although several studies suggest traits that may indicate vulnerability to climate change, the empirical relationship between freshwater assemblage trait composition and climate has rarely been examined on large scales. We compared the responses of the assumed climate-associated traits from six grouping features to 35 bioclimatic indices (~18 km resolution) for five insect orders (Diptera, Ephemeroptera, Odonata, Plecoptera and Trichoptera), evaluated their potential for changing distribution patterns under future climate change and identified the most influential bioclimatic indices. The data comprised 782 species and 395 genera sampled in 4,752 stream sites during 2006 and 2007 in Germany (~357,000 km² spatial extent). We quantified the variability and spatial autocorrelation in the traits and orders that are associated with the combined and individual bioclimatic indices. Traits of the temperature preference grouping feature, which are the products of several other underlying climate-associated traits, and the insect order Ephemeroptera exhibited the strongest response to the bioclimatic indices as well as the highest potential for changing distribution patterns. Regarding individual traits, insects in general and ephemeropterans preferring very cold temperature showed the highest response, and the insects preferring cold and trichopterans preferring moderate temperature showed the highest potential for changing distribution. We showed that seasonal radiation and moisture are the most influential bioclimatic aspects, and thus changes in these aspects may affect the most responsive traits and orders and drive a change in their spatial distribution pattern. Our findings support the development of trait-based metrics to predict and detect climate

  17. Association of Taiwan’s Rainfall Patterns with Large-Scale Oceanic and Atmospheric Phenomena

    Directory of Open Access Journals (Sweden)

    Yi-Chun Kuo

    2016-01-01

    A 50-year (1960–2009) monthly rainfall gridded dataset produced by the Taiwan Climate Change Projection and Information Platform Project was presented in this study. The gridded data (5 × 5 km) displayed the influence of topography on the spatial variability of rainfall, and the results of the empirical orthogonal functions (EOF) analysis revealed the patterns associated with the large-scale sea surface temperature variability over the Pacific. The first mode (65%) revealed the annual peaks of large rainfall in the southwestern mountainous area, which is associated with southwest monsoons and typhoons during summertime. The second temporal EOF mode (16%) revealed the rainfall variance associated with the monsoon and its interaction with the slopes of the mountain range. This pattern is the major contributor to the spatial variance of rainfall in Taiwan, as indicated by the first mode (40%) of the spatial variance EOF analysis. The second temporal EOF mode correlated with the El Niño Southern Oscillation (ENSO). In particular, during the autumn of the La Niña years following the strong El Niño years, the time-varying amplitude was substantially greater than that of normal years. The third temporal EOF mode (7%) revealed a north-south out-of-phase rainfall pattern, the slowly evolving variations of which were in phase with the Pacific Decadal Oscillation. Because of Taiwan’s geographic location and the effect of local terrestrial structures, climate variability related to ENSO differed markedly from other regions in East Asia.
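    EOF analysis of a space-time rainfall field amounts to a singular value decomposition of the anomaly matrix: the right singular vectors give the spatial patterns, the scaled left singular vectors give the time-varying amplitudes, and squared singular values give the explained variance fractions. A generic sketch on synthetic data (not the TCCIP grid):

```python
import numpy as np

def eof_analysis(field):
    """EOF decomposition of a (n_time, n_space) field.

    Returns spatial patterns (rows of `eofs`), principal-component time
    series `pcs`, and the fraction of variance explained by each mode."""
    anomalies = field - field.mean(axis=0)  # remove the time mean per grid cell
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_fraction = s ** 2 / np.sum(s ** 2)
    pcs = u * s     # time-varying amplitudes of each mode
    eofs = vt       # spatial patterns, one mode per row
    return eofs, pcs, variance_fraction
```

The anomaly field is exactly reconstructed by `pcs @ eofs`, so a field dominated by one coherent pattern concentrates nearly all variance in the first mode.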

  18. A Variable Stiffness Analysis Model for Large Complex Thin-Walled Guide Rail

    Directory of Open Access Journals (Sweden)

    Wang Xiaolong

    2016-01-01

    Large complex thin-walled guide rails have a complicated structure and non-uniform, low rigidity. Traditional cutting simulations are time consuming due to the huge computation involved, especially for large workpieces. To solve these problems, a more efficient variable stiffness analysis model is proposed, which can obtain quantitative stiffness values of the machined surface. By applying simulated cutting forces at sampling points using the finite element analysis software ABAQUS, the single-direction variable stiffness rule can be obtained. A variable stiffness matrix is proposed by analyzing the multi-direction coupling variable stiffness rule. Combined with the cutting force values in the three directions, the reasonableness of existing processing parameters can be verified and optimized cutting parameters can be designed.

  19. Adaptive visualization for large-scale graph

    International Nuclear Information System (INIS)

    Nakamura, Hiroko; Shinano, Yuji; Ohzahata, Satoshi

    2010-01-01

    We propose an adaptive visualization technique for representing a large-scale hierarchical dataset within limited display space. A hierarchical dataset has nodes and links showing the parent-child relationship between the nodes. These nodes and links are described using graphics primitives. When the number of these primitives is large, it is difficult to recognize the structure of the hierarchical data because many primitives are overlapped within a limited region. To overcome this difficulty, we propose an adaptive visualization technique for hierarchical datasets. The proposed technique selects an appropriate graph style according to the nodal density in each area. (author)

  20. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  1. Distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm for deployment of wireless sensor networks

    DEFF Research Database (Denmark)

    Cao, Bin; Zhao, Jianwei; Yang, Po

    2018-01-01

    Using immune algorithms is generally a time-intensive process, especially for problems with a large number of variables. In this paper, we propose a distributed parallel cooperative coevolutionary multi-objective large-scale immune algorithm that is implemented using the message passing interface (MPI). The proposed algorithm is composed of three layers: objective, group and individual layers. First, for each objective in the multi-objective problem to be addressed, a subpopulation is used for optimization, and an archive population is used to optimize all the objectives. Second, the large… Compared with state-of-the-art multi-objective evolutionary algorithms (the Cooperative Coevolutionary Generalized Differential Evolution 3, the Cooperative Multi-objective Differential Evolution and the Nondominated Sorting Genetic Algorithm III), the proposed algorithm addresses the deployment optimization problem efficiently and effectively.

  2. 20th century trends of drought conditions in the Mediterranean: the influence of large-scale circulation patterns.

    Science.gov (United States)

    Sousa, Pedro; Trigo, Ricardo; Garcia-Herrera, Ricardo

    2010-05-01

    Here we have used the Self-Calibrated PDSI (scPDSI) proposed by Wells et al. (2004) as a more appropriate approach to characterize drought conditions in the Mediterranean area. The scPDSI has been shown to perform better than the original PDSI when evaluating spatial and temporal drought characteristics for regions outside the USA (Schrier et al., 2005). Seasonal and annual trends for the 1901-2000, 1901-1950 and 1951-2000 periods were computed using the standard Mann-Kendall test for trend significance evaluation. However, statistical significance obtained with this test can be highly misleading because it does not take into account the strong autocorrelation that dominates the seasonal evolution of the scPDSI fields. We have now improved these results by employing a modified Mann-Kendall test for auto-correlated series (Hamed and Ramachandra, 1997), such as the scPDSI series. This development allowed a better definition of the Mediterranean areas characterized by significant changes in the scPDSI, namely the largely negative trends that dominate the Mediterranean basin, with the exceptions of parts of eastern Turkey and northwestern Iberia, since initially these areas were overestimated. The spatio-temporal variability of these indices was evaluated with an EOF analysis, in order to reduce the large dimensionality of the fields under analysis. Spatial representation of the first EOF patterns shows that EOF 1 covers the entire Mediterranean basin (16.4% of explained variance), while EOF 2 is dominated by a W-E dipole (10%). The following EOF patterns present smaller-scale features and explain smaller amounts of variance. The EOF patterns have also facilitated the definition of four sub-regions with large socio-economic relevance: 1) Iberia, 2) Italian Peninsula, 3) Balkans and 4) Turkey.
Afterwards we perform a comprehensive analysis on the links between the scPDSI and the large-scale atmospheric circulation indices that affect the Mediterranean basin, namely; NAO, EA, and SCAND
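
    The trend testing described in this record can be illustrated with a minimal sketch. This is not the authors' code: it implements only the standard Mann-Kendall S statistic and its normal-approximation Z score, ignoring ties for brevity; the modified test for autocorrelated series additionally rescales the variance of S by an effective-sample-size correction.

```python
import math

def mann_kendall(series):
    """Standard Mann-Kendall trend test: S statistic and
    normal-approximation Z score (ties ignored for brevity)."""
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)  # sign of each pairwise difference
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s == 0:
        return s, 0.0
    # continuity correction before normalizing
    z = (s - 1 if s > 0 else s + 1) / math.sqrt(var_s)
    return s, z

# A strictly increasing series gives the maximum S = n(n-1)/2 = 28
s, z = mann_kendall([1, 2, 3, 4, 5, 6, 7, 8])
assert s == 28 and z > 3
```

    The autocorrelation correction would multiply `var_s` by a factor depending on the series' significant autocorrelation coefficients before computing Z; the rest of the procedure is unchanged.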

  3. Design study on sodium-cooled large-scale reactor

    International Nuclear Information System (INIS)

    Murakami, Tsutomu; Hishida, Masahiko; Kisohara, Naoyuki

    2004-07-01

    In Phase 1 of the 'Feasibility Studies on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the potential to fulfill the design requirements of the F/S. In Phase 2, design improvements aiming at further cost reduction and establishment of the plant concept have been performed. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2003, the third year of Phase 2. In the JFY2003 design study, critical subjects related to safety, structural integrity and thermal hydraulics identified in the previous fiscal year were examined and the plant concept was modified accordingly. Furthermore, fundamental specifications of the main systems and components were set, and the economics were evaluated. In addition, for the interim evaluation of the candidate concepts of the FBR fuel cycle, cost effectiveness and achievability of the development goal were evaluated, and the data for the three large-scale reactor candidate concepts were prepared. As a result of this study, a plant concept of the sodium-cooled large-scale reactor has been constructed which has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the narrowing down of candidate concepts at the end of Phase 2. (author)

  4. Design study on sodium-cooled large-scale reactor

    International Nuclear Information System (INIS)

    Shimakawa, Yoshio; Nibe, Nobuaki; Hori, Toru

    2002-05-01

    In Phase 1 of the 'Feasibility Study on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the potential to fulfill the design requirements of the F/S. In Phase 2 of the F/S, it is planned to proceed with a preliminary conceptual design of a sodium-cooled large-scale reactor based on the design of the advanced loop type reactor. Through the design study, it is intended to construct a plant concept that can demonstrate its attractiveness and competitiveness as a commercialized reactor. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2001, the first year of Phase 2. In the JFY2001 design study, a plant concept was constructed based on the design of the advanced loop type reactor, and fundamental specifications of the main systems and components were set. Furthermore, critical subjects related to safety, structural integrity, thermal hydraulics, operability, maintainability and economy were examined and evaluated. As a result of this study, a plant concept of the sodium-cooled large-scale reactor has been constructed which has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the narrowing down of candidate concepts at the end of Phase 2. (author)

  5. Large scale CMB anomalies from thawing cosmic strings

    Energy Technology Data Exchange (ETDEWEB)

    Ringeval, Christophe [Centre for Cosmology, Particle Physics and Phenomenology, Institute of Mathematics and Physics, Louvain University, 2 Chemin du Cyclotron, 1348 Louvain-la-Neuve (Belgium); Yamauchi, Daisuke; Yokoyama, Jun'ichi [Research Center for the Early Universe (RESCEU), Graduate School of Science, The University of Tokyo, Tokyo 113-0033 (Japan); Bouchet, François R., E-mail: christophe.ringeval@uclouvain.be, E-mail: yamauchi@resceu.s.u-tokyo.ac.jp, E-mail: yokoyama@resceu.s.u-tokyo.ac.jp, E-mail: bouchet@iap.fr [Institut d'Astrophysique de Paris, UMR 7095-CNRS, Université Pierre et Marie Curie, 98bis boulevard Arago, 75014 Paris (France)

    2016-02-01

    Cosmic strings formed during inflation are expected to be either diluted over super-Hubble distances, i.e., invisible today, or to have crossed our past light cone very recently. We discuss the latter situation, in which a few strings imprint their signature in the Cosmic Microwave Background (CMB) anisotropies after recombination. Being almost frozen in the Hubble flow, these strings are quasi-static and evade almost all of the previously derived constraints on their tension, while still being able to source large scale anisotropies in the CMB sky. Using a local variance estimator on thousands of numerically simulated Nambu-Goto all-sky maps, we compute the expected signal and show that it can mimic a dipole modulation at large angular scales while being negligible at small angles. Interestingly, such a scenario generically produces one cold spot from the thawing of a cosmic string loop. Mixed with anisotropies of inflationary origin, we find that a few strings of tension GU = O(1) × 10^-6 match the amplitude of the dipole modulation reported in the Planck satellite measurements and could be at the origin of other large scale anomalies.

  6. The trend of the multi-scale temporal variability of precipitation in Colorado River Basin

    Science.gov (United States)

    Jiang, P.; Yu, Z.

    2011-12-01

    Hydrological problems such as the estimation of flood and drought frequencies under future climate change are not well addressed, as a result of the inability of current climate models to provide reliable predictions (especially for precipitation) at scales shorter than 1 month. In order to assess the possible impacts that the multi-scale temporal distribution of precipitation may have on the hydrological processes in the Colorado River Basin (CRB), a comparative analysis of the multi-scale temporal variability of precipitation as well as the trend of extreme precipitation is conducted in four regions controlled by different climate systems. Multi-scale precipitation variability, including within-storm patterns and intra-annual, inter-annual and decadal variabilities, will be analyzed to explore the possible trends of storm durations, inter-storm periods, average storm precipitation intensities and extremes under both long-term natural climate variability and human-induced warming. Furthermore, we will examine the ability of current climate models to simulate the multi-scale temporal variability and extremes of precipitation. On the basis of these analyses, a statistical downscaling method will be developed to disaggregate the future precipitation scenarios, which will provide a more reliable and finer temporal scale precipitation time series for hydrological modeling. Analysis results and downscaling results will be presented.

  7. Multi-scale climate modelling over Southern Africa using a variable-resolution global model

    CSIR Research Space (South Africa)

    Engelbrecht, FA

    2011-12-01

    Full Text Available Multi-scale climate modelling over Southern Africa using a variable-resolution global model. FA Engelbrecht (e-mail: fengelbrecht@csir.co.za), WA Landman, CJ Engelbrecht, S Landman, MM Bopape, B Roux, JL McGregor and M Thatcher; CSIR Natural... Keywords: multi-scale climate modelling, variable-resolution atmospheric model. Introduction: Dynamic climate models have become the primary tools for the projection of future climate change, at both the global and regional scales. Dynamic...

  8. The viability of balancing wind generation with large scale energy storage

    International Nuclear Information System (INIS)

    Nyamdash, Batsaikhan; Denny, Eleanor; O'Malley, Mark

    2010-01-01

    This paper studies the impact of combining wind generation and dedicated large scale energy storage on the conventional thermal plant mix and the CO2 emissions of a power system. Different strategies are proposed here in order to explore the best operational strategy for the wind and storage system in terms of its effect on the net load. Furthermore, the economic viability of combining wind and large scale storage is studied. The empirical application, using data for the Irish power system, shows that combined wind and storage reduces the participation of mid-merit plants and increases the participation of base-load plants. Moreover, storage negates some of the CO2 emissions reduction of the wind generation. It was also found that the wind and storage output can significantly reduce the variability of the net load under certain operational strategies, and that the optimal strategy depends on the installed wind capacity. However, in the absence of any supporting mechanism, none of the storage devices were economically viable when combined with the wind generation on the Irish power system. - Research Highlights: → Energy storage would displace peaking and mid-merit plant generation with base-load plant generation. → Energy storage may negate the CO2 emissions reduction that is due to increased wind generation. → Energy storage reduces the variation of the net load. → Under certain market conditions, merchant-type energy storage is not viable.
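
    The net-load effect of an operational strategy can be sketched with a toy example. The "charge toward the mean net load" rule below is a hypothetical strategy for illustration, not one of the paper's strategies, and storage energy limits are ignored.

```python
import statistics

def net_load(demand, wind, storage_power=None):
    """Net load seen by conventional plant: demand minus wind,
    plus storage charging (+) or minus discharging (-)."""
    if storage_power is None:
        storage_power = [0.0] * len(demand)
    return [d - w + s for d, w, s in zip(demand, wind, storage_power)]

def smoothing_storage(demand, wind, power_limit):
    """Toy strategy: charge/discharge toward the mean net load,
    limited only by the storage power rating (energy limits ignored)."""
    base = net_load(demand, wind)
    mean = sum(base) / len(base)
    # charge (positive) when net load is below its mean, discharge when above
    return [max(-power_limit, min(power_limit, mean - x)) for x in base]

demand = [50.0, 55.0, 60.0, 70.0, 65.0, 58.0]
wind = [20.0, 5.0, 25.0, 10.0, 30.0, 15.0]
storage = smoothing_storage(demand, wind, power_limit=10.0)
before = statistics.pstdev(net_load(demand, wind))
after = statistics.pstdev(net_load(demand, wind, storage))
assert after < before  # this strategy reduces net-load variability
```

    A dispatch study like the paper's would add round-trip efficiency, state-of-charge limits and market prices on top of such a skeleton.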

  9. The swan song in context: long-time-scale X-ray variability of NGC 4051

    Science.gov (United States)

    Uttley, P.; McHardy, I. M.; Papadakis, I. E.; Guainazzi, M.; Fruscione, A.

    1999-07-01

    On 1998 May 9-11, the highly variable, low-luminosity Seyfert 1 galaxy NGC 4051 was observed in an unusual low-flux state by BeppoSAX, RXTE and EUVE. We present fits of the 4-15 keV RXTE spectrum and BeppoSAX MECS spectrum obtained during this observation, which are consistent with the interpretation that the source had switched off, leaving only the spectrum of pure reflection from distant cold matter. We place this result in context by showing the X-ray light curve of NGC 4051 obtained by our RXTE monitoring campaign over the past two and a half years, which shows that the low state lasted for ~150 d before the May observations (implying that the reflecting material is >10^17 cm from the continuum source) and forms part of a light curve showing distinct variations in long-term average flux on time-scales of months and longer. We show that the long-time-scale component of the X-ray variability is intrinsic to the primary continuum and is probably distinct from the variability at shorter time-scales. The long-time-scale component of the variability may be associated with variations in the accretion flow of matter on to the central black hole. As the source approaches the low state, the variability process becomes non-linear. NGC 4051 may represent a microcosm of all X-ray variability in radio-quiet active galactic nuclei (AGNs), displaying in a few years a variety of flux states and variability properties which more luminous AGNs may pass through on time-scales of decades to thousands of years.

  10. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over one order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance over ~2 PFlops are demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  11. Dynamic state estimation techniques for large-scale electric power systems

    International Nuclear Information System (INIS)

    Rousseaux, P.; Pavella, M.

    1991-01-01

    This paper presents the use of dynamic-type state estimators for energy management in electric power systems. Various dynamic-type estimators have been developed, but have never been implemented, primarily because of the dimensionality problems posed by the conjunction of an extended Kalman filter with a large scale power system. This paper focuses precisely on how to circumvent the high dimensionality, which is especially prohibitive in the filtering step, by using a decomposition-aggregation hierarchical scheme; to appropriately model the power system dynamics, the authors introduce new state variables in the prediction step and rely on a load forecasting method. The combination of these two techniques succeeds in solving the overall dynamic state estimation problem not only in a tractable and realistic way, but also in compliance with real-time computational requirements. Further improvements, bound to the specifics of high voltage electric transmission systems, are also suggested.
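
    The predict/filter cycle at the heart of such estimators can be sketched in its linear form. This is a generic textbook Kalman filter, not the paper's hierarchical scheme; the filtering step below is exactly the part whose cost the decomposition-aggregation approach reduces for large systems.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict+update cycle of a linear Kalman filter.

    x, P: state estimate and covariance; z: measurement;
    F, Q: state-transition and process-noise matrices;
    H, R: measurement matrix and measurement-noise covariance."""
    # Prediction step (where a load forecast would drive the model)
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Filtering step (the high-dimensional bottleneck in large systems)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Track a constant scalar state observed with noise: the estimate converges.
x = np.array([0.0]); P = np.eye(1)
F = np.eye(1); Q = 1e-4 * np.eye(1); H = np.eye(1); R = 0.5 * np.eye(1)
for z in [1.1, 0.9, 1.05, 0.98, 1.02]:
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
assert abs(float(x[0]) - 1.0) < 0.1
```

    The extended Kalman filter of the paper replaces `F @ x` and `H @ x_pred` with nonlinear power-flow functions and uses their Jacobians in the covariance updates.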

  12. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact the control of the power system balance and thus cause deviations in the power system frequency in small or islanded power systems, or in tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns secure and stable grid operation. To ensure stable power system operation, the evolving power system has to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with large scale wind power integration, where the power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...

  13. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif

    2017-01-07

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  14. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif; Orakzai, Faisal Moeen; Abdelaziz, Ibrahim; Khayyat, Zuhair

    2017-01-01

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  15. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult to achieve real-time display of and interaction with large scale 3D models in common 3D display software such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and the 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal/external memory exchange mechanism, so that it is possible to display a large scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
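
    The view-dependent part of such a scheme can be sketched with a toy LOD selector. This is an illustrative assumption, not the paper's algorithm: it assumes the geometric error doubles at each coarser level and picks the coarsest level whose projected screen-space error stays within a tolerance.

```python
def select_lod(distance, base_error, tolerance, num_levels):
    """Return the coarsest acceptable level for a node at `distance`.

    Level 0 is the finest mesh; each coarser level doubles the
    geometric error. Projected error falls off with viewer distance,
    so distant geometry can use coarse levels without visible loss."""
    level = 0
    while (level + 1 < num_levels
           and (base_error * 2 ** (level + 1)) / distance <= tolerance):
        level += 1
    return level

# Nearby geometry gets the finest level, distant geometry a coarse one.
assert select_lod(distance=1.0, base_error=0.01, tolerance=0.005, num_levels=6) == 0
assert select_lod(distance=100.0, base_error=0.01, tolerance=0.005, num_levels=6) == 5
```

    An out-of-core renderer would call a selector like this per tree node each frame and stream only the chosen levels into RAM.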

  16. Multilevel Latent Class Analysis for Large-Scale Educational Assessment Data: Exploring the Relation between the Curriculum and Students' Mathematical Strategies

    Science.gov (United States)

    Fagginger Auer, Marije F.; Hickendorff, Marian; Van Putten, Cornelis M.; Béguin, Anton A.; Heiser, Willem J.

    2016-01-01

    A first application of multilevel latent class analysis (MLCA) to educational large-scale assessment data is demonstrated. This statistical technique addresses several of the challenges that assessment data offers. Importantly, MLCA allows modeling of the often ignored teacher effects and of the joint influence of teacher and student variables.…

  17. Assessment of small-scale integrated water vapour variability during HOPE

    Science.gov (United States)

    Steinke, S.; Eikenberg, S.; Löhnert, U.; Dick, G.; Klocke, D.; Di Girolamo, P.; Crewell, S.

    2015-03-01

    The spatio-temporal variability of integrated water vapour (IWV) on small scales of less than 10 km and hours is assessed with data from the 2 months of the High Definition Clouds and Precipitation for advancing Climate Prediction (HD(CP)2) Observational Prototype Experiment (HOPE). The statistical intercomparison of the unique set of observations during HOPE (microwave radiometer (MWR), Global Positioning System (GPS), sun photometer, radiosondes, Raman lidar, infrared and near-infrared Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellites Aqua and Terra) measuring close together reveals a good agreement in terms of random differences (standard deviation ≤1 kg m-2) and correlation coefficient (≥ 0.98). The exception is MODIS, which appears to suffer from insufficient cloud filtering. For a case study during HOPE featuring a typical boundary layer development, the IWV variability in time and space on scales of less than 10 km and less than 1 h is investigated in detail. For this purpose, the measurements are complemented by simulations with the novel ICOsahedral Nonhydrostatic modelling framework (ICON), which for this study has a horizontal resolution of 156 m. These runs show that differences in space of 3-4 km or time of 10-15 min induce IWV variabilities on the order of 0.4 kg m-2. This model finding is confirmed by observed time series from two MWRs approximately 3 km apart with a comparable temporal resolution of a few seconds. Standard deviations of IWV derived from MWR measurements reveal a high variability (> 1 kg m-2) even at very short time scales of a few minutes. These cannot be captured by the temporally lower-resolved instruments and by operational numerical weather prediction models such as COSMO-DE (an application of the Consortium for Small-scale Modelling covering Germany) of Deutscher Wetterdienst, which is included in the comparison. However, for time scales larger than 1 h, a sampling resolution of 15 min is

  18. Large-scale perturbations from the waterfall field in hybrid inflation

    International Nuclear Information System (INIS)

    Fonseca, José; Wands, David; Sasaki, Misao

    2010-01-01

    We estimate large-scale curvature perturbations from isocurvature fluctuations in the waterfall field during hybrid inflation, in addition to the usual inflaton field perturbations. The tachyonic instability at the end of inflation leads to an explosive growth of super-Hubble scale perturbations, but they retain the steep blue spectrum characteristic of vacuum fluctuations in a massive field during inflation. The power spectrum thus peaks around the Hubble-horizon scale at the end of inflation. We extend the usual δN formalism to include the essential role of these small fluctuations when estimating the large-scale curvature perturbation. The resulting curvature perturbation due to fluctuations in the waterfall field is second-order and the spectrum is expected to be of order 10^-54 on cosmological scales.

  19. Decoupling local mechanics from large-scale structure in modular metamaterials

    Science.gov (United States)

    Yang, Nan; Silverberg, Jesse L.

    2017-04-01

    A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such “inverse design” is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module’s design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.

  20. The origin of large scale cosmic structure

    International Nuclear Information System (INIS)

    Jones, B.J.T.; Palmer, P.L.

    1985-01-01

    The paper concerns the origin of large scale cosmic structure. The evolution of density perturbations, the nonlinear regime (Zel'dovich's solution and others), the Gott and Rees clustering hierarchy, the spectrum of condensations, and biassed galaxy formation, are all discussed. (UK)

  1. Small Scale Variability and the Problem of Data Validation

    Science.gov (United States)

    Sparling, L. C.; Avallone, L.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Numerous measurements taken with a variety of airborne, balloon-borne and ground-based instruments over the past decade have revealed a complex, multi-scaled 3D structure in both chemical and dynamical fields in the upper troposphere/lower stratosphere. The variability occurs on scales well below the resolution of satellite measurements, leading to problems in measurement validation. We discuss some statistical ideas that can shed light on the contribution of natural variability to the inevitable differences between correlative measurements that are not strictly colocated or that have different spatial resolutions.

  2. A practical process for light-water detritiation at large scales

    Energy Technology Data Exchange (ETDEWEB)

    Boniface, H.A. [Atomic Energy of Canada Limited, Chalk River, ON (Canada); Robinson, J., E-mail: jr@tyne-engineering.com [Tyne Engineering, Burlington, ON (Canada); Gnanapragasam, N.V.; Castillo, I.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    AECL and Tyne Engineering have recently completed a preliminary engineering design for a modest-scale tritium removal plant for light water, intended for installation at AECL's Chalk River Laboratories (CRL). This plant design was based on the Combined Electrolysis and Catalytic Exchange (CECE) technology developed at CRL over many years and demonstrated there and elsewhere. The general features and capabilities of this design have been reported as well as the versatility of the design for separating any pair of the three hydrogen isotopes. The same CECE technology could be applied directly to very large-scale wastewater detritiation, such as the case at Fukushima Daiichi Nuclear Power Station. However, since the CECE process scales linearly with throughput, the required capital and operating costs are substantial for such large-scale applications. This paper discusses some options for reducing the costs of very large-scale detritiation. Options include: Reducing tritium removal effectiveness; Energy recovery; Improving the tolerance of impurities; Use of less expensive or more efficient equipment. A brief comparison with alternative processes is also presented. (author)

  3. The use of test scores from large-scale assessment surveys: psychometric and statistical considerations

    Directory of Open Access Journals (Sweden)

    Henry Braun

    2017-11-01

    Full Text Available Abstract. Background: Economists are making increasing use of measures of student achievement obtained through large-scale survey assessments such as NAEP, TIMSS, and PISA. The construction of these measures, employing plausible value (PV) methodology, is quite different from that of the more familiar test scores associated with assessments such as the SAT or ACT. These differences have important implications both for utilization and interpretation. Although much has been written about PVs, it appears that there are still misconceptions about whether and how to employ them in secondary analyses. Methods: We address a range of technical issues, including those raised in a recent article that was written to inform economists using these databases. First, an extensive review of the relevant literature was conducted, with particular attention to key publications that describe the derivation and psychometric characteristics of such achievement measures. Second, a simulation study was carried out to compare the statistical properties of estimates based on the use of PVs with those based on other, commonly used methods. Results: It is shown, through both theoretical analysis and simulation, that under fairly general conditions appropriate use of PVs yields approximately unbiased estimates of model parameters in regression analyses of large scale survey data. The superiority of the PV methodology is particularly evident when measures of student achievement are employed as explanatory variables. Conclusions: The PV methodology used to report student test performance in large scale surveys remains the state of the art for secondary analyses of these databases.
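
    The combining step for plausible values follows Rubin's multiple-imputation rules: run the analysis once per PV, average the point estimates, and inflate the variance by the between-PV spread. The sketch below is generic and not tied to any particular survey database; the numbers are made up for illustration.

```python
import statistics

def combine_plausible_values(estimates, variances):
    """Rubin's combining rules applied to plausible values.

    estimates: the statistic computed once per plausible value;
    variances: the sampling variance of each of those estimates.
    Returns the pooled estimate and its total variance (within-PV
    sampling variance plus inflated between-PV variance)."""
    m = len(estimates)
    point = sum(estimates) / m
    within = sum(variances) / m
    between = statistics.variance(estimates)  # sample variance over the m PVs
    total = within + (1 + 1 / m) * between
    return point, total

# Five plausible-value regression coefficients and their sampling variances
est, var = combine_plausible_values([0.52, 0.48, 0.50, 0.55, 0.45], [0.01] * 5)
assert abs(est - 0.50) < 1e-12
```

    Ignoring the `between` term (e.g. analysing only the first PV, or averaging PVs before analysis) is exactly the kind of shortcut that understates uncertainty in secondary analyses.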

  4. A Decentralized Multivariable Robust Adaptive Voltage and Speed Regulator for Large-Scale Power Systems

    Science.gov (United States)

    Okou, Francis A.; Akhrif, Ouassima; Dessaint, Louis A.; Bouchard, Derrick

    2013-05-01

    This paper introduces a decentralized multivariable robust adaptive voltage and frequency regulator to ensure the stability of large-scale interconnected generators. Interconnection parameters (i.e. load, line and transformer parameters) are assumed to be unknown. The proposed design approach requires the reformulation of conventional power system models into a multivariable model with generator terminal voltages as state variables, and excitation and turbine valve inputs as control signals. This model, while suitable for the application of modern control methods, introduces problems with regard to current design techniques for large-scale systems: interconnection terms, which are treated as perturbations, do not meet the common matching condition assumption. A new adaptive method for a certain class of large-scale systems is therefore introduced that does not require the matching condition. The proposed controller consists of nonlinear inputs that cancel some nonlinearities of the model. Auxiliary controls with linear and nonlinear components are used to stabilize the system. They compensate for unknown parameters of the model by updating both the nonlinear component gains and excitation parameters. The adaptation algorithms involve the sigma-modification approach for auxiliary control gains, and the projection approach for excitation parameters to prevent estimation drift. The computation of the matrix gain of the controller's linear component requires the resolution of an algebraic Riccati equation and helps to solve the perturbation-mismatching problem. A realistic power system is used to assess the proposed controller's performance. The results show that both stability and transient performance are considerably improved following a severe contingency.
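
    The algebraic Riccati equation mentioned for the linear matrix gain can be solved numerically. The sketch below uses the textbook Hamiltonian eigenvector method on a double integrator; it is an illustration of that standard technique, not the paper's implementation or its power system model.

```python
import numpy as np

def solve_care(A, B, Q, R):
    """Solve the continuous algebraic Riccati equation
    A'P + PA - P B R^-1 B' P + Q = 0 via the stable invariant
    subspace of the Hamiltonian matrix."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]        # eigenvectors of the n stable eigenvalues
    X, Y = stable[:n, :], stable[n:, :]
    return np.real(Y @ np.linalg.inv(X))

# Double integrator with identity state cost and unit input cost
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P = solve_care(A, B, Q, R)
residual = A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + Q
assert np.allclose(residual, 0, atol=1e-6)
```

    For this example the known closed-form solution is P = [[√3, 1], [1, √3]], which the numerical result reproduces; production code would use a Schur-based solver for numerical robustness.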

  5. OffshoreDC DC grids for integration of large scale wind power

    DEFF Research Database (Denmark)

    Zeni, Lorenzo; Endegnanew, Atsede Gualu; Stamatiou, Georgios

    The present report summarizes the main findings of the Nordic Energy Research project “DC grids for large scale integration of offshore wind power – OffshoreDC”. The project was funded by Nordic Energy Research through the TFI programme and was active between 2011 and 2016. The overall objective of the project was to drive the development of the VSC-based HVDC technology for future large scale offshore grids, supporting a standardised and commercial development of the technology, and improving the opportunities for the technology to support power system integration of large scale offshore...

  6. Low-Complexity Transmit Antenna Selection and Beamforming for Large-Scale MIMO Communications

    Directory of Open Access Journals (Sweden)

    Kun Qian

    2014-01-01

    Full Text Available Transmit antenna selection plays an important role in large-scale multiple-input multiple-output (MIMO) communications, but optimal large-scale MIMO antenna selection is a technical challenge. Exhaustive search is often employed in antenna selection, but it cannot be efficiently implemented in large-scale MIMO communication systems due to its prohibitively high computational complexity. This paper proposes a low-complexity interactive multiple-parameter optimization method for joint transmit antenna selection and beamforming in large-scale MIMO communication systems. The objective is to jointly maximize the channel outage capacity and signal-to-noise ratio (SNR) performance and minimize the mean square error in transmit antenna selection and minimum variance distortionless response (MVDR) beamforming, without exhaustive search. The effectiveness of all the proposed methods is verified by extensive simulation results. It is shown that the required antenna selection processing time of the proposed method does not increase with the number of selected antennas, whereas the computational complexity of the conventional exhaustive search method increases significantly when large-scale antenna arrays are employed in the system. This is particularly useful in antenna selection for large-scale MIMO communication systems.
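
    The MVDR beamformer itself is standard and can be sketched in a few lines. The array geometry, interferer direction and covariance below are illustrative assumptions, not the paper's joint selection method: the weights minimize output power subject to unit gain toward the desired steering vector.

```python
import numpy as np

def mvdr_weights(R, steering):
    """MVDR beamformer: w = R^-1 a / (a^H R^-1 a), minimizing output
    power subject to a distortionless response toward `steering`."""
    num = np.linalg.inv(R) @ steering
    return num / (steering.conj().T @ num)

def steering_vector(n_antennas, theta, spacing=0.5):
    """Uniform linear array steering vector (spacing in wavelengths)."""
    k = np.arange(n_antennas)
    return np.exp(2j * np.pi * spacing * k * np.sin(theta))

n = 8
a_sig = steering_vector(n, 0.0)        # desired direction: broadside
a_int = steering_vector(n, np.pi / 5)  # interferer at 36 degrees
# covariance: strong interferer (power 10) plus unit-power noise
R = 10 * np.outer(a_int, a_int.conj()) + np.eye(n)
w = mvdr_weights(R, a_sig)
# unit gain toward the signal, deep suppression of the interferer
assert np.isclose(w.conj().T @ a_sig, 1.0)
assert abs(w.conj().T @ a_int) < 0.05
```

    Antenna selection then amounts to evaluating such beamformers over subsets of rows of the channel matrix, which is where exhaustive search becomes prohibitive at large scale.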

  7. Time-scale and extent at which large-scale circulation modes determine the wind and solar potential in the Iberian Peninsula

    International Nuclear Information System (INIS)

    Jerez, Sonia; Trigo, Ricardo M

    2013-01-01

    The North Atlantic Oscillation (NAO), the East Atlantic (EA) and the Scandinavian (SCAND) modes are the three main large-scale circulation patterns driving the climate variability of the Iberian Peninsula. This study assesses their influence in terms of solar (photovoltaic) and wind power generation potential (SP and WP) and evaluates their skill as predictors. For that, we use a hindcast regional climate simulation to retrieve the primary meteorological variables involved, surface solar radiation and wind speed. First, we identify that the maximum influence of the various modes occurs on the interannual variations of the monthly mean SP and WP series, being generally more relevant in winter. Second, we find that on this time-scale and season, SP (WP) varies up to 30% (40%) with respect to the mean climatology between years with opposite phases of the modes, although the strength and the spatial distribution of the signals differ from one month to another. Last, the skill of a multi-linear regression model (MLRM), built using the NAO, EA and SCAND indices, to reconstruct the original wintertime monthly series of SP and WP was investigated. The reconstructed series (when the MLRM is calibrated for each month individually) correlate with the original ones up to 0.8 at the interannual time-scale. Moreover, when the modeled series for each individual month are merged to construct an October-to-March monthly series, and after removing the annual cycle in order to account for monthly anomalies, these correlate at 0.65 (0.55) with the original SP (WP) series on average. These values remain fairly stable when the calibration and reconstruction periods differ, thus supporting up to a point the predictive potential of the method at the time-scale assessed here. (letter)
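The multi-linear regression step can be sketched with synthetic data; the indices and coefficients below are invented placeholders standing in for the NAO/EA/SCAND predictors and carry no climatological meaning:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 40
# Hypothetical standardized monthly indices for one calendar month:
# columns stand in for NAO, EA and SCAND (synthetic, not real data)
X = rng.standard_normal((n_years, 3))
# Synthetic wind-power anomaly driven mostly by the first two modes
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n_years)

# Multi-linear regression model: least squares with an intercept
A = np.column_stack([np.ones(n_years), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# Reconstruction skill: interannual correlation with the original series
r = np.corrcoef(y, y_hat)[0, 1]
```

Calibrating one such model per calendar month, then merging and deseasonalizing the reconstructions, mirrors the evaluation the abstract describes.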

  8. Plague and Climate: Scales Matter

    Science.gov (United States)

    Ben Ari, Tamara; Neerinckx, Simon; Gage, Kenneth L.; Kreppel, Katharina; Laudisoit, Anne; Leirs, Herwig; Stenseth, Nils Chr.

    2011-01-01

    Plague is enzootic in wildlife populations of small mammals in central and eastern Asia, Africa, South and North America, and has been recognized recently as a reemerging threat to humans. Its causative agent Yersinia pestis relies on wild rodent hosts and flea vectors for its maintenance in nature. Climate influences all three components (i.e., bacteria, vectors, and hosts) of the plague system and is a likely factor to explain some of plague's variability from small and regional to large scales. Here, we review effects of climate variables on plague hosts and vectors from individual or population scales to studies on the whole plague system at a large scale. Upscaled versions of small-scale processes are often invoked to explain plague variability in time and space at larger scales, presumably because similar scale-independent mechanisms underlie these relationships. This linearity assumption is discussed in the light of recent research that suggests some of its limitations. PMID:21949648

  9. The effective field theory of cosmological large scale structures

    Energy Technology Data Exchange (ETDEWEB)

    Carrasco, John Joseph M. [Stanford Univ., Stanford, CA (United States); Hertzberg, Mark P. [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States); Senatore, Leonardo [Stanford Univ., Stanford, CA (United States); SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s² ≈ 10⁻⁶ c² and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)⁴. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≃ 0.24 h Mpc⁻¹.
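The structure of the leading sound-speed counterterm can be sketched schematically; the toy linear spectrum, the coefficient value and the nonlinear scale below are illustrative, not the values measured in the paper:

```python
import numpy as np

def p_eft(k, p_lin, c_s2_eff, k_nl=1.0):
    """Leading-order EFT correction: the effective-fluid sound speed
    contributes a counterterm ~ -2 c_s^2 (k/k_nl)^2 P_lin(k), damping
    power on short scales relative to the linear prediction."""
    return p_lin * (1.0 - 2.0 * c_s2_eff * (k / k_nl) ** 2)

k = np.linspace(0.01, 0.24, 50)              # h/Mpc, up to the quoted reach
p_lin = 2e4 * k / (1.0 + (k / 0.02) ** 2)    # toy linear spectrum, not a Boltzmann-code output
p_corr = p_eft(k, p_lin, c_s2_eff=0.5)       # illustrative coefficient only
```

The k² scaling of the correction is what lets a single measured coefficient absorb the unknown UV physics at long wavelengths.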

  10. Temporal flexibility and careers: The role of large-scale organizations for physicians

    OpenAIRE

    Forrest Briscoe

    2006-01-01

    Temporal flexibility and careers: The role of large-scale organizations for physicians. Forrest Briscoe. This study investigates how employment in large-scale organizations affects the work lives of practicing physicians. Well-established theory associates larger organizations with bureaucratic constraint, loss of workplace control, and dissatisfaction, but this author finds that large scale is also associated with greater schedule and career flexibility. Ironically, the bureaucratic p...

  11. The role of large scale motions on passive scalar transport

    Science.gov (United States)

    Dharmarathne, Suranga; Araya, Guillermo; Tutkun, Murat; Leonardi, Stefano; Castillo, Luciano

    2014-11-01

    We study direct numerical simulation (DNS) of turbulent channel flow at Reτ = 394 to investigate the effect of large-scale motions on the fluctuating temperature field, which forms a passive scalar field. A statistical description of the large-scale features of the turbulent channel flow is obtained using two-point correlations of velocity components. Two-point correlations of the fluctuating temperature field are also examined in order to identify possible similarities between the velocity and temperature fields. The two-point cross-correlations between the velocity and temperature fluctuations are further analyzed to establish connections between these two fields. In addition, we use proper orthogonal decomposition (POD) to extract the most dominant modes of the fields and discuss the coupling of large-scale features of turbulence and the temperature field.
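The POD step can be sketched via an SVD of the snapshot matrix (the standard method-of-snapshots formulation; the synthetic one-dimensional field below is illustrative, not DNS data):

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Proper orthogonal decomposition via SVD of the fluctuation
    snapshot matrix (n_points x n_snapshots).  Returns the leading
    spatial modes and the energy fraction captured by each."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)
    return U[:, :n_modes], energy[:n_modes]

rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0 * np.pi, 128)
# Synthetic field: one dominant large-scale mode plus small-scale noise
snaps = np.outer(np.sin(x), rng.standard_normal(200)) \
        + 0.1 * rng.standard_normal((128, 200))
modes, energy = pod_modes(snaps, 3)
```

Applying the same decomposition to velocity and temperature snapshots, and comparing the leading modes, is one way to expose the coupling the abstract discusses.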

  12. Signatures of non-universal large scales in conditional structure functions from various turbulent flows

    International Nuclear Information System (INIS)

    Blum, Daniel B; Voth, Greg A; Bewley, Gregory P; Bodenschatz, Eberhard; Gibert, Mathieu; Xu Haitao; Gylfason, Ármann; Mydlarski, Laurent; Yeung, P K

    2011-01-01

    We present a systematic comparison of conditional structure functions in nine turbulent flows. The flows studied include forced isotropic turbulence simulated on a periodic domain, passive grid wind tunnel turbulence in air and in pressurized SF6, active grid wind tunnel turbulence (in both synchronous and random driving modes), the flow between counter-rotating discs, oscillating grid turbulence and the flow in the Lagrangian exploration module (in both constant and random driving modes). We compare longitudinal Eulerian second-order structure functions conditioned on the instantaneous large-scale velocity in each flow to assess the ways in which the large scales affect the small scales in a variety of turbulent flows. Structure functions are shown to have larger values when the large-scale velocity significantly deviates from the mean in most flows, suggesting that dependence on the large scales is typical in many turbulent flows. The effects of the large-scale velocity on the structure functions can be quite strong, with the structure function varying by up to a factor of 2 when the large-scale velocity deviates from the mean by ±2 standard deviations. In several flows, the effects of the large-scale velocity are similar at all the length scales we measured, indicating that the large-scale effects are scale independent. In a few flows, the effects of the large-scale velocity are larger on the smallest length scales. (paper)
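A minimal sketch of a conditional second-order structure function, assuming a 1-D velocity signal and a low-pass filtered copy as the "large-scale" velocity; the synthetic signal and bin choices are illustrative:

```python
import numpy as np

def conditional_structure_function(u, r, u_large, bins):
    """Second-order structure function D(r) = <(u(x+r) - u(x))^2>,
    conditioned on which bin the instantaneous large-scale velocity
    u_large falls into."""
    du2 = (u[r:] - u[:-r]) ** 2
    idx = np.digitize(u_large[:-r], bins)
    return np.array([du2[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(bins))])

rng = np.random.default_rng(3)
u = np.cumsum(rng.standard_normal(10000)) * 0.01           # correlated toy signal
u_large = np.convolve(u, np.ones(500) / 500, mode="same")  # low-pass "large scale"
edges = np.quantile(u_large, [0.0, 0.25, 0.5, 0.75, 1.0])
D = conditional_structure_function(u, 10, u_large, edges)
```

Comparing D across the conditioning bins, as done here per flow and per separation r, reveals whether the small scales respond to large-scale excursions.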

  13. Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.

    Science.gov (United States)

    Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S

    2016-09-26

    In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remains largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers while suggesting a possible mechanism for maintaining epigenetic programming of large-scale chromatin domains throughout DNA replication. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Variability of the Magnetic Field Power Spectrum in the Solar Wind at Electron Scales

    Science.gov (United States)

    Roberts, Owen Wyn; Alexandrova, O.; Kajdič, P.; Turc, L.; Perrone, D.; Escoubet, C. P.; Walsh, A.

    2017-12-01

    At electron scales, the power spectrum of solar-wind magnetic fluctuations can be highly variable, and the dissipation mechanisms of the magnetic energy into the various particle species are under debate. In this paper, we investigate data from the Cluster mission’s STAFF Search Coil magnetometer when the level of turbulence is sufficiently high that the morphology of the power spectrum at electron scales can be investigated. The Cluster spacecraft sample a disturbed interval of plasma where two streams of solar wind interact. Meanwhile, several discontinuities (coherent structures) are seen in the large-scale magnetic field, while at small scales several intermittent bursts of wave activity (whistler waves) are present. Several different morphologies of the power spectrum can be identified: (1) two power laws separated by a break, (2) an exponential cutoff near the Taylor-shifted electron scales, and (3) strong spectral knees at the Taylor-shifted electron scales. These different morphologies are investigated using wavelet coherence, showing that, in this interval, a clear break and strong spectral knees are features associated with sporadic quasi-parallel propagating whistler waves, even for short times. On the other hand, when no signatures of whistler waves at ∼0.1–0.2 f_ce are present, a clear break is difficult to find and the spectrum is often more characteristic of a power law with an exponential cutoff.
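Identifying the first morphology, two power laws separated by a break, can be sketched by scanning candidate break frequencies in log-log space; the synthetic spectrum and slopes below are illustrative, not Cluster data:

```python
import numpy as np

def find_spectral_break(f, psd):
    """Scan candidate break frequencies; fit a power law (a straight
    line in log-log space) on each side and keep the break location
    that minimizes the total squared residual."""
    lf, lp = np.log10(f), np.log10(psd)
    best = (np.inf, None, None, None)
    for i in range(4, len(f) - 4):
        resid, slopes = 0.0, []
        for part in (slice(None, i), slice(i, None)):
            c = np.polyfit(lf[part], lp[part], 1)
            resid += np.sum((np.polyval(c, lf[part]) - lp[part]) ** 2)
            slopes.append(c[0])
        if resid < best[0]:
            best = (resid, f[i], slopes[0], slopes[1])
    return best[1:]

f = np.logspace(0, 2, 60)                                   # illustrative frequency axis
psd = np.where(f < 10.0, f ** -1.7, 10 ** 1.1 * f ** -2.8)  # continuous break at f = 10
f_break, s_lo, s_hi = find_spectral_break(f, psd)
```

On real, noisy spectra such a fit would need uncertainty estimates, which is why the paper relies on wavelet coherence rather than slope fitting alone.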

  15. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological

  16. Configuration management in large scale infrastructure development

    NARCIS (Netherlands)

    Rijn, T.P.J. van; Belt, H. van de; Los, R.H.

    2000-01-01

    Large Scale Infrastructure (LSI) development projects such as the construction of roads, railways and other civil engineering (water)works are tendered differently today than a decade ago. Traditional workflow requested quotes from construction companies for construction works where the works to be

  17. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven

    2013-01-01

    Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation...
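The price-coordination idea behind dual decomposition can be sketched on a toy problem with quadratic local costs and a shared power budget; the cost parameters are invented for illustration and are not from the paper:

```python
import numpy as np

def dual_decomposition(a, b, total, iters=300, step=0.1):
    """Coordinate flexible units via a price (the dual variable).
    Unit i minimizes its local cost 0.5*a_i*x_i**2 - b_i*x_i + lam*x_i,
    which has the closed-form solution x_i = (b_i - lam)/a_i; the
    coordinator adjusts the price lam by subgradient ascent until the
    shared balance constraint sum(x) == total is met."""
    lam = 0.0
    for _ in range(iters):
        x = (b - lam) / a                  # each unit solves its subproblem locally
        lam += step * (x.sum() - total)    # price update from the imbalance
    return x, lam

a = np.array([1.0, 2.0, 0.5])   # invented local cost curvatures
b = np.array([4.0, 3.0, 2.0])   # invented local cost slopes
x, lam = dual_decomposition(a, b, total=5.0)
```

Only the scalar price and the aggregate consumption cross the network, which is what makes the scheme attractive for large Smart Grid populations.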

  18. Generation of large-scale vortices in compressible helical turbulence

    International Nuclear Information System (INIS)

    Chkhetiani, O.G.; Gvaramadze, V.V.

    1989-01-01

    We consider the generation of large-scale vortices in a compressible self-gravitating turbulent medium. A closed equation describing the evolution of the large-scale vortices in helical turbulence with finite correlation time is obtained. This equation has a form similar to the hydromagnetic dynamo equation, which allows us to call the vortex generation effect the vortex dynamo. It is possible that principally the same mechanism is responsible both for the amplification and maintenance of density waves and magnetic fields in the gaseous disks of spiral galaxies. (author). 29 refs

  19. Dipolar modulation of Large-Scale Structure

    Science.gov (United States)

    Yoon, Mijin

    For the last two decades, we have seen a drastic development of modern cosmology based on various observations such as the cosmic microwave background (CMB), type Ia supernovae, and baryonic acoustic oscillations (BAO). These observational evidences have led us to a great deal of consensus on the cosmological model, so-called LambdaCDM, and tight constraints on the cosmological parameters constituting the model. On the other hand, the advancement of cosmology relies on the cosmological principle: the universe is isotropic and homogeneous on large scales. Testing these fundamental assumptions is crucial and will soon become possible given the planned observations ahead. Dipolar modulation is the largest angular anisotropy of the sky, which is quantified by its direction and amplitude. We measured a huge dipolar modulation in the CMB, which mainly originated from our solar system's motion relative to the CMB rest frame. However, we have not yet acquired consistent measurements of dipolar modulations in large-scale structure (LSS), as they require large sky coverage and a number of well-identified objects. In this thesis, we explore measurement of dipolar modulation in number counts of LSS objects as a test of statistical isotropy. This thesis is based on two papers that were published in peer-reviewed journals. In Chapter 2 [Yoon et al., 2014], we measured a dipolar modulation in number counts of WISE sources matched with 2MASS. In Chapter 3 [Yoon & Huterer, 2015], we investigated requirements for detection of the kinematic dipole in future surveys.

  20. Impact of large-scale tides on cosmological distortions via redshift-space power spectrum

    Science.gov (United States)

    Akitsu, Kazuyuki; Takada, Masahiro

    2018-03-01

    Although large-scale perturbations beyond a finite-volume survey region are not direct observables, these affect measurements of clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way depending on an alignment between the tide, wave vector of small-scale modes and line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to large-scale tide. We then investigate the impact of large-scale tide on estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of the statistical errors, and show that a degradation in the parameter is restored if we can employ the prior on the rms amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained at an accuracy better than the CDM prediction, if the effects up to a larger wave number in the nonlinear regime can be included.

  1. Large-scale Intelligent Transporation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  2. The Hamburg large scale geostrophic ocean general circulation model. Cycle 1

    International Nuclear Information System (INIS)

    Maier-Reimer, E.; Mikolajewicz, U.

    1992-02-01

    The rationale for the Large Scale Geostrophic ocean circulation model (LSG-OGCM) is based on the observations that for a large scale ocean circulation model designed for climate studies, the relevant characteristic spatial scales are large compared with the internal Rossby radius throughout most of the ocean, while the characteristic time scales are large compared with the periods of gravity modes and barotropic Rossby wave modes. In the present version of the model, the fast modes have been filtered out by a conventional technique of integrating the full primitive equations, including all terms except the nonlinear advection of momentum, by an implicit time integration method. The free surface is also treated prognostically, without invoking a rigid lid approximation. The numerical scheme is unconditionally stable and has the additional advantage that it can be applied uniformly to the entire globe, including the equatorial and coastal current regions. (orig.)

  3. Large-scale ground motion simulation using GPGPU

    Science.gov (United States)

    Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.

    2012-12-01

    Huge computation resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumed source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation that has traditionally been conducted by the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the function for GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in two horizontal directions and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong scaling test using a model with about 22 million grids and achieved speed-ups of 3.2 and 7.3 times by using 4 and 16 GPUs. Next, we examined a weak scaling test where the model sizes (number of grids) are increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to the simulation with 22 billion grids using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number
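The strong-scaling figures quoted above translate directly into parallel efficiencies, assuming the speed-ups are measured relative to a single-GPU run:

```python
def scaling_efficiency(speedup, n_gpus):
    """Parallel efficiency: measured speed-up divided by the ideal
    linear speed-up for that many GPUs (1-GPU baseline assumed)."""
    return speedup / n_gpus

# Strong-scaling numbers quoted in the abstract
eff_4 = scaling_efficiency(3.2, 4)     # 0.80
eff_16 = scaling_efficiency(7.3, 16)   # 0.45625
```

The drop from 80% to about 46% efficiency as the GPU count quadruples on a fixed 22-million-grid model is the usual strong-scaling signature: per-GPU work shrinks while communication overhead does not.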

  4. Soft X-ray Emission from Large-Scale Galactic Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S.; O'Dea, C.; Veilleux, S.

    1998-01-01

    Kiloparsec-scale soft X-ray nebulae extend along the galaxy minor axes in several Seyfert galaxies, including NGC 2992, NGC 4388 and NGC 5506. In these three galaxies, the extended X-ray emission observed in ROSAT HRI images has 0.2-2.4 keV X-ray luminosities of 0.4-3.5 x 10(40) erg s(-1) . The X-ray nebulae are roughly co-spatial with the large-scale radio emission, suggesting that both are produced by large-scale galactic outflows. Assuming pressure balance between the radio and X-ray plasmas, the X-ray filling factor is >~ 10(4) times as large as the radio plasma filling factor, suggesting that large-scale outflows in Seyfert galaxies are predominantly winds of thermal X-ray emitting gas. We favor an interpretation in which large-scale outflows originate as AGN-driven jets that entrain and heat gas on kpc scales as they make their way out of the galaxy. AGN- and starburst-driven winds are also possible explanations if the winds are oriented along the rotation axis of the galaxy disk. Since large-scale outflows are present in at least 50 percent of Seyfert galaxies, the soft X-ray emission from the outflowing gas may, in many cases, explain the ``soft excess" X-ray feature observed below 2 keV in X-ray spectra of many Seyfert 2 galaxies.

  5. Pro website development and operations streamlining DevOps for large-scale websites

    CERN Document Server

    Sacks, Matthew

    2012-01-01

    Pro Website Development and Operations gives you the experience you need to create and operate a large-scale production website. Large-scale websites have their own unique set of problems regarding their design-problems that can get worse when agile methodologies are adopted for rapid results. Managing large-scale websites, deploying applications, and ensuring they are performing well often requires a full scale team involving the development and operations sides of the company-two departments that don't always see eye to eye. When departments struggle with each other, it adds unnecessary comp

  6. Non-linear variability in geophysics scaling and fractals

    CERN Document Server

    Lovejoy, S

    1991-01-01

    consequences of broken symmetry (here parity) is studied. In this model, turbulence is dominated by a hierarchy of helical (corkscrew) structures. The authors stress the unique features of such pseudo-scalar cascades as well as the extreme nature of the resulting (intermittent) fluctuations. Intermittent turbulent cascades were also the theme of a paper by us in which we show that universality classes exist for continuous cascades (in which an infinite number of cascade steps occur over a finite range of scales). This result is the multiplicative analogue of the familiar central limit theorem for the addition of random variables. Finally, an interesting paper by Pasmanter investigates the scaling associated with anomalous diffusion in a chaotic tidal basin model involving a small number of degrees of freedom. Although the statistical literature is replete with techniques for dealing with those random processes characterized by both exponentially decaying (non-scaling) autocorrelations and exponentially decaying...

  7. Neutrinos and large-scale structure

    International Nuclear Information System (INIS)

    Eisenstein, Daniel J.

    2015-01-01

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  9. Evaluation of Large-scale Public Sector Reforms

    DEFF Research Database (Denmark)

    Breidahl, Karen Nielsen; Gjelstrup, Gunnar; Hansen, Hanne Foss

    2017-01-01

    and more delimited policy areas take place. In our analysis we apply four governance perspectives (rational-instrumental, rational-interest based, institutional-cultural and a chaos perspective) in a comparative analysis of the evaluations of two large-scale public sector reforms in Denmark and Norway. We...

  10. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example sharing cabs by grouping "closeby" cab requests, thus minimizing transportation cost and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...
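A minimal sketch of the greedy grouping idea, assuming Euclidean request locations, a fixed cab capacity and a pickup radius; all parameters are illustrative, and this is a baseline heuristic rather than the paper's algorithms:

```python
import numpy as np

def group_requests(points, capacity, radius):
    """Greedy trip grouping: each request joins the first open group
    whose seed lies within `radius` and that still has a free seat;
    otherwise it seeds a new group.  Cost is O(n * n_groups), versus
    the quadratic (or worse) cost of exhaustively pairing requests."""
    groups = []  # list of (seed_point, member_indices)
    for i, p in enumerate(points):
        for seed, members in groups:
            if len(members) < capacity and np.linalg.norm(p - seed) <= radius:
                members.append(i)
                break
        else:
            groups.append((p, [i]))
    return groups

rng = np.random.default_rng(4)
requests = rng.uniform(0.0, 10.0, size=(100, 2))  # illustrative pickup points
groups = group_requests(requests, capacity=4, radius=1.0)
```

Scalable implementations replace the linear scan over groups with a spatial index, so only near-by candidate groups are examined per request.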

  11. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...
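The reason a GLAM fit can avoid materializing the tensor-product design matrix is the Kronecker identity (B ⊗ A) vec(Θ) = vec(A Θ Bᵀ); a two-dimensional sketch with invented dimensions:

```python
import numpy as np

def kron_mv(X1, X2, theta):
    """Evaluate (X2 kron X1) @ vec(theta) without forming the Kronecker
    product, via the identity (B kron A) vec(T) = vec(A T B^T).
    This is the array trick that keeps large-scale GLAM fitting feasible
    when the full tensor-product design matrix would not fit in memory."""
    return (X1 @ theta @ X2.T).ravel(order="F")

rng = np.random.default_rng(5)
X1 = rng.standard_normal((30, 4))    # marginal design matrix, dimension 1
X2 = rng.standard_normal((20, 5))    # marginal design matrix, dimension 2
theta = rng.standard_normal((4, 5))  # coefficient array

fast = kron_mv(X1, X2, theta)
slow = np.kron(X2, X1) @ theta.ravel(order="F")  # explicit (600 x 20) matrix
```

The matrix-free product touches only the small marginal matrices, so memory grows with the sum of the marginal sizes rather than their product.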

  12. Large-scale coastal impact induced by a catastrophic storm

    DEFF Research Database (Denmark)

    Fruergaard, Mikkel; Andersen, Thorbjørn Joest; Johannessen, Peter N

    breaching. Our results demonstrate that violent, millennial-scale storms can trigger significant large-scale and long-term changes on barrier coasts, and that coastal changes assumed to take place over centuries or even millennia may occur in association with a single extreme storm event....

  13. Large-eddy simulation with accurate implicit subgrid-scale diffusion

    NARCIS (Netherlands)

    B. Koren (Barry); C. Beets

    1996-01-01

    textabstractA method for large-eddy simulation is presented that does not use an explicit subgrid-scale diffusion term. Subgrid-scale effects are modelled implicitly through an appropriate monotone (in the sense of Spekreijse 1987) discretization method for the advective terms. Special attention is

  14. Challenges for Large Scale Structure Theory

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    I will describe some of the outstanding questions in Cosmology where answers could be provided by observations of the Large Scale Structure of the Universe at late times. I will discuss some of the theoretical challenges which will have to be overcome to extract this information from the observations. I will describe some of the theoretical tools that might be useful to achieve this goal.

  15. Macroecological factors explain large-scale spatial population patterns of ancient agriculturalists

    NARCIS (Netherlands)

    Xu, C.; Chen, B.; Abades, S.; Reino, L.; Teng, S.; Ljungqvist, F.C.; Huang, Z.Y.X.; Liu, X.

    2015-01-01

    Aim: It has been well demonstrated that the large-scale distribution patterns of numerous species are driven by similar macroecological factors. However, understanding of this topic remains limited when applied to our own species. Here we take a large-scale look at ancient agriculturalist

  16. Large Scale Investments in Infrastructure : Competing Policy regimes to Control Connections

    NARCIS (Netherlands)

    Otsuki, K.; Read, M.L.; Zoomers, E.B.

    2016-01-01

    This paper proposes to analyse implications of large-scale investments in physical infrastructure for social and environmental justice. While case studies on the global land rush and climate change have advanced our understanding of how large-scale investments in land, forests and water affect

  17. Rotation invariant fast features for large-scale recognition

    Science.gov (United States)

    Takacs, Gabriel; Chandrasekhar, Vijay; Tsai, Sam; Chen, David; Grzeszczuk, Radek; Girod, Bernd

    2012-10-01

    We present an end-to-end feature description pipeline which uses a novel interest point detector and Rotation-Invariant Fast Feature (RIFF) descriptors. The proposed RIFF algorithm is 15× faster than SURF while producing large-scale retrieval results that are comparable to SIFT. Such high-speed features benefit a range of applications from Mobile Augmented Reality (MAR) to web-scale image retrieval and analysis.

  18. Large-scale bioenergy production: how to resolve sustainability trade-offs?

    Science.gov (United States)

    Humpenöder, Florian; Popp, Alexander; Bodirsky, Benjamin Leon; Weindl, Isabelle; Biewald, Anne; Lotze-Campen, Hermann; Dietrich, Jan Philipp; Klein, David; Kreidenweis, Ulrich; Müller, Christoph; Rolinski, Susanne; Stevanovic, Miodrag

    2018-02-01

    Large-scale 2nd generation bioenergy deployment is a key element of 1.5 °C and 2 °C transformation pathways. However, large-scale bioenergy production might have negative sustainability implications and thus may conflict with the Sustainable Development Goal (SDG) agenda. Here, we carry out a multi-criteria sustainability assessment of large-scale bioenergy crop production throughout the 21st century (300 EJ in 2100) using a global land-use model. Our analysis indicates that large-scale bioenergy production without complementary measures results in negative effects on the following sustainability indicators: deforestation, CO2 emissions from land-use change, nitrogen losses, unsustainable water withdrawals and food prices. One of our main findings is that single-sector environmental protection measures next to large-scale bioenergy production are prone to involve trade-offs among these sustainability indicators—at least in the absence of more efficient land or water resource use. For instance, if bioenergy production is accompanied by forest protection, deforestation and associated emissions (SDGs 13 and 15) decline substantially whereas food prices (SDG 2) increase. However, our study also shows that this trade-off strongly depends on the development of future food demand. In contrast to environmental protection measures, we find that agricultural intensification lowers some side-effects of bioenergy production substantially (SDGs 13 and 15) without generating new trade-offs—at least among the sustainability indicators considered here. Moreover, our results indicate that a combination of forest and water protection schemes, improved fertilization efficiency, and agricultural intensification would reduce the side-effects of bioenergy production most comprehensively. However, although our study includes more sustainability indicators than previous studies on bioenergy side-effects, our study represents only a small subset of all indicators relevant for the

  19. Large-scale structure in the universe: Theory vs observations

    International Nuclear Information System (INIS)

    Kashlinsky, A.; Jones, B.J.T.

    1990-01-01

    A variety of observations constrain models of the origin of large scale cosmic structures. We review here the elements of current theories and comment in detail on which of the current observational data provide the principal constraints. We point out that enough observational data have accumulated to constrain (and perhaps determine) the power spectrum of primordial density fluctuations over a very large range of scales. We discuss the theories in the light of observational data and focus on the potential of future observations in providing even (and ever) tighter constraints. (orig.)

  20. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an
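
The drought characteristics evaluated in this study (number of events, duration, severity) are commonly derived with a threshold-level method. A minimal sketch follows, using an invented runoff series rather than WaterMIP output:

```python
def drought_events(series, threshold):
    """Identify drought events as runs below a fixed threshold.

    Returns a list of (start_index, duration, severity) tuples, where
    severity is the cumulative deficit below the threshold.
    """
    events = []
    start, deficit = None, 0.0
    for i, v in enumerate(series):
        if v < threshold:
            if start is None:
                start, deficit = i, 0.0
            deficit += threshold - v
        elif start is not None:
            events.append((start, i - start, deficit))
            start = None
    if start is not None:  # event still running at the end of the record
        events.append((start, len(series) - start, deficit))
    return events

# Illustrative monthly runoff series (arbitrary units), threshold 5.0
runoff = [8, 6, 4, 3, 6, 7, 2, 1, 1, 6, 9, 8]
print(drought_events(runoff, 5.0))  # -> [(2, 2, 3.0), (6, 3, 11.0)]
```

Pooling of mutually dependent events (mentioned above as a propagation feature) would be layered on top of this, e.g. by merging events separated by short inter-event gaps.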

  1. Mass-flux subgrid-scale parameterization in analogy with multi-component flows: a formulation towards scale independence

    Directory of Open Access Journals (Sweden)

    J.-I. Yano

    2012-11-01

    A generalized mass-flux formulation is presented, which no longer takes a limit of vanishing fractional areas for subgrid-scale components. The presented formulation is applicable to a situation in which the scale separation is still satisfied, but fractional areas occupied by individual subgrid-scale components are no longer small. A self-consistent formulation is presented by generalizing the mass-flux formulation under the segmentally-constant approximation (SCA) to the grid-scale variabilities. The present formulation is expected to alleviate problems arising from increasing resolutions of operational forecast models without invoking more extensive overhaul of parameterizations.

    The present formulation leads to an analogy of the large-scale atmospheric flow with multi-component flows. This analogy makes it possible to include any subgrid-scale variability in the mass-flux parameterization under SCA, including stratiform clouds as well as cold pools in the boundary layer.

    An important finding under the present formulation is that the subgrid-scale quantities are advected by the large-scale velocities characteristic of given subgrid-scale components (large-scale subcomponent flows), rather than by the total large-scale flow as simply defined by the grid-box average. In this manner, each subgrid-scale component behaves like a component of a multi-component flow. The formulation, as a result, ensures the lateral interaction of subgrid-scale variability across grid boxes, which is missing in current parameterizations based on vertical one-dimensional models, and leads to a reduction of the grid-size dependence of its performance. It is shown that the large-scale subcomponent flows are driven by large-scale subcomponent pressure gradients. The formulation, as a result, furthermore includes a self-contained description of subgrid-scale momentum transport.

    The main purpose of the present paper

  2. The Landscape Evolution Observatory: a large-scale controllable infrastructure to study coupled Earth-surface processes

    Science.gov (United States)

    Pangle, Luke A.; DeLong, Stephen B.; Abramson, Nate; Adams, John; Barron-Gafford, Greg A.; Breshears, David D.; Brooks, Paul D.; Chorover, Jon; Dietrich, William E.; Dontsova, Katerina; Durcik, Matej; Espeleta, Javier; Ferré, T.P.A.; Ferriere, Regis; Henderson, Whitney; Hunt, Edward A.; Huxman, Travis E.; Millar, David; Murphy, Brendan; Niu, Guo-Yue; Pavao-Zuckerman, Mitch; Pelletier, Jon D.; Rasmussen, Craig; Ruiz, Joaquin; Saleska, Scott; Schaap, Marcel; Sibayan, Michael; Troch, Peter A.; Tuller, Markus; van Haren, Joost; Zeng, Xubin

    2015-01-01

    Zero-order drainage basins, and their constituent hillslopes, are the fundamental geomorphic unit comprising much of Earth's uplands. The convergent topography of these landscapes generates spatially variable substrate and moisture content, facilitating biological diversity and influencing how the landscape filters precipitation and sequesters atmospheric carbon dioxide. In light of these significant ecosystem services, refining our understanding of how these functions are affected by landscape evolution, weather variability, and long-term climate change is imperative. In this paper we introduce the Landscape Evolution Observatory (LEO): a large-scale controllable infrastructure consisting of three replicated artificial landscapes (each 330 m2 surface area) within the climate-controlled Biosphere 2 facility in Arizona, USA. At LEO, experimental manipulation of rainfall, air temperature, relative humidity, and wind speed is possible at unprecedented scale. The Landscape Evolution Observatory was designed as a community resource to advance understanding of how topography, physical and chemical properties of soil, and biological communities coevolve, and how this coevolution affects water, carbon, and energy cycles at multiple spatial scales. With well-defined boundary conditions and an extensive network of sensors and samplers, LEO enables an iterative scientific approach that includes numerical model development and virtual experimentation, physical experimentation, data analysis, and model refinement. We plan to engage the broader scientific community through public dissemination of data from LEO, collaborative experimental design, and community-based model development.

  3. Quantifying the Impacts of Large Scale Integration of Renewables in Indian Power Sector

    Science.gov (United States)

    Kumar, P.; Mishra, T.; Banerjee, R.

    2017-12-01

    India's power sector is responsible for nearly 37 percent of India's greenhouse gas emissions. For a fast-emerging economy like India, whose population and energy consumption are poised to rise rapidly in the coming decades, renewable energy can play a vital role in decarbonizing the power sector. In this context, India has targeted a 33-35 percent reduction in emission intensity (with respect to 2005 levels), along with large-scale renewable energy targets (100 GW solar, 60 GW wind, and 10 GW biomass energy by 2022), in the INDCs submitted under the Paris Agreement. But large-scale integration of renewable energy is a complex process which faces a number of challenges, such as capital intensiveness, matching intermittent generation to loads with limited storage capacity, and reliability. In this context, this study attempts to assess the technical feasibility of integrating renewables into the Indian electricity mix by 2022 and analyze the implications for power sector operations. This study uses TIMES, a bottom-up energy optimization model with unit commitment and dispatch features. We model coal- and gas-fired units discretely, with region-wise representation of wind and solar resources. The dispatch features are used for operational analysis of power plant units under ramp rate and minimum generation constraints. The study analyzes India's electricity sector transition for the year 2022 with three scenarios: a base case (no RE addition), an INDC scenario (100 GW solar, 60 GW wind, 10 GW biomass), and a low-RE scenario (50 GW solar, 30 GW wind), created to analyze the implications of large-scale integration of variable renewable energy. The results provide insights into the trade-offs involved in achieving mitigation targets and the associated investment decisions. The study also examines the operational reliability and flexibility requirements of the system for integrating renewables.

  4. Use of a large-scale rainfall simulator reveals novel insights into stemflow generation

    Science.gov (United States)

    Levia, D. F., Jr.; Iida, S. I.; Nanko, K.; Sun, X.; Shinohara, Y.; Sakai, N.

    2017-12-01

    Detailed knowledge of stemflow generation and its effects on both hydrological and biogeochemical cycling is important to achieve a holistic understanding of forest ecosystems. Field studies and a smaller set of experiments performed under laboratory conditions have increased our process-based knowledge of stemflow production. Building upon these earlier works, a large-scale rainfall simulator was employed to deepen our understanding of stemflow generation processes. The large-scale rainfall simulator provides a unique opportunity to examine a range of rainfall intensities under constant conditions, which is difficult in the field owing to the variable nature of natural rainfall. Stemflow generation was examined for three species: Cryptomeria japonica D. Don (Japanese cedar), Chamaecyparis obtusa (Siebold & Zucc.) Endl. (Japanese cypress), and Zelkova serrata Thunb. (Japanese zelkova), under both leafed and leafless conditions at several rainfall intensities (15, 20, 30, 40, 50, and 100 mm h-1), using the large-scale rainfall simulator at the National Research Institute for Earth Science and Disaster Resilience (Tsukuba, Japan). Stemflow production rates and funneling ratios were examined in relation to both rainfall intensity and canopy structure. Preliminary results indicate a dynamic and complex response of the funneling ratios of individual trees to different rainfall intensities among the species examined. This is partly the result of different canopy structures, hydrophobicity of vegetative surfaces, and differential wet-up processes across species and rainfall intensities. This presentation delves into these differences and attempts to distill them into generalizable patterns, which can advance our theories of stemflow generation processes and ultimately permit better stewardship of forest resources.
    Funding note: This research was supported by JSPS Invitation Fellowship for Research in
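
The funneling ratio examined above compares the stemflow a trunk collects with the rain that would have fallen on its basal area in the open (the standard definition following Herwitz 1986). A minimal sketch, with hypothetical numbers:

```python
def funneling_ratio(stemflow_l, basal_area_m2, rainfall_mm):
    """Funneling ratio: stemflow volume relative to the rain that would
    have reached the trunk's basal area in the open (Herwitz 1986)."""
    stemflow_m3 = stemflow_l / 1000.0   # litres -> m^3
    rainfall_m = rainfall_mm / 1000.0   # mm of rain -> m of water depth
    return stemflow_m3 / (basal_area_m2 * rainfall_m)

# Hypothetical tree: 50 L of stemflow, 0.05 m^2 basal area, 20 mm event
print(funneling_ratio(50.0, 0.05, 20.0))  # ≈ 50: the trunk concentrates ~50x the rain on its basal area
```

Ratios well above 1, as in this invented example, indicate that the canopy funnels water toward the stem rather than merely intercepting what falls directly on the trunk.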

  5. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products along with ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  6. Toward Instructional Leadership: Principals' Perceptions of Large-Scale Assessment in Schools

    Science.gov (United States)

    Prytula, Michelle; Noonan, Brian; Hellsten, Laurie

    2013-01-01

    This paper describes a study of the perceptions that Saskatchewan school principals have regarding large-scale assessment reform and their perceptions of how assessment reform has affected their roles as principals. The findings revealed that large-scale assessments, especially provincial assessments, have affected the principal in Saskatchewan…

  7. A large scale field experiment in the Amazon basin (LAMBADA/BATERISTA)

    NARCIS (Netherlands)

    Dolman, A.J.; Kabat, P.; Gash, J.H.C.; Noilhan, J.; Jochum, A.M.; Nobre, C.

    1995-01-01

    A description is given of a large-scale field experiment planned in the Amazon basin, aimed at assessing the large-scale balances of energy, water and carbon dioxide. The embedding of this experiment in global change programmes is described, viz. the Biospheric Aspects of the Hydrological Cycle

  8. Organisationally relevant variables and Keyes's Mental Health Continuum Scale: An exploratory study

    Directory of Open Access Journals (Sweden)

    Deo J.W. Strümpfer

    2009-09-01

    In an exploratory study on a sample of convenience (n = 165), 11 self-report variables with presumed organisational relevance were related, as predictors, to the three subscores and summed score of the Keyes (2005a, 2005b, 2007) Mental Health Continuum scale (long form). Keyes's scale was administered five to seven days after the first set of scales. The predictor scores were reduced to three factorial scores, labelled positive orientation, negative orientation and positive striving. When classified thus, the predictor variables showed significant and meaningful relationships with some or all of the Keyes subscores and the total score, although few reached medium effect sizes.

  9. Climatological changing effects on wind, precipitation and erosion: Large, meso and small scale analysis

    International Nuclear Information System (INIS)

    Aslan, Z.

    2004-01-01

    The Fourier transformation analysis for monthly average values of meteorological parameters has been considered, and amplitudes and phase angles have been calculated using ground measurements in Turkey. The first-order harmonics of meteorological parameters show large-scale effects, while higher-order harmonics show the effects of small-scale fluctuations. The variations of the first through sixth order harmonic amplitudes and phases provide a useful means of understanding the large- and local-scale effects on meteorological parameters. The phase angle can be used to determine the time of year at which the maximum or minimum of a given harmonic occurs. The analysis helps to distinguish different pressure, relative humidity, temperature, precipitation and wind speed regimes and transition regions. Local and large-scale phenomena and some unusual seasonal patterns are also identified near the Keban Dam and the irrigation area. Analysis of precipitation based on long-term data shows that semi-annual fluctuations are predominant in the study area. Similarly, pressure variations are mostly influenced by semi-annual fluctuations. Temperature and humidity variations are mostly influenced by meso- and micro-scale fluctuations. Many large- and meso-scale climate change simulations for the 21st century are based on concentrations of greenhouse gases. A better understanding of these effects on soil erosion is necessary to determine the social, economic and other impacts of erosion. The second part of this study covers the time series analysis of precipitation, rainfall erosivity and wind erosion in the Marmara Region. Rainfall and runoff erosivity factors are defined by considering the results of field measurements at 10 stations. Climatological change effects on rainfall erosion have been determined by monitoring meteorological variables. In previous studies, the Fournier Index was defined to estimate the rainfall erosivity for the study area.
The Fournier Index or in other words a climatic index
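
The harmonic decomposition described above (amplitude of the k-th harmonic, plus a phase angle that gives the time of year of its maximum) can be sketched with a discrete Fourier transform. The monthly series below is synthetic, not Turkish station data:

```python
import numpy as np

def harmonic(series, k=1):
    """Amplitude and peak time (in sample units) of the k-th harmonic."""
    n = len(series)
    f = np.fft.rfft(series)
    amplitude = 2.0 * np.abs(f[k]) / n
    # For c*cos(2*pi*k*(t - t0)/n), angle(f[k]) = -2*pi*k*t0/n,
    # so the phase angle recovers the time of the harmonic's maximum.
    t_peak = (-np.angle(f[k]) * n / (2.0 * np.pi * k)) % (n / k)
    return amplitude, t_peak

# Synthetic monthly temperature: annual cycle peaking at month index 3
t = np.arange(12)
temp = 10.0 + 5.0 * np.cos(2.0 * np.pi * (t - 3) / 12.0)
amp, peak = harmonic(temp, k=1)
print(round(amp, 6), round(peak, 6))  # -> 5.0 3.0
```

The first harmonic (k=1) captures the annual cycle; k=2 would capture the semi-annual fluctuations that the study finds dominant for precipitation and pressure.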

  10. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Nusser, Adi [Physics Department and the Asher Space Science Institute-Technion, Haifa 32000 (Israel); Branchini, Enzo [Department of Physics, Universita Roma Tre, Via della Vasca Navale 84, 00146 Rome (Italy); Davis, Marc, E-mail: adi@physics.technion.ac.il, E-mail: branchin@fis.uniroma3.it, E-mail: mdavis@berkeley.edu [Departments of Astronomy and Physics, University of California, Berkeley, CA 94720 (United States)

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.
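
The link between an observed proper motion and the 2D transverse velocity it implies uses the standard conversion factor 4.74 km/s per (arcsec/yr x pc). A minimal sketch with hypothetical numbers, not Gaia data:

```python
# 4.74 km/s is the transverse speed of an object at 1 pc
# with a proper motion of 1 arcsec/yr (1 AU/yr expressed in km/s).
KMS_PER_ARCSEC_YR_PC = 4.74

def transverse_velocity(mu_arcsec_per_yr, distance_pc):
    """Transverse velocity (km/s) from proper motion and distance."""
    return KMS_PER_ARCSEC_YR_PC * mu_arcsec_per_yr * distance_pc

# A hypothetical galaxy at 10 Mpc with a 1 microarcsecond/yr proper motion
v = transverse_velocity(1e-6, 10e6)
print(v)  # ≈ 47.4 km/s
```

The tiny proper motions in this regime (microarcseconds per year) illustrate why an astrometric mission of Gaia's precision is needed to probe large-scale flows this way.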

  11. Large-scale hydrogen production using nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Ryland, D.; Stolberg, L.; Kettner, A.; Gnanapragasam, N.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    For many years, Atomic Energy of Canada Limited (AECL) has been studying the feasibility of using nuclear reactors, such as the Supercritical Water-cooled Reactor, as an energy source for large-scale hydrogen production processes such as High Temperature Steam Electrolysis and the Copper-Chlorine thermochemical cycle. Recent progress includes the augmentation of AECL's experimental capabilities by the construction of experimental systems to test high-temperature steam electrolysis button cells at ambient pressure and temperatures up to 850 °C, and CuCl/HCl electrolysis cells at pressures up to 7 bar and temperatures up to 100 °C. In parallel, detailed models of solid oxide electrolysis cells and the CuCl/HCl electrolysis cell are being refined and validated using experimental data. Process models are also under development to assess options for economic integration of these hydrogen production processes with nuclear reactors. Options for large-scale energy storage, including hydrogen storage, are also under study. (author)

  12. Environmental variables measured at multiple spatial scales exert uneven influence on fish assemblages of floodplain lakes

    Science.gov (United States)

    Dembkowski, Daniel J.; Miranda, Leandro E.

    2014-01-01

    We examined the interaction between environmental variables measured at three different scales (i.e., landscape, lake, and in-lake) and fish assemblage descriptors across a range of over 50 floodplain lakes in the Mississippi Alluvial Valley of Mississippi and Arkansas. Our goal was to identify important local- and landscape-level determinants of fish assemblage structure. Relationships between fish assemblage structure and variables measured at broader scales (i.e., landscape-level and lake-level) were hypothesized to be stronger than relationships with variables measured at finer scales (i.e., in-lake variables). Results suggest that fish assemblage structure in floodplain lakes was influenced by variables operating on three different scales. However, and contrary to expectations, canonical correlations between in-lake environmental characteristics and fish assemblage structure were generally stronger than correlations between landscape-level and lake-level variables and fish assemblage structure, suggesting a hierarchy of influence. From a resource management perspective, our study suggests that landscape-level and lake-level variables may be manipulated for conservation or restoration purposes, and in-lake variables and fish assemblage structure may be used to monitor the success of such efforts.
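
The canonical correlations reported here, between a set of environmental variables and a set of assemblage descriptors, can be computed by whitening each variable set and taking the singular values of the cross-covariance. A minimal NumPy sketch on synthetic data (not the floodplain-lake dataset):

```python
import numpy as np

def canonical_correlations(X, Y, eps=1e-10):
    """Canonical correlations between column sets X (n x p) and Y (n x q)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)

    def inv_sqrt(C):
        # Inverse square root of a covariance matrix via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

    Sxx = Xc.T @ Xc / len(X)
    Syy = Yc.T @ Yc / len(Y)
    Sxy = Xc.T @ Yc / len(X)
    # Singular values of the whitened cross-covariance are the canonical correlations
    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                  # "environmental" set
Y = X @ rng.normal(size=(3, 2)) + 1e-6 * rng.normal(size=(200, 2))  # "assemblage" set
print(canonical_correlations(X, Y)[0])  # first canonical correlation, close to 1 since Y is built from X
```

Comparing the leading canonical correlation across the three variable scales is the kind of analysis that underlies the "hierarchy of influence" finding above.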

  13. Planck intermediate results XLII. Large-scale Galactic magnetic fields

    DEFF Research Database (Denmark)

    Adam, R.; Ade, P. A. R.; Alves, M. I. R.

    2016-01-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured ...

  14. Modified Principal Component Analysis for Identifying Key Environmental Indicators and Application to a Large-Scale Tidal Flat Reclamation

    Directory of Open Access Journals (Sweden)

    Kejian Chu

    2018-01-01

    Identification of the key environmental indicators (KEIs) from a large number of environmental variables is important for environmental management in tidal flat reclamation areas. In this study, a modified principal component analysis approach (MPCA) has been developed for determining the KEIs. The MPCA accounts for two important attributes of the environmental variables, pollution status and temporal variation, in addition to the commonly considered numerical divergence attribute. It also incorporates the distance correlation (dCor) in place of Pearson's correlation to measure nonlinear interrelationships between the variables. The proposed method was applied to the Tiaozini sand shoal, a large-scale tidal flat reclamation region in China. Five KEIs were identified: dissolved inorganic nitrogen, Cd, petroleum in the water column, Hg, and total organic carbon in the sediment. The identified KEIs were shown to respond well to the biodiversity of phytoplankton. This demonstrates that the identified KEIs adequately represent the environmental condition of the coastal marine system. The MPCA is therefore a practicable method for extracting effective indicators that play key roles in the coastal and marine environment.
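
The distance correlation that the MPCA substitutes for Pearson's correlation is computed from double-centred pairwise distance matrices (Székely et al. 2007). A minimal sketch on synthetic 1-D samples, not the Tiaozini data:

```python
import numpy as np

def distance_correlation(x, y):
    """Empirical distance correlation between two 1-D samples."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[:, None]

    def centered(a):
        # Pairwise distance matrix, double-centred (row, column, grand means)
        d = np.abs(a - a.T)
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

    A, B = centered(x), centered(y)
    dcov2 = (A * B).mean()
    dvar = np.sqrt((A * A).mean() * (B * B).mean())
    return 0.0 if dvar == 0 else np.sqrt(max(dcov2, 0.0) / dvar)

x = np.linspace(-1.0, 1.0, 50)
print(round(distance_correlation(x, x), 6))  # -> 1.0
print(distance_correlation(x, x**2))         # well above 0, despite zero Pearson correlation
```

The second call illustrates the motivation: a purely quadratic dependence gives Pearson correlation near zero but a clearly nonzero dCor, which is why it suits the nonlinear interrelationships the MPCA targets.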

  15. A Topology Visualization Early Warning Distribution Algorithm for Large-Scale Network Security Incidents

    Directory of Open Access Journals (Sweden)

    Hui He

    2013-01-01

    It is of great significance to research early warning systems for large-scale network security incidents. Such a system can improve the network's emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithm and technology of the system are mainly discussed. Plane visualization of the large-scale network system is realized based on a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, a subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale network topologies are combined into a single topology using an automatic distribution algorithm based on force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale topology visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.

  16. No Large Scale Curvature Perturbations during Waterfall of Hybrid Inflation

    OpenAIRE

    Abolhasani, Ali Akbar; Firouzjahi, Hassan

    2010-01-01

    In this paper the possibility of generating large-scale curvature perturbations induced from the entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. We show that whether or not appreciable amounts of large-scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical back-reactions to terminate inflation. If one considers only the clas...

  17. Large Scale Emerging Properties from Non Hamiltonian Complex Systems

    Directory of Open Access Journals (Sweden)

    Marco Bianucci

    2017-06-01

    The concept of “large scale” depends obviously on the phenomenon we are interested in. For example, in the field of the foundations of thermodynamics from microscopic dynamics, the large spatial and time scales are of the order of fractions of a millimetre and of microseconds, respectively, or less, and are defined in relation to the spatial and time scales of the microscopic systems. In large-scale oceanography or global climate dynamics problems, the scales of interest are of the order of thousands of kilometres in space and many years in time, and are compared to the local and daily/monthly scales of atmosphere and ocean dynamics. In all these cases a Zwanzig projection approach is, at least in principle, an effective tool for obtaining a class of universal smooth “large-scale” dynamics for the few degrees of freedom of interest, starting from the complex dynamics of the whole (usually many-degrees-of-freedom) system. The projection approach leads to a very complex calculus with differential operators, which is drastically simplified when the basic dynamics of the system of interest is Hamiltonian, as happens in foundations-of-thermodynamics problems. However, in geophysical fluid dynamics, biology, and most physical problems, the fundamental building-block equations of motion have a non-Hamiltonian structure. Thus, to continue to apply the useful projection approach in these cases as well, we exploit the generalization of the Hamiltonian formalism given by the Lie algebra of dissipative differential operators. In this way, we are able to deal analytically with the series of differential operators stemming from the projection approach applied to these general cases. We then apply this formalism to obtain some relevant results concerning the statistical properties of the El Niño Southern Oscillation (ENSO).

  18. Large-scale water projects in the developing world: Revisiting the past and looking to the future

    Science.gov (United States)

    Sivakumar, Bellie; Chen, Ji

    2014-05-01

    During the past half-century or so, the developing world has been witnessing a significant increase in freshwater demands due to a combination of factors, including population growth, increased food demand, improved living standards, and water quality degradation. Since there is significant variability in rainfall and river flow in both space and time, large-scale storage and distribution of water has become a key means of meeting these increasing demands. In this regard, large dams and water transfer schemes (including river-linking schemes and virtual water trades) have been playing a key role. While the benefits of such large-scale projects in supplying water for domestic, irrigation, industrial, hydropower, recreational, and other uses, both in the countries of their development and in other countries, are undeniable, concerns about their negative impacts, such as high initial costs and damage to ecosystems (e.g. river environments and species) and the socio-economic fabric (e.g. relocation and socio-economic changes of affected people), have also been increasing in recent years. These concerns have led to serious debates on the role of large-scale water projects in the developing world and on their future, but the often one-sided nature of such debates has inevitably failed to yield fruitful outcomes thus far. The present study aims to offer a far more balanced perspective on this issue. First, it recognizes and emphasizes the need for still more large-scale water structures in the developing world in the future, owing to the continuing increase in water demands, inefficiency in water use (especially in the agricultural sector), and the absence of equivalent and reliable alternatives. Next, it reviews a few important success and failure stories of large-scale water projects in the developing world (and in the developed world), in an effort to arrive at a balanced view of the future role of such projects.
Then, it discusses some major challenges in future water planning

  19. A new system of labour management in African large-scale agriculture?

    DEFF Research Database (Denmark)

    Gibbon, Peter; Riisgaard, Lone

    2014-01-01

    This paper applies a convention theory (CT) approach to the analysis of labour management systems in African large-scale farming. The reconstruction of previous analyses of high-value crop production on large-scale farms in Africa in terms of CT suggests that, since 1980–95, labour management has...

  20. Pseudoscalar-photon mixing and the large scale alignment of QSO ...

    Indian Academy of Sciences (India)

    physics pp. 679-682. Pseudoscalar-photon mixing and the large scale alignment of QSO optical polarizations. PANKAJ JAIN, SUKANTA PANDA and S. SARALA. Physics Department, Indian Institute of Technology, Kanpur 208 016, India. Abstract. We review the observation of large scale alignment of QSO optical polarizations.

  1. On the universal character of the large scale structure of the universe

    International Nuclear Information System (INIS)

    Demianski, M.; International Center for Relativistic Astrophysics; Rome Univ.; Doroshkevich, A.G.

    1991-01-01

    We review different theories of formation of the large scale structure of the Universe. Special emphasis is put on the theory of inertial instability. We show that for a large class of initial spectra the resulting two-point correlation functions are similar. We also discuss the adhesion theory, which uses the Burgers equation, the Navier-Stokes equation, or a coagulation process. We review the Zeldovich theory of gravitational instability and discuss the internal structure of pancakes. Finally, we discuss the role of the velocity potential in determining the global characteristics of large scale structures (distribution of caustics, scale of voids, etc.). In the last chapter we list the main unsolved problems and the main successes of the theory of formation of large scale structure. (orig.)

  2. Investigating the Variability in Cumulus Cloud Number as a Function of Subdomain Size and Organization using large-domain LES

    Science.gov (United States)

    Neggers, R.

    2017-12-01

    Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to deal with this situation both conceptually and practically. A potential way forward is schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include i) the introduction of scale-awareness at the foundation of the scheme, and ii) the possibility of size-filtering parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape and range, but also the variability in cloud number that can appear due to i) subsampling effects and ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a certain size is found to decrease with subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes. 
This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, favoring or inhibiting their formation (as illustrated by the attached figure of cloud mask
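
    The subsampling part of this cloud-number variability can be illustrated with a toy experiment: scatter clouds at random over a large domain, count them in non-overlapping subdomains, and watch the relative count variability fall off inversely with subdomain size. This sketch covers only the random-subsampling baseline (a synthetic, unorganized cloud field; all parameters are illustrative), not the LES analysis or organization effects:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic cumulus field: clouds scattered uniformly over a large domain.
    DOMAIN = 1024                # domain edge length (grid points)
    N_CLOUDS = 4000              # total cloud count
    xy = rng.uniform(0, DOMAIN, size=(N_CLOUDS, 2))

    def cloud_count_std(subdomain):
        """Std. dev. of cloud number across non-overlapping subdomains."""
        n = DOMAIN // subdomain
        ix = (xy[:, 0] // subdomain).astype(int)
        iy = (xy[:, 1] // subdomain).astype(int)
        counts = np.zeros((n, n))
        np.add.at(counts, (ix, iy), 1)   # histogram the clouds per subdomain
        return counts.std()

    sizes = [32, 64, 128, 256]
    stds = [cloud_count_std(s) for s in sizes]

    # For a random (Poisson-like) field the count std grows like the subdomain
    # edge length L while the mean count grows like L**2, so the relative
    # variability std/mean falls off as 1/L -- the inverse-linear dependence.
    means = [N_CLOUDS * (s / DOMAIN) ** 2 for s in sizes]
    rel = [sd / m for sd, m in zip(stds, means)]
    ```

    Organization in the cloud field would add variability on top of this baseline, which is exactly the deviation from the data collapse that the study uses to detect it.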

  3. Tropical intraseasonal rainfall variability in the CFSR

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jiande [I.M. System Group Inc. at NOAA/NCEP/EMC, Camp Springs, MD (United States); Wang, Wanqiu [NOAA/NCEP/CPC, Camp Springs, MD (United States); Fu, Xiouhua [University of Hawaii at Manoa, IPRC, SOEST, Honolulu, HI (United States); Seo, Kyong-Hwan [Pusan National University, Department of Atmospheric Sciences, Busan (Korea, Republic of)

    2012-06-15

    While large-scale circulation fields from atmospheric reanalyses have been widely used to study tropical intraseasonal variability, rainfall variations from the reanalyses have received less attention. Because of the sparseness of in situ observations available in the tropics and the strong coupling between convection and large-scale circulation, the accuracy of tropical rainfall from the reanalyses not only measures the quality of reanalysis rainfall but is also to some extent indicative of the accuracy of the circulation fields. This study analyzes tropical intraseasonal rainfall variability in the recently completed NCEP Climate Forecast System Reanalysis (CFSR) and compares it with the widely used NCEP/NCAR reanalysis (R1) and NCEP/DOE reanalysis (R2). The R1 produces too-weak rainfall variability, while the R2 generates too-strong westward propagation. Compared with the R1 and R2, the CFSR produces greatly improved tropical intraseasonal rainfall variability, with the dominance of eastward propagation and more realistic amplitude. An analysis of the relationship between rainfall and large-scale fields using composites based on Madden-Julian Oscillation (MJO) events shows that, in all three NCEP reanalyses, the moisture convergence leading the rainfall maximum is near the surface in the western Pacific but is above 925 hPa in the eastern Indian Ocean. However, the CFSR produces the strongest large-scale convergence, and the rainfall from the CFSR lags the column-integrated precipitable water by 1 or 2 days, while R1 and R2 rainfall tends to lead the respective precipitable water. Diabatic heating related to the MJO variability in the CFSR is analyzed and compared with that derived from large-scale fields. It is found that the amplitude of the CFSR-produced total heating anomalies is smaller than that of the derived heating. 
Rainfall variability from the other two recently produced reanalyses, the ECMWF Re-Analysis Interim (ERAI), and the Modern Era Retrospective-analysis for Research and

  4. LAVA: Large scale Automated Vulnerability Addition

    Science.gov (United States)

    2016-05-23

    LAVA: Large-scale Automated Vulnerability Addition. Brendan Dolan-Gavitt, Patrick Hulin, Tim Leek, Fredrich Ulrich, Ryan Whelan. (Authors listed...released, and thus rapidly become stale. We can expect tools to have been trained to detect bugs that have been released. Given the commercial price tag...low TCN) and dead (low liveness) program data is a powerful one for vulnerability injection. The DUAs it identifies are internal program quantities

  5. Large-Scale Transit Signal Priority Implementation

    OpenAIRE

    Lee, Kevin S.; Lozner, Bailey

    2018-01-01

    In 2016, the District Department of Transportation (DDOT) deployed Transit Signal Priority (TSP) at 195 intersections in highly urbanized areas of Washington, DC. In collaboration with a broader regional implementation, and in partnership with the Washington Metropolitan Area Transit Authority (WMATA), DDOT set out to apply a systems engineering–driven process to identify, design, test, and accept a large-scale TSP system. This presentation will highlight project successes and lessons learned.

  6. Probing cosmology with the homogeneity scale of the Universe through large scale structure surveys

    International Nuclear Information System (INIS)

    Ntelis, Pierros

    2017-01-01

    This thesis presents my contribution to the measurement of the homogeneity scale using galaxies, with a cosmological interpretation of the results. In physics, any model is characterized by a set of principles. Most models in cosmology are based on the Cosmological Principle, which states that the universe is statistically homogeneous and isotropic on large scales. Today, this principle is considered to be true since it is respected by those cosmological models that accurately describe the observations. However, while the isotropy of the universe is now confirmed by many experiments, the same is not true of homogeneity. To study cosmic homogeneity, we propose not merely to test a model but to directly test one of the postulates of modern cosmology. Since the 1998 measurements of cosmic distances using type Ia supernovae, we have known that the universe is in a phase of accelerated expansion. This phenomenon can be explained by the addition of an unknown energy component, called dark energy. Since dark energy is responsible for the accelerated expansion of the universe, we can study this mysterious fluid by measuring the rate of expansion of the universe. The universe has imprinted in its matter distribution a standard ruler, the Baryon Acoustic Oscillation (BAO) scale. By measuring this scale at different times during the evolution of our universe, it is possible to measure the rate of expansion of the universe and thus characterize this dark energy. Alternatively, we can use the homogeneity scale to study dark energy. Studying homogeneity and the BAO scale requires a statistical study of the matter distribution of the universe at large scales, larger than tens of megaparsecs. Galaxies and quasars form in the vast overdensities of matter and are very luminous: these sources trace the distribution of matter. 
By measuring the emission spectra of these sources using large spectroscopic surveys, such as BOSS and eBOSS, we can measure their positions

  7. Development of large scale wind energy conversion system; Ogata furyoku hatsuden system no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    Takita, M [New Energy and Industrial Technology Development Organization, Tokyo (Japan)

    1994-12-01

    Described herein are the results of the FY1994 research program for development of a large scale wind energy conversion system. The study on technological development of key components evaluates the performance of, and confirms the reliability and applicability of, hydraulic systems centered on those equipped with variable pitch mechanisms and the electrohydraulic servo valves that control them. The study on blades conducts fatigue and crack-propagation tests, which show that the blades developed have high strength. The study on the speed-increasing gear conducts load tests, confirming the effects of reducing vibration and noise by modification of the gear teeth. The study on the nacelle cover conducts vibration tests to confirm its vibration characteristics, and analyzes three-dimensional vibration by the finite element method. Some components for a 500 kW commercial wind turbine are fabricated, including rotor heads, variable pitch mechanisms, speed-increasing gears, yaw systems, and hydraulic control systems. Also fabricated are a remote supervisory control system for maintenance, a system to integrate the wind turbine into a power system, and electrical control devices in which site conditions, such as atmospheric temperature and lightning, are taken into consideration.

  8. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  9. Response of deep and shallow tropical maritime cumuli to large-scale processes

    Science.gov (United States)

    Yanai, M.; Chu, J.-H.; Stark, T. E.; Nitta, T.

    1976-01-01

    The bulk diagnostic method of Yanai et al. (1973) and a simplified version of the spectral diagnostic method of Nitta (1975) are used for a more quantitative evaluation of the response of various types of cumuliform clouds to large-scale processes, using the same data set in the Marshall Islands area for a 100-day period in 1956. The dependence of the cloud mass flux distribution on radiative cooling, large-scale vertical motion, and evaporation from the sea is examined. It is shown that typical radiative cooling rates in the tropics tend to produce a bimodal distribution of mass spectrum exhibiting deep and shallow clouds. The bimodal distribution is further enhanced when the large-scale vertical motion is upward, and a nearly unimodal distribution of shallow clouds prevails when the relative cooling is compensated by the heating due to the large-scale subsidence. Both deep and shallow clouds are modulated by large-scale disturbances. The primary role of surface evaporation is to maintain the moisture flux at the cloud base.

  10. Accuracy assessment of planimetric large-scale map data for decision-making

    Directory of Open Access Journals (Sweden)

    Doskocz Adam

    2016-06-01

    This paper presents decision-making risk estimation based on planimetric large-scale map data, i.e. data sets or databases used to create planimetric maps at scales of 1:5,000 or larger. The studies were conducted on four sets of large-scale map data. Errors in the map data were used for a risk assessment of decisions about the localization of objects, e.g. for land-use planning in the realization of investments. The analysis was performed on a large statistical sample of shift vectors of control points, which were identified with the position errors of these points (errors of map data).
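
    The error statistics underlying such a risk assessment can be sketched directly from the shift vectors; the sample values and the acceptance tolerance below are hypothetical, not the data sets of the study:

    ```python
    import numpy as np

    # Shift vectors between control-point positions on the map and their
    # surveyed "true" positions, in metres (illustrative values only).
    dx = np.array([0.12, -0.08, 0.30, 0.05, -0.21])
    dy = np.array([-0.15, 0.10, 0.22, -0.02, 0.18])

    # Radial position error of each control point.
    errors = np.hypot(dx, dy)

    # Summary statistics commonly used to judge map accuracy.
    rmse = np.sqrt(np.mean(errors ** 2))
    p95 = np.quantile(errors, 0.95)

    # A simple decision rule: flag the data set if the RMSE exceeds a
    # tolerance for 1:5,000 mapping (the tolerance value is an assumption).
    TOLERANCE = 0.50  # metres
    acceptable = rmse <= TOLERANCE
    ```

    A risk assessment of the kind described would then translate such error statistics into the probability that a localization decision (e.g. siting an object near a parcel boundary) is wrong by more than the permitted amount.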

  11. Reviving large-scale projects

    International Nuclear Information System (INIS)

    Desiront, A.

    2003-01-01

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890 foot long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. 
It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both

  12. Large-scale Flow and Transport of Magnetic Flux in the Solar ...

    Indian Academy of Sciences (India)


    Abstract. The horizontal large-scale velocity field describes the horizontal displacement of the photospheric magnetic flux in the zonal and meridional directions. The flow systems of solar plasma, constructed according to the velocity field, create large-scale cellular-like patterns with up-flow in the center and down-flow on the ...

  13. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with fine spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). Multiple Boolean viewshed analysis and a global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
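
    As a concrete illustration of the Boolean visibility mentioned above, a minimal line-of-sight test on a raster surface model can be sketched as follows; the DEM, observer height, and sampling scheme are simplified assumptions, not the method of the study:

    ```python
    import numpy as np

    def line_of_sight(dem, obs, tgt, obs_h=1.7):
        """Boolean visibility between two cells of a raster surface model.

        Samples the straight line from observer to target and reports False
        as soon as an intermediate cell rises above the sight line.
        """
        (r0, c0), (r1, c1) = obs, tgt
        n = max(abs(r1 - r0), abs(c1 - c0))
        if n == 0:
            return True
        z0 = dem[r0, c0] + obs_h          # observer eye elevation
        z1 = dem[r1, c1]                  # target elevation
        for i in range(1, n):
            t = i / n
            r = round(r0 + t * (r1 - r0))
            c = round(c0 + t * (c1 - c0))
            sight_z = z0 + t * (z1 - z0)  # sight-line elevation at this sample
            if dem[r, c] > sight_z:
                return False              # blocked by a surface barrier
        return True

    # Flat terrain: target visible; adding a ridge halfway blocks the view.
    dem = np.zeros((5, 21))
    visible_before = line_of_sight(dem, (2, 0), (2, 20))
    dem[:, 10] = 50.0
    visible_after = line_of_sight(dem, (2, 0), (2, 20))
    ```

    An extended viewshed of the kind the abstract describes would, instead of returning a Boolean, record quantities such as the angular height of the target above the blocking horizon.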

  14. Scaling prediction errors to reward variability benefits error-driven learning in humans.

    Science.gov (United States)

    Diederen, Kelly M J; Schultz, Wolfram

    2015-09-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also showed more similar performance across different standard deviations, indicating that increases in standard deviation did not substantially decrease "adapters'" accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus, efficient adaptation makes learning more robust to changing variability. Copyright © 2015 the American Physiological Society.
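
    The rescaling described here can be sketched with a simple delta-rule learner in which the prediction error is divided by the reward distribution's standard deviation before the value update. This is a generic sketch under stated assumptions (learning rate, trial count, and Gaussian rewards are illustrative), not the reinforcement learning model fitted in the study:

    ```python
    import random

    random.seed(1)

    def learn_mean(rewards, sigma, alpha=0.3, scaled=True):
        """Delta-rule estimate of the mean of a reward distribution.

        With scaled=True the prediction error is divided by the standard
        deviation of the rewards, mimicking adaptive scaling of prediction
        errors to reward variability.
        """
        value = 0.0
        for r in rewards:
            delta = r - value              # reward prediction error
            if scaled:
                delta /= sigma             # rescale by reward variability
            value += alpha * delta
        return value

    # Two reward distributions with the same mean but different spread.
    estimates = {}
    for sigma in (1.0, 10.0):
        rewards = [random.gauss(50.0, sigma) for _ in range(400)]
        estimates[sigma] = learn_mean(rewards, sigma)
    ```

    Because the effective learning rate becomes alpha/sigma, updates are expressed in units of the reward spread, which is what keeps accuracy comparable across distributions with different standard deviations; scaling by much more than sigma would shrink updates too far, mirroring the impaired performance reported for exaggerated scaling.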

  15. Leaf optical properties shed light on foliar trait variability at individual to global scales

    Science.gov (United States)

    Shiklomanov, A. N.; Serbin, S.; Dietze, M.

    2016-12-01

    Recent syntheses of large trait databases have contributed immensely to our understanding of drivers of plant function at the global scale. However, the global trade-offs revealed by such syntheses, such as the trade-off between leaf productivity and resilience (the "leaf economics spectrum"), are often absent at smaller scales and fail to correlate with actual functional limitations. An improved understanding of how traits vary within communities, species, and individuals is critical to accurate representations of vegetation ecophysiology and ecological dynamics in ecosystem models. Spectral data from both field observations and remote sensing platforms present a potentially rich and widely available source of information on plant traits. In particular, the inversion of physically-based radiative transfer models (RTMs) is an effective and general method for estimating plant traits from spectral measurements. Here, we apply Bayesian inversion of the PROSPECT leaf RTM to a large database of field spectra and plant traits spanning tropical, temperate, and boreal forests, agricultural plots, arid shrublands, and tundra to identify dominant sources of variability and characterize trade-offs in plant functional traits. By leveraging such a large and diverse dataset, we re-calibrate the empirical absorption coefficients underlying the PROSPECT model and expand its scope to include additional leaf biochemical components, namely leaf nitrogen content. Our work provides a key methodological contribution, a physically-based retrieval of leaf nitrogen from remote sensing observations, and offers substantial insights about trait trade-offs related to plant acclimation, adaptation, and community assembly.

  16. Large-scale modeling of rain fields from a rain cell deterministic model

    Science.gov (United States)

    Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
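
    The large-scale step described above, a correlated Gaussian field turned into a binary rain/no-rain mask with a prescribed occupation rate, can be sketched as follows; the spectral smoothing used to build the field and all parameter values are illustrative assumptions, not the calibrated model of the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # White noise smoothed in Fourier space -> correlated isotropic Gaussian field.
    N = 256                                   # grid size (large-scale domain)
    noise = rng.standard_normal((N, N))
    kx = np.fft.fftfreq(N)[:, None]
    ky = np.fft.fftfreq(N)[None, :]
    k2 = kx ** 2 + ky ** 2
    corr = 0.05                               # low-pass width, sets correlation scale
    kernel = np.exp(-k2 / (2 * corr ** 2))
    field = np.fft.ifft2(np.fft.fft2(noise) * kernel).real

    # Threshold the Gaussian field so that a prescribed fraction of the
    # domain is raining (the "rain occupation rate").
    occupation_rate = 0.15
    threshold = np.quantile(field, 1 - occupation_rate)
    raining = field > threshold               # binary large-scale rain mask
    ```

    In the full methodology, the raining cells of such a mask would then be populated with midscale conglomerations of HYCELL rain cells, so that the generated field is spatially correlated at small scale, midscale, and large scale.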

  17. Facile Large-scale synthesis of stable CuO nanoparticles

    Science.gov (United States)

    Nazari, P.; Abdollahi-Nejand, B.; Eskandari, M.; Kohnehpoushi, S.

    2018-04-01

    In this work, a novel approach to synthesizing CuO nanoparticles is introduced. A sequential corrosion-and-detachment process is proposed for the growth and dispersion of CuO nanoparticles at the optimum pH value of eight. The produced CuO nanoparticles were six nm (±2 nm) in diameter and spherical, with high crystallinity and uniformity in size. With this method, large-scale production of CuO nanoparticles (120 grams in an experimental batch) from Cu micro-particles was achieved, which may meet the market criteria for large-scale production of CuO nanoparticles.

  18. Large-scale runoff generation – parsimonious parameterisation using high-resolution topography

    OpenAIRE

    L. Gong; S. Halldin; C.-Y. Xu

    2010-01-01

    World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms...

  19. Large-Scale Cooperative Task Distribution on Peer-to-Peer Networks

    Science.gov (United States)

    2012-01-01

    Large-scale cooperative task distribution on peer-to-peer networks. ...disadvantages of ML-Chord are its fixed size (two layers) and limited scalability for large-scale systems. RC-Chord extends ML-Chord... (D. Karrels et al.) ...configurable before runtime. This can be improved by incorporating a distributed learning algorithm to tune the number and range of the DLoE tracking

  20. Comparative Analysis of Different Protocols to Manage Large Scale Networks

    OpenAIRE

    Anil Rao Pimplapure; Dr Jayant Dubey; Prashant Sen

    2013-01-01

    In recent years, the number, complexity, and size of large-scale networks have increased. The best example of a large-scale network is the Internet; more recent ones are the data centers of cloud environments. Managing such networks involves several tasks, such as traffic monitoring, security, and performance optimization, which together are a big task for a network administrator. This research studies different protocols, i.e. conventional protocols like the Simple Network Management Protocol and newer Gossip-bas...

  1. Modeling temporal and large-scale spatial variability of soil respiration from soil water availability, temperature and vegetation productivity indices

    Science.gov (United States)

    Reichstein, Markus; Rey, Ana; Freibauer, Annette; Tenhunen, John; Valentini, Riccardo; Banza, Joao; Casals, Pere; Cheng, Yufu; Grünzweig, Jose M.; Irvine, James; Joffre, Richard; Law, Beverly E.; Loustau, Denis; Miglietta, Franco; Oechel, Walter; Ourcival, Jean-Marc; Pereira, Joao S.; Peressotti, Alessandro; Ponti, Francesca; Qi, Ye; Rambal, Serge; Rayment, Mark; Romanya, Joan; Rossi, Federica; Tedeschi, Vanessa; Tirone, Giampiero; Xu, Ming; Yakir, Dan

    2003-12-01

    ...explain some of the month-to-month variability of soil respiration, it failed to capture the intersite variability, regardless of whether the original or a new optimized model parameterization was used. In both cases, the residuals were strongly related to maximum site leaf area index. Thus, for a monthly timescale, we developed a simple T&P&LAI model that includes leaf area index as an additional predictor of soil respiration. This extended but still simple model performed nearly as well as the more detailed time step model and explained 50% of the overall and 65% of the site-to-site variability. Consequently, better estimates of globally distributed soil respiration should be obtained with the new model driven by satellite estimates of leaf area index. Before application at the continental or global scale, this approach should be further tested in boreal, cold-temperate, and tropical biomes as well as for non-woody vegetation.
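
    A multiplicative model of the T&P&LAI type can be sketched as below. The functional forms (a Q10 temperature response with saturating soil-water and LAI responses) and all parameter values are generic assumptions for illustration; the study's fitted model is not reproduced here:

    ```python
    import math

    def soil_respiration(temp_c, rel_soil_water, lai,
                         r_ref=1.0, q10=2.0, k_w=0.3, k_lai=0.5):
        """Multiplicative T&P&LAI-style soil respiration sketch.

        temp_c          temperature [deg C]
        rel_soil_water  relative soil water availability in [0, 1]
        lai             site leaf area index [m2/m2]
        r_ref           reference respiration at 10 deg C (assumed units)
        All functional forms and parameters are illustrative assumptions.
        """
        f_t = q10 ** ((temp_c - 10.0) / 10.0)          # Q10 temperature response
        f_w = rel_soil_water / (k_w + rel_soil_water)  # saturating water response
        f_l = 1.0 - math.exp(-k_lai * lai)             # LAI as productivity proxy
        return r_ref * f_t * f_w * f_l

    # Respiration rises with temperature and with leaf area index.
    low = soil_respiration(10.0, 0.8, 1.0)
    high = soil_respiration(20.0, 0.8, 4.0)
    ```

    The LAI factor plays the role the abstract assigns it: sites with higher maximum leaf area index support higher respiration for the same temperature and moisture, which is what lets the extended model capture the site-to-site variability.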

  2. Puzzles of large scale structure and gravitation

    International Nuclear Information System (INIS)

    Sidharth, B.G.

    2006-01-01

    We consider the puzzle of cosmic voids bounded by two-dimensional structures of galactic clusters, as well as a puzzle pointed out by Weinberg: How can the mass of a typical elementary particle depend on a cosmic parameter like the Hubble constant? An answer to the first puzzle is proposed in terms of 'scaled' quantum-mechanical-like behaviour which appears at large scales. The second puzzle can be answered by showing that the gravitational mass of an elementary particle has a Machian character (see Ahmed N., Cantorian small world, Mach's principle and the universal mass network, Chaos, Solitons and Fractals 2004;21(4))

  3. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  4. Stability of large scale interconnected dynamical systems

    International Nuclear Information System (INIS)

    Akpan, E.P.

    1993-07-01

    Large scale systems modelled by a system of ordinary differential equations are considered and necessary and sufficient conditions are obtained for the uniform asymptotic connective stability of the systems using the method of cone-valued Lyapunov functions. It is shown that this model significantly improves the existing models. (author). 9 refs

  5. Variable choices of scaling in the homogenization of a Nernst-Planck-Poisson problem

    NARCIS (Netherlands)

    Ray, N.; Eck, C.; Muntean, A.; Knabner, P.

    2011-01-01

    We perform the periodic homogenization (i.e. ε → 0) of the non-stationary Nernst-Planck-Poisson system using two-scale convergence, where ε is a suitable scale parameter. The objective is to investigate the influence of variable choices of scaling in ε of the microscopic system of partial

  6. Large scale cross hole testing

    International Nuclear Information System (INIS)

    Ball, J.K.; Black, J.H.; Doe, T.

    1991-05-01

    As part of the Site Characterisation and Validation programme the results of the large scale cross hole testing have been used to document hydraulic connections across the SCV block, to test conceptual models of fracture zones and to obtain hydrogeological properties of the major hydrogeological features. The SCV block is highly heterogeneous. This heterogeneity is not smoothed out even over scales of hundreds of meters. Results of the interpretation validate the hypothesis of the major fracture zones, A, B and H; not much evidence of minor fracture zones is found. The uncertainty in the flow path through the fractured rock causes severe problems in interpretation. Derived values of hydraulic conductivity were found to lie within a narrow range of two to three orders of magnitude. Test design did not allow fracture zones to be tested individually. This could be improved by testing the high hydraulic conductivity regions specifically. The Piezomac and single hole equipment worked well. Few, if any, of the tests ran long enough to approach equilibrium. Many observation boreholes showed no response. This could be either because there is no hydraulic connection, or because there is a connection but a response is not seen within the time scale of the pumping test. The fractional dimension analysis yielded credible results, and the sinusoidal testing procedure provided an effective means of identifying the dominant hydraulic connections. (10 refs.) (au)

  7. Large transverse momentum processes in a non-scaling parton model

    International Nuclear Information System (INIS)

    Stirling, W.J.

    1977-01-01

    The production of large transverse momentum mesons in hadronic collisions by the quark fusion mechanism is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the large transverse momentum structure function exhibit a simple scale breaking behaviour similar to the behaviour of the Drell-Yan and deep inelastic structure functions of the model. An estimate of corresponding experimental consequences is made and the extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. A simple set of rules is presented for incorporating the logarithmic corrections to scaling into all covariant parton model calculations. (Auth.)

  8. On the Renormalization of the Effective Field Theory of Large Scale Structures

    OpenAIRE

    Pajer, Enrico; Zaldarriaga, Matias

    2013-01-01

    Standard perturbation theory (SPT) for large-scale matter inhomogeneities is unsatisfactory for at least three reasons: there is no clear expansion parameter since the density contrast is not small on all scales; it does not fully account for deviations at large scales from a perfect pressureless fluid induced by short-scale non-linearities; for generic initial conditions, loop corrections are UV-divergent, making predictions cutoff dependent and hence unphysical. The Effective Field Theory o...

  9. Quantitative Missense Variant Effect Prediction Using Large-Scale Mutagenesis Data.

    Science.gov (United States)

    Gray, Vanessa E; Hause, Ronald J; Luebeck, Jens; Shendure, Jay; Fowler, Douglas M

    2018-01-24

    Large datasets describing the quantitative effects of mutations on protein function are becoming increasingly available. Here, we leverage these datasets to develop Envision, which predicts the magnitude of a missense variant's molecular effect. Envision combines 21,026 variant effect measurements from nine large-scale experimental mutagenesis datasets, a hitherto untapped training resource, with a supervised, stochastic gradient boosting learning algorithm. Envision outperforms other missense variant effect predictors both on large-scale mutagenesis data and on an independent test dataset comprising 2,312 TP53 variants whose effects were measured using a low-throughput approach. This dataset was never used for hyperparameter tuning or model training and thus serves as an independent validation set. Envision prediction accuracy is also more consistent across amino acids than other predictors. Finally, we demonstrate that Envision's performance improves as more large-scale mutagenesis data are incorporated. We precompute Envision predictions for every possible single amino acid variant in human, mouse, frog, zebrafish, fruit fly, worm, and yeast proteomes (https://envision.gs.washington.edu/). Copyright © 2017 Elsevier Inc. All rights reserved.
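Envision's learner is stochastic gradient boosting. The core idea, fitting successive weak learners to the residuals of the current model, can be sketched on toy one-dimensional data; this is a minimal illustration with depth-1 trees (stumps), not the actual Envision pipeline or its 21,026-measurement training set:

```python
import numpy as np

def fit_stump(x, r):
    """Depth-1 regression tree (stump): best threshold split of residuals r."""
    best = (np.inf, 0.0, 0.0, 0.0)
    for t in np.unique(x)[:-1]:                      # keep both sides non-empty
        lv, rv = r[x <= t].mean(), r[x > t].mean()
        err = ((r[x <= t] - lv) ** 2).sum() + ((r[x > t] - rv) ** 2).sum()
        if err < best[0]:
            best = (err, t, lv, rv)
    return best[1:]                                  # (threshold, left, right)

def boost(x, y, n_rounds=200, lr=0.1):
    """Squared-loss gradient boosting: each stump is fit to current residuals."""
    pred = np.full_like(y, y.mean(), dtype=float)
    stumps = []
    for _ in range(n_rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, lv, rv)  # shrunken additive update
        stumps.append((t, lv, rv))
    return (y.mean(), lr, stumps)

def predict(model, x):
    base, lr, stumps = model
    out = np.full_like(x, base, dtype=float)
    for t, lv, rv in stumps:
        out += lr * np.where(x <= t, lv, rv)
    return out

# toy "variant effect" data: a sharp functional threshold along one feature
x = np.linspace(0.0, 1.0, 50)
y = (x > 0.5).astype(float)
model = boost(x, y)
pred = predict(model, x)
```

With enough rounds the additive model drives the training residuals toward zero; real implementations add subsampling (the "stochastic" part), deeper trees, and many features.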

  10. Methods for Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1980-05-01

    STANFORD, CALIFORNIA 94305 METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright...typical iteration can be partitioned so that where B is an m X m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  11. Generation of large-scale vorticity in rotating stratified turbulence with inhomogeneous helicity: mean-field theory

    Science.gov (United States)

    Kleeorin, N.

    2018-06-01

    We discuss a mean-field theory of the generation of large-scale vorticity in a rotating density stratified developed turbulence with inhomogeneous kinetic helicity. We show that the large-scale non-uniform flow is produced due to either a combined action of a density stratified rotating turbulence and uniform kinetic helicity or a combined effect of a rotating incompressible turbulence and inhomogeneous kinetic helicity. These effects result in the formation of a large-scale shear, and in turn its interaction with the small-scale turbulence causes an excitation of the large-scale instability (known as a vorticity dynamo) due to a combined effect of the large-scale shear and Reynolds stress-induced generation of the mean vorticity. The latter is due to the effect of large-scale shear on the Reynolds stress. A fast rotation suppresses this large-scale instability.

  12. Recent Advances in Understanding Large Scale Vapour Explosions

    International Nuclear Information System (INIS)

    Board, S.J.; Hall, R.W.

    1976-01-01

    In foundries, violent explosions occur occasionally when molten metal comes into contact with water. If similar explosions can occur with other materials, hazardous situations may arise, for example in LNG marine transportation accidents, or in liquid cooled reactor incidents when molten UO2 contacts water or sodium coolant. Over the last 10 years a large body of experimental data has been obtained on the behaviour of small quantities of hot material in contact with a vaporisable coolant. Such experiments generally give low energy yields, despite producing fine fragmentation of the molten material. These events have been interpreted in terms of a wide range of phenomena such as violent boiling, liquid entrainment, bubble collapse, superheat, surface cracking and many others. Many of these studies have been aimed at understanding the small scale behaviour of the particular materials of interest. However, understanding the nature of the energetic events which were the original cause for concern may also be necessary to give confidence that violent events cannot occur for these materials in large scale situations. More recently, there has been a trend towards larger experiments and some of these have produced explosions of moderately high efficiency. Although the occurrence of such large scale explosions can depend rather critically on initial conditions in a way which is not fully understood, there are signs that the interpretation of these events may be more straightforward than that of the single drop experiments. In the last two years several theoretical models for large scale explosions have appeared which attempt a self-contained explanation of at least some stages of such high yield events: these have as their common feature a description of how a propagating breakdown of an initially quasi-stable distribution of materials is induced by the pressure and flow field caused by the energy release in adjacent regions. These models have led to the idea that for a full...

  13. Robust large-scale parallel nonlinear solvers for simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any...
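The rank-one secant update at the heart of Broyden's method is compact enough to sketch. The following is a minimal dense illustration on a toy 2-D system, not the report's large-scale, limited-memory implementation:

```python
import numpy as np

def broyden_solve(f, x0, J0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 with Broyden's 'good' method: the Jacobian is never
    re-evaluated, only updated from successive steps by a rank-one formula."""
    x = np.asarray(x0, dtype=float)
    J = np.asarray(J0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -fx)        # Newton-like step with approximate J
        x_new = x + dx
        fx_new = f(x_new)
        if np.linalg.norm(fx_new) < tol:
            return x_new
        df = fx_new - fx
        # Secant update: J <- J + (df - J dx) dx^T / (dx^T dx)
        J = J + np.outer(df - J @ dx, dx) / (dx @ dx)
        x, fx = x_new, fx_new
    return x

# Example: intersect the circle x^2 + y^2 = 4 with the line y = x,
# starting from an identity "Jacobian" (no derivative evaluations at all)
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[1] - v[0]])
root = broyden_solve(f, x0=[1.0, 2.0], J0=np.eye(2))
```

The limited-memory variants the report describes avoid storing `J` explicitly, keeping only the update vectors, which is what makes the approach viable at large scale.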

  14. Large Scale GW Calculations on the Cori System

    Science.gov (United States)

    Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven

    The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.

  15. Cosmic ray acceleration by large scale galactic shocks

    International Nuclear Information System (INIS)

    Cesarsky, C.J.; Lagage, P.O.

    1987-01-01

    The mechanism of diffusive shock acceleration may account for the existence of galactic cosmic rays; detailed applications to stellar wind shocks and especially to supernova shocks have been developed. Existing models can usually deal with the energetics or the spectral slope, but the observed energy range of cosmic rays is not explained. Therefore it seems worthwhile to examine the effect that large scale, long-lived galactic shocks may have on galactic cosmic rays, in the framework of the diffusive shock acceleration mechanism. Large scale fast shocks can only be expected to exist in the galactic halo. We consider three situations where they may arise: expansion of a supernova shock into the halo, a galactic wind, and galactic infall; and discuss the possible existence of these shocks and their role in accelerating cosmic rays

  16. Lagrangian space consistency relation for large scale structure

    International Nuclear Information System (INIS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-01-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space

  17. Electron drift in a large scale solid xenon

    International Nuclear Information System (INIS)

    Yoo, J.; Jaskierny, W.F.

    2015-01-01

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. It is therefore demonstrated that the electron drift speed in large scale solid xenon is a factor of two faster than that in the liquid phase

  18. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.

    2015-01-01

    This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale... of the transmission system, especially regarding the cross-border power flows. The tuning of these regional models is done using historical meteorological data acquired on a per-country basis and using publicly available data of installed capacity.
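The normalization step such regional models rely on can be sketched with hypothetical plant data (not the models' actual mesoscale inputs): aggregate production over the plants of a region, average it to hourly resolution, and divide by the region's installed capacity:

```python
import numpy as np

# hypothetical per-plant production (MW) at 10-min resolution over one day
rng = np.random.default_rng(0)
capacity = np.array([10.0, 20.0, 15.0, 30.0, 25.0, 12.0])    # installed MW per plant
prod = rng.uniform(0.0, 1.0, size=(144, 6)) * capacity       # 144 = 24 h x 6 samples

# aggregate to hourly mean regional production, then normalize by capacity:
# axes after reshape are (hour, 10-min sample, plant)
hourly_mw = prod.reshape(24, 6, 6).sum(axis=2).mean(axis=1)  # one value per hour
normalized = hourly_mw / capacity.sum()                      # dimensionless, in [0, 1]
```

A normalized hourly series like this is what can then be scaled by a country's installed capacity to estimate cross-border flows.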

  19. Evidence for large temperature fluctuations in quasar accretion disks from spectral variability

    Energy Technology Data Exchange (ETDEWEB)

    Ruan, John J.; Anderson, Scott F.; Agol, Eric [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Dexter, Jason, E-mail: jruan@astro.washington.edu [Departments of Physics and Astronomy, University of California, Berkeley, CA 94720 (United States)

    2014-03-10

    The well-known bluer-when-brighter trend observed in quasar variability is a signature of the complex processes in the accretion disk and can be a probe of the quasar variability mechanism. Using a sample of 604 variable quasars with repeat spectra in the Sloan Digital Sky Survey-I/II (SDSS), we construct difference spectra to investigate the physical causes of this bluer-when-brighter trend. The continuum of our composite difference spectrum is well fit by a power law, with a spectral index in excellent agreement with previous results. We measure the spectral variability relative to the underlying spectra of the quasars, which is independent of any extinction, and compare to model predictions. We show that our SDSS spectral variability results cannot be produced by global accretion rate fluctuations in a thin disk alone. However, we find that a simple model of an inhomogeneous disk with localized temperature fluctuations will produce power-law spectral variability over optical wavelengths. We show that the inhomogeneous disk will provide good fits to our observed spectral variability if the disk has large temperature fluctuations in many independently varying zones, in excellent agreement with independent constraints from quasar microlensing disk sizes, their strong UV spectral continuum, and single-band variability amplitudes. Our results provide an independent constraint on quasar variability models and add to the mounting evidence that quasar accretion disks have large localized temperature fluctuations.
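Fitting a power law to a difference-spectrum continuum, as described above, reduces to a straight-line fit in log-log space. A sketch with a synthetic spectrum (the index of -2.0 and the normalization are illustrative values, not the paper's measurement):

```python
import numpy as np

# hypothetical composite difference spectrum: f_lambda proportional to
# wavelength**alpha with alpha = -2.0 (illustrative only)
wavelength = np.linspace(3000.0, 8000.0, 200)   # Angstroms
diff_spec = 1.0e8 * wavelength ** (-2.0)

# the spectral index is the slope of a least-squares line in log-log space
alpha, log_norm = np.polyfit(np.log(wavelength), np.log(diff_spec), 1)
```

On real difference spectra the fit is restricted to continuum windows that avoid the broad emission lines.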

  20. Energy efficiency supervision strategy selection of Chinese large-scale public buildings

    International Nuclear Information System (INIS)

    Jin Zhenxing; Wu Yong; Li Baizhan; Gao Yafeng

    2009-01-01

    This paper discusses energy consumption, building development and building energy consumption in China, and points out that energy efficiency management and maintenance of large-scale public buildings is the breakthrough point for building energy saving in China. Three obstacles, namely the lack of basic statistical data, the lack of a service market for building energy saving, and the lack of effective management measures, account for the necessity of energy efficiency supervision for large-scale public buildings. The paper then introduces the supervision aims, the supervision system and the five basic systems' roles in the supervision system, and analyzes the working mechanism of the five basic systems. The energy efficiency supervision system of large-scale public buildings takes energy consumption statistics as a data basis, energy auditing as a technical support, energy consumption ration as a benchmark of energy saving and price increase beyond ration as a price lever, and energy efficiency public-noticing as an amplifier. The supervision system promotes energy efficiency operation and maintenance of large-scale public buildings, and drives comprehensive building energy saving in China.

  1. Energy efficiency supervision strategy selection of Chinese large-scale public buildings

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Zhenxing; Li, Baizhan; Gao, Yafeng [The Faculty of Urban Construction and Environmental Engineering, Chongqing University, Chongqing (China); Key Laboratory of the Three Gorges Reservoir Region' s Eco-Environment, Ministry of Education, Chongqing 400045 (China); Wu, Yong [The Department of Science and Technology, Ministry of Construction, Beijing 100835 (China)

    2009-06-15

    This paper discusses energy consumption, building development and building energy consumption in China, and points out that energy efficiency management and maintenance of large-scale public buildings is the breakthrough point for building energy saving in China. Three obstacles, namely the lack of basic statistical data, the lack of a service market for building energy saving, and the lack of effective management measures, account for the necessity of energy efficiency supervision for large-scale public buildings. The paper then introduces the supervision aims, the supervision system and the five basic systems' roles in the supervision system, and analyzes the working mechanism of the five basic systems. The energy efficiency supervision system of large-scale public buildings takes energy consumption statistics as a data basis, energy auditing as a technical support, energy consumption ration as a benchmark of energy saving and price increase beyond ration as a price lever, and energy efficiency public-noticing as an amplifier. The supervision system promotes energy efficiency operation and maintenance of large-scale public buildings, and drives comprehensive building energy saving in China. (author)

  2. Energy efficiency supervision strategy selection of Chinese large-scale public buildings

    Energy Technology Data Exchange (ETDEWEB)

    Jin Zhenxing [Faculty of Urban Construction and Environmental Engineering, Chongqing University, Chongqing (China); Key Laboratory of the Three Gorges Reservoir Region' s Eco-Environment, Ministry of Education, Chongqing 400045 (China)], E-mail: jinzhenxing33@sina.com; Wu Yong [Department of Science and Technology, Ministry of Construction, Beijing 100835 (China); Li Baizhan; Gao Yafeng [Faculty of Urban Construction and Environmental Engineering, Chongqing University, Chongqing (China); Key Laboratory of the Three Gorges Reservoir Region' s Eco-Environment, Ministry of Education, Chongqing 400045 (China)

    2009-06-15

    This paper discusses energy consumption, building development and building energy consumption in China, and points out that energy efficiency management and maintenance of large-scale public buildings is the breakthrough point for building energy saving in China. Three obstacles, namely the lack of basic statistical data, the lack of a service market for building energy saving, and the lack of effective management measures, account for the necessity of energy efficiency supervision for large-scale public buildings. The paper then introduces the supervision aims, the supervision system and the five basic systems' roles in the supervision system, and analyzes the working mechanism of the five basic systems. The energy efficiency supervision system of large-scale public buildings takes energy consumption statistics as a data basis, energy auditing as a technical support, energy consumption ration as a benchmark of energy saving and price increase beyond ration as a price lever, and energy efficiency public-noticing as an amplifier. The supervision system promotes energy efficiency operation and maintenance of large-scale public buildings, and drives comprehensive building energy saving in China.

  3. Mirror dark matter and large scale structure

    International Nuclear Information System (INIS)

    Ignatiev, A.Yu.; Volkas, R.R.

    2003-01-01

    Mirror matter is a dark matter candidate. In this paper, we reexamine the linear regime of density perturbation growth in a universe containing mirror dark matter. Taking adiabatic scale-invariant perturbations as the input, we confirm that the resulting processed power spectrum is richer than for the more familiar cases of cold, warm and hot dark matter. The new features include a maximum at a certain scale λ_max, collisional damping below a smaller characteristic scale λ'_S, with oscillatory perturbations between the two. These scales are functions of the fundamental parameters of the theory. In particular, they decrease for decreasing x, the ratio of the mirror plasma temperature to that of the ordinary plasma. For x ∼ 0.2, the scale λ_max becomes galactic. Mirror dark matter therefore leads to bottom-up large scale structure formation, similar to conventional cold dark matter, for x ≲ 0.2. Indeed, the smaller the value of x, the closer mirror dark matter resembles standard cold dark matter during the linear regime. The differences pertain to scales smaller than λ'_S in the linear regime, and generally in the nonlinear regime because mirror dark matter is chemically complex and to some extent dissipative. Lyman-α forest data and the early reionization epoch established by WMAP may hold the key to distinguishing mirror dark matter from WIMP-style cold dark matter

  4. The Large-Scale Structure of Scientific Method

    Science.gov (United States)

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  5. Detection of large-scale concentric gravity waves from a Chinese airglow imager network

    Science.gov (United States)

    Lai, Chang; Yue, Jia; Xu, Jiyao; Yuan, Wei; Li, Qinzeng; Liu, Xiao

    2018-06-01

    Concentric gravity waves (CGWs) contain a broad spectrum of horizontal wavelengths and periods due to their instantaneous localized sources (e.g., deep convection, volcanic eruptions, or earthquakes). However, it is difficult to observe large-scale gravity waves of >100 km wavelength from the ground, owing to the limited field of view of a single camera and local bad weather. Previously, complete large-scale CGW imagery could only be captured by satellite observations. In the present study, we developed a novel method that assembles separate images and applies low-pass filtering to obtain temporal and spatial information about complete large-scale CGWs from a network of all-sky airglow imagers. Coordinated observations from five all-sky airglow imagers in Northern China were assembled and processed to study large-scale CGWs over a wide area (1800 km × 1400 km), focusing on the same two CGW events as Xu et al. (2015). Our algorithms yielded images of large-scale CGWs by filtering out the small-scale CGWs. The wavelengths, wave speeds, and periods of the CGWs were measured from a sequence of consecutive assembled images. Overall, the assembling and low-pass filtering algorithms can expand the airglow imager network to its full capacity regarding the detection of large-scale gravity waves.
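The low-pass filtering step can be sketched as a Fourier-space wavelength cut; the field, pixel size, and cutoff below are synthetic stand-ins for the assembled airglow mosaics, not the study's actual data or filter:

```python
import numpy as np

def lowpass(field, dx_km, cutoff_km):
    """Keep only Fourier modes with horizontal wavelength longer than cutoff_km."""
    ky = np.fft.fftfreq(field.shape[0], d=dx_km)     # cycles per km
    kx = np.fft.fftfreq(field.shape[1], d=dx_km)
    kmag = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    mask = kmag < 1.0 / cutoff_km                    # pass large wavelengths only
    return np.real(np.fft.ifft2(np.fft.fft2(field) * mask))

# synthetic all-sky mosaic, 2 km pixels over a 1024 km domain:
# a 256 km planar wave (large-scale CGW) plus a 32 km small-scale ripple
x = np.arange(512) * 2.0
X, Y = np.meshgrid(x, x)
img = np.sin(2 * np.pi * X / 256) + 0.5 * np.sin(2 * np.pi * X / 32)
large = lowpass(img, dx_km=2.0, cutoff_km=100.0)     # ripple is removed
```

With a 100 km cutoff, only the 256 km component survives the mask, which is the separation of large-scale from small-scale CGWs described above.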

  6. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... Security can help increase accountability for large-scale land acquisitions in ... to build decent economic livelihoods and participate meaningfully in decisions ... its 2017 call for proposals to establish Cyber Policy Centres in the Global South.

  7. The Cosmology Large Angular Scale Surveyor (CLASS)

    Science.gov (United States)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  8. Measuring the topology of large-scale structure in the universe

    Science.gov (United States)

    Gott, J. Richard, III

    1988-11-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.

  9. Measuring the topology of large-scale structure in the universe

    International Nuclear Information System (INIS)

    Gott, J.R. III

    1988-01-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data. 45 references

  10. How the Internet Will Help Large-Scale Assessment Reinvent Itself

    Directory of Open Access Journals (Sweden)

    Randy Elliot Bennett

    2001-02-01

    Large-scale assessment in the United States is undergoing enormous pressure to change. That pressure stems from many causes. Depending upon the type of test, the issues precipitating change include an outmoded cognitive-scientific basis for test design; a mismatch with curriculum; the differential performance of population groups; a lack of information to help individuals improve; and inefficiency. These issues provide a strong motivation to reconceptualize both the substance and the business of large-scale assessment. At the same time, advances in technology, measurement, and cognitive science are providing the means to make that reconceptualization a reality. The thesis of this paper is that the largest facilitating factor will be technological, in particular the Internet. In the same way that it is already helping to revolutionize commerce, education, and even social interaction, the Internet will help revolutionize the business and substance of large-scale assessment.

  11. Soil fertility dynamics in a semiarid basin: impact of scale level in weighing the effect of the landscape variables

    International Nuclear Information System (INIS)

    Ruiz-Navarro, A.; Barbera, G. G.; Albaladejo, J.

    2009-01-01

    Arid and semi-arid Mediterranean soils are particularly sensitive to degradation processes, and soil fertility could play an important role in restoration/conservation practices. Our objective was to study the relationships between soil and landscape at different scales in order to understand the main drivers of soil fertility in a semiarid catchment. A stratified sampling plan was carried out to capture representative soil and landscape variability. Multivariate statistical techniques were used to elucidate the relationship between the two. The results showed that soil fertility is positively related to the density of vegetation and to topographical conditions favourable to soil moisture at the small scale, and negatively to topographical factors that contribute to erosion dynamics on erodible lithologies at the medium and large scales. (Author) 8 refs.

  12. An exploratory, large-scale study of pain and quality of life outcomes in cancer patients with moderate or severe pain, and variables predicting improvement.

    Science.gov (United States)

    Maximiano, Constanza; López, Iker; Martín, Cristina; Zugazabeitia, Luis; Martí-Ciriquián, Juan L; Núñez, Miguel A; Contreras, Jorge; Herdman, Michael; Traseira, Susana; Provencio, Mariano

    2018-01-01

There have been few large-scale, real-world studies in Spain to assess change in pain and quality of life (QoL) outcomes in cancer patients with moderate to severe pain. This study aimed to assess changes in both outcomes after 3 months of usual care and to investigate factors associated with change in QoL. Large, multi-centre, observational study in patients with lung, head and neck, colorectal or breast cancer experiencing a first episode of moderate to severe pain while attending one of the participating centres. QoL was assessed using the EuroQol-5D questionnaire and pain using the Brief Pain Inventory (BPI). Instruments were administered at baseline and after 3 months of follow-up. Multivariate analyses were used to assess the impact of treatment factors, demographic and clinical variables, pain and other symptoms on QoL scores. 1711 patients were included for analysis. After 3 months of usual care, a significant improvement was observed in pain and QoL in all four cancer groups; breast cancer patients showed the largest gains. Poorer baseline performance status (ECOG) and the presence of anxiety/depression were associated with significantly poorer QoL outcomes. Improvements in BPI pain scores were associated with improved QoL. In the four cancer types studied, pain and QoL outcomes improved considerably after 3 months of usual care. Improvements in pain made a substantial contribution to QoL gains, whilst the presence of anxiety and depression and poor baseline performance status significantly constrained improvement.

  13. Contribution of large scale coherence to wind turbine power: A large eddy simulation study in periodic wind farms

    Science.gov (United States)

    Chatterjee, Tanmoy; Peet, Yulia T.

    2018-03-01

Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to have a substantial contribution to wind power. Varying dynamics in the intermediate scales (D-10D) are also observed from a parametric study involving interturbine distances and hub height of the turbines. Further insight about the eddies responsible for the power generation has been provided from the scaling analysis of two-dimensional premultiplied spectra of MKE flux. The LES code is developed in a high Reynolds number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines have been modelled using a state-of-the-art actuator line model. The LES of infinite wind farms has been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in the domain of large wind farms and identify the length scales that contribute to the power. This information can be useful for design of wind farm layout and turbine placement that take advantage of the large-scale structures contributing to wind turbine power.
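The premultiplied spectra used in the scaling analysis above can be illustrated with a minimal sketch: for a 1-D periodic signal, the premultiplied energy spectrum is k·E(k), so a peak marks the length scale carrying the most energy. The signal, grid spacing, and the plain O(N²) DFT below are illustrative assumptions, not the paper's LES data (a real analysis would use an FFT library).

```python
import cmath
import math

def premultiplied_spectrum(u, dx):
    """Premultiplied 1-D energy spectrum k*E(k) of a periodic signal u.

    Uses a plain O(N^2) DFT so the sketch stays dependency-free.
    """
    n = len(u)
    mean = sum(u) / n
    fluct = [v - mean for v in u]          # energy lives in the fluctuations
    ks, kek = [], []
    for m in range(1, n // 2):             # skip the mean (m=0) and Nyquist
        coeff = sum(fluct[j] * cmath.exp(-2j * math.pi * m * j / n)
                    for j in range(n)) / n
        k = 2 * math.pi * m / (n * dx)     # physical wavenumber
        energy = 2 * abs(coeff) ** 2       # fold in negative frequencies
        ks.append(k)
        kek.append(k * energy)
    return ks, kek

# Single-mode test signal: all energy should land at mode m = 4
n, dx = 64, 0.1
u = [math.sin(2 * math.pi * 4 * j / n) for j in range(n)]
ks, kek = premultiplied_spectrum(u, dx)
peak = max(range(len(kek)), key=lambda i: kek[i])
```

For the single sine wave the premultiplied spectrum peaks at the wavenumber of that mode, which is how a peak in k·E(k_z) is read off as the dominant eddy length scale.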

  14. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these are data-intensive computing problems, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will
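The time-sharded map/reduce pattern described above can be sketched in a few lines. This is not the SciReduce API (which operates on bundles of named numeric arrays); the records, the shard key, and the sum/count statistics are all invented stand-ins to show why sharding by time makes a multi-year climatology embarrassingly parallel.

```python
from collections import defaultdict
from functools import reduce

# Hypothetical daily records: (year, value). A real dataset would be
# millions of files, each shard handled on its own cluster node.
records = [(2003, 1.0), (2003, 3.0), (2004, 2.0), (2004, 4.0), (2004, 6.0)]

def shard_by_year(recs):
    """Shard the dataset by time so each chunk can be mapped independently."""
    shards = defaultdict(list)
    for year, value in recs:
        shards[year].append(value)
    return shards

def mapper(values):
    """Per-shard partial statistics: (sum, count)."""
    return (sum(values), len(values))

def reducer(a, b):
    """Combine partial statistics from two shards."""
    return (a[0] + b[0], a[1] + b[1])

shards = shard_by_year(records)
partials = [mapper(v) for v in shards.values()]   # the parallel step
total, count = reduce(reducer, partials)
climatology_mean = total / count                   # multi-year mean
```

Because the mapper emits only (sum, count) pairs, the reduce step moves a few numbers per shard rather than the raw arrays, which is the point of doing the heavy work next to the sharded data.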

  15. Large Scale Anthropogenic Reduction of Forest Cover in Last Glacial Maximum Europe.

    Science.gov (United States)

    Kaplan, Jed O; Pfeiffer, Mirjam; Kolen, Jan C A; Davis, Basil A S

    2016-01-01

    Reconstructions of the vegetation of Europe during the Last Glacial Maximum (LGM) are an enigma. Pollen-based analyses have suggested that Europe was largely covered by steppe and tundra, and forests persisted only in small refugia. Climate-vegetation model simulations on the other hand have consistently suggested that broad areas of Europe would have been suitable for forest, even in the depths of the last glaciation. Here we reconcile models with data by demonstrating that the highly mobile groups of hunter-gatherers that inhabited Europe at the LGM could have substantially reduced forest cover through the ignition of wildfires. Similar to hunter-gatherers of the more recent past, Upper Paleolithic humans were masters of the use of fire, and preferred inhabiting semi-open landscapes to facilitate foraging, hunting and travel. Incorporating human agency into a dynamic vegetation-fire model and simulating forest cover shows that even small increases in wildfire frequency over natural background levels resulted in large changes in the forested area of Europe, in part because trees were already stressed by low atmospheric CO2 concentrations and the cold, dry, and highly variable climate. Our results suggest that the impact of humans on the glacial landscape of Europe may be one of the earliest large-scale anthropogenic modifications of the earth system.

  16. Effects of climate variability and accelerated forest thinning on watershed-scale runoff in southwestern USA ponderosa pine forests.

    Directory of Open Access Journals (Sweden)

    Marcos D Robles

Full Text Available The recent mortality of up to 20% of forests and woodlands in the southwestern United States, along with declining stream flows and projected future water shortages, heightens the need to understand how management practices can enhance forest resilience and functioning under unprecedented scales of drought and wildfire. To address this challenge, a combination of mechanical thinning and fire treatments are planned for 238,000 hectares (588,000 acres) of ponderosa pine (Pinus ponderosa) forests across central Arizona, USA. Mechanical thinning can increase runoff at fine scales, as well as reduce fire risk and tree water stress during drought, but the effects of this practice have not been studied at scales commensurate with recent forest disturbances or under a highly variable climate. Modifying a historical runoff model, we constructed scenarios to estimate increases in runoff from thinning ponderosa pine at the landscape and watershed scales based on driving variables: pace, extent and intensity of forest treatments and variability in winter precipitation. We found that runoff on thinned forests was about 20% greater than unthinned forests, regardless of whether treatments occurred in a drought or pluvial period. The magnitude of this increase is similar to observed declines in snowpack for the region, suggesting that accelerated thinning may lessen runoff losses due to warming effects. Gains in runoff were temporary (six years after treatment) and modest when compared to mean annual runoff from the study watersheds (0-3%). Nonetheless gains observed during drought periods could play a role in augmenting river flows on a seasonal basis, improving conditions for water-dependent natural resources, as well as benefit water supplies for downstream communities.
Results of this study and others suggest that accelerated forest thinning at large scales could improve the water balance and resilience of forests and sustain the ecosystem services they provide.

  17. Effects of climate variability and accelerated forest thinning on watershed-scale runoff in southwestern USA ponderosa pine forests.

    Science.gov (United States)

    Robles, Marcos D; Marshall, Robert M; O'Donnell, Frances; Smith, Edward B; Haney, Jeanmarie A; Gori, David F

    2014-01-01

    The recent mortality of up to 20% of forests and woodlands in the southwestern United States, along with declining stream flows and projected future water shortages, heightens the need to understand how management practices can enhance forest resilience and functioning under unprecedented scales of drought and wildfire. To address this challenge, a combination of mechanical thinning and fire treatments are planned for 238,000 hectares (588,000 acres) of ponderosa pine (Pinus ponderosa) forests across central Arizona, USA. Mechanical thinning can increase runoff at fine scales, as well as reduce fire risk and tree water stress during drought, but the effects of this practice have not been studied at scales commensurate with recent forest disturbances or under a highly variable climate. Modifying a historical runoff model, we constructed scenarios to estimate increases in runoff from thinning ponderosa pine at the landscape and watershed scales based on driving variables: pace, extent and intensity of forest treatments and variability in winter precipitation. We found that runoff on thinned forests was about 20% greater than unthinned forests, regardless of whether treatments occurred in a drought or pluvial period. The magnitude of this increase is similar to observed declines in snowpack for the region, suggesting that accelerated thinning may lessen runoff losses due to warming effects. Gains in runoff were temporary (six years after treatment) and modest when compared to mean annual runoff from the study watersheds (0-3%). Nonetheless gains observed during drought periods could play a role in augmenting river flows on a seasonal basis, improving conditions for water-dependent natural resources, as well as benefit water supplies for downstream communities. Results of this study and others suggest that accelerated forest thinning at large scales could improve the water balance and resilience of forests and sustain the ecosystem services they provide.

  18. Effects of Climate Variability and Accelerated Forest Thinning on Watershed-Scale Runoff in Southwestern USA Ponderosa Pine Forests

    Science.gov (United States)

    Robles, Marcos D.; Marshall, Robert M.; O'Donnell, Frances; Smith, Edward B.; Haney, Jeanmarie A.; Gori, David F.

    2014-01-01

The recent mortality of up to 20% of forests and woodlands in the southwestern United States, along with declining stream flows and projected future water shortages, heightens the need to understand how management practices can enhance forest resilience and functioning under unprecedented scales of drought and wildfire. To address this challenge, a combination of mechanical thinning and fire treatments are planned for 238,000 hectares (588,000 acres) of ponderosa pine (Pinus ponderosa) forests across central Arizona, USA. Mechanical thinning can increase runoff at fine scales, as well as reduce fire risk and tree water stress during drought, but the effects of this practice have not been studied at scales commensurate with recent forest disturbances or under a highly variable climate. Modifying a historical runoff model, we constructed scenarios to estimate increases in runoff from thinning ponderosa pine at the landscape and watershed scales based on driving variables: pace, extent and intensity of forest treatments and variability in winter precipitation. We found that runoff on thinned forests was about 20% greater than unthinned forests, regardless of whether treatments occurred in a drought or pluvial period. The magnitude of this increase is similar to observed declines in snowpack for the region, suggesting that accelerated thinning may lessen runoff losses due to warming effects. Gains in runoff were temporary (six years after treatment) and modest when compared to mean annual runoff from the study watersheds (0–3%). Nonetheless gains observed during drought periods could play a role in augmenting river flows on a seasonal basis, improving conditions for water-dependent natural resources, as well as benefit water supplies for downstream communities. Results of this study and others suggest that accelerated forest thinning at large scales could improve the water balance and resilience of forests and sustain the ecosystem services they provide.

  19. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Corey Piccioni

fuel/energy, climate, and finance has occurred and one of the most ... this wave of large-scale land acquisitions. In fact, esti- ... Environmental Rights Action/Friends of the Earth, Nigeria ... map the differentiated impacts (gender, ethnicity, ...).

  20. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
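The dissipative part of the model described above is the usual eddy-viscosity term; the paper's contribution is the additional nondissipative nonlinear term, which is not reproduced here. As a hedged sketch of the conventional baseline only, the standard Smagorinsky-type eddy viscosity ν_t = (C_s Δ)² |S| can be computed from the velocity-gradient tensor; the C_s value and the shear field below are illustrative assumptions.

```python
import math

def strain_rate(grad_u):
    """Symmetric strain-rate tensor S_ij = 0.5 * (du_i/dx_j + du_j/dx_i)."""
    return [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(3)]
            for i in range(3)]

def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
    """Dissipative eddy viscosity nu_t = (C_s * Delta)^2 * |S|,
    with |S| = sqrt(2 * S_ij * S_ij) (summation over i, j)."""
    s = strain_rate(grad_u)
    s_mag = math.sqrt(2.0 * sum(s[i][j] ** 2
                                for i in range(3) for j in range(3)))
    return (c_s * delta) ** 2 * s_mag

# Pure shear du/dy = 2.0 gives S_12 = S_21 = 1.0 and |S| = 2.0
grad_u = [[0.0, 2.0, 0.0],
          [0.0, 0.0, 0.0],
          [0.0, 0.0, 0.0]]
nu_t = smagorinsky_nu_t(grad_u, delta=0.1)
```

A purely dissipative closure like this removes energy at all scales equally, which is exactly why it cannot represent the Coriolis-driven transport the abstract describes; the paper's nonlinear model term is added on top of a term of this form.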

  1. Scale-dependent spatial variability in peatland lead pollution in the southern Pennines, UK.

    Science.gov (United States)

    Rothwell, James J; Evans, Martin G; Lindsay, John B; Allott, Timothy E H

    2007-01-01

    Increasingly, within-site and regional comparisons of peatland lead pollution have been undertaken using the inventory approach. The peatlands of the Peak District, southern Pennines, UK, have received significant atmospheric inputs of lead over the last few hundred years. A multi-core study at three peatland sites in the Peak District demonstrates significant within-site spatial variability in industrial lead pollution. Stochastic simulations reveal that 15 peat cores are required to calculate reliable lead inventories at the within-site and within-region scale for this highly polluted area of the southern Pennines. Within-site variability in lead pollution is dominant at the within-region scale. The study demonstrates that significant errors may be associated with peatland lead inventories at sites where only a single peat core has been used to calculate an inventory. Meaningful comparisons of lead inventories at the regional or global scale can only be made if the within-site variability of lead pollution has been quantified reliably.
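The stochastic-simulation logic behind the "15 cores" result can be sketched with a Monte Carlo experiment: draw repeated virtual surveys of n cores from a skewed within-site distribution and find the smallest n whose sample mean reliably lands near the true site mean. The lognormal distribution, its parameters, and the 20%/95% tolerance below are illustrative assumptions, not the paper's field data.

```python
import math
import random
import statistics

random.seed(1)

def cores_needed(mu, sigma, tol=0.2, conf=0.95, trials=1000, max_n=50):
    """Smallest number of cores whose mean inventory lands within +/- tol
    of the true site mean in at least a fraction `conf` of simulated
    surveys. Within-site inventories are assumed lognormal here."""
    true_mean = math.exp(mu + sigma ** 2 / 2)   # exact lognormal mean
    for n in range(2, max_n + 1):
        hits = sum(
            abs(statistics.mean(random.lognormvariate(mu, sigma)
                                for _ in range(n)) - true_mean)
            <= tol * true_mean
            for _ in range(trials))
        if hits / trials >= conf:
            return n
    return None

n_cores = cores_needed(mu=3.0, sigma=0.5)
```

The answer scales with the within-site coefficient of variation squared, which is why a single core can give a badly biased inventory at a highly variable site while a low-variability site needs far fewer cores.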

  2. Origin of the large scale structures of the universe

    International Nuclear Information System (INIS)

    Oaknin, David H.

    2004-01-01

    We revise the statistical properties of the primordial cosmological density anisotropies that, at the time of matter-radiation equality, seeded the gravitational development of large scale structures in the otherwise homogeneous and isotropic Friedmann-Robertson-Walker flat universe. Our analysis shows that random fluctuations of the density field at the same instant of equality and with comoving wavelength shorter than the causal horizon at that time can naturally account, when globally constrained to conserve the total mass (energy) of the system, for the observed scale invariance of the anisotropies over cosmologically large comoving volumes. Statistical systems with similar features are generically known as glasslike or latticelike. Obviously, these conclusions conflict with the widely accepted understanding of the primordial structures reported in the literature, which requires an epoch of inflationary cosmology to precede the standard expansion of the universe. The origin of the conflict must be found in the widespread, but unjustified, claim that scale invariant mass (energy) anisotropies at the instant of equality over comoving volumes of cosmological size, larger than the causal horizon at the time, must be generated by fluctuations in the density field with comparably large comoving wavelength

  3. Factors Affecting the Rate of Penetration of Large-Scale Electricity Technologies: The Case of Carbon Sequestration

    Energy Technology Data Exchange (ETDEWEB)

    James R. McFarland; Howard J. Herzog

    2007-05-14

    This project falls under the Technology Innovation and Diffusion topic of the Integrated Assessment of Climate Change Research Program. The objective was to better understand the critical variables that affect the rate of penetration of large-scale electricity technologies in order to improve their representation in integrated assessment models. We conducted this research in six integrated tasks. In our first two tasks, we identified potential factors that affect penetration rates through discussions with modeling groups and through case studies of historical precedent. In the next three tasks, we investigated in detail three potential sets of critical factors: industrial conditions, resource conditions, and regulatory/environmental considerations. Research to assess the significance and relative importance of these factors involved the development of a microeconomic, system dynamics model of the US electric power sector. Finally, we implemented the penetration rate models in an integrated assessment model. While the focus of this effort is on carbon capture and sequestration technologies, much of the work will be applicable to other large-scale energy conversion technologies.

  4. Characterization of meter-scale spatial variability of riverbed hydraulic conductivity in a lowland river (Aa River, Belgium)

    Science.gov (United States)

    Ghysels, Gert; Benoit, Sien; Awol, Henock; Jensen, Evan Patrick; Debele Tolche, Abebe; Anibas, Christian; Huysmans, Marijke

    2018-04-01

An improved general understanding of riverbed heterogeneity is of importance for all groundwater modeling studies that include river-aquifer interaction processes. Riverbed hydraulic conductivity (K) is one of the main factors controlling river-aquifer exchange fluxes. However, the meter-scale spatial variability of riverbed K has not yet been adequately mapped. This study aims to fill this void by combining an extensive field measurement campaign focusing on both horizontal and vertical riverbed K with a detailed geostatistical analysis of the meter-scale spatial variability of riverbed K. In total, 220 slug tests and 45 standpipe tests were performed at two test sites along the Belgian Aa River. Omnidirectional and directional variograms (along and across the river) were calculated. Both horizontal and vertical riverbed K vary over several orders of magnitude and show significant meter-scale spatial variation. Horizontal K shows a bimodal distribution. Elongated zones of high horizontal K along the river course are observed at both sections, indicating a link between riverbed structures, depositional environment and flow regime. Vertical K is lognormally distributed and its spatial variability is mainly governed by the presence and thickness of a low permeable organic layer at the top of the riverbed. The absence of this layer in the center of the river leads to high vertical K and is related to scouring of the riverbed by high discharge events. Variograms of both horizontal and vertical K show a clear directional anisotropy with ranges along the river being twice as large as those across the river.
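The directional variograms underlying the anisotropy result can be sketched as follows: an empirical semivariogram γ(h) is the mean of half the squared differences between pairs of measurements separated by lag h, computed separately for pairs aligned along and across the river. The binning scheme and the synthetic transect below are illustrative assumptions, not the Aa River data.

```python
from collections import defaultdict

def directional_variogram(points, axis, lag_width=1.0, max_lag=5.0, tol=0.1):
    """Empirical semivariogram gamma(h) along one coordinate axis.

    points: list of (x, y, value); axis=0 pairs points along the river,
    axis=1 across it. Pairs must be nearly aligned with the chosen axis
    (separation on the other axis below `tol`).
    """
    other = 1 - axis
    sums = defaultdict(float)
    counts = defaultdict(int)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            pi, pj = points[i], points[j]
            if abs(pi[other] - pj[other]) > tol:
                continue                    # pair not aligned with this axis
            h = abs(pi[axis] - pj[axis])
            if h == 0 or h > max_lag:
                continue
            b = int(h / lag_width)          # lag bin
            sums[b] += 0.5 * (pi[2] - pj[2]) ** 2
            counts[b] += 1
    return {b: sums[b] / counts[b] for b in sorted(counts)}

# Synthetic transect along the river (y fixed): a linear trend in the
# measured value gives gamma(h) = 0.5 * h^2
pts = [(float(x), 0.0, float(x)) for x in range(6)]
vg_along = directional_variogram(pts, axis=0)
```

Comparing the lag at which γ(h) levels off (the range) in the two directions is what reveals anisotropy: in the study, ranges along the river were about twice those across it.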

  5. Scaling behavior and variable hopping conductivity in the quantum Hall plateau transition

    International Nuclear Information System (INIS)

    Tu, Tao; Zhao, Yong-Jie; Guo, Guo-Ping; Hao, Xiao-Jie; Guo, Guang-Can

    2007-01-01

We have measured the temperature dependence of the longitudinal resistivity ρxx of a two-dimensional electron system in the regime of the quantum Hall plateau transition. We extracted the quantitative form of the scaling function for ρxx and compared it with the results of ordinary scaling theory and variable-range-hopping-based theory. We find that the two alternative theoretically proposed scaling functions are valid in different regions.

  6. Technologies and challenges in large-scale phosphoproteomics

    DEFF Research Database (Denmark)

    Engholm-Keller, Kasper; Larsen, Martin Røssel

    2013-01-01

    become the main technique for discovery and characterization of phosphoproteins in a nonhypothesis driven fashion. In this review, we describe methods for state-of-the-art MS-based analysis of protein phosphorylation as well as the strategies employed in large-scale phosphoproteomic experiments...... with focus on the various challenges and limitations this field currently faces....

  7. Some ecological guidelines for large-scale biomass plantations

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, W.; Cook, J.H.; Beyea, J. [National Audubon Society, Tavernier, FL (United States)

    1993-12-31

    The National Audubon Society sees biomass as an appropriate and necessary source of energy to help replace fossil fuels in the near future, but is concerned that large-scale biomass plantations could displace significant natural vegetation and wildlife habitat, and reduce national and global biodiversity. We support the development of an industry large enough to provide significant portions of our energy budget, but we see a critical need to ensure that plantations are designed and sited in ways that minimize ecological disruption, or even provide environmental benefits. We have been studying the habitat value of intensively managed short-rotation tree plantations. Our results show that these plantations support large populations of some birds, but not all of the species using the surrounding landscape, and indicate that their value as habitat can be increased greatly by including small areas of mature trees within them. We believe short-rotation plantations can benefit regional biodiversity if they can be deployed as buffers for natural forests, or as corridors connecting forest tracts. To realize these benefits, and to avoid habitat degradation, regional biomass plantation complexes (e.g., the plantations supplying all the fuel for a powerplant) need to be planned, sited, and developed as large-scale units in the context of the regional landscape mosaic.

  8. Large-Scale and Global Hydrology. Chapter 92

    Science.gov (United States)

    Rodell, Matthew; Beaudoing, Hiroko Kato; Koster, Randal; Peters-Lidard, Christa D.; Famiglietti, James S.; Lakshmi, Venkat

    2016-01-01

Powered by the sun, water moves continuously between and through Earth's oceanic, atmospheric, and terrestrial reservoirs. It enables life, shapes Earth's surface, and responds to and influences climate change. Scientists measure various features of the water cycle using a combination of ground, airborne, and space-based observations, and seek to characterize it at multiple scales with the aid of numerical models. Over time our understanding of the water cycle and ability to quantify it have improved, owing to advances in observational capabilities, the extension of the data record, and increases in computing power and storage. Here we present some of the most recent estimates of global and continental ocean basin scale water cycle stocks and fluxes and provide examples of modern numerical modeling systems and reanalyses. Further, we discuss prospects for predicting water cycle variability at seasonal and longer scales, which is complicated by a changing climate and direct human impacts related to water management and agriculture. Changes to the water cycle will be among the most obvious and important facets of climate change, thus it is crucial that we continue to invest in our ability to monitor it.

  9. Downscaling the Impacts of Large-Scale LUCC on Surface Temperature along with IPCC RCPs: A Global Perspective

    Directory of Open Access Journals (Sweden)

    Xiangzheng Deng

    2014-04-01

Full Text Available This study focuses on the potential impacts of large-scale land use and land cover changes (LUCC) on surface temperature from a global perspective. As important types of LUCC, urbanization, deforestation, cultivated land reclamation, and grassland degradation all have effects on the climate; the potential changes in surface temperature caused by these four types of large-scale LUCC from 2010 to 2050 are downscaled and analyzed worldwide along with the Representative Concentration Pathways (RCPs) of the Intergovernmental Panel on Climate Change (IPCC). The first case study presents some evidence of the effects of future urbanization on surface temperature in the Northeast megalopolis of the United States of America (USA). In order to understand the potential climatological variability and vulnerability caused by future deforestation, we chose the Brazilian Amazon region as the second case study. The third case study explores the possible climatic impacts of cultivated land reclamation in India, a typical region for this land use change. In the fourth case study, we simulate the surface temperature changes caused by future grassland degradation in Mongolia. Results show that the temperature in built-up areas would increase markedly across all four land types. In addition, the effects of all four types of large-scale LUCC on monthly average temperature change would vary from month to month with obvious spatial heterogeneity.

  10. Multi-format all-optical processing based on a large-scale, hybridly integrated photonic circuit.

    Science.gov (United States)

    Bougioukos, M; Kouloumentas, Ch; Spyropoulou, M; Giannoulis, G; Kalavrouziotis, D; Maziotis, A; Bakopoulos, P; Harmon, R; Rogers, D; Harrison, J; Poustie, A; Maxwell, G; Avramopoulos, H

    2011-06-06

    We investigate through numerical studies and experiments the performance of a large scale, silica-on-silicon photonic integrated circuit for multi-format regeneration and wavelength-conversion. The circuit encompasses a monolithically integrated array of four SOAs inside two parallel Mach-Zehnder structures, four delay interferometers and a large number of silica waveguides and couplers. Exploiting phase-incoherent techniques, the circuit is capable of processing OOK signals at variable bit rates, DPSK signals at 22 or 44 Gb/s and DQPSK signals at 44 Gbaud. Simulation studies reveal the wavelength-conversion potential of the circuit with enhanced regenerative capabilities for OOK and DPSK modulation formats and acceptable quality degradation for DQPSK format. Regeneration of 22 Gb/s OOK signals with amplified spontaneous emission (ASE) noise and DPSK data signals degraded with amplitude, phase and ASE noise is experimentally validated demonstrating a power penalty improvement up to 1.5 dB.

  11. Multi-scale glycemic variability: a link to gray matter atrophy and cognitive decline in type 2 diabetes.

    Directory of Open Access Journals (Sweden)

    Xingran Cui

Full Text Available Type 2 diabetes mellitus (DM) accelerates brain aging and cognitive decline. Complex interactions between hyperglycemia, glycemic variability and brain aging remain unresolved. This study investigated the relationship between glycemic variability at multiple time scales, brain volumes and cognition in type 2 DM. Forty-three older adults with and 26 without type 2 DM completed 72-hour continuous glucose monitoring, cognitive tests and anatomical MRI. We described a new analysis of continuous glucose monitoring, termed Multi-Scale glycemic variability (Multi-Scale GV), to examine glycemic variability at multiple time scales. Specifically, Ensemble Empirical Mode Decomposition was used to identify five unique ultradian glycemic variability cycles (GVC1-5) that modulate serum glucose with periods ranging from 0.5-12 hrs. Type 2 DM subjects demonstrated greater variability in GVC3-5 (period 2.0-12 hrs) than controls (P<0.0001), during the day as well as during the night. Multi-Scale GV was related to conventional markers of glycemic variability (e.g. standard deviation and mean glycemic excursions), but demonstrated greater sensitivity and specificity than conventional markers, and was associated with worse long-term glycemic control (e.g. fasting glucose and HbA1c). Across all subjects, those with greater glycemic variability within higher-frequency cycles (GVC1-3; 0.5-2.0 hrs) had less gray matter within the limbic system and temporo-parietal lobes (e.g. cingulum, insular, hippocampus), and exhibited worse cognitive performance. Specifically within those with type 2 DM, greater glycemic variability in GVC2-3 was associated with worse learning and memory scores. Greater variability in GVC5 was associated with longer DM duration and more depression. These relationships were independent of HbA1c and hypoglycemic episodes. Type 2 DM is associated with dysregulation of glycemic variability over multiple scales of time.
These time-scale-dependent glycemic fluctuations
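Full Ensemble Empirical Mode Decomposition is too involved to reproduce here, but the idea of quantifying variability within one ultradian band can be sketched with a crude stand-in: difference two moving averages of the glucose trace so that only cycles between the two smoothing windows survive, then take the standard deviation of that band. The 5-minute sampling, window sizes, and synthetic signal below are illustrative assumptions, not the study's CGM data or method.

```python
import math
import statistics

def moving_average(x, w):
    """Boxcar smoother with window w samples (shrinking at the edges)."""
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - w // 2), min(len(x), i + w // 2 + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def band_variability(glucose, short_w, long_w):
    """SD of the glucose signal restricted to cycles between the two
    smoothing windows -- a crude stand-in for one EEMD mode (one GVC)."""
    band = [a - b for a, b in zip(moving_average(glucose, short_w),
                                  moving_average(glucose, long_w))]
    return statistics.pstdev(band)

# Synthetic 24 h of 5-min samples: a 1-hour ultradian cycle on a
# 100 mg/dL baseline
glucose = [100.0 + 20.0 * math.sin(2 * math.pi * i / 12) for i in range(288)]
gv_fast = band_variability(glucose, short_w=3, long_w=24)  # ~0.25-2 h band
```

EEMD does this adaptively, letting the data pick the mode periods instead of fixed windows, which is why the study can assign variability to distinct GVC1-5 cycles.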

  12. Large-scale building energy efficiency retrofit: Concept, model and control

    International Nuclear Information System (INIS)

    Wu, Zhou; Wang, Bo; Xia, Xiaohua

    2016-01-01

BEER (Building energy efficiency retrofit) projects are initiated in many nations and regions over the world. Existing studies of BEER focus on modeling and planning based on one building and a one-year period of retrofitting, which cannot be applied to large BEER projects with multiple buildings and multi-year retrofits. In this paper, the large-scale BEER problem is defined in a general TBT (time-building-technology) framework, which fits the essential requirements of real-world projects. Large-scale BEER is newly studied via a control approach rather than the optimization approach commonly used before. Optimal control is proposed to design an optimal retrofitting strategy in terms of maximal energy savings and maximal NPV (net present value). The designed strategy changes dynamically along the dimensions of time, building and technology. The TBT framework and the optimal control approach are verified on a large BEER project, and results indicate that promising energy and cost savings can be achieved in the general TBT framework. - Highlights: • Energy efficiency retrofit of many buildings is studied. • A TBT (time-building-technology) framework is proposed. • The control system of the large-scale BEER is modeled. • The optimal retrofitting strategy is obtained.
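The TBT framework's decision structure, which retrofit technology goes into which building in which year, can be sketched with a deliberately simple stand-in. The paper solves this with optimal control; the greedy saving-per-cost rule, the candidate list, costs, and budgets below are all invented for illustration.

```python
# Candidate retrofits: (building, technology, cost, annual_energy_saving)
candidates = [
    ("A", "lighting",   10.0, 4.0),
    ("A", "HVAC",       40.0, 9.0),
    ("B", "lighting",   12.0, 5.0),
    ("B", "insulation", 30.0, 8.0),
]

def plan_retrofits(candidates, budget_per_year, years):
    """Greedy sketch of a TBT-style plan: each year, fund the remaining
    retrofits with the best saving-per-cost ratio until the annual budget
    runs out. The result is a strategy indexed by time, building and
    technology, as in the TBT framework."""
    remaining = sorted(candidates, key=lambda c: c[3] / c[2], reverse=True)
    plan = {}
    for year in range(1, years + 1):
        budget, chosen, deferred = budget_per_year, [], []
        for c in remaining:
            if c[2] <= budget:
                budget -= c[2]
                chosen.append(c)
            else:
                deferred.append(c)       # pushed to a later year
        remaining = deferred
        plan[year] = chosen
    return plan

plan = plan_retrofits(candidates, budget_per_year=40.0, years=2)
```

Greedy ignores interactions between measures and the time value of money; an optimal-control formulation like the paper's maximizes NPV over the whole horizon instead of filling each year's budget myopically.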

  13. Worldwide large-scale fluctuations of sardine and anchovy ...

    African Journals Online (AJOL)

    Worldwide large-scale fluctuations of sardine and anchovy populations. African Journal of Marine Science. http://dx.doi.org/10.2989/AJMS.2008.30.1.13.463.

  14. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
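The robust-optimization modeling mentioned above can be made concrete: under box (interval) uncertainty, a linear constraint a·x <= b holds for every admissible a exactly when it holds in the worst case, which tightens each coefficient by its deviation. A minimal sketch, with hypothetical names:

```python
def robust_feasible(x, a_hat, delta, b):
    """Check a.x <= b for every a with |a_i - a_hat_i| <= delta_i
    (box uncertainty). In the worst case each coefficient moves
    against us by delta_i in the direction of sign(x_i), so the
    robust counterpart is sum(a_hat_i*x_i + delta_i*|x_i|) <= b."""
    worst = sum(a * xi + d * abs(xi) for a, d, xi in zip(a_hat, delta, x))
    return worst <= b
```

This reformulation keeps the constraint linear (for fixed signs of x), which is what lets chance-constrained and robust traffic LPs stay tractable at large scale.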

  15. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out… model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for the application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors…). Simulation programs are proposed as a control-supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSHP technology is put into perspective with respect to alternatives, and a short discussion on the barriers and breakthrough of the technology is given…

  16. 40 Variability Bugs in the Linux Kernel

    DEFF Research Database (Denmark)

    Abal Rivas, Iago; Brabrand, Claus; Wasowski, Andrzej

    2014-01-01

    Feature-sensitive verification is a recent field that pursues the effective analysis of the exponential number of variants of a program family. Today researchers lack examples of concrete bugs induced by variability, and occurring in real large-scale software. Such a collection of bugs is a requi… …the outcome of our analysis into a database. In addition, we provide self-contained simplified C99 versions of the bugs, facilitating understanding and tool evaluation. Our study provides insights about the nature and occurrence of variability bugs in a large C software system, and shows in what ways…
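The exponential blow-up that motivates feature-sensitive verification can be made concrete with a hypothetical brute-force sketch: run a program under every on/off combination of its features and collect the configurations that fail. Real feature-sensitive tools exist precisely to avoid this enumeration; the names below are illustrative, not from the study's database.

```python
from itertools import product

def find_failing_configs(features, program):
    """Brute-force variant exploration: run `program` under every
    on/off combination of `features` (2**len(features) variants)
    and collect the configurations that raise, i.e. exhibit a
    variability bug present only under some feature selections."""
    failing = []
    for bits in product([False, True], repeat=len(features)):
        config = dict(zip(features, bits))
        try:
            program(config)
        except Exception:
            failing.append(config)
    return failing
```

A bug that manifests only when one feature is enabled without another is invisible to single-configuration testing, which is exactly the kind of interaction the 40 collected Linux kernel bugs exemplify.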

  17. Large-scale exact diagonalizations reveal low-momentum scales of nuclei

    Science.gov (United States)

    Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.

    2018-03-01

    Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10^10 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for ^6Li in model spaces up to Nmax=22 and to reveal the ^4He+d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that is not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
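The IR extrapolations described above fit observables computed at several effective basis lengths L to the exponential correction form E(L) = E_inf + a*exp(-2*k_inf*L). A minimal, hypothetical fitting sketch (grid scan over E_inf plus a log-linear least-squares fit for a and k at each candidate; not the authors' code):

```python
import math

def fit_ir_extrapolation(L, E, e_inf_grid):
    """Fit E(L) = E_inf + a * exp(-2 * k * L) by scanning candidate
    E_inf values; for each, fit log(E - E_inf) linearly in L and keep
    the candidate with the smallest sum of squared residuals."""
    best = None
    for e_inf in e_inf_grid:
        d = [e - e_inf for e in E]
        if any(v <= 0 for v in d):
            continue  # log fit needs positive residuals
        y = [math.log(v) for v in d]
        n = len(L)
        sx, sy = sum(L), sum(y)
        sxx = sum(l * l for l in L)
        sxy = sum(l * yi for l, yi in zip(L, y))
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        icept = (sy - slope * sx) / n
        a, k = math.exp(icept), -slope / 2.0
        sse = sum((e_inf + a * math.exp(-2.0 * k * l) - e) ** 2
                  for l, e in zip(L, E))
        if best is None or sse < best[0]:
            best = (sse, e_inf, a, k)
    return best[1], best[2], best[3]
```

The fitted k here plays the role of the small momentum scale that the paper relates to the threshold energy of the first open decay channel.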

  18. The economics and environmental impacts of large-scale wind power in a carbon constrained world

    Science.gov (United States)

    Decarolis, Joseph Frank

    Serious climate change mitigation aimed at stabilizing atmospheric concentrations of CO2 will require a radical shift to a decarbonized energy supply. The electric power sector will be a primary target for deep reductions in CO2 emissions because electric power plants are among the largest and most manageable point sources of emissions. With respect to new capacity, wind power is currently one of the most inexpensive ways to produce electricity without CO2 emissions and it may have a significant role to play in a carbon constrained world. Yet most research in the wind industry remains focused on near term issues, while energy system models that focus on century-long time horizons undervalue wind by imposing exogenous limits on growth. This thesis fills a critical gap in the literature by taking a closer look at the cost and environmental impacts of large-scale wind. Estimates of the average cost of wind generation---now roughly 4¢/kWh---do not address the costs arising from the spatial distribution and intermittency of wind. This thesis develops a theoretical framework for assessing the intermittency cost of wind. In addition, an economic characterization of a wind system is provided in which long-distance electricity transmission, storage, and gas turbines are used to supplement variable wind power output to meet a time-varying load. With somewhat optimistic assumptions about the cost of wind turbines, the use of wind to serve 50% of demand adds ~1-2¢/kWh to the cost of electricity, a cost comparable to that of other large-scale low carbon technologies. This thesis also explores the environmental impacts posed by large-scale wind. Though avian mortality and noise caused controversy in the early years of wind development, improved technology and exhaustive siting assessments have minimized their impact. The aesthetic valuation of wind farms can be improved significantly with better design, siting, construction, and maintenance procedures, but opposition may
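The economics of supplementing variable wind with dispatchable backup, as characterized above, can be sketched with a back-of-the-envelope calculation. The accounting here (spilled wind is still paid for, backup covers any hourly shortfall, no transmission or storage) and all names and prices are simplifying assumptions, not the thesis's full system model.

```python
def system_cost_per_kwh(load_kwh, wind_kwh, wind_cost, backup_cost):
    """For each hour, wind serves up to the load (any surplus is spilled);
    dispatchable backup covers the remainder. Returns the average cost per
    delivered kWh, with spilled wind's capital cost still counted."""
    total_cost = total_load = 0.0
    for load, wind in zip(load_kwh, wind_kwh):
        total_cost += wind * wind_cost                     # all wind output paid for
        total_cost += max(0.0, load - wind) * backup_cost  # shortfall from backup
        total_load += load
    return total_cost / total_load
```

Because spilled wind earns nothing yet still costs money, the delivered-energy cost rises with wind penetration, which is the intermittency effect the thesis quantifies at roughly 1-2¢/kWh for 50% wind.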

  19. Large Scale Visual Recommendations From Street Fashion Images

    OpenAIRE

    Jagadeesh, Vignesh; Piramuthu, Robinson; Bhardwaj, Anurag; Di, Wei; Sundaresan, Neel

    2014-01-01

    We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose four data driven models in the form of Complementary Nearest Neighbor Consensus, Gaussian Mixture Models, Texture Agnostic Retrieval and Markov Chain LDA for solving this problem. We analyze relative merits and pitfalls of these algorithms through extensive e...

  20. "Large"- vs Small-scale friction control in turbulent channel flow

    Science.gov (United States)

    Canton, Jacopo; Örlü, Ramis; Chin, Cheng; Schlatter, Philipp

    2017-11-01

    We reconsider the "large-scale" control scheme proposed by Hussain and co-workers (Phys. Fluids 10, 1049-1051, 1998, and Phys. Rev. Fluids 2, 62601, 2017), using new direct numerical simulations (DNS). The DNS are performed in a turbulent channel at friction Reynolds number Reτ of up to 550 in order to eliminate low-Reynolds-number effects. The purpose of the present contribution is to re-assess this control method in the light of more modern developments in the field, in particular also related to the discovery of (very) large-scale motions. The goals of the paper are as follows: first, we want to better characterise the physics of the control and assess which external contributions (vortices, forcing, wall motion) are actually needed; then, we investigate the optimal parameters and, finally, determine which aspects of this control technique actually scale in outer units and can therefore be of use in practical applications. In addition to discussing the mentioned drag-reduction effects, the present contribution will also address the potential effect of the naturally occurring large-scale motions on frictional drag, and give indications on the physical processes for potential drag reduction possible at all Reynolds numbers.