WorldWideScience

Sample records for large-scale precipitation estimation

  1. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the thin-plate spline smoothing algorithm (ANUSPLIN), the Water and Global Change (WATCH) project, and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
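
    The verification criteria mentioned above typically reduce to a few standard scores computed between a gridded product and the gauge observations. A minimal sketch (illustrative; the study's exact criteria are not specified here):

    ```python
    import numpy as np

    def verify(product, gauges):
        """Basic verification scores for a gridded product sampled at gauge locations."""
        product = np.asarray(product, dtype=float)
        gauges = np.asarray(gauges, dtype=float)
        bias = float(np.mean(product - gauges))                  # mean error
        rmse = float(np.sqrt(np.mean((product - gauges) ** 2)))  # root-mean-square error
        corr = float(np.corrcoef(product, gauges)[0, 1])         # Pearson correlation
        return bias, rmse, corr
    ```

    Scores like these can be computed per season and per region to compare CANGRD, NCEP, WATCH, ANUSPLIN and CaPA against the gauge network.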

  2. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and omics data analysis.
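
    The overfitting problem the authors address can be illustrated with a simple, non-hierarchical shrinkage estimator that pulls the noisy sample covariance toward a structured diagonal target. This is a generic sketch of the idea, not the paper's Bayesian model:

    ```python
    import numpy as np

    def shrinkage_cov(X, alpha=0.2):
        """Shrink the sample covariance toward its diagonal to reduce the
        variance of off-diagonal estimates in high dimensions (illustrative;
        alpha is a hypothetical shrinkage weight)."""
        S = np.cov(X, rowvar=False)       # p x p sample covariance
        target = np.diag(np.diag(S))      # structured target: diagonal of S
        return (1 - alpha) * S + alpha * target
    ```

    The diagonal is left untouched while every off-diagonal entry is damped, which is the basic variance-reduction mechanism that hierarchical priors formalize.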

  3. Long-Term Large-Scale Bias-Adjusted Precipitation Estimates at High Spatial and Temporal Resolution Derived from the National Mosaic and Multi-Sensor QPE (NMQ/Q2) Precipitation Reanalysis over CONUS

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Seo, D. J.; Kim, B.

    2014-12-01

    The processing of the radar-only precipitation reanalysis from the National Mosaic and Multi-Sensor Quantitative Precipitation Estimation system (NMQ/Q2), based on the WSR-88D Next-Generation Radar (NEXRAD) network over the Continental United States (CONUS), is nearly complete for the period 2000 to 2012. This important milestone constitutes a unique opportunity to study precipitation processes at 1-km spatial and 5-min temporal resolution. However, in order to be suitable for hydrological, meteorological and climatological applications, the radar-only product needs to be bias-adjusted and merged with in-situ rain gauge information. Rain gauge networks such as the Hydrometeorological Automated Data System (HADS), the Automated Surface Observing Systems (ASOS), the Climate Reference Network (CRN), and the Global Historical Climatology Network - Daily (GHCN-D) are used to adjust for those biases and are merged with the radar-only product to provide a multi-sensor estimate. The challenges related to incorporating non-homogeneous networks over a vast area and for a long-term record are enormous. Among them are the difficulty of incorporating surface measurements of differing resolution and quality to adjust gridded estimates of precipitation, and the choice of adjustment technique. After assessing the bias and applying reduction or elimination techniques, we are investigating the kriging method and its variants, such as simple kriging (SK), ordinary kriging (OK), and conditional bias-penalized kriging (CBPK), among others. In addition, we hope to generate uncertainty estimates for the gridded product. In this work the methodology is presented, as well as a comparison between the radar-only product and the final multi-sensor QPE product. The comparison is performed at various time scales, from sub-hourly to annual. In addition, comparisons over the same period with a suite of lower-resolution QPEs derived from ground-based radar
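
    Of the kriging variants mentioned, ordinary kriging is the simplest to sketch: gauge values are weighted by solving a linear system built from a semivariogram, with the weights constrained to sum to one. The toy below interpolates to a single target location using an exponential semivariogram with hypothetical range and sill; it illustrates the adjustment machinery, not the NMQ/Q2 implementation:

    ```python
    import numpy as np

    def ordinary_kriging(xy, z, x0, rng_km=50.0, sill=1.0):
        """One-point ordinary kriging. xy: (n, 2) gauge coordinates, z: (n,)
        gauge values, x0: (2,) target location. rng_km and sill are
        illustrative variogram parameters."""
        def gamma(h):
            return sill * (1.0 - np.exp(-h / rng_km))  # exponential semivariogram
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        n = len(z)
        A = np.empty((n + 1, n + 1))
        A[:n, :n] = gamma(d)          # gauge-gauge semivariances
        A[:n, n] = 1.0                # Lagrange column enforcing sum(w) = 1
        A[n, :n] = 1.0
        A[n, n] = 0.0
        b = np.append(gamma(np.linalg.norm(xy - x0, axis=1)), 1.0)
        w = np.linalg.solve(A, b)[:n]  # kriging weights
        return float(w @ z)
    ```

    Because gamma(0) = 0, the estimator is an exact interpolator: at a gauge location it returns that gauge's value.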

  4. Large-Scale Covariability Between Aerosol and Precipitation Over the 7-SEAS Region: Observations and Simulations

    Science.gov (United States)

    Huang, Jingfeng; Hsu, N. Christina; Tsay, Si-Chee; Zhang, Chidong; Jeong, Myeong Jae; Gautam, Ritesh; Bettenhausen, Corey; Sayer, Andrew M.; Hansell, Richard A.; Liu, Xiaohong; hide

    2012-01-01

    One of the seven scientific areas of interest of the 7-SEAS field campaign is to evaluate the impact of aerosol on cloud and precipitation (http://7-seas.gsfc.nasa.gov). However, large-scale covariability between aerosol, cloud and precipitation is complicated not only by the ambient environment and a variety of aerosol effects, but also by effects from rain washout and climate factors. This study characterizes large-scale aerosol-cloud-precipitation covariability through a synergy of long-term multi-sensor satellite observations with model simulations over the 7-SEAS region [10S-30N, 95E-130E]. Results show that climate factors such as ENSO significantly modulate aerosol and precipitation over the region simultaneously. After removal of climate factor effects, aerosol and precipitation are significantly anti-correlated over the southern part of the region, where high aerosol loading is associated with overall reduced total precipitation, intensified rain rates and decreased rain frequency, decreased tropospheric latent heating, suppressed cloud-top height and increased outgoing longwave radiation, and enhanced clear-sky but reduced all-sky shortwave TOA flux in deep convective regimes; such covariability becomes less notable over the northern part of the region, where low-level stratus are found. Using CO as a proxy for biomass-burning aerosols to minimize the washout effect, large-scale covariability between CO and precipitation was also investigated, and similar covariability was observed. Model simulations with NCAR CAM5 were found to show spatio-temporal patterns similar to the observations. Results from both observations and simulations are valuable for improving our understanding of this region's meteorological system and the roles of aerosol within it. Key words: aerosol; precipitation; large-scale covariability; aerosol effects; washout; climate factors; 7-SEAS; CO; CAM5
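
    "Removal of climate factor effects" before correlating aerosol and precipitation is, in its simplest linear form, a partial correlation: regress both variables on a climate index (e.g. an ENSO index) and correlate the residuals. A minimal sketch, not the study's exact procedure:

    ```python
    import numpy as np

    def partial_corr(x, y, z):
        """Correlation of x and y after linearly removing a covariate z
        (e.g. an ENSO index time series)."""
        def resid(a, covariate):
            # least-squares fit of a on [1, covariate]; keep the residual
            G = np.column_stack([np.ones_like(covariate), covariate])
            beta, *_ = np.linalg.lstsq(G, a, rcond=None)
            return a - G @ beta
        return float(np.corrcoef(resid(x, z), resid(y, z))[0, 1])
    ```

    If two series covary only because both track the index, their partial correlation collapses toward zero; a residual anti-correlation, as found over the southern part of the region, survives the adjustment.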

  5. Effects of large-scale deforestation on precipitation in the monsoon regions: remote versus local effects.

    Science.gov (United States)

    Devaraju, N; Bala, Govindasamy; Modak, Angshuman

    2015-03-17

    In this paper, using idealized climate model simulations, we investigate the biogeophysical effects of large-scale deforestation on monsoon regions. We find that the remote forcing from large-scale deforestation in the northern middle and high latitudes shifts the Intertropical Convergence Zone southward. This results in a significant decrease in precipitation in the Northern Hemisphere monsoon regions (East Asia, North America, North Africa, and South Asia) and moderate precipitation increases in the Southern Hemisphere monsoon regions (South Africa, South America, and Australia). The magnitude of the monsoonal precipitation changes depends on the location of deforestation, with remote effects showing a larger influence than local effects. The South Asian Monsoon region is affected the most, with 18% decline in precipitation over India. Our results indicate that any comprehensive assessment of afforestation/reforestation as climate change mitigation strategies should carefully evaluate the remote effects on monsoonal precipitation alongside the large local impacts on temperatures.

  6. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    Science.gov (United States)

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective, by efficiently modeling low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that low-rank structure is common in many applications, including climate and financial analysis; another is that such an assumption can reduce the computational complexity of computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets, including ones with thousands of millions of variables, show that the COP method is faster than state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
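
    The computational payoff of a low-rank-plus-diagonal precision matrix is that its inverse never requires a dense p x p factorization: the Woodbury identity reduces the work to a k x k solve, where k is the rank of the low-rank part. A generic sketch of that step (not the COP algorithm itself):

    ```python
    import numpy as np

    def inv_lowrank_plus_diag(d, V):
        """Invert Omega = diag(d) + V @ V.T via the Woodbury identity.
        d: (p,) positive diagonal, V: (p, k) low-rank factor with k << p."""
        Dinv = 1.0 / d
        W = Dinv[:, None] * V                     # D^{-1} V, shape (p, k)
        M = np.eye(V.shape[1]) + V.T @ W          # k x k capacitance matrix
        # Sigma = D^{-1} - D^{-1} V (I + V^T D^{-1} V)^{-1} V^T D^{-1}
        return np.diag(Dinv) - W @ np.linalg.solve(M, W.T)
    ```

    The only dense solve is on the k x k matrix M, which is the reduction in complexity the assumption buys.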

  7. Identification of large-scale meteorological patterns associated with extreme precipitation in the US northeast

    Science.gov (United States)

    Agel, Laurie; Barlow, Mathew; Feldstein, Steven B.; Gutowski, William J.

    2018-03-01

    Patterns of daily large-scale circulation associated with Northeast US extreme precipitation are identified using both k-means clustering (KMC) and Self-Organizing Maps (SOM) applied to tropopause height. The tropopause height provides a compact representation of the upper-tropospheric potential vorticity, which is closely related to the overall evolution and intensity of weather systems. Extreme precipitation is defined as the top 1% of daily wet-day observations at 35 Northeast stations, 1979-2008. KMC is applied to extreme precipitation days only, while the SOM algorithm is applied to all days in order to place the extreme results into the overall context of patterns for all days. Six tropopause patterns are identified through KMC for extreme precipitation days: a summertime tropopause ridge, a summertime shallow trough/ridge, a summertime shallow eastern US trough, a deeper wintertime eastern US trough, and two versions of a deep cold-weather trough located across the east-central US. Thirty SOM patterns for all days are identified. Results for all days show that six SOM patterns account for almost half of the extreme days, although extreme precipitation occurs in all SOM patterns. The same SOM patterns associated with extreme precipitation also routinely produce non-extreme precipitation; however, on extreme precipitation days the troughs, on average, are deeper and the downstream ridges more pronounced. Analysis of other fields associated with the large-scale patterns shows various degrees of anomalously strong moisture transport preceding, and upward motion during, extreme precipitation events.
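
    The KMC step can be sketched with a minimal k-means on flattened daily fields, where each row is one day's tropopause-height map (illustrative; the study's preprocessing and distance choices are not reproduced here):

    ```python
    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        """Minimal k-means. X: (n_days, n_gridpoints) flattened daily fields.
        Returns per-day cluster labels and the cluster-mean patterns."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]   # random initial days
        for _ in range(iters):
            # assign each day to its nearest cluster-mean pattern
            labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return labels, centers
    ```

    The returned centers are the composite circulation patterns; the labels allow counting how often each pattern coincides with extreme-precipitation days.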

  8. Efficient Topology Estimation for Large Scale Optical Mapping

    CERN Document Server

    Elibol, Armagan; Garcia, Rafael

    2013-01-01

    Large-scale optical mapping methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost ROVs usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predefined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This book contributes to the state of the art in large-area image mosaicing methods for underwater surveys using low-cost vehicles equipped with a very limited sensor suite. The main focus has been on global alignment...

  9. Large-scale Meteorological Patterns Associated with Extreme Precipitation Events over Portland, OR

    Science.gov (United States)

    Aragon, C.; Loikith, P. C.; Lintner, B. R.; Pike, M.

    2017-12-01

    Extreme precipitation events can have profound impacts on human life and infrastructure, with broad implications across a range of stakeholders. Changes to extreme precipitation events are a projected outcome of climate change that warrants further study, especially at regional to local scales. While global climate models are generally capable of simulating mean climate at global to regional scales with reasonable skill, resiliency and adaptation decisions are made at local scales, where most state-of-the-art climate models are limited by coarse resolution. Characterization of large-scale meteorological patterns associated with extreme precipitation events at local scales can provide climatic information without this scale limitation, thus facilitating stakeholder decision-making. This research will use synoptic climatology as a tool by which to characterize the key large-scale meteorological patterns associated with extreme precipitation events in the Portland, Oregon metro region. Composite analysis of meteorological patterns associated with extreme precipitation days, and associated watershed-specific flooding, is employed to enhance understanding of the climatic drivers behind such events. The self-organizing maps approach is then used to characterize the within-composite variability of the large-scale meteorological patterns associated with extreme precipitation events, allowing us to better understand the different types of meteorological conditions that lead to high-impact precipitation events and associated hydrologic impacts. A more comprehensive understanding of the meteorological drivers of extremes will aid in evaluating the ability of climate models to capture key patterns associated with extreme precipitation over Portland, and in better interpreting projections of future climate at impact-relevant scales.

  10. How do the multiple large-scale climate oscillations trigger extreme precipitation?

    Science.gov (United States)

    Shi, Pengfei; Yang, Tao; Xu, Chong-Yu; Yong, Bin; Shao, Quanxi; Li, Zhenya; Wang, Xiaoyan; Zhou, Xudong; Li, Shu

    2017-10-01

    Identifying the links between variations in large-scale climate patterns and precipitation is of tremendous assistance in characterizing surplus or deficit of precipitation, which is especially important for evaluation of local water resources and ecosystems in semi-humid and semi-arid regions. Restricted by current limited knowledge of the underlying mechanisms, statistical correlation methods are often used rather than physically based models to characterize the connections. Nevertheless, available correlation methods are generally unable to reveal the interactions among a wide range of climate oscillations and their associated effects on precipitation, especially on extreme precipitation. In this work, a probabilistic analysis approach based on a copula-based joint probability distribution is developed to characterize the aggregated behavior of large-scale climate patterns and their connections to precipitation. This method is employed to identify the complex connections between climate patterns (Atlantic Multidecadal Oscillation (AMO), El Niño-Southern Oscillation (ENSO) and Pacific Decadal Oscillation (PDO)) and seasonal precipitation over a typical semi-humid and semi-arid region, the Haihe River Basin in China. Results show that the interactions among multiple climate oscillations are non-uniform in most seasons and phases. Certain joint extreme phases can significantly trigger extreme precipitation (flood and drought) owing to the amplification effect among climate oscillations.
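
    The amplification effect of joint extreme phases can be illustrated with the simplest copula: under a bivariate Gaussian copula, positively correlated oscillation phases make joint exceedances far more likely than the independence benchmark (1-q)^2. A Monte Carlo sketch (generic; the paper's fitted copula and index data are not reproduced):

    ```python
    import numpy as np

    def joint_tail_prob(rho, q=0.9, n=100_000, seed=0):
        """Monte Carlo estimate of P(U1 > q, U2 > q) under a Gaussian copula
        with correlation rho. Margins are made uniform by rank-transforming,
        so only the dependence structure matters (the defining copula idea)."""
        rng = np.random.default_rng(seed)
        z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
        u = z.argsort(axis=0).argsort(axis=0) / n   # rank-transform to uniforms
        return float(np.mean((u[:, 0] > q) & (u[:, 1] > q)))
    ```

    With rho = 0 the joint tail probability is near (1-q)^2 = 0.01; with strongly correlated phases it is several times larger, which is the mechanism by which aligned oscillation phases trigger extremes.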

  11. African aerosol and large-scale precipitation variability over West Africa

    International Nuclear Information System (INIS)

    Huang Jingfeng; Zhang Chidong; Prospero, Joseph M

    2009-01-01

    We investigated the large-scale connection between African aerosol and precipitation in the West African Monsoon (WAM) region using 8-year (2000-2007) monthly and daily Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol products (aerosol optical depth, fine mode fraction) and Tropical Rainfall Measuring Mission (TRMM) precipitation and rain type. These high-quality data further confirmed our previous results that the large-scale link between aerosol and precipitation in this region undergoes distinct seasonal and spatial variability. Previously detected suppression of precipitation during months of high aerosol concentration occurs in both convective and stratiform rain, but not systematically in shallow rain. This suggests the suppression of deep convection due to the aerosol. Based on the seasonal cycle of dust and smoke and their geographical distribution, our data suggest that both dust (coarse mode aerosol) and smoke (fine mode aerosol) contribute to the precipitation suppression. However, the dust effect is evident over the Gulf of Guinea while the smoke effect is evident over both land and ocean. A back trajectory analysis further demonstrates that the precipitation reduction is statistically linked to the upwind aerosol concentration. This study suggests that African aerosol outbreaks in the WAM region can influence precipitation in the local monsoon system which has direct societal impact on the local community. It calls for more systematic investigations to determine the modulating mechanisms using both observational and modeling approaches.

  12. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  13. Five hundred years of gridded high-resolution precipitation reconstructions over Europe and the connection to large-scale circulation

    Energy Technology Data Exchange (ETDEWEB)

    Pauling, Andreas [University of Bern, Institute of Geography, Bern (Switzerland); Luterbacher, Juerg; Wanner, Heinz [University of Bern, Institute of Geography, Bern (Switzerland); National Center of Competence in Research (NCCR) in Climate, Bern (Switzerland); Casty, Carlo [University of Bern, Climate and Environmental Physics Institute, Bern (Switzerland)

    2006-03-15

    We present seasonal precipitation reconstructions for European land areas (30°W to 40°E, 30°N to 71°N; given on a 0.5° x 0.5° grid) covering the period 1500-1900, together with gridded reanalysis data from 1901 to 2000 (Mitchell and Jones 2005). Principal component regression techniques were applied to develop this dataset. A large variety of long instrumental precipitation series, precipitation indices based on documentary evidence and natural proxies (tree-ring chronologies, ice cores, corals and a speleothem) that are sensitive to precipitation signals were used as predictors. Transfer functions were derived over the 1901-1983 calibration period and applied to 1500-1900 in order to reconstruct the large-scale precipitation fields over Europe. The performance (quality estimation based on unresolved variance within the calibration period) of the reconstructions varies over centuries, seasons and space. Highest reconstructive skill was found for winter over central Europe and the Iberian Peninsula. Precipitation variability over the last half millennium reveals both large interannual and decadal fluctuations. Applying running correlations, we found major non-stationarities in the relation between large-scale circulation and regional precipitation. For several periods during the last 500 years, we identified key atmospheric modes for southern Spain/northern Morocco and central Europe as representations of two precipitation regimes. Using scaled composite analysis, we show that precipitation extremes over central Europe and southern Spain are linked to distinct pressure patterns. Due to its high spatial and temporal resolution, this dataset allows detailed studies of regional precipitation variability for all seasons, impact studies on different time and space scales, comparisons with high-resolution climate models as well as analysis of connections with regional temperature reconstructions. (orig.)
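
    Principal component regression, the core of the reconstruction, can be sketched in a few lines: compress the predictor network onto its leading principal components over the calibration period, regress the target on the component scores, then apply the resulting transfer function to earlier predictor values. A generic sketch (the actual predictor screening and scaling are not reproduced):

    ```python
    import numpy as np

    def pcr_fit_predict(X, y, Xnew, k=2):
        """Principal component regression. X: (n_cal, p) calibration predictors,
        y: (n_cal,) target, Xnew: (m, p) predictor values outside calibration.
        k is the (illustrative) number of retained components."""
        mu = X.mean(0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        Pk = Vt[:k].T                          # (p, k) loading matrix
        T = (X - mu) @ Pk                      # calibration-period scores
        G = np.column_stack([np.ones(len(T)), T])
        beta, *_ = np.linalg.lstsq(G, y, rcond=None)  # transfer function
        Tn = (Xnew - mu) @ Pk                  # scores for reconstruction period
        return np.column_stack([np.ones(len(Tn)), Tn]) @ beta
    ```

    Truncating to k components is what stabilizes the regression when the predictor network (proxies plus instrumental series) is large and collinear.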

  14. The large-scale process of microbial carbonate precipitation for nickel remediation from an industrial soil.

    Science.gov (United States)

    Zhu, Xuejiao; Li, Weila; Zhan, Lu; Huang, Minsheng; Zhang, Qiuzhuo; Achal, Varenyam

    2016-12-01

    Microbial carbonate precipitation is known as an efficient process for the remediation of heavy metals from contaminated soils. In the present study, a urease-positive bacterial isolate, identified as Bacillus cereus NS4 through 16S rDNA sequencing, was utilized on a large scale to remove nickel from industrial soil contaminated by the battery industry. The soil was highly contaminated, with an initial total nickel concentration of approximately 900 mg kg⁻¹. The soluble-exchangeable fraction was reduced to 38 mg kg⁻¹ after treatment. The primary objective of metal stabilization was achieved by reducing bioavailability through immobilizing the nickel in the urease-driven carbonate precipitation. The nickel removal in the soils was driven by the transformation of nickel from mobile species into stable biominerals, identified as calcite, vaterite, aragonite and nickelous carbonate when analyzed by XRD. It was shown that during precipitation of calcite, Ni²⁺, with an ionic radius close to that of Ca²⁺, was incorporated into the CaCO₃ crystal. The biominerals were also characterized using SEM-EDS to observe the crystal shape, and Raman and FTIR spectroscopy to identify the bonding responsible for Ni immobilization during bioremediation. The electronic structure and chemical-state information of the detected elements during the MICP bioremediation process was studied by XPS. This is the first study in which microbial carbonate precipitation was used for the large-scale remediation of metal-contaminated industrial soil. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. State estimation for large-scale wastewater treatment plants.

    Science.gov (United States)

    Busch, Jan; Elixmann, David; Kühl, Peter; Gerkens, Carine; Schlöder, Johannes P; Bock, Hans G; Marquardt, Wolfgang

    2013-09-01

    Many relevant process states in wastewater treatment are not measurable, or their measurements are subject to considerable uncertainty. This poses a serious problem for process monitoring and control. Model-based state estimation can provide estimates of the unknown states and increase the reliability of measurements. In this paper, an integrated approach is presented for optimization-based sensor network design and the estimation problem. Using the ASM1 model in the reference scenario BSM1, a cost-optimal sensor network is designed and two prominent estimators, the extended Kalman filter (EKF) and moving horizon estimation (MHE), are evaluated. Very good estimation results for the system comprising 78 states are found, requiring sensor networks of only moderate complexity. Copyright © 2013 Elsevier Ltd. All rights reserved.
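
    The EKF evaluated here applies, at each time step, the standard Kalman predict/update cycle to a local linearization of the plant model. The linear core of that cycle is sketched below (illustrative; the ASM1 dynamics and Jacobians are not shown):

    ```python
    import numpy as np

    def kf_step(x, P, F, Q, H, R, y):
        """One predict/update cycle of a linear Kalman filter.
        F: state transition, Q: process noise, H: measurement map,
        R: measurement noise, y: new measurement."""
        # predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # update with measurement y
        S = H @ P_pred @ H.T + R               # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
        x_new = x_pred + K @ (y - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new
    ```

    In the EKF, F and H are Jacobians of the nonlinear model evaluated at the current estimate; MHE instead solves a constrained optimization over a window of past measurements.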

  16. Contribution of large-scale midlatitude disturbances to hourly precipitation extremes in the United States

    Science.gov (United States)

    Barbero, Renaud; Abatzoglou, John T.; Fowler, Hayley J.

    2018-02-01

    Midlatitude synoptic weather regimes account for a substantial portion of annual precipitation accumulation as well as multi-day precipitation extremes across parts of the United States (US). However, little attention has been devoted to understanding how synoptic-scale patterns contribute to hourly precipitation extremes. A majority of 1-h annual maximum precipitation (AMP) events across the western US were found to be linked to two coherent midlatitude synoptic patterns: disturbances propagating along the jet stream, and cutoff upper-level lows. The influence of these two patterns on 1-h AMP varies geographically. Over 95% of 1-h AMP events along the western coastal US were coincident with progressive midlatitude waves embedded within the jet stream, while over 30% of 1-h AMP events across the interior western US were coincident with cutoff lows. Between 30% and 60% of 1-h AMP events were coincident with the jet stream across the Ohio River Valley and southeastern US, whereas a majority of 1-h AMP events over the rest of the central and eastern US were not found to be associated with either midlatitude synoptic feature. Composite analyses for 1-h AMP days coincident with cutoff lows and the jet stream show that anomalous moisture flux and upper-level dynamics are responsible for initiating instability and setting up an environment conducive to 1-h AMP events. While hourly precipitation extremes are generally thought to be purely convective in nature, this study shows that large-scale dynamics and baroclinic disturbances may also contribute to precipitation extremes on sub-daily timescales.
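
    The composite analysis used here is, at bottom, an average of fields over event days minus the all-day climatology. A minimal sketch:

    ```python
    import numpy as np

    def composite_anomaly(fields, event_mask):
        """Mean field over event days minus the all-day mean.
        fields: (n_days, ...) gridded fields, event_mask: (n_days,) boolean."""
        fields = np.asarray(fields, dtype=float)
        return fields[event_mask].mean(axis=0) - fields.mean(axis=0)
    ```

    Applied to moisture flux or upper-level height fields with a mask of 1-h AMP days, the anomaly isolates the synoptic conditions that distinguish extreme days from the background.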

  17. United States Temperature and Precipitation Extremes: Phenomenology, Large-Scale Organization, Physical Mechanisms and Model Representation

    Science.gov (United States)

    Black, R. X.

    2017-12-01

    We summarize results from a project focusing on regional temperature and precipitation extremes over the continental United States. Our project introduces a new framework for evaluating these extremes emphasizing their (a) large-scale organization, (b) underlying physical sources (including remote-excitation and scale-interaction) and (c) representation in climate models. Results to be reported include the synoptic-dynamic behavior, seasonality and secular variability of cold waves, dry spells and heavy rainfall events in the observational record. We also study how the characteristics of such extremes are systematically related to Northern Hemisphere planetary wave structures and thus planetary- and hemispheric-scale forcing (e.g., those associated with major El Nino events and Arctic sea ice change). The underlying physics of event onset are diagnostically quantified for different categories of events. Finally, the representation of these extremes in historical coupled climate model simulations is studied and the origins of model biases are traced using new metrics designed to assess the large-scale atmospheric forcing of local extremes.

  18. Quantification of Linkages between Large-Scale Climate Patterns and Annual Precipitation for the Colorado River Basin

    Science.gov (United States)

    Kalra, A.; Ahmad, S.

    2010-12-01

    Precipitation is regarded as one of the key variables driving various hydrologic processes, and future precipitation information can help us better understand long-term climate dynamics. In this paper, a simple, robust, and parsimonious precipitation forecast model, the Support Vector Machine (SVM), is proposed which uses large-scale climate information and predicts annual precipitation one year in advance. SVMs are a novel class of neural networks (NNs) based on statistical learning theory. SVMs have three main advantages over traditional NNs: 1) better generalization ability, 2) the architecture and weights of an SVM are guaranteed to be unique and globally optimal, and 3) SVMs are trained more rapidly than the corresponding NN. With these advantages, an application of SVM incorporating large-scale climate information is developed and applied to seventeen climate divisions encompassing the Colorado River Basin in the western United States. Annual oceanic-atmospheric indices, comprising the Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO), Atlantic Multidecadal Oscillation (AMO), and El Niño-Southern Oscillation (ENSO), for the period 1900-2007 are used to generate annual precipitation estimates with a one-year lead time. The results from the present study indicate that long-term precipitation predictions for the Upper Colorado River Basin can be successfully obtained using a combination of NAO and ENSO indices, whereas coupling PDO and AMO yields improved precipitation predictions for the Lower Colorado River Basin. Precipitation predictions from the SVM model are found to be better than those obtained from feed-forward back-propagation Artificial Neural Network and Multivariate Linear Regression models. The overall results of this study revealed that the annual precipitation of the Colorado River Basin was significantly influenced by oceanic-atmospheric oscillations and the proposed SVM
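
    As a numpy-only stand-in for SVM regression (explicitly not the authors' model), kernel ridge regression with an RBF kernel captures the same idea of a nonlinear map from climate indices (PDO, NAO, AMO, ENSO) to next-year precipitation; the hyperparameters below are illustrative:

    ```python
    import numpy as np

    def rbf_kernel(A, B, gamma=0.5):
        """Gaussian (RBF) kernel matrix between row sets A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def krr_fit_predict(X, y, Xnew, lam=1e-6, gamma=0.5):
        """Kernel ridge regression: X rows are index vectors for past years,
        y the observed annual precipitation, Xnew the indices to predict from.
        lam (regularization) and gamma (kernel width) are hypothetical."""
        K = rbf_kernel(X, X, gamma)
        alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # dual weights
        return rbf_kernel(Xnew, X, gamma) @ alpha
    ```

    Like an SVM with an RBF kernel, predictions are weighted sums of kernel similarities to the training years; the SVM additionally introduces an epsilon-insensitive loss that sparsifies the weights.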

  19. Cosmological Parameter Estimation with Large Scale Structure Observations

    CERN Document Server

    Di Dio, Enea; Durrer, Ruth; Lesgourgues, Julien

    2014-01-01

    We estimate the sensitivity of future galaxy surveys to cosmological parameters, using the redshift-dependent angular power spectra of galaxy number counts, $C_\ell(z_1,z_2)$, calculated with all relativistic corrections at first order in perturbation theory. We pay special attention to the redshift dependence of the non-linearity scale and present Fisher matrix forecasts for Euclid-like and DES-like galaxy surveys. We compare the standard $P(k)$ analysis with the new $C_\ell(z_1,z_2)$ method. We show that for surveys with photometric redshifts the new analysis performs significantly better than the $P(k)$ analysis. For spectroscopic redshifts, however, the large number of redshift bins that would be needed to fully profit from the redshift information is severely limited by shot noise. We also identify surveys which can measure the lensing contribution, and we study the monopole, $C_0(z_1,z_2)$.
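
    For independent Gaussian observables, a Fisher matrix forecast reduces to $F_{ij} = \sum_\ell (\partial C_\ell/\partial\theta_i)(\partial C_\ell/\partial\theta_j)/\sigma_\ell^2$, with marginalized 1-sigma errors given by the square roots of the diagonal of $F^{-1}$. A generic finite-difference sketch (not the survey-specific covariances used in the paper):

    ```python
    import numpy as np

    def fisher_matrix(model, theta0, sigma, eps=1e-6):
        """Fisher matrix for independent Gaussian observables C(theta).
        model: theta -> (n_obs,) predictions; sigma: (n_obs,) per-observable
        uncertainties. Derivatives by central finite differences."""
        theta0 = np.asarray(theta0, dtype=float)
        p = len(theta0)
        derivs = []
        for i in range(p):
            dp = np.zeros(p)
            dp[i] = eps
            derivs.append((model(theta0 + dp) - model(theta0 - dp)) / (2 * eps))
        D = np.array(derivs)               # (p, n_obs) derivative matrix
        return (D / sigma**2) @ D.T        # F_ij = sum_l D_il D_jl / sigma_l^2
    ```

    Inverting F and reading off the diagonal gives the forecast parameter errors after marginalizing over the other parameters.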

  20. A climatological analysis of high-precipitation events in Dronning Maud Land, Antarctica, and associated large-scale atmospheric conditions

    NARCIS (Netherlands)

    Welker, Christoph; Martius, Olivia; Froidevaux, Paul; Reijmer, Carleen H.; Fischer, Hubertus

    2014-01-01

    The link between high precipitation in Dronning Maud Land (DML), Antarctica, and the large-scale atmospheric circulation is investigated using ERA-Interim data for 1979-2009. High-precipitation events are analyzed at Halvfarryggen situated in the coastal region of DML and at Kohnen Station located

  1. Forcings and feedbacks on convection in the 2010 Pakistan flood: Modeling extreme precipitation with interactive large-scale ascent

    Science.gov (United States)

    Nie, Ji; Shaevitz, Daniel A.; Sobel, Adam H.

    2016-09-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing an extreme precipitation event can be difficult to disentangle. Here we examine the large-scale forcings and convective heating feedback in the precipitation events that caused the 2010 Pakistan flood, within the column quasi-geostrophic framework. A cloud-resolving model (CRM) is forced with large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation using input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating on large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic lifting is the most important dynamic forcing in both events, while differential potential vorticity advection also contributes to the triggering of the first event. Horizontal moisture advection modulates the extreme events mainly by setting the environmental humidity, which controls the amplitude of the convective response to the dynamic forcings. When the CRM is replaced by either a single-column model (SCM) with parameterized convection or a dry model with a reduced effective static stability, the model results show substantial discrepancies compared with reanalysis data. The reasons for these discrepancies are examined, and the implications for global models and theoretical models are discussed.

  2. Large Scale Evapotranspiration Estimates: An Important Component in Regional Water Balances to Assess Water Availability

    Science.gov (United States)

    Garatuza-Payan, J.; Yepez, E. A.; Watts, C.; Rodriguez, J. C.; Valdez-Torres, L. C.; Robles-Morua, A.

    2013-05-01

    Water security can be defined as the reliable supply, in quantity and quality, of water to sustain future populations and maintain ecosystem health and productivity. Water security is rapidly declining in many parts of the world due to population growth, drought, climate change, salinity, pollution, land use change, over-allocation and over-utilization, among other issues. Governmental offices (such as the Comision Nacional del Agua in Mexico, CONAGUA) require and conduct studies to estimate reliable water balances at regional or continental scales in order to provide reasonable assessments of the amount of water that can be supplied (from surface or ground water sources) for human needs while maintaining natural vegetation, on an operational basis and, more importantly, under disturbances such as droughts. Large-scale estimates of evapotranspiration (ET), a critical component of the water cycle, are needed for a better understanding of the hydrological cycle at large scales; in most water balances ET is left as the residual. For operational purposes, such water balance estimates cannot rely on ET measurements, since these do not exist; they should be simple and require the least ground information possible, information that is often scarce or entirely absent. Given this limitation, remotely sensed data to estimate ET could supplement the lack of ground information, particularly in remote regions. In this study, a simple method based on the Makkink equation is used to estimate ET for large areas at high spatial resolution (1 km). The Makkink model used here is forced with three remotely sensed datasets. First, the model uses solar radiation estimates obtained from the Geostationary Operational Environmental Satellite (GOES); second, it uses an Enhanced Vegetation Index (EVI) obtained from the Moderate-resolution Imaging Spectroradiometer (MODIS), normalized to estimate vegetation amount and land use, which was
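
    A common textbook form of the Makkink equation referenced in this abstract can be sketched as follows; the 0.65 coefficient and the FAO-56 slope formula are standard choices, though the study's exact calibration may differ:

```python
import numpy as np

def makkink_et(rs, t_air):
    """Reference evapotranspiration (mm/day) from a common form of the
    Makkink equation: ET = 0.65 * Delta/(Delta + gamma) * Rs/lambda.
    rs: solar radiation [MJ m-2 day-1]; t_air: air temperature [deg C]."""
    lam = 2.45      # latent heat of vaporization [MJ/kg]
    gamma = 0.066   # psychrometric constant [kPa/C]
    # Slope of the saturation vapour pressure curve [kPa/C] (FAO-56 form)
    es = 0.6108 * np.exp(17.27 * t_air / (t_air + 237.3))
    delta = 4098.0 * es / (t_air + 237.3) ** 2
    return 0.65 * delta / (delta + gamma) * rs / lam

et = makkink_et(rs=22.0, t_air=28.0)   # a warm, sunny day
```

    The appeal for operational use is exactly what the abstract describes: only solar radiation and air temperature are required, both obtainable from satellite products.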

  3. Non-stationary analysis of the frequency and intensity of heavy precipitation over Canada and their relations to large-scale climate patterns

    Science.gov (United States)

    Tan, Xuezhi; Gan, Thian Yew

    2017-05-01

    In recent years, the frequency and severity of floods have increased across Canada, making it important to understand the characteristics of Canadian heavy precipitation. Long-term precipitation data from 463 gauging stations across Canada were analyzed using the non-stationary generalized extreme value (GEV), Poisson, and generalized Pareto (GP) distributions. Time-varying covariates representing large-scale climate patterns such as the El Niño-Southern Oscillation (ENSO), North Atlantic Oscillation (NAO), Pacific Decadal Oscillation (PDO) and North Pacific Oscillation (NP) were incorporated into the parameters of the GEV, Poisson, and GP distributions. Results show that GEV distributions tend to underestimate annual maximum daily precipitation (AMP) in the western and eastern coastal regions of Canada compared to GP distributions. Poisson regressions show that temporal clusters of heavy precipitation events in Canada are related to large-scale climate patterns. By modeling AMP time series with non-stationary GEV distributions and heavy precipitation with non-stationary GP distributions, it is evident that AMP and heavy precipitation in Canada show strong non-stationarities (abrupt and slowly varying changes), likely because of the influence of large-scale climate patterns. AMP in southwestern coastal regions, the southern Canadian Prairies, and the Great Lakes tends to be higher in El Niño than in La Niña years, while AMP in other regions of Canada tends to be lower in El Niño than in La Niña years. The influence of ENSO on heavy precipitation is spatially consistent with, but stronger than, its influence on AMP. The effects of the PDO, NAO, and NP on extreme precipitation are also statistically significant at some stations across Canada.
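
    The idea of incorporating climate covariates into GEV parameters can be sketched with synthetic data. Here a hypothetical ENSO index shifts the GEV location parameter, and the regression coefficients are recovered by maximum likelihood; this is a simplified stand-in for the non-stationary models described above, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(1)
enso = rng.normal(size=60)          # hypothetical standardized ENSO index, 60 years
# Synthetic annual-maximum precipitation whose GEV location shifts with ENSO
xi = 0.1                            # GEV shape (scipy's shape c equals -xi)
amp = genextreme.rvs(c=-xi, loc=40.0 + 5.0 * enso, scale=8.0, random_state=2)

def nll(params):
    """Negative log-likelihood of a GEV with covariate-dependent location."""
    b0, b1, log_sig, xi_ = params
    return -genextreme.logpdf(amp, c=-xi_, loc=b0 + b1 * enso,
                              scale=np.exp(log_sig)).sum()

res = minimize(nll, x0=[amp.mean(), 0.0, np.log(amp.std()), 0.05],
               method="Nelder-Mead")
b0_hat, b1_hat = res.x[0], res.x[1]  # b1_hat estimates the true slope of 5.0
```

    A significantly nonzero b1 is the statistical evidence of non-stationarity that the study tests for at each station.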

  4. Parameter and State Estimation of Large-Scale Complex Systems Using Python Tools

    Directory of Open Access Journals (Sweden)

    M. Anushka S. Perera

    2015-07-01

    This paper discusses topics related to automating the parameter, disturbance, and state estimation analysis of large-scale complex nonlinear dynamic systems using free programming tools. For large-scale complex systems, before implementing any state estimator, the system should be analyzed for structural observability, and this analysis can be automated using Modelica and Python. As a result of the structural observability analysis, the system may be decomposed into subsystems, some of which are observable --- with respect to parameters, disturbances, and states --- while others may not be. The state estimation process is carried out for the observable subsystems, and the optimal number of additional measurements is prescribed for the unobservable subsystems to make them observable. In this paper, an industrial case study is considered: the copper production process at Glencore Nikkelverk, Kristiansand, Norway, a large-scale complex system. It is shown how to implement various state estimators in Python to estimate parameters and disturbances, in addition to states, based on available measurements.
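
    Before running a state estimator, observability can be checked numerically. The snippet below applies the classical Kalman rank test to a hypothetical linear system; it is a simpler stand-in for the graph-based structural observability analysis that the paper automates with Modelica and Python.

```python
import numpy as np

def observability_matrix(A, C):
    """Kalman observability matrix O = [C; CA; CA^2; ...; CA^(n-1)]."""
    blocks = [C]
    for _ in range(A.shape[0] - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Hypothetical 3-state chain system
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 0.7]])
# Measuring x1 sees the whole chain through the couplings...
observable = np.linalg.matrix_rank(
    observability_matrix(A, np.array([[1.0, 0.0, 0.0]]))) == 3
# ...but measuring only x3 leaves x1 and x2 unobservable
unobservable = np.linalg.matrix_rank(
    observability_matrix(A, np.array([[0.0, 0.0, 1.0]]))) < 3
```

    The second case illustrates the paper's prescription step: an unobservable subsystem needs additional measurements (here, of x1 or x2) before a state estimator can be applied to it.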

  5. Large-scale connection between aerosol optical depth and summer monsoon circulation, and precipitation over northeast Asia

    Science.gov (United States)

    Kim, Sang-Woo; Yoon, Soon-Chang; Choi, Suk-Jin; Choi, In-Jin

    2010-05-01

    We investigated the large-scale connection between columnar aerosol loads, summer monsoon circulation, and precipitation over northeast Asia using aerosol optical depth (AOD) data obtained from 8 years of MODIS and AERONET Sun/sky radiometer observations, and precipitation data acquired under the Global Precipitation Climatology Project (GPCP). These high-quality data reveal the large-scale link between AOD, summer monsoon circulation, and July precipitation over northeast Asian countries, along with their distinct spatial and interannual variabilities. Compared to the mean AOD for the entire 2001-2008 period, an increase of almost 40-50% in AOD in July 2005 and July 2007 was found over the downwind regions of China (Yellow Sea, Korean peninsula, and East Sea), together with negative precipitation anomalies. This can be attributed to the strong westerly confluent flows, between the cyclonic flow of the continental thermal low centered over northern China and the anti-cyclonic flow of the western North Pacific High, which transport anthropogenic pollution aerosols emitted from east China to the aforementioned downwind high-AOD regions along the rim of the Pacific marine air mass. In July 2002, however, easterly flows transported anthropogenic aerosols from east China to the southwestern part of China. As a result, the AOD off the coast of China was dramatically reduced in spite of decreased rainfall. From the cross-correlation between MODIS-derived AOD anomalies and GPCP precipitation anomalies over 2001-2008, we found negative correlations over the areas bounded by 105-115E, 30-35N and by 120-140E, 35-40N (Yellow Sea, Korean peninsula, and East Sea). This suggests that aerosol loads over these regions are strongly influenced by the Asian monsoon flow system and the associated precipitation.

  6. Large Scale Influences on Summertime Extreme Precipitation in the Northeastern United States

    Science.gov (United States)

    Collow, Allison B. Marquardt; Bosilovich, Michael G.; Koster, Randal Dean

    2016-01-01

    Observations indicate that over the last few decades there has been a statistically significant increase in precipitation in the northeastern United States and that this can be attributed to an increase in precipitation associated with extreme precipitation events. Here a state-of-the-art atmospheric reanalysis is used to examine such events in detail. Daily extreme precipitation events defined at the 75th and 95th percentile from gridded gauge observations are identified for a selected region within the Northeast. Atmospheric variables from the Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2), are then composited during these events to illustrate the time evolution of associated synoptic structures, with a focus on vertically integrated water vapor fluxes, sea level pressure, and 500-hectopascal heights. Anomalies of these fields move into the region from the northwest, with stronger anomalies present in the 95th percentile case. Although previous studies show tropical cyclones are responsible for the most intense extreme precipitation events, only 10 percent of the events in this study are caused by tropical cyclones. On the other hand, extreme events resulting from cutoff low pressure systems have increased. The time period of the study was divided in half to determine how the mean composite has changed over time. An arc of lower sea level pressure along the East Coast and a change in the vertical profile of equivalent potential temperature suggest a possible increase in the frequency or intensity of synoptic-scale baroclinic disturbances.
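
    The compositing technique described above can be sketched with synthetic fields: extreme days are selected by a rainfall percentile and the mean anomaly of an atmospheric field on those days is formed. The data here are illustrative, not MERRA-2.

```python
import numpy as np

rng = np.random.default_rng(0)
ndays, ny, nx = 1000, 4, 5
slp = rng.normal(1015.0, 5.0, size=(ndays, ny, nx))  # synthetic sea level pressure
rain = rng.gamma(2.0, 3.0, size=ndays)               # synthetic daily area rainfall
# Impose lower SLP on top-5% rain days to mimic a synoptic signal
extreme = rain >= np.quantile(rain, 0.95)
slp[extreme] -= 8.0
# Composite: mean anomaly over extreme days relative to the full-period mean
composite = slp[extreme].mean(axis=0) - slp.mean(axis=0)
```

    Averaging over many events suppresses day-to-day noise, so the composite isolates the synoptic structure (here, the imposed low-pressure signal) common to the extreme days.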

  7. The interannual precipitation variability in the southern part of Iran as linked to large-scale climate modes

    Energy Technology Data Exchange (ETDEWEB)

    Pourasghar, Farnaz; Jahanbakhsh, Saeed; Sari Sarraf, Behrooz [The University of Tabriz, Department of Physical Geography, Faculty of Humanities and Social Science, Tabriz (Iran, Islamic Republic of); Tozuka, Tomoki [The University of Tokyo, Department of Earth and Planetary Science, Graduate School of Science, Tokyo (Japan); Ghaemi, Hooshang [Iran Meteorological Organization, Tehran (Iran, Islamic Republic of); Yamagata, Toshio [The University of Tokyo, Department of Earth and Planetary Science, Graduate School of Science, Tokyo (Japan); Application Laboratory/JAMSTEC, Yokohama, Kanagawa (Japan)

    2012-11-15

    The interannual variation of precipitation in the southern part of Iran and its link with large-scale climate modes are examined using monthly data from 183 meteorological stations for 1974-2005. The majority of precipitation occurs during the rainy season from October to May. The interannual variation in fall and early winter, during the first part of the rainy season, apparently shows significant positive correlations with both the Indian Ocean Dipole (IOD) and the El Nino-Southern Oscillation (ENSO). However, a partial correlation analysis used to separate the respective influences of the IOD and ENSO shows a significant positive correlation only with the IOD, not with ENSO. During the positive IOD, the southeasterly moisture flux anomaly over the Arabian Sea turns anti-cyclonically and transports more moisture to the southern part of Iran from the Arabian Sea, the Red Sea, and the Persian Gulf. During the negative IOD, on the other hand, the moisture flux anomaly over Iran is northerly, which results in reduced moisture supply from the south. During the latter part of the rainy season, in late winter and spring, the interannual variation of precipitation is more strongly influenced by modes of variability over the Mediterranean Sea. The induced large-scale atmospheric circulation anomaly controls the moisture supply from the Red Sea and the Persian Gulf. (orig.)

  8. Combinations of large-scale circulation anomalies conducive to precipitation extremes in the Czech Republic

    Czech Academy of Sciences Publication Activity Database

    Kašpar, Marek; Müller, Miloslav

    2014-01-01

    Roč. 138, March 2014 (2014), s. 205-212 ISSN 0169-8095 R&D Projects: GA ČR(CZ) GAP209/11/1990 Institutional support: RVO:68378289 Keywords : precipitation extreme * synoptic-scale cause * re-analysis * circulation anomaly Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 2.844, year: 2014 http://www.sciencedirect.com/science/article/pii/S0169809513003372

  9. Mediterranean hurricanes: large-scale environment and convective and precipitating areas from satellite microwave observations

    Directory of Open Access Journals (Sweden)

    C. Claud

    2010-10-01

    Subsynoptic-scale vortices that have been likened to tropical cyclones or polar lows (medicanes) are occasionally observed over the Mediterranean Sea. Generated over the sea, they are usually associated with strong winds and heavy precipitation and thus can be highly destructive on islands and in coastal areas. Only accurate forecasting of such systems could mitigate these effects; at present, however, their predictability remains limited.

    Due to the scarcity of conventional observations, use is made of NOAA/MetOp satellite observations, taking advantage of the time-coverage differences between the platforms that carry the instruments to give a very complete temporal description of the disturbances. A combination of AMSU-B (Advanced Microwave Sounding Unit-B)/MHS (Microwave Humidity Sounder) observations permits investigation of the precipitation associated with these systems, while coincident AMSU-A (Advanced Microwave Sounding Unit-A) observations give insights into the larger synoptic-scale environment in which they occur.

    Three different cases, differing in intensity, location, trajectory, duration, and period of the year (May, September and December, respectively), were investigated. Throughout these periods, AMSU-A observations show that the persistent deep outflow of cold air over the sea, together with an upper-level trough upstream, constituted a favourable environment for the development of medicanes. AMSU-B/MHS-based diagnostics show that the areas of convection and precipitation are large in the early stage of the low but are significantly reduced afterwards. Convection peaks just after the upper-level trough, located upstream of cold mid-tropospheric air, reaches its maximum intensity and acquires a cyclonic orientation.

  10. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    Science.gov (United States)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-01

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors in these model simulations can be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. Gridded large-scale forcing data from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site are used for analysis and to drive the single-column version of the Community Atmosphere Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is unable to capture convective systems that are only partly located in the domain or that occupy only part of the domain. This problem is largely reduced by using the gridded forcing data, which allow SCAM5 to be run in each subcolumn with the results then averaged over the domain, because the subcolumns have a better chance of capturing the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  11. Distributed weighted least-squares estimation with fast convergence for large-scale systems.

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement that is linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm that asymptotically computes the globally optimal estimate; its convergence rate is maximized using a scaling parameter and a preconditioning method, and it works for a general network. For a network without loops, we also provide a different iterative algorithm that computes the globally optimal estimate in a finite number of steps. Numerical experiments illustrate the performance of the proposed methods.
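
    The flavor of such iterative schemes can be sketched with a Richardson iteration on the global weighted least-squares normal equations. Each matrix-vector term decomposes into per-subsystem contributions, which is what makes this style of iteration amenable to neighborhood communication; the code below is a centralized toy, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
# Global WLS problem y = H theta + v; rows of H belong to different sub-systems
H = rng.normal(size=(12, 4))
theta_true = np.array([1.0, -2.0, 0.5, 3.0])
W = np.diag(rng.uniform(0.5, 2.0, size=12))     # inverse noise variances
y = H @ theta_true + 0.01 * rng.normal(size=12)

A = H.T @ W @ H                                  # normal-equation matrix
b = H.T @ W @ y
# Richardson iteration x <- x + alpha (b - A x); both A x and b are sums of
# per-row (per-subsystem) contributions, so each step only needs local terms
# plus neighborhood exchange in a genuinely distributed implementation.
alpha = 1.0 / np.linalg.eigvalsh(A).max()        # scaling ensuring convergence
x = np.zeros(4)
for _ in range(5000):
    x = x + alpha * (b - A @ x)
```

    The scaling parameter alpha plays the same role as the paper's convergence-rate tuning: too large and the iteration diverges, too small and convergence is slow.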

  12. Estimating GHG emission mitigation supply curves of large-scale biomass use on a country level

    International Nuclear Information System (INIS)

    Dornburg, Veronika; Dam, Jinke van; Faaij, Andre

    2007-01-01

    This study evaluates the possible influence of a large-scale introduction of biomass material and energy systems, and of their market volumes, on land, material, and energy market prices, and the feedback of those prices on greenhouse gas (GHG) emission mitigation costs. GHG emission mitigation supply curves for large-scale biomass use were compiled using a methodology that combines a bottom-up analysis of biomass applications, biomass cost-supply curves, and the market prices of land, biomaterials, and bioenergy carriers. These market prices depend on the scale of biomass use and the market volume of materials and energy carriers, and were estimated using own-price elasticities of demand. The methodology is demonstrated in a case study of Poland in the year 2015, applying different scenarios for economic development and trade in Europe. For the key technologies considered, i.e. medium-density fibreboard, polylactic acid, electricity, and methanol production, GHG emission mitigation costs increase strongly with the scale of biomass production. Large-scale introduction of biomass use decreases the GHG emission reduction potential at costs below 50 Euro/Mg CO2-eq by about 13-70%, depending on the scenario. Biomaterial production accounts for only a small part of this GHG emission reduction potential, due to the relatively small material markets and the consequent strong decrease of biomaterial market prices at large scales of production. GHG emission mitigation costs depend strongly on the biomass supply curves, the own-price elasticity of land, and the market volumes of bioenergy carriers. The analysis shows that these influences should be taken into account when developing biomass implementation strategies.
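
    The role of own-price elasticities of demand in the methodology can be illustrated with a constant-elasticity demand sketch; the numbers are hypothetical, not the study's Polish market data.

```python
def price_after_supply_shift(p0, q0, q_new, elasticity):
    """Constant own-price-elasticity demand: P/P0 = (Q/Q0)**(1/elasticity),
    with elasticity < 0 for a normal good."""
    return p0 * (q_new / q0) ** (1.0 / elasticity)

# Hypothetical material market: doubling the supplied quantity with an
# own-price elasticity of -0.5 quarters the price, illustrating why small
# biomaterial markets saturate quickly in scale-dependent supply curves.
p_new = price_after_supply_shift(p0=100.0, q0=1.0, q_new=2.0, elasticity=-0.5)
```

    Because revenue per unit falls this fast in a small market, the GHG mitigation cost per tonne rises steeply with scale, which is the shape of the supply curves the study reports.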

  13. Climate dynamics of South America during summer: Connections between the large-scale circulation and regional precipitation

    Science.gov (United States)

    Lenters, John Derick

    1997-05-01

    Relationships between the large-scale circulation and regional precipitation over South America during austral summer are examined using a GCM, linear model, and observational analyses. Emphasis is placed on understanding the origin of upper-tropospheric circulation features such as the Bolivian high and its effects on South American precipitation variability, particularly on the Central Andean Altiplano. Results from the linear model indicate that the Bolivian high and 'Nordeste low' are generated in response to precipitation over the Amazon basin, Central Andes, and South Atlantic convergence zone (SACZ), with African precipitation also playing a crucial role in the formation of the low. The direct mechanical and sensible heating effects of the Andes are minimal, acting only to induce a weak lee trough in midlatitudes and a shallow monsoonal circulation over the Central Andes. In the GCM the effects of the Andes include a strengthening of the Bolivian high and northward shift of the Nordeste low, primarily through changes in the precipitation field. The position of the Bolivian high is primarily determined by Amazonian precipitation and is little affected by the removal of the Andes. Strong subsidence to the west of the high is found to be important for the maintenance of the high's warm core, while large-scale convective overshooting to the east is responsible for a layer of cold air above the high. Observations from eight summer seasons reveal a close relationship between precipitation variability in the Central Andes and the position and intensity of the Bolivian high. The physical mechanisms of this connection are explored using composite, EOF, and correlation techniques. On intraseasonal to interannual timescales, rainy episodes on the Altiplano are found to be associated with warm, moist, poleward flow along the eastern flank of the Andes, often in conjunction with extratropical disturbances and a westward displacement of the SACZ. Corresponding to this

  14. Stochastically Estimating Modular Criticality in Large-Scale Logic Circuits Using Sparsity Regularization and Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Mohammed Alawad

    2015-03-01

    This paper considers the problem of how to efficiently measure a large and complex information field with optimally few observations. Specifically, we investigate how to stochastically estimate modular criticality values in a large-scale digital circuit with a very limited number of measurements, in order to minimize the total measurement effort and time. We prove that, through sparsity-promoting transform-domain regularization and by strategically integrating compressive sensing with Bayesian learning, more than 98% of the overall measurement accuracy can be achieved with fewer than 10% of the measurements required by a conventional approach that measures exhaustively. Furthermore, we illustrate that the obtained criticality results can be used to selectively fortify large-scale digital circuits for operation with narrow voltage headrooms and in the presence of soft errors arising at near-threshold voltage levels, without excessive hardware overhead. Our numerical simulation results show that, by optimally allocating only 10% circuit redundancy, some large-scale benchmark circuits achieve more than a threefold reduction in overall error probability, whereas randomly distributing the same 10% of hardware resources yields less than a 2% improvement in the target circuit's overall robustness. Finally, we conjecture that our proposed approach can be readily applied to estimate other essential properties of digital circuits that are critical to designing and analyzing them, such as the observability measure in reliability analysis and the path delay estimate in stochastic timing analysis. The only key requirement of our proposed methodology is that these global information fields exhibit a certain degree of smoothness, which is universally true for almost any physical phenomenon.
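
    The compressive sensing idea of recovering a large field from few measurements can be sketched with orthogonal matching pursuit, a standard sparse-recovery algorithm; this is a generic illustration, not the paper's Bayesian-learning formulation.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))    # most correlated column
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(4)
n, m, k = 60, 100, 3                 # 60 measurements of a 100-dim field
A = rng.normal(size=(n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[[5, 50, 80]] = [3.0, -2.5, 2.0]
x_hat = omp(A, A @ x_true, k)        # recovery from far fewer measurements
```

    This mirrors the abstract's claim in miniature: when the field is sparse in some basis, far fewer random measurements than unknowns suffice for accurate reconstruction.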

  15. Dynamic state estimation techniques for large-scale electric power systems

    International Nuclear Information System (INIS)

    Rousseaux, P.; Pavella, M.

    1991-01-01

    This paper presents the use of dynamic-type state estimators for energy management in electric power systems. Various dynamic-type estimators have been developed but never implemented, primarily because of the dimensionality problems posed by the conjunction of an extended Kalman filter with a large-scale power system. This paper focuses precisely on how to circumvent this high dimensionality, which is especially prohibitive in the filtering step, by using a decomposition-aggregation hierarchical scheme; to appropriately model the power system dynamics, the authors introduce new state variables in the prediction step and rely on a load forecasting method. The combination of these two techniques solves the overall dynamic state estimation problem not only in a tractable and realistic way, but also in compliance with real-time computational requirements. Further improvements, tied to the specifics of high-voltage electric transmission systems, are also suggested.

  16. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by means of computing processes. Large-scale simulations involve such a variety of scales and physical complexity that corresponding experiments and/or theories do not exist. In the nuclear field, it is indispensable to estimate real phenomena from simulations in order to improve the safety and security of nuclear power plants. Here, analysis of the uncertainty included in simulations is needed to reveal the sensitivity to uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge, and to arrive at a degree of certainty through verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)', a set of computing processes with deductive and inductive approaches modeled on human reasoning. Our idea is to execute deductive and inductive simulations corresponding to deductive and inductive reasoning. We have established a prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our approach is effective in reducing the uncertainty and obtaining a degree of certainty. (author)

  17. Influences of large-scale convection and moisture source on monthly precipitation isotope ratios observed in Thailand, Southeast Asia

    Science.gov (United States)

    Wei, Zhongwang; Lee, Xuhui; Liu, Zhongfang; Seeboonruang, Uma; Koike, Masahiro; Yoshimura, Kei

    2018-04-01

    Many paleoclimatic records in Southeast Asia rely on rainfall isotope ratios as proxies for past hydroclimatic variability. However, the physical processes controlling modern rainfall isotopic behavior in the region are poorly constrained. Here, we combined isotopic measurements at six sites across Thailand with an isotope-incorporated atmospheric circulation model (IsoGSM) and the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model to investigate the factors that govern the variability of precipitation isotope ratios in this region. Results show that rainfall isotope ratios are correlated with both local rainfall amount and regional outgoing longwave radiation, suggesting that they are controlled not only by local rain amount (the amount effect) but also by large-scale convection. As the region is a transition zone between the Indian monsoon and the western North Pacific monsoon, the spatial differences in observed precipitation isotopes among sites are associated with moisture source. These results highlight the importance of regional processes in determining rainfall isotope ratios in the tropics and provide constraints on the interpretation of paleo-precipitation isotope records in the context of regional climate dynamics.

  18. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    Science.gov (United States)

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1-Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as a stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.

  19. Cardinality Estimation Algorithm in Large-Scale Anonymous Wireless Sensor Networks

    KAUST Repository

    Douik, Ahmed

    2017-08-30

    Consider a large-scale anonymous wireless sensor network with unknown cardinality. In such graphs, each node has no information about the network topology and possesses only a unique identifier. This paper introduces a novel distributed algorithm for cardinality estimation and topology discovery, i.e., estimating the number of nodes and the structure of the graph, by querying a small number of nodes and applying statistical inference methods. While the cardinality estimate allows the design of more efficient coding schemes for the network, the topology discovery provides a reliable way to route packets. The proposed algorithm is shown to produce a cardinality estimate proportional to the best linear unbiased estimator for dense graphs and specific running times. Simulation results confirm the theoretical findings and reveal that, for a reasonable running time, querying a small group of nodes is sufficient to estimate 95% of the whole network. Applications of this work include estimating the number of Internet of Things (IoT) sensor devices, online social users, active protein cells, etc.
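
    The notion of estimating cardinality by querying only a small number of nodes can be illustrated with the classical Lincoln-Petersen capture-recapture estimator; this is a generic sketch, not the algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
N_true = 5000                        # unknown cardinality of the network
nodes = np.arange(N_true)
# Two independent random "queries", each reaching 400 anonymous nodes
s1 = rng.choice(nodes, size=400, replace=False)
s2 = rng.choice(nodes, size=400, replace=False)
overlap = np.intersect1d(s1, s2).size
# Lincoln-Petersen estimator: N ~ |s1| * |s2| / |overlap|
N_hat = s1.size * s2.size / max(overlap, 1)
```

    Two queries of 400 nodes each (16% of the network in total) already yield an estimate close to the true 5000, the same small-sample principle the abstract's 95%-coverage result relies on.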

  20. Rain Characteristics and Large-Scale Environments of Precipitation Objects with Extreme Rain Volumes from TRMM Observations

    Science.gov (United States)

    Zhou, Yaping; Lau, William K M.; Liu, Chuntao

    2013-01-01

    This study adopts a "precipitation object" approach by using 14 years of Tropical Rainfall Measuring Mission (TRMM) Precipitation Feature (PF) and National Centers for Environmental Prediction (NCEP) reanalysis data to study rainfall structure and environmental factors associated with extreme heavy rain events. Characteristics of instantaneous extreme volumetric PFs are examined and compared to those of intermediate and small systems. It is found that instantaneous PFs exhibit a much wider scale range compared to the daily gridded precipitation accumulation range. The top 1% of the rainiest PFs contribute over 55% of total rainfall and have rain volumes 2 orders of magnitude greater than that of the median PF. We find a threshold near the top 10% beyond which the PFs grow exponentially into larger, deeper, and colder rain systems. NCEP reanalyses show that midlevel relative humidity and total precipitable water increase steadily with increasingly larger PFs, along with a rapid increase of 500 hPa upward vertical velocity beyond the top 10%. This provides the necessary moisture convergence to amplify and sustain the extreme events. The rapid increase in vertical motion is associated with the release of convective available potential energy (CAPE) in mature systems, as is evident in the increase in CAPE of PFs up to the top 10% and the subsequent drop-off. The study illustrates distinct stages in the development of an extreme rainfall event: (1) a systematic buildup in large-scale temperature and moisture, (2) a rapid change in rain structure, (3) explosive growth of the PF size, and (4) a release of CAPE before the demise of the event.

  1. Distributed weighted least-squares estimation with fast convergence for large-scale systems

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm that asymptotically computes the global optimal estimate. The convergence rate of the algorithm is maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm that computes the global optimal estimate in a finite number of steps. We include numerical experiments to illustrate the performance of the proposed methods. PMID:25641976
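    A minimal sketch of the flavor of such an algorithm: the global weighted least-squares estimate solves normal equations assembled from per-node terms, and a scaled Richardson iteration converges to it. The scaling parameter echoes the paper's convergence-rate tuning, but this is not the authors' exact scheme; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three sub-systems, each holding a linear measurement y_i = A_i x + noise.
x_true = np.array([1.0, -2.0, 0.5])
A = [rng.standard_normal((5, 3)) for _ in range(3)]
y = [Ai @ x_true + 0.01 * rng.standard_normal(5) for Ai in A]
W = [np.eye(5) for _ in A]                         # measurement weights

# Global normal equations, written as a sum of per-node contributions
# (in a truly distributed setting each node computes only its own term).
M = sum(Ai.T @ Wi @ Ai for Ai, Wi in zip(A, W))
b = sum(Ai.T @ Wi @ yi for Ai, Wi, yi in zip(A, W, y))

# Richardson iteration with scaling parameter alpha; it converges for
# 0 < alpha < 2 / lambda_max(M), which is why tuning alpha matters.
alpha = 1.0 / np.linalg.eigvalsh(M).max()
x = np.zeros(3)
for _ in range(2000):
    x = x + alpha * (b - M @ x)                    # fixed-point update toward M x = b
```

    Each update uses only matrix-vector products that decompose over nodes, which is what makes the iteration implementable by neighborhood communication alone.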

  2. A BAYESIAN ESTIMATE OF THE CMB–LARGE-SCALE STRUCTURE CROSS-CORRELATION

    Energy Technology Data Exchange (ETDEWEB)

    Moura-Santos, E. [Instituto de Física, Universidade de São Paulo, Rua do Matão trav. R 187, 05508-090, São Paulo—SP (Brazil); Carvalho, F. C. [Departamento de Física, Universidade do Estado do Rio Grande do Norte, 59610-210, Mossoró-RN (Brazil); Penna-Lima, M. [APC, AstroParticule et Cosmologie, Université Paris Diderot, CNRS/IN2P3, CEA/Irfu, Observatoire de Paris, Sorbonne Paris Cité, 10, rue Alice Domon et Léonie Duquet, F-75205 Paris Cedex 13 (France); Novaes, C. P.; Wuensche, C. A., E-mail: emoura@if.usp.br, E-mail: fabiocabral@uern.br, E-mail: pennal@apc.in2p3.fr, E-mail: cawuenschel@das.inpe.br, E-mail: camilanovaes@on.br [Observatório Nacional, Rua General José Cristino 77, São Cristóvão, 20921-400, Rio de Janeiro, RJ (Brazil)

    2016-08-01

    Evidence for late-time acceleration of the universe is provided by multiple probes, such as Type Ia supernovae, the cosmic microwave background (CMB), and large-scale structure (LSS). In this work, we focus on the integrated Sachs–Wolfe (ISW) effect, i.e., secondary CMB fluctuations generated by evolving gravitational potentials due to the transition between, e.g., the matter and dark energy (DE) dominated phases. Therefore, assuming a flat universe, DE properties can be inferred from ISW detections. We present a Bayesian approach to compute the CMB–LSS cross-correlation signal. The method is based on the estimate of the likelihood for measuring a combined set consisting of a CMB temperature and galaxy contrast maps, provided that we have some information on the statistical properties of the fluctuations affecting these maps. The likelihood is estimated by a sampling algorithm, therefore avoiding the computationally demanding techniques of direct evaluation in either pixel or harmonic space. As local tracers of the matter distribution at large scales, we used the Two Micron All Sky Survey galaxy catalog and, for the CMB temperature fluctuations, the ninth-year data release of the Wilkinson Microwave Anisotropy Probe (WMAP9). The results show a dominance of cosmic variance over the weak recovered signal, due mainly to the shallowness of the catalog used, with systematics associated with the sampling algorithm playing a secondary role as sources of uncertainty. When combined with other complementary probes, the method presented in this paper is expected to be a useful tool for late-time acceleration studies in cosmology.

  3. Large-scale precipitation tracking and the MJO over the Maritime Continent and Indo-Pacific warm pool

    Science.gov (United States)

    Kerns, Brandon W.; Chen, Shuyi S.

    2016-08-01

    A large-scale precipitation tracking (LPT) method is developed to track convection and precipitation associated with the Madden-Julian oscillation (MJO) using the Tropical Rainfall Measuring Mission (TRMM) 3B42 rainfall data from October to March 1998-2015. LPT uses spatially smoothed 3 day rainfall accumulation to identify and track precipitation features in time, with a minimum size of 300,000 km2 and time continuity of at least 10 days. While not all LPT systems (LPTs) are attributable to the MJO, among the 199 LPTs there were 42 with a mean eastward propagation of at least 2 m s-1, which are considered to be MJO convective initiation events. These LPTs capture the diversity of the MJO convection, which is not well depicted by the Real-time Multivariate MJO (RMM) index or the outgoing longwave radiation MJO index. During the 17 years, there were 17 instances out of 45 with an MJO signature in the RMM but without eastward propagating LPTs. Among the 42 eastward propagating LPTs, 24 propagated across the Maritime Continent (MC), which confirms the MC barrier effect. Among the cases that crossed the MC from the Indian Ocean to the western Pacific (MC crossing), 18 (75%) had a significant MJO signature in the RMM index. In contrast, only six (33%) of the non-MC-crossing cases occurred with an RMM MJO signal. There is significant seasonal and interannual variability, with MC-crossing LPTs occurring more commonly in December than in other months. More MC-crossing events were observed during La Niña than El Niño, which is consistent with the observations of stronger and more frequent MJO events identified by RMM during La Niña years.

  4. Two methods for estimating limits to large-scale wind power generation.

    Science.gov (United States)

    Miller, Lee M; Brunsell, Nathaniel A; Mechem, David B; Gans, Fabian; Monaghan, Andrew J; Vautard, Robert; Keith, David W; Kleidon, Axel

    2015-09-08

    Wind turbines remove kinetic energy from the atmospheric flow, which reduces wind speeds and limits generation rates of large wind farms. These interactions can be approximated using a vertical kinetic energy (VKE) flux method, which predicts that the maximum power generation potential is 26% of the instantaneous downward transport of kinetic energy, using the preturbine climatology. We compare the energy flux method to the Weather Research and Forecasting (WRF) regional atmospheric model equipped with a wind turbine parameterization over a 10^5 km^2 region in the central United States. The WRF simulations yield a maximum generation of 1.1 We m^-2, whereas the VKE method predicts the time series while underestimating the maximum generation rate by about 50%. Because VKE derives the generation limit from the preturbine climatology, potential changes in the vertical kinetic energy flux from the free atmosphere are not considered. Such changes are important at night, when WRF estimates are about twice the VKE value because wind turbines interact with the decoupled nocturnal low-level jet in this region. Daytime estimates agree better, to within 20%, because the wind turbines induce comparatively small changes to the downward kinetic energy flux. This combination of downward transport limits and wind speed reductions explains why large-scale wind power generation in windy regions is limited to about 1 We m^-2, with VKE capturing this combination in a comparatively simple way.
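    The VKE bookkeeping itself is a one-line scaling: generation potential is roughly 26% of the pre-turbine downward kinetic energy flux. A toy calculation, with the flux value assumed for illustration rather than taken from the paper:

```python
def vke_limit(ke_flux_w_m2, efficiency=0.26):
    """Maximum wind power generation per unit area (W_e m^-2) under the
    vertical kinetic energy (VKE) flux argument: ~26% of the pre-turbine
    downward kinetic energy transport."""
    return efficiency * ke_flux_w_m2

downward_ke_flux = 4.0            # W m^-2: an assumed, plausible mid-latitude value
limit = vke_limit(downward_ke_flux)
```

    A flux of a few W m^-2 thus caps areal generation near 1 We m^-2, matching the scale the WRF comparison arrives at.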

  5. Modeling Student Motivation and Students’ Ability Estimates From a Large-Scale Assessment of Mathematics

    Directory of Open Access Journals (Sweden)

    Carlos Zerpa

    2011-09-01

    Full Text Available When large-scale assessments (LSA) do not hold personal stakes for students, students may not put forth their best effort. Low-effort examinee behaviors (e.g., guessing, omitting items) result in an underestimate of examinee abilities, which is a concern when using results of LSA to inform educational policy and planning. The purpose of this study was to explore the relationship between examinee motivation as defined by expectancy-value theory, student effort, and examinee mathematics abilities. A principal components analysis was used to examine the data from Grade 9 students (n = 43,562) who responded to a self-report questionnaire on their attitudes and practices related to mathematics. The results suggested a two-component model where the components were interpreted as task-values in mathematics and student effort. Next, a hierarchical linear model was implemented to examine the relationship between examinee component scores and their estimated ability on an LSA. The results of this study provide evidence that motivation, as defined by expectancy-value theory, and student effort partially explain student ability estimates, and may have implications for the information that gets transferred to testing organizations, school boards, and teachers while assessing students' Grade 9 mathematics learning.
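    A compact stand-in for the analysis pipeline: principal components extracted from simulated questionnaire items, followed by an ordinary least-squares fit of ability on the component scores. This is a single-level proxy for the study's hierarchical linear model, and every number below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic questionnaire: 200 students x 6 items driven by two latent
# traits (task value, effort) -- stand-ins for the study's real data.
latent = rng.standard_normal((200, 2))
loadings = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.0],
                     [0.1, 0.9], [0.0, 1.0], [0.2, 0.8]])
items = latent @ loadings.T + 0.3 * rng.standard_normal((200, 6))

# Principal components via SVD of the centered item matrix.
X = items - items.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:2].T                      # two component scores per student

# Ability depends on both traits; recover the relation by least squares.
ability = 0.7 * latent[:, 0] + 0.4 * latent[:, 1] + 0.2 * rng.standard_normal(200)
design = np.column_stack([np.ones(200), scores])
beta, *_ = np.linalg.lstsq(design, ability, rcond=None)
```

    Because the two components span the latent traits well, the regression on component scores explains most of the ability variance, mirroring the study's finding that motivation and effort partially explain ability estimates.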

  6. The influence of control parameter estimation on large scale geomorphological interpretation of pointclouds

    Science.gov (United States)

    Dorninger, P.; Koma, Z.; Székely, B.

    2012-04-01

    either automatically (i.e., estimated from the given data) or manually (i.e., supervised parameter estimation). Additionally, the result might be influenced by whether data processing is performed locally (i.e., using tiles) or globally. Local processing of the data has the advantages of generally performing faster, having lower hardware requirements, and enabling the determination of more detailed information. By contrast, especially in geomorphological interpretation, global data processing enables determining large-scale relations within the dataset analyzed. We investigated the influence of control parameter settings on the geomorphological interpretation of airborne and terrestrial laser scanning data sets of the landslide at Doren (Vorarlberg, Austria), of airborne laser scanning data of the western cordilleras of the central Andes, and of HRSC terrain data of the Mars surface. Topics discussed are the suitability of automated versus manual determination of control parameters, the influence of the definition of the area of interest (local versus global application), as well as computational performance.

  7. Large-scale lysimeter site St. Arnold, Germany: analysis of 40 years of precipitation, leachate and evapotranspiration

    Directory of Open Access Journals (Sweden)

    N. Harsch

    2009-03-01

    Full Text Available This study deals with a lysimetrical-meteorological data series collected at the large-scale lysimeter site "St. Arnold", Germany, from November 1965 to April 2007. The particular relevance of these data rests both upon their long duration and upon the fact that the site comprises a grassland basin, an oak/beech basin and a pine basin.

    Apart from analyzing long term trends of the meteorological measurements, the primary objective of this study is to investigate the water balance in grassland and forested basins, in particular comparing the precipitation term to leachate quantities and potential and actual evapotranspiration. The latter are based upon the Penman and the Penman-Monteith approaches, respectively.

    The main results of this survey are that, on a long-term average, the grassland basin turns more than half (53%) of its annually incoming precipitation into leachate and only 36% into water vapour, while the deciduous forest exhibits a ratio of 37% for leachate and 56% for evapotranspiration, and the evergreen coniferous forest shows the highest evaporation rate (65%) and the lowest leachate rate (26%).

    Concerning these water balances, considerable differences both between basins and between seasons stand out. While summer periods exhibit high evapotranspiration rates for the forests and moderate ones for the grassland, winter periods are characterised by considerable leachate quantities for grassland and the deciduous forest and moderate ones for the coniferous forest. Following the analysis of the climatic development in St. Arnold, trends towards a milder and more humid regional climate were detected.

  8. Dominant Large-Scale Atmospheric Circulation Systems for the Extreme Precipitation over the Western Sichuan Basin in Summer 2013

    Directory of Open Access Journals (Sweden)

    Yamin Hu

    2015-01-01

    Full Text Available The western Sichuan Basin (WSB) is a rainstorm center influenced by complicated factors such as topography and circulation. Based on a multivariable empirical orthogonal function technique applied to extreme precipitation processes (EPPs) in the WSB in 2013, this study reveals the dominant circulation patterns. Results indicate that the leading modes are characterized by "Saddle" and "Sandwich" structures, respectively. In one mode, a tropical cyclone (TC) from the South China Sea (SCS) converts into an inverted trough and steers warm moist airflow northward into the WSB. At the same time, the western Pacific subtropical high (WPSH) extends westward over the Yangtze River and conveys a southeasterly warm humid flow. In the other case, the WPSH is pushed westward by a TC in the western Pacific and then merges with an anomalous anticyclone over the SCS. The anomalous anticyclone and the WPSH form a conjunction belt and convey warm moist southwesterly airflow to meet the cold flow over the WSB. The configurations of the WPSH and TCs in the tropics and the blocking and trough in the mid-high latitudes play important roles during the EPPs over the WSB. The persistence of EPPs depends on the long-lived large-scale circulation configuration remaining steady over suitable positions.

  9. Auroral electrojet dynamics during magnetic storms, connection with plasma precipitation and large-scale structure of the magnetospheric magnetic field

    Directory of Open Access Journals (Sweden)

    Y. I. Feldstein

    1999-04-01

    magnetospheric magnetic field paraboloid model, the influence of the ring current and magnetospheric tail plasma sheet currents on the large-scale magnetosphere structure is considered. Key words: Ionosphere (particle precipitation) · Magnetospheric physics (current systems; magnetospheric configuration and dynamics).

  10. LARGE-SCALE MAGNETIC HELICITY FLUXES ESTIMATED FROM MDI MAGNETIC SYNOPTIC CHARTS OVER THE SOLAR CYCLE 23

    International Nuclear Information System (INIS)

    Yang Shangbin; Zhang Hongqi

    2012-01-01

    To investigate the characteristics of large-scale and long-term evolution of magnetic helicity with solar cycles, we use the method of Local Correlation Tracking to estimate the magnetic helicity evolution over solar cycle 23 from 1996 to 2009 using 795 MDI magnetic synoptic charts. The main results are as follows: the hemispheric helicity rule still holds in general, i.e., the large-scale negative (positive) magnetic helicity dominates the northern (southern) hemisphere. However, the large-scale magnetic helicity fluxes show the same sign in both hemispheres around 2001 and 2005. The global, large-scale magnetic helicity flux over the solar disk changes from a negative value at the beginning of solar cycle 23 to a positive value at the end of the cycle, while the net accumulated magnetic helicity is negative in the period between 1996 and 2009.

  12. Coarse-Grain Bandwidth Estimation Scheme for Large-Scale Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther H.; Sergui, John S.

    2013-01-01

    A large-scale network that supports a large number of users can have an aggregate data rate of hundreds of Mbps at any time. High-fidelity simulation of a large-scale network might be too complicated and memory-intensive for typical commercial-off-the-shelf (COTS) tools. Unlike a large commercial wide-area network (WAN) that shares diverse network resources among diverse users and has a complex topology requiring routing mechanisms and flow control, the ground communication links of a space network operate under the assumption of a guaranteed dedicated bandwidth allocation between specific sparse endpoints in a star-like topology. This work solved the network design problem of estimating the bandwidths of a ground network architecture option that offers different service classes to meet the latency requirements of different user data types. In this work, a top-down analysis and simulation approach was created to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. These techniques were used to estimate the WAN bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network. A new analytical approach, called the "leveling scheme," was developed to model the store-and-forward mechanism of the network data flow. The term "leveling" refers to the spreading of data across a longer time horizon without violating the corresponding latency requirement of the data type. Two versions of the leveling scheme were developed: (1) a straightforward version that simply spreads the data of each data type across the time horizon, does not account for interactions among data types within a pass or between data types across overlapping passes at a network node, and is inherently sub-optimal; and (2) a two-state Markov leveling scheme that takes into account the second-order behavior of
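    The straightforward leveling version can be sketched in a few lines: arrivals of a data type are spread evenly over the steps permitted by its latency requirement, lowering the peak rate the link must support. The function and numbers below are illustrative only, not the SCaN sizing tool:

```python
def level(arrivals, latency):
    """Spread the data volume arriving at each time step evenly over the
    next `latency` steps -- the 'straightforward' leveling of one data type.
    Returns the leveled transmission rate per step."""
    horizon = len(arrivals) + latency
    rate = [0.0] * horizon
    for t, volume in enumerate(arrivals):
        share = volume / latency          # deliver evenly before the deadline
        for k in range(latency):
            rate[t + k] += share
    return rate

# A bursty pattern needs a 10-unit link unleveled, but only 5 units
# once each burst is spread over a 2-step latency allowance.
rate = level([10, 0, 10, 0], latency=2)
peak = max(rate)
```

    Leveling trades peak bandwidth for delay within the latency budget; the two-state Markov refinement mentioned above would additionally model interactions between data types, which this per-type sketch ignores.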

  13. Feasibility analysis of using inverse modeling for estimating natural groundwater recharge from a large-scale soil moisture monitoring network

    Science.gov (United States)

    Wang, Tiejun; Franz, Trenton E.; Yue, Weifeng; Szilagyi, Jozsef; Zlotnik, Vitaly A.; You, Jinsheng; Chen, Xunhong; Shulski, Martha D.; Young, Aaron

    2016-02-01

    Despite the importance of groundwater recharge (GR), its accurate estimation still remains one of the most challenging tasks in the field of hydrology. In this study, with the help of inverse modeling, long-term (6 years) soil moisture data at 34 sites from the Automated Weather Data Network (AWDN) were used to estimate the spatial distribution of GR across Nebraska, USA, where significant spatial variability exists in soil properties and precipitation (P). To ensure the generality of this study and its potential broad applications, data from public domains and literature were used to parameterize the standard Hydrus-1D model. Although observed soil moisture differed significantly across the AWDN sites mainly due to the variations in P and soil properties, the simulations were able to capture the dynamics of observed soil moisture under different climatic and soil conditions. The inferred mean annual GR from the calibrated models varied over three orders of magnitude across the study area. To assess the uncertainties of the approach, estimates of GR and actual evapotranspiration (ETa) from the calibrated models were compared to the GR and ETa obtained from other techniques in the study area (e.g., remote sensing, tracers, and regional water balance). Comparison clearly demonstrated the feasibility of inverse modeling and large-scale (>10^4 km^2) soil moisture monitoring networks for estimating GR. In addition, the model results were used to further examine the impacts of climate and soil on GR. The data showed that both P and soil properties had significant impacts on GR in the study area with coarser soils generating higher GR; however, different relationships between GR and P emerged at the AWDN sites, defined by local climatic and soil conditions. In general, positive correlations existed between annual GR and P for the sites with coarser-textured soils or under wetter climatic conditions. With the rapidly expanding soil moisture monitoring networks around the

  14. Spatially-explicit estimation of geographical representation in large-scale species distribution datasets.

    Science.gov (United States)

    Kalwij, Jesse M; Robertson, Mark P; Ronk, Argo; Zobel, Martin; Pärtel, Meelis

    2014-01-01

    Much ecological research relies on existing multispecies distribution datasets. Such datasets, however, can vary considerably in quality, extent, resolution or taxonomic coverage. We provide a framework for a spatially-explicit evaluation of geographical representation within large-scale species distribution datasets, using the comparison of an occurrence atlas with a range atlas dataset as a working example. Specifically, we compared occurrence maps for 3773 taxa from the widely-used Atlas Florae Europaeae (AFE) with digitised range maps for 2049 taxa of the lesser-known Atlas of North European Vascular Plants. We calculated the level of agreement at a 50-km spatial resolution using average latitudinal and longitudinal species range, and area of occupancy. Agreement in species distribution was calculated and mapped using Jaccard similarity index and a reduced major axis (RMA) regression analysis of species richness between the entire atlases (5221 taxa in total) and between co-occurring species (601 taxa). We found no difference in distribution ranges or in the area of occupancy frequency distribution, indicating that atlases were sufficiently overlapping for a valid comparison. The similarity index map showed high levels of agreement for central, western, and northern Europe. The RMA regression confirmed that geographical representation of AFE was low in areas with a sparse data recording history (e.g., Russia, Belarus and the Ukraine). For co-occurring species in south-eastern Europe, however, the Atlas of North European Vascular Plants showed remarkably higher richness estimations. Geographical representation of atlas data can be much more heterogeneous than often assumed. Level of agreement between datasets can be used to evaluate geographical representation within datasets. Merging atlases into a single dataset is worthwhile in spite of methodological differences, and helps to fill gaps in our knowledge of species distribution ranges. Species distribution
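    The two headline statistics are simple to state in code: the Jaccard index over occupied grid cells and the reduced major axis (RMA) regression slope. A hedged sketch with toy cell sets standing in for the atlas data:

```python
import numpy as np

def jaccard(cells_a, cells_b):
    """Jaccard similarity between two sets of occupied grid cells:
    |intersection| / |union|."""
    a, b = set(cells_a), set(cells_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rma_slope(x, y):
    """Reduced major axis regression slope: sd(y)/sd(x), signed by the
    correlation -- appropriate when both variables carry error, as with
    richness estimates from two independent atlases."""
    r = np.corrcoef(x, y)[0, 1]
    return np.sign(r) * np.std(y) / np.std(x)

# Toy 50-km cells occupied by one species in two hypothetical atlases.
afe_cells = [(0, 0), (0, 1), (1, 1)]
other_cells = [(0, 1), (1, 1), (2, 2)]
sim = jaccard(afe_cells, other_cells)            # 2 shared / 4 total = 0.5

# Toy per-cell species richness from the two atlases.
slope = rma_slope(np.array([1.0, 2, 3, 4]), np.array([2.0, 4, 6, 8]))
```

    Unlike ordinary least squares, the RMA slope is symmetric in x and y, which is why it suits a comparison where neither atlas is the "truth".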

  15. Genomic divergences among cattle, dog and human estimated from large-scale alignments of genomic sequences

    Directory of Open Access Journals (Sweden)

    Shade Larry L

    2006-06-01

    Full Text Available Abstract Background Approximately 11 Mb of finished high quality genomic sequences were sampled from cattle, dog and human to estimate genomic divergences and their regional variation among these lineages. Results Optimal three-way multi-species global sequence alignments for 84 cattle clones or loci (each >50 kb) of genomic sequence were constructed using the human and dog genome assemblies as references. Genomic divergences and substitution rates were examined for each clone and for various sequence classes under different functional constraints. Analysis of these alignments revealed that the overall genomic divergences are relatively constant (0.32–0.37 changes/site) for pairwise comparisons among cattle, dog and human; however substitution rates vary across genomic regions and among different sequence classes. A neutral mutation rate (2.0–2.2 × 10^-9 changes/site/year) was derived from ancestral repetitive sequences, whereas the substitution rate in coding sequences (1.1 × 10^-9 changes/site/year) was approximately half of the overall rate (1.9–2.0 × 10^-9 changes/site/year). Relative rate tests also indicated that cattle have a significantly faster rate of substitution as compared to dog and that this difference is about 6%. Conclusion This analysis provides a large-scale and unbiased assessment of genomic divergences and regional variation of substitution rates among cattle, dog and human. It is expected that these data will serve as a baseline for future mammalian molecular evolution studies.
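    The rate arithmetic follows from d = 2rT: a pairwise divergence d accumulates along two lineages over split time T, so r = d / (2T). A worked check, where the ~92 Myr divergence time is an assumed round number for illustration, not a figure from the paper:

```python
def rate_per_year(divergence, split_time_years):
    """Substitution rate per site per year from a pairwise divergence d,
    using d = 2 * r * T (changes accumulate along both lineages)."""
    return divergence / (2 * split_time_years)

# ~0.37 changes/site between two mammalian lineages over an assumed
# ~92 Myr split gives a rate of roughly 2e-9 changes/site/year,
# consistent with the neutral rate quoted in the abstract.
r = rate_per_year(0.37, 92e6)
```

    The factor of 2 is the common pitfall here: dividing by T alone would double the inferred rate.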

  16. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    Science.gov (United States)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
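    The signal-to-noise eigenvector compression amounts to a generalized eigenproblem S v = λ N v, keeping only the high-S/N modes. A small numpy/scipy sketch with toy covariances, at dimensions far smaller than the WMAP maps discussed above:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)

# Toy "map": n pixels with signal covariance S and noise covariance N.
n = 40
A = rng.standard_normal((n, 5))
S = A @ A.T                              # rank-5 signal covariance
N = np.eye(n)                            # white noise

# Signal-to-noise eigenbasis: solve S v = lambda N v and keep the modes
# with the largest S/N eigenvalues (the Karhunen-Loeve-style compression
# the paper finds most efficient).
evals, evecs = eigh(S, N)                # generalized symmetric eigenproblem
order = np.argsort(evals)[::-1]
basis = evecs[:, order[:5]]              # retain the 5 high-S/N modes

d = rng.multivariate_normal(np.zeros(n), S + N)   # simulated data vector
compressed = basis.T @ d                 # 40 numbers -> 5 numbers
```

    Because the signal here is exactly rank 5, the discarded modes carry pure noise, which is the idealized version of the near-lossless compression the abstract reports (6836 pixels to 3102 modes with sub-percent-level error).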

  17. Estimating the Effectiveness of Special Education Using Large-Scale Assessment Data

    Science.gov (United States)

    Ewing, Katherine Anne

    2009-01-01

    The inclusion of students with disabilities in large scale assessment and accountability programs has provided new opportunities to examine the impact of special education services on student achievement. Hanushek, Kain, and Rivkin (1998, 2002) evaluated the effectiveness of special education programs by examining students' gains on a large-scale…

  18. Large-scale Estimates of Leaf Area Index from Active Remote Sensing Laser Altimetry

    Science.gov (United States)

    Hopkinson, C.; Mahoney, C.

    2016-12-01

    Leaf area index (LAI) is a key parameter describing the spatial distribution of foliage within forest canopies, which in turn controls numerous relationships between the ground, canopy, and atmosphere. LAI has been successfully retrieved from in-situ (digital) hemispherical photography (DHP) and airborne laser scanning (ALS) data; however, field and ALS acquisitions are often spatially limited (100s of km^2) and costly. Large-scale (>1000s of km^2) retrievals have been demonstrated with optical sensors; however, accuracies remain uncertain due to those sensors' inability to penetrate the canopy. The spaceborne Geoscience Laser Altimeter System (GLAS) offers a possible solution for large-scale retrievals while simultaneously penetrating the canopy. LAI retrieved by multiple DHP at 6 Australian sites, representing a cross-section of Australian ecosystems, was employed to model ALS LAI, which in turn was used to infer LAI from GLAS data at 5 other sites. An optimally filtered GLAS dataset was then employed in conjunction with a host of supplementary data to build a Random Forest (RF) model to predict LAI (with uncertainties) at a 250 m resolution across the forested regions of Australia. Predictions were validated against ALS-based LAI from 20 sites (R^2 = 0.64, RMSE = 1.1 m^2 m^-2); MODIS-based LAI was also assessed against these sites (R^2 = 0.30, RMSE = 1.78 m^2 m^-2) to demonstrate the strength of the GLAS-based predictions. The large-scale nature of the predictions was also leveraged to demonstrate large-scale relationships of LAI with other environmental characteristics, such as canopy height, elevation, and slope. Wide-scale quantification of LAI is key for assessing and modifying forest management strategies across Australia. Such work also assists Australia's Terrestrial Ecosystem Research Network in fulfilling its government-issued mandates.

  19. The shared and unique values of optical, fluorescence, thermal and microwave satellite data for estimating large-scale crop yields

    Science.gov (United States)

    Large-scale crop monitoring and yield estimation are important for both scientific research and practical applications. Satellite remote sensing provides an effective means for regional and global cropland monitoring, particularly in data-sparse regions that lack reliable ground observations and rep...

  20. Estimating Amazonian rainforest stability and the likelihood for large-scale forest dieback

    Science.gov (United States)

    Rammig, Anja; Thonicke, Kirsten; Jupp, Tim; Ostberg, Sebastian; Heinke, Jens; Lucht, Wolfgang; Cramer, Wolfgang; Cox, Peter

    2010-05-01

    Annually, tropical forests process approximately 18 Pg of carbon through respiration and photosynthesis - more than twice the rate of anthropogenic fossil fuel emissions. Current climate change may be transforming this carbon sink into a carbon source by changing forest structure and dynamics. Increasing temperatures and potentially decreasing precipitation, and thus prolonged drought stress, may lead to increasing physiological stress and reduced productivity for trees. Resulting decreases in evapotranspiration and therefore convective precipitation could further accelerate drought conditions, destabilize the tropical ecosystem as a whole and lead to an 'Amazon forest dieback'. The projected direction and intensity of climate change vary widely within the region and between scenarios from different general circulation models (GCMs). In the scope of a World Bank-funded study, we assessed the 24 GCMs evaluated in the 4th Assessment Report of the Intergovernmental Panel on Climate Change (IPCC-AR4) with respect to their capability to reproduce present-day climate in the Amazon basin, using a Bayesian approach. With this approach, greater weight is assigned to models that simulate the annual cycle of rainfall well. We then use the resulting weightings to create probability density functions (PDFs) for future forest biomass changes as simulated by the Lund-Potsdam-Jena Dynamic Global Vegetation Model (LPJmL), to estimate the risk of potential Amazon rainforest dieback. Our results show contrasting changes in forest biomass throughout five regions of northern South America: if photosynthetic capacity and water use efficiency are enhanced by CO2, biomass increases across all five regions. However, if CO2-fertilisation is assumed to be absent or less important, then substantial dieback occurs in some scenarios and thus the risk of forest dieback is considerably higher. Particularly affected are regions in the central Amazon basin. The range of
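The Bayesian weighting step can be illustrated with a toy example: each model's weight is proportional to the likelihood of its simulated annual rainfall cycle given observations. A Gaussian error model and the numbers below are assumptions for illustration; the study's actual likelihood may differ:

```python
import math

def model_weights(obs_cycle, sim_cycles, sigma=1.0):
    """Weight each GCM by a Gaussian likelihood of its simulated rainfall cycle."""
    logl = []
    for sim in sim_cycles:
        sse = sum((o - s) ** 2 for o, s in zip(obs_cycle, sim))
        logl.append(-0.5 * sse / sigma ** 2)
    m = max(logl)
    w = [math.exp(l - m) for l in logl]  # subtract max to avoid underflow
    total = sum(w)
    return [x / total for x in w]

obs = [3.0, 4.0, 6.0, 5.0]          # observed seasonal cycle (toy, mm/day)
sims = [[3.1, 4.2, 5.8, 5.1],       # model close to observations
        [1.0, 1.5, 2.0, 2.5]]       # model with a poor annual cycle
w = model_weights(obs, sims)
```

The normalised weights would then multiply each model's simulated biomass change when building the PDFs.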

  1. Satellite precipitation estimation over the Tibetan Plateau

    Science.gov (United States)

    Porcu, F.; Gjoka, U.

    2012-04-01

    Precipitation characteristics over the Tibetan Plateau are poorly known, given the scarcity of reliable and widely distributed ground observations, so the satellite approach is a valuable choice for large-scale precipitation analysis and hydrological cycle studies. However, the satellite perspective suffers from various shortcomings at the different wavelengths used in atmospheric remote sensing. In the microwave spectrum the high soil emissivity often masks or hides the atmospheric signal upwelling from light-to-moderate precipitation layers, while low and relatively thin precipitating clouds are not well detected in the visible-infrared because of their low contrast with the cold and bright (if snow-covered) background. In this work an IR-based, statistical rainfall estimation technique is trained and applied over the Tibetan Plateau hydrological basin to retrieve precipitation intensity at different spatial and temporal scales. The technique is based on a simple artificial neural network scheme trained with two supervised training sets, one assembled for the monsoon season (June to September) and one for the rest of the year. For the monsoon season, ground radar precipitation data for a few case studies are used to build the training set: four days in summer 2009 are considered. For the rest of the year, CloudSat-CPR derived snowfall rate is used as the reference precipitation data, following the Kulie and Bennartz (2009) algorithm. METEOSAT-7 infrared channel radiances (at 6.7 and 11 micrometers) and derived local variability features (such as local standard deviation and local average) are used as input, and the rain rate is obtained as output for each satellite slot, every 30 minutes on the satellite grid.
The satellite rain rate maps for three years (2008-2010) are computed and compared with available global precipitation products (such as C-MORPH and TMPA) and with other techniques applied to the Plateau area: similarities and differences are
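The local-variability features described above (local average and local standard deviation of IR radiances in a window around each pixel) can be sketched as follows; the 3x3 window size is an assumption, not a detail given in the abstract:

```python
import math

def local_stats(field, i, j, half=1):
    """Local mean and standard deviation in a (2*half+1)^2 pixel window."""
    vals = []
    for di in range(-half, half + 1):
        for dj in range(-half, half + 1):
            ii, jj = i + di, j + dj
            if 0 <= ii < len(field) and 0 <= jj < len(field[0]):
                vals.append(field[ii][jj])
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, math.sqrt(var)

# Toy 11-micrometre brightness-temperature field (K); cold pixels = cloud tops
tb = [[220, 225, 230],
      [210, 215, 260],
      [205, 250, 255]]
mean, std = local_stats(tb, 1, 1)
```

Per-pixel pairs like (mean, std), together with the raw 6.7 and 11 micrometre radiances, would form the neural network's input vector.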

  2. Simulating the impact of the large-scale circulation on the 2-m temperature and precipitation climatology

    Science.gov (United States)

    The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging for dynamical downscal...

  3. Cardinality Estimation Algorithm in Large-Scale Anonymous Wireless Sensor Networks

    KAUST Repository

    Douik, Ahmed; Aly, Salah A.; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2017-01-01

    for cardinality estimation and topology discovery, i.e., estimating the number of node and structure of the graph, by querying a small number of nodes and performing statistical inference methods. While the cardinality estimation allows the design of more

  4. Contribution of large-scale circulation anomalies to changes in extreme precipitation frequency in the United States

    Science.gov (United States)

    Lejiang Yu; Shiyuan Zhong; Lisi Pei; Xindi (Randy) Bian; Warren E. Heilman

    2016-01-01

    The mean global climate has warmed as a result of the increasing emission of greenhouse gases induced by human activities. This warming is considered the main reason for the increasing number of extreme precipitation events in the US. While much attention has been given to extreme precipitation events occurring over several days, which are usually responsible for...

  5. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    Science.gov (United States)

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve problems of this type, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse- and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS are illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.

  6. Climate change scenarios of convective and large-scale precipitation in the Czech Republic based on EURO-CORDEX data

    Czech Academy of Sciences Publication Activity Database

    Rulfová, Zuzana; Beranová, Romana; Kyselý, Jan

    2017-01-01

    Roč. 37, č. 5 (2017), s. 2451-2465 ISSN 0899-8418 R&D Projects: GA ČR(CZ) GA14-18675S Institutional support: RVO:68378289 Keywords : convective precipitation * stratiform precipitation * regional climate models * climate change * EURO-CORDEX * Central Europe Subject RIV: DG - Athmosphere Sciences, Meteorology OBOR OECD: Climatic research Impact factor: 3.760, year: 2016 http://onlinelibrary.wiley.com/wol1/doi/10.1002/joc.4857/abstract

  7. Contribution of large-scale circulation anomalies to changes in extreme precipitation frequency in the United States

    International Nuclear Information System (INIS)

    Yu, Lejiang; Zhong, Shiyuan; Pei, Lisi; Bian, Xindi; Heilman, Warren E

    2016-01-01

    The mean global climate has warmed as a result of the increasing emission of greenhouse gases induced by human activities. This warming is considered the main reason for the increasing number of extreme precipitation events in the US. While much attention has been given to extreme precipitation events occurring over several days, which are usually responsible for severe flooding over a large region, little is known about how extreme precipitation events that cause flash flooding and occur at sub-daily time scales have changed over time. Here we use the observed hourly precipitation from the North American Land Data Assimilation System Phase 2 forcing datasets to determine trends in the frequency of extreme precipitation events of short (1 h, 3 h, 6 h, 12 h and 24 h) duration for the period 1979–2013. The results indicate an increasing trend in the central and eastern US. Over most of the western US, especially the Southwest and the Intermountain West, the trends are generally negative. These trends can be largely explained by the interdecadal variability of the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation (AMO), with the AMO making a greater contribution to the trends in both warm and cold seasons. (letter)
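A trend in extreme-event frequency of the kind described can be estimated by counting threshold exceedances per year and fitting a least-squares slope; a toy sketch with synthetic annual counts (not NLDAS-2 data):

```python
def ols_slope(years, counts):
    """Least-squares slope of annual extreme-event counts (events per year)."""
    n = len(years)
    mx = sum(years) / n
    my = sum(counts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, counts))
    den = sum((x - mx) ** 2 for x in years)
    return num / den

years = list(range(1979, 1989))
counts = [4, 5, 5, 6, 7, 6, 8, 8, 9, 10]   # synthetic annual counts of 1-h extremes
slope = ols_slope(years, counts)           # positive slope -> increasing frequency
```

A positive slope corresponds to the increasing trend found in the central and eastern US, a negative one to the Southwest and Intermountain West.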

  8. Large-scale Vertical Motions, Intensity Change and Precipitation Associated with Land falling Hurricane Katrina over the Gulf of Mexico

    Science.gov (United States)

    Reddy, S. R.; Kwembe, T.; Zhang, Z.

    2016-12-01

    We investigated the possible relationship between large-scale heat fluxes and the intensity change associated with the landfall of Hurricane Katrina. After reaching category 5 intensity on August 28th, 2005 over the central Gulf of Mexico, Katrina weakened to category 3 before making landfall (August 29th, 2005) on the Louisiana coast with maximum sustained winds of over 110 knots. We also examined the vertical motions associated with the intensity change of the hurricane. The data for Convective Available Potential Energy (CAPE), sea level pressure and wind speed were obtained from atmospheric soundings and the NOAA National Hurricane Center (NHC), respectively, for the period August 24 to September 3, 2005. We also computed vertical motions using CAPE values. The study showed that the large-scale heat fluxes reached a maximum (7960 W/m2) when the central pressure was 905 mb. The Convective Available Potential Energy and the vertical motions peaked 3-5 days before landfall. The large atmospheric vertical motions associated with the landfalling hurricane produced severe weather including thunderstorms, tornadoes, storm surge and floods. A numerical model (WRF/ARW) with data assimilation was used in this research to investigate the model's performance on hurricane track and intensity. The model was run on a doubly nested domain centered over the central Gulf of Mexico, with grid spacings of 90 km and 30 km, for 6 h periods from August 28th to August 30th. The model output was compared with the observations and is capable of simulating the surface features, intensity change and track associated with Hurricane Katrina.

  9. Implicit Particle Filter for Power System State Estimation with Large Scale Renewable Power Integration.

    Science.gov (United States)

    Uzunoglu, B.; Hussaini, Y.

    2017-12-01

    The implicit particle filter is a sequential Monte Carlo method for data assimilation that guides particles to high-probability regions via an implicit step. It optimizes a nonlinear cost function which can be inherited from legacy assimilation routines. Dynamic state estimation for near-real-time applications in power systems is becoming increasingly important with the integration of variable wind and solar power generation. New state estimation tools intended to replace the older generation should address nonlinearity and non-normal behaviour, integrate legacy software within a common mathematical framework, and allow the power industry the cautious, evolutionary change it requires rather than a complete revolution. This work implements the implicit particle filter as a tool for estimating the states of a power system and presents the first implicit particle filter application study on power system state estimation. The filter is introduced into power systems and simulations are presented for a three-node benchmark power system. The performance of the filter on the presented problem is analyzed and the results are presented.
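For orientation, the generic bootstrap particle filter that the implicit variant refines can be sketched for a scalar state. This is the standard sequential importance resampling scheme, not the implicit step itself, and the toy dynamics and noise levels are assumptions:

```python
import math
import random

def particle_filter_step(particles, z, f, h, q_std, r_std, rng):
    """One predict-update-resample cycle of a bootstrap particle filter."""
    # Predict: propagate each particle through the dynamics with process noise
    particles = [f(x) + rng.gauss(0, q_std) for x in particles]
    # Update: weight each particle by the measurement likelihood
    weights = [math.exp(-0.5 * ((z - h(x)) / r_std) ** 2) for x in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw particles proportionally to their weights
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(0)
particles = [rng.gauss(0, 1) for _ in range(500)]
for z in [1.0, 1.2, 1.1]:                      # toy measurements of the state
    particles = particle_filter_step(particles, z, lambda x: x,
                                     lambda x: x, 0.1, 0.2, rng)
estimate = sum(particles) / len(particles)     # posterior mean estimate
```

The implicit variant replaces the blind predict step with a per-particle optimization that targets high-likelihood regions, reducing particle degeneracy in strongly nonlinear or non-Gaussian settings.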

  10. What causes differences between national estimates of forest management carbon emissions and removals compared to estimates of large-scale models?

    NARCIS (Netherlands)

    Groen, T.A.; Verkerk, P.J.; Böttcher, H.; Grassi, G.; Cienciala, E.; Black, K.G.; Fortin, M.; Köthke, M.; Lehtonen, A.; Nabuurs, G.J; Petrova, L.; Blujdea, V.

    2013-01-01

    Under the United Nations Framework Convention for Climate Change all Parties have to report on carbon emissions and removals from the forestry sector. Each Party can use its own approach and country specific data for this. Independently, large-scale models exist (e.g. EFISCEN and G4M as used in this

  11. Large-scale fluid motion in the earth's outer core estimated from non-dipole magnetic field data

    International Nuclear Information System (INIS)

    Matsushima, Masaki; Honkura, Yoshimori

    1989-01-01

    Fluid motions in the Earth's outer core can be estimated from magnetic field data at the Earth's surface based on some assumptions. The basic standpoint here is that the non-dipole magnetic field is generated by the interaction between a strong toroidal magnetic field, created by differential rotation, and the convective motion in the outer core. Large-scale convective motions are studied to express them in terms of the poloidal velocity field expanded into a series of spherical harmonics. The radial distribution of differential rotation is estimated from the balance between the effective couple due to angular momentum transfer and the electromagnetic couple. Then the radial dependence of the toroidal magnetic field is derived from the interaction between the differential rotation thus estimated and the dipole magnetic field within the outer core. Magnetic field data are applied to a secular variation model which takes into account the fluctuations of the standing and drifting parts of the non-zonal magnetic field. The velocity field in the outer core is estimated for two cases. It is revealed that the pattern of convective motions is generally characterized by large-scale motions in the quasi-steady case. In the non-steady case, the magnitude of the velocity field is much larger, indicating a more dynamic feature. (N.K.)

  12. Estimation of the cost of large-scale school deworming programmes with benzimidazoles.

    Science.gov (United States)

    Montresor, A; Gabrielli, A F; Diarra, A; Engels, D

    2010-02-01

    This study estimates the cost of distributing benzimidazole tablets in the context of school deworming programmes: we analysed studies reporting the cost of school deworming from seven countries in four WHO regions. The estimated cost for drug procurement to cover one million children (including customs clearance and international transport) is approximately US$20000. The estimated financial costs (including the cost of training of personnel, drug transport, social mobilization and monitoring) are, on average, equivalent to US$33000 per million school-age children, with minimal variation in different countries and continents. The estimated economic costs of distribution (including the time spent by teachers, and health personnel at central, provincial and district level) to cover one million children correspond to approximately US$19000. This study shows the minimal cost of school deworming activities, but also shows the significant contribution (corresponding to a quarter of the entire cost of the programme) provided by health and education systems in endemic countries even in the case of drug donations and donor support of distribution costs. Copyright 2009 Royal Society of Tropical Medicine and Hygiene. Published by Elsevier Ltd. All rights reserved.
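The cost components quoted above combine into a per-child figure; a small check of the arithmetic (all amounts in US$ per million children, as reported):

```python
drug_procurement = 20_000   # US$ per million children, incl. customs and transport
financial_costs = 33_000    # training, drug transport, mobilization, monitoring
economic_costs = 19_000     # time of teachers and health personnel

total = drug_procurement + financial_costs + economic_costs
per_child = total / 1_000_000          # US$ per child treated
endemic_share = economic_costs / total  # contribution of local health/education systems
```

The economic costs come to about 26% of the total, consistent with the "quarter of the entire cost" borne by endemic-country systems.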

  13. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Science.gov (United States)

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record-pairs from each threshold band (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record-pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss kappa statistic was 0.601). This method offers a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large project linkages where the number of record pairs produced may be very large, often running into millions.
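The sampling scheme can be sketched as follows: sample pairs within each score band, clerically label the sample, and scale the labelled counts back to the full band to estimate precision and recall. The band sizes and label rates below are hypothetical:

```python
def estimate_quality(bands, cutoff):
    """Estimate precision/recall from clerically reviewed samples per score band.

    bands: list of (score, n_pairs, n_sampled, true_in_sample) tuples.
    Pairs with score >= cutoff are accepted as links.
    """
    tp = fp = fn = 0.0
    for score, n_pairs, n_sampled, true_in_sample in bands:
        match_rate = true_in_sample / n_sampled   # scale sample to whole band
        if score >= cutoff:
            tp += n_pairs * match_rate
            fp += n_pairs * (1 - match_rate)
        else:
            fn += n_pairs * match_rate            # missed matches below cutoff
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

bands = [(0.9, 10_000, 100, 99),   # (score, pairs in band, sampled, true matches)
         (0.7, 5_000, 100, 80),
         (0.5, 2_000, 100, 10)]
precision, recall = estimate_quality(bands, cutoff=0.7)
```

Sampling below the cut-off is what makes recall estimable at all, since false negatives live entirely in the rejected bands.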

  14. Estimation of the cost of large-scale school deworming programmes with benzimidazoles

    Science.gov (United States)

    Montresor, A.; Gabrielli, A.F.; Engels, D.

    2017-01-01

    Summary This study estimates the cost of distributing benzimidazole tablets in the context of school deworming programmes: we analysed studies reporting the cost of school deworming from seven countries in four WHO regions. The estimated cost for drug procurement to cover one million children (including customs clearance and international transport) is approximately US$20 000. The estimated financial costs (including the cost of training of personnel, drug transport, social mobilization and monitoring) are, on average, equivalent to US$33 000 per million school-age children, with minimal variation in different countries and continents. The estimated economic costs of distribution (including the time spent by teachers, and health personnel at central, provincial and district level) to cover one million children correspond to approximately US$19 000. This study shows the minimal cost of school deworming activities, but also shows the significant contribution (corresponding to a quarter of the entire cost of the programme) provided by health and education systems in endemic countries even in the case of drug donations and donor support of distribution costs. PMID:19926104

  15. Assimilating Non-linear Effects of Customized Large-Scale Climate Predictors on Downscaled Precipitation over the Tropical Andes

    Science.gov (United States)

    Molina, J. M.; Zaitchik, B. F.

    2016-12-01

    Recent findings considering high CO2 emission scenarios (RCP8.5) suggest that the tropical Andes may experience massive warming and a significant precipitation increase (decrease) during the wet (dry) seasons by the end of the 21st century. Variations in rainfall-streamflow relationships and seasonal crop yields significantly affect human development in this region and make local communities highly vulnerable to climate change and variability. We developed an expert-informed empirical statistical downscaling (ESD) algorithm to explore and construct robust global climate predictors to perform skillful RCP8.5 projections of in-situ March-May (MAM) precipitation required for impact modeling and adaptation studies. We applied our framework to a topographically complex region of the Colombian Andes where a number of previous studies have reported El Niño-Southern Oscillation (ENSO) as the main driver of climate variability. Supervised machine learning algorithms were trained with customized and bias-corrected predictors from NCEP reanalysis, and a cross-validation approach was implemented to assess both predictive skill and model selection. We found weak, statistically insignificant teleconnections between precipitation and lagged seasonal surface temperatures over the Niño 3.4 domain, which suggests that ENSO fails to explain MAM rainfall variability in the study region. In contrast, sea level pressure (SLP) series over American Samoa, likely associated with the South Pacific Convergence Zone (SPCZ), explain more than 65% of the precipitation variance. The best prediction skill was obtained with Selected Generalized Additive Models (SGAM), given their ability to capture linear/nonlinear relationships present in the data. While SPCZ-related series exhibited a positive linear effect on the rainfall response, SLP predictors in the north Atlantic and central equatorial Pacific showed nonlinear effects.
A multimodel (MIROC, CanESM2 and CCSM) ensemble of ESD projections revealed

  16. Estimation of the outlooks for large-scale transmutation of fission-produced iodine

    CERN Document Server

    Galkin, B Y; Kolyadin, A B; Kocherov, N P; Lyubtsev, R I; Hosov, A A; Rimskij-Korsakov, A A

    2002-01-01

    To obtain data necessary for estimating 129I transmutation efficiency in nuclear reactors, the effective neutron capture cross section of the 129I isotope in the neutron spectrum of the WWR-M reactor was determined. The calculated value of the 129I capture cross section, averaged over the neutron spectrum in the beryllium reflector of the WWR-M reactor, was 17.8 +- 3.2 barn. On the basis of experimental data and estimations it was shown that, in a neutron flux of 10^14 1/(cm^2 s), transmutation of the iodine-129 loaded in the course of one year can amount to approximately 25%

  17. Coarse-grain bandwidth estimation techniques for large-scale network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, E.

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  18. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  19. Correlation Lengths for Estimating the Large-Scale Carbon and Heat Content of the Southern Ocean

    Science.gov (United States)

    Mazloff, M. R.; Cornuelle, B. D.; Gille, S. T.; Verdy, A.

    2018-02-01

    The spatial correlation scales of oceanic dissolved inorganic carbon, heat content, and carbon and heat exchanges with the atmosphere are estimated from a realistic numerical simulation of the Southern Ocean. Biases in the model are assessed by comparing the simulated sea surface height and temperature scales to those derived from optimally interpolated satellite measurements. While these products do not resolve all ocean scales, they are representative of the climate scale variability we aim to estimate. Results show that constraining the carbon and heat inventory between 35°S and 70°S on time-scales longer than 90 days requires approximately 100 optimally spaced measurement platforms: approximately one platform every 20° longitude by 6° latitude. Carbon flux has slightly longer zonal scales, and requires a coverage of approximately 30° by 6°. Heat flux has much longer scales, and thus a platform distribution of approximately 90° by 10° would be sufficient. Fluxes, however, have significant subseasonal variability. For all fields, and especially fluxes, sustained measurements in time are required to prevent aliasing of the eddy signals into the longer climate scale signals. Our results imply a minimum of 100 biogeochemical-Argo floats are required to monitor the Southern Ocean carbon and heat content and air-sea exchanges on time-scales longer than 90 days. However, an estimate of formal mapping error using the current Argo array implies that in practice even an array of 600 floats (a nominal float density of about 1 every 7° longitude by 3° latitude) will result in nonnegligible uncertainty in estimating climate signals.
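The platform counts quoted above follow from dividing the domain by the correlation scales; a back-of-the-envelope sketch (the uniform-spacing, one-platform-per-correlation-cell assumption is ours):

```python
import math

def platforms_needed(lat_south, lat_north, lon_scale_deg, lat_scale_deg):
    """Platforms needed to cover 360 deg of longitude between two latitudes
    at one platform per correlation cell (lon_scale x lat_scale)."""
    n_lon = math.ceil(360 / lon_scale_deg)
    n_lat = math.ceil((lat_north - lat_south) / lat_scale_deg)
    return n_lon * n_lat

carbon_heat = platforms_needed(-70, -35, 20, 6)   # ~20 deg x 6 deg scales
heat_flux = platforms_needed(-70, -35, 90, 10)    # much longer flux scales
```

The first count comes out near the "approximately 100 optimally spaced platforms" in the abstract; the heat-flux scales imply far fewer.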

  20. Entropy Rate Estimates for Natural Language—A New Extrapolation of Compressed Large-Scale Corpora

    Directory of Open Access Journals (Sweden)

    Ryosuke Takahira

    2016-10-01

    One of the fundamental questions about human language is whether its entropy rate is positive. The entropy rate measures the average amount of information communicated per unit time. The question about the entropy of language dates back to experiments by Shannon in 1951, but in 1990 Hilberg raised doubts regarding the correct interpretation of these experiments. This article provides an in-depth empirical analysis, using 20 corpora of up to 7.8 gigabytes across six languages (English, French, Russian, Korean, Chinese, and Japanese), to conclude that the entropy rate is positive. To obtain estimates for data length tending to infinity, we use an extrapolation function given by an ansatz. Whereas some ansatzes were proposed previously, here we use a new stretched exponential extrapolation function that has a smaller error of fit. We conclude that the entropy rates of human languages are positive but approximately 20% smaller than without extrapolation. Although the entropy rate estimates depend on the kind of script, the exponent of the ansatz function turns out to be constant across different languages and governs the complexity of natural language in general. In other words, in spite of typological differences, all languages seem equally hard to learn, which partly confirms Hilberg's hypothesis.
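The compression side of such estimates can be illustrated with a stdlib compressor: the compressed size in bits per character upper-bounds the entropy rate and shrinks as more text is seen, which is what motivates extrapolating to infinite data length. Here zlib stands in for the stronger purpose-built compressors such a study would use:

```python
import random
import zlib

def bits_per_char(text):
    """Upper bound on the entropy rate from compressed size, in bits/character."""
    data = text.encode("utf-8")
    compressed = zlib.compress(data, 9)
    return 8 * len(compressed) / len(text)

repetitive = "abab" * 5000          # highly predictable text, near-zero entropy
rng = random.Random(0)              # text drawn uniformly from 8 symbols: ~3 bits/char
randomish = "".join(rng.choice("abcdefgh") for _ in range(20000))
rate_rep = bits_per_char(repetitive)
rate_rand = bits_per_char(randomish)
```

Plotting such rates against corpus length and fitting the extrapolation ansatz yields the infinite-length estimate the abstract refers to.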

  1. Robust node estimation and topology discovery for large-scale networks

    KAUST Repository

    Alouini, Mohamed-Slim

    2017-02-23

    Various examples are provided for node estimation and topology discovery for networks. In one example, a method includes receiving a packet having an identifier from a first node; adding the identifier to another transmission packet based on a comparison between the first identifier and existing identifiers associated with the other packet; adjusting a transmit probability based on the comparison; and transmitting the other packet based on a comparison between the transmit probability and a probability distribution. In another example, a system includes a network device that can add an identifier received in a packet to a list of existing identifiers and adjust a transmit probability based on a comparison between the identifiers, and transmit another packet based on a comparison between the transmit probability and a probability distribution. In another example, a method includes determining a quantity of sensor devices based on a plurality of identifiers received in a packet.

  2. Robust node estimation and topology discovery for large-scale networks

    KAUST Repository

    Alouini, Mohamed-Slim; Douik, Ahmed S.; Aly, Salah A.; Al-Naffouri, Tareq Y.

    2017-01-01

    Various examples are provided for node estimation and topology discovery for networks. In one example, a method includes receiving a packet having an identifier from a first node; adding the identifier to another transmission packet based on a comparison between the first identifier and existing identifiers associated with the other packet; adjusting a transmit probability based on the comparison; and transmitting the other packet based on a comparison between the transmit probability and a probability distribution. In another example, a system includes a network device that can add an identifier received in a packet to a list of existing identifiers and adjust a transmit probability based on a comparison between the identifiers, and transmit another packet based on a comparison between the transmit probability and a probability distribution. In another example, a method includes determining a quantity of sensor devices based on a plurality of identifiers received in a packet.
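The forwarding rule described can be sketched as a small simulation: each node appends its identifier to the packet and adjusts its transmit probability depending on whether the packet carried anything new. The exact probability-update rule below is an assumption for illustration, not the scheme from this record:

```python
import random

def forward(node_id, packet_ids, seen, p_transmit, rng):
    """One node's handling of a received packet of identifiers (toy model)."""
    new_ids = packet_ids - seen
    seen |= packet_ids                            # remember everything observed
    if new_ids:
        p_transmit = min(1.0, p_transmit * 1.5)   # new info: transmit more eagerly
    else:
        p_transmit = max(0.05, p_transmit * 0.5)  # nothing new: back off
    packet_ids = packet_ids | {node_id}           # append own identifier
    will_send = rng.random() < p_transmit         # compare against distribution
    return packet_ids, p_transmit, will_send

rng = random.Random(1)
packet, seen, p = {0}, set(), 0.5
for node in range(1, 6):                 # packet visits five further nodes in turn
    packet, p, _ = forward(node, set(packet), seen, p, rng)
estimated_nodes = len(packet)            # a sink infers cardinality from the identifiers
```

Counting distinct identifiers at the sink gives the cardinality estimate, while the identifier sets themselves reveal which nodes relayed the packet, i.e. the topology.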

  3. Estimating demographic parameters from large-scale population genomic data using Approximate Bayesian Computation

    Directory of Open Access Journals (Sweden)

    Li Sen

    2012-03-01

    Abstract Background: The Approximate Bayesian Computation (ABC) approach has been used to infer demographic parameters for numerous species, including humans. However, most applications of ABC still use limited amounts of data, from a small number of loci, compared to the large amount of genome-wide population-genetic data which has become available in the last few years. Results: We evaluated the performance of the ABC approach for three 'population divergence' models - similar to the 'isolation with migration' model - when the data consist of several hundred thousand SNPs typed for multiple individuals, by simulating data from known demographic models. The ABC approach was used to infer demographic parameters of interest, and we compared the inferred values to the true parameter values that were used to generate the hypothetical "observed" data. For all three models, the ABC approach inferred most demographic parameters quite well, with narrow credible intervals, for example, population divergence times and past population sizes, but some parameters were more difficult to infer, such as present-day population sizes and migration rates. We compared the ability of different summary statistics to infer demographic parameters, including haplotype- and LD-based statistics, and found that the accuracy of the parameter estimates can be improved by combining summary statistics that capture different parts of the information in the data. Furthermore, our results suggest that poor choices of prior distributions can in some circumstances be detected using ABC. Finally, increasing the amount of data beyond a few hundred loci substantially improves the accuracy of many parameter estimates using ABC.
    Conclusions: We conclude that the ABC approach can accommodate realistic genome-wide population genetic data, which may be difficult to analyze with full likelihood approaches, and that ABC can provide accurate and precise inference of demographic parameters from
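    The rejection-sampling core of the ABC approach described above can be illustrated with a minimal sketch. The toy "simulator" (an exponential per-locus summary statistic), the flat prior, and all parameter values below are hypothetical stand-ins for the coalescent simulations and haplotype/LD summary statistics used in the study:

```python
import random

def simulate(theta, n_loci=200, rng=None):
    # Hypothetical stand-in for a coalescent simulator: the per-locus
    # summary statistic is exponential with mean theta; we return its
    # average over loci as the dataset's summary statistic.
    rng = rng or random.Random(0)
    return sum(rng.expovariate(1.0 / theta) for _ in range(n_loci)) / n_loci

def abc_rejection(s_obs, n_draws=5000, eps=0.5, seed=1):
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.1, 10.0)      # draw a parameter from a flat prior
        s_sim = simulate(theta, rng=rng)    # simulate data under that draw
        if abs(s_sim - s_obs) < eps:        # keep draws that reproduce the data
            accepted.append(theta)
    return accepted                         # samples from the approximate posterior

# "Observed" data generated with a known true value theta = 2.0,
# mirroring the paper's use of data simulated from known models.
s_obs = simulate(2.0, rng=random.Random(42))
posterior = abc_rejection(s_obs)
```

    The accepted draws approximate the posterior; with informative summary statistics their mean should recover the true value, which is how the study assessed which parameters (divergence times, past sizes) are identifiable and which (current sizes, migration rates) are not.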

  4. Total cost estimates for large-scale wind scenarios in UK

    International Nuclear Information System (INIS)

    Dale, Lewis; Milborrow, David; Slark, Richard; Strbac, Goran

    2004-01-01

    The recent UK Energy White Paper suggested that the Government should aim to secure 20% of electricity from renewable sources by 2020. A number of estimates of the extra cost of such a commitment have been made, but these have not necessarily included all the relevant cost components. This analysis sets out to identify those components and to calculate the extra cost to the electricity consumer, assuming all the renewable electricity is sourced from wind energy. This enables one of the more controversial issues - the implications of wind intermittency - to be addressed. The bases of the assumptions associated with generating costs, extra balancing costs, and distribution and transmission system reinforcement costs are all clearly identified, and the total costs of a '20% wind' scenario are compared with a scenario where a similar amount of energy is generated by gas-fired plant. This enables the extra costs of the renewables scenario to be determined. The central estimate of the extra cost to electricity consumers is just over 0.3 p/kWh in current prices (around 5% extra on average domestic unit prices). Sensitivity analyses examine the implications of differing assumptions. The extra cost would rise if the capital costs of wind generation fall more slowly than anticipated, but would fall if gas prices rise more rapidly than assumed, or if wind plant are more productive. Even if it is assumed that wind has no capacity displacement value, the added cost to the electricity consumer rises by less than 0.1 p/kWh. It is concluded that there does not appear to be any technical reason why a substantial proportion of the country's electricity requirements could not be delivered by wind

  5. Reconstructions of spring/summer precipitation for the Eastern Mediterranean from tree-ring widths and its connection to large-scale atmospheric circulation

    Energy Technology Data Exchange (ETDEWEB)

    Touchan, Ramzi; Funkhouser, Gary; Hughes, Malcolm K. [The University of Arizona, Laboratory of Tree-Ring Research, Tucson, AZ (United States); Xoplaki, Elena; Luterbacher, Juerg [University of Bern, Institute of Geography and NCCR Climate, Bern (Switzerland); Erkan, Nesat [Southwest Anatolia Forest Research Institute (SAFRI), Antalya (Turkey); Akkemik, Uenal [University of Istanbul, Faculty of Forestry, Department of Forest Botany, Bahcekoey-Istanbul (Turkey); Stephan, Jean [Ministry of Agriculture, Forestry Department, Beirut (Lebanon)

    2005-07-01

    This study represents the first large-scale systematic dendroclimatic sampling focused on developing chronologies from different species in the eastern Mediterranean region. Six reconstructions were developed from chronologies ranging in length from 115 to 600 years. The first reconstruction (1885-2000) was derived from principal components (PCs) of 36 combined chronologies. The remaining five, 1800-2000, 1700-2000, 1600-2000, 1500-2000 and 1400-2000, were developed from PCs of 32, 18, 14, 9, and 7 chronologies, respectively. Calibration and verification statistics for the period 1931-2000 show good levels of skill for all reconstructions. The longest period of consecutive dry years, defined as those with less than 90% of the mean of the observed May-August precipitation, was 5 years (1591-1595) and occurred only once during the last 600 years. The longest reconstructed wet period was also 5 years (1601-1605 and 1751-1755). No long-term trends were found in May-August precipitation during the last few centuries. Regression maps are used to identify the influence of large-scale atmospheric circulation on regional precipitation. In general, tree-ring indices are influenced by May-August precipitation, which is driven by anomalous below (above) normal pressure at all atmospheric levels and by convection (subsidence) and small pressure gradients at sea level. These atmospheric conditions also control the surface air temperature anomaly distribution, which indicates below (above) normal values in the southern regions and warmer (cooler) conditions north of around 40°N. A compositing technique is used to extract information on large-scale climate signals from extreme wet and dry summers for the second half of the twentieth century and an independent reconstruction over the last 237 years.
    Similar main modes of atmospheric patterns and surface air temperature distribution related to extreme dry and wet summers were identified both for the most recent 50 years and the last

  6. Estimating heterotrophic respiration at large scales: Challenges, approaches, and next steps

    Science.gov (United States)

    Bond-Lamberty, Ben; Epron, Daniel; Harden, Jennifer W.; Harmon, Mark E.; Hoffman, Forrest; Kumar, Jitendra; McGuire, Anthony David; Vargas, Rodrigo

    2016-01-01

    Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of “Decomposition Functional Types” (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing modelers and experimentalists to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present an example clustering analysis to show how annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from (but complementary to) already-existing PFTs. A similar analysis incorporating observational data could form the basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with rigorous testing of analytical results; using point measurements and realistic forcing variables to constrain process-based models; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR.
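    The clustering step mentioned above (grouping annual HR by biotic and abiotic factors) can be sketched with a tiny k-means example. The two synthetic "site" groups, the standardized feature choices, and all values below are illustrative assumptions, not the paper's actual analysis or data:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Plain k-means: assign each site to its nearest centre, then update centres.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

# Hypothetical standardized features per site:
# [mean annual temperature, annual precipitation, annual HR]
rng = np.random.default_rng(1)
cold_dry = rng.normal([-1.0, -1.0, -1.0], 0.2, size=(50, 3))  # low-flux sites
warm_wet = rng.normal([+1.0, +1.0, +1.0], 0.2, size=(50, 3))  # high-flux sites
labels = kmeans(np.vstack([cold_dry, warm_wet]), k=2)
```

    With real observations in place of the synthetic sites, such clusters would be candidate Decomposition Functional Types, each sharing similar HR magnitude and climate sensitivity.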

  7. Spring soil moisture-precipitation feedback in the Southern Great Plains: How is it related to large-scale atmospheric conditions?

    KAUST Repository

    Su, Hua; Yang, Zong-Liang; Dickinson, Robert E.; Wei, Jiangfeng

    2014-02-22

    The Southern Great Plains (SGP) has been shown to be a region of significant soil moisture-precipitation (S-P) coupling. However, how strongly evapotranspiration (ET) can affect regional precipitation remains largely unclear, impeding a full grasp of the S-P feedback in that area. The current study seeks to unravel, for a spring month (April), the potential role played by large-scale atmospheric conditions in shaping the S (ET)-P feedback. Our regional climate modeling experiments demonstrate that the presence of anomalous low (high) pressure and cyclonic (anticyclonic) flows in the upper/middle troposphere over the relevant areas is associated with the strongest (weakest) positive S-P feedback in the SGP. Their impacts are interpreted in terms of large-scale atmospheric dynamical disturbances, including the intensity and location of synoptic eddies. Further analyses of the vertical velocity fields corroborate these interpretations. In addition, the relationship between lower-tropospheric moisture conditions (including winds) and the feedback composites is evaluated. Key points: the S-P feedback strength in the SGP in April varies inter-annually; atmospheric dynamic features significantly affect the feedback strength; composite moisture conditions are related to the atmospheric circulation structure. ©2014. American Geophysical Union. All Rights Reserved.

  9. Decentralized State-Observer-Based Traffic Density Estimation of Large-Scale Urban Freeway Network by Dynamic Model

    Directory of Open Access Journals (Sweden)

    Yuqi Guo

    2017-08-01

    In order to estimate traffic densities in a large-scale urban freeway network in an accurate and timely fashion when traffic sensors do not cover the network completely, so that only local measurement data can be utilized, this paper proposes a decentralized state observer approach based on a macroscopic traffic flow model. Firstly, using the well-known cell transmission model (CTM), the urban freeway network is modeled as a distributed system. Secondly, based on this model, a decentralized observer is designed. With the help of a Lyapunov function and S-procedure theory, the observer gains are computed using the linear matrix inequality (LMI) technique, so that the traffic densities of the whole road network can be estimated by the designed observer. Finally, the method is applied to the outer ring of Beijing's second ring road, and experimental results demonstrate the effectiveness and applicability of the proposed approach.
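    The CTM dynamics underlying the observer can be sketched as follows. The road parameters, the single link of cells, and especially the fixed-gain correction step are illustrative assumptions; the paper's actual observer gains come from the Lyapunov/LMI design, not from the constant gain used here:

```python
def ctm_step(rho, inflow, vf=20.0, w=5.0, rho_max=0.2, q_max=2.0, dt=1.0, dx=500.0):
    # One cell-transmission-model update: inter-cell flow is the minimum of
    # upstream demand and downstream supply (densities in veh/m, flows in veh/s).
    demand = [min(vf * r, q_max) for r in rho]
    supply = [min(w * (rho_max - r), q_max) for r in rho]
    flows = [min(inflow, supply[0])]                                # boundary inflow
    flows += [min(demand[i], supply[i + 1]) for i in range(len(rho) - 1)]
    flows.append(demand[-1])                                        # free outflow
    return [r + dt / dx * (flows[i] - flows[i + 1]) for i, r in enumerate(rho)]

def observer_step(est, meas, measured_cells, inflow, gain=0.5):
    # Predict with the model, then correct only at cells that have sensors
    # (a fixed-gain stand-in for the LMI-designed decentralized observer).
    est = ctm_step(est, inflow)
    return [r + gain * (meas[i] - r) if i in measured_cells else r
            for i, r in enumerate(est)]

# True densities vs. an observer that only sees cells 0 and 5.
truth = [0.05] * 10
est = [0.0] * 10
for _ in range(600):
    truth = ctm_step(truth, inflow=0.4)
    est = observer_step(est, truth, measured_cells={0, 5}, inflow=0.4)
```

    Even with sparse sensing, the model propagates information into unmeasured cells and the estimate tracks the true density profile.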

  10. A Stream Tilling Approach to Surface Area Estimation for Large Scale Spatial Data in a Shared Memory System

    Directory of Open Access Journals (Sweden)

    Liu Jiping

    2017-12-01

    Surface area estimation is a widely used tool for resource evaluation in the physical world. When processing large-scale spatial data, input/output (I/O) can easily become the bottleneck in parallelizing the algorithm, due to limited physical memory and the very slow disk transfer rate. In this paper, we propose a stream tilling approach to surface area estimation that first decomposes a spatial data set into tiles with topological expansions. With these tiles, the one-to-one mapping between the input and the computing process is broken. We then realize a streaming framework for scheduling the I/O processes and computing units. Each computing unit encapsulates the same copy of the estimation algorithm, and multiple asynchronous computing units can work individually in parallel. Finally, our experiments demonstrate that stream tilling estimation can efficiently alleviate the heavy pressure of I/O-bound work, and the measured speedups after optimization greatly outperform directly parallel versions on shared memory systems with multi-core processors.
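    The tile-and-compute idea can be sketched for a gridded elevation model: each worker triangulates its band of rows (sharing one boundary row with the next band, a minimal analogue of the "topological expansion") and sums triangle areas. The thread pool, band size, and DEM below are illustrative assumptions, not the paper's scheduling framework:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def tri_area(p, q, r):
    # Area of a 3-D triangle via the cross product.
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    c = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    return 0.5 * math.sqrt(sum(x * x for x in c))

def band_area(z, r0, r1, cell=1.0):
    # Surface area of the grid cells between rows r0..r1-1 (two triangles each).
    total = 0.0
    for i in range(r0, r1):
        for j in range(len(z[0]) - 1):
            p00 = (j*cell, i*cell, z[i][j])
            p10 = ((j+1)*cell, i*cell, z[i][j+1])
            p01 = (j*cell, (i+1)*cell, z[i+1][j])
            p11 = ((j+1)*cell, (i+1)*cell, z[i+1][j+1])
            total += tri_area(p00, p10, p11) + tri_area(p00, p11, p01)
    return total

def surface_area(z, n_workers=4, cell=1.0):
    rows = len(z) - 1                       # number of cell rows in the DEM
    step = max(1, rows // n_workers)
    bands = [(r, min(r + step, rows)) for r in range(0, rows, step)]
    with ThreadPoolExecutor(n_workers) as pool:
        return sum(pool.map(lambda b: band_area(z, b[0], b[1], cell), bands))
```

    In a true streaming version, bands would be read from disk on demand rather than held in memory, which is exactly the I/O scheduling the paper addresses.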

  11. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation.

    Science.gov (United States)

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying

    2016-01-01

    Network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promise in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has so far been limited by the technical challenges of its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation from the precision matrix estimated via the Constrained L1-minimization Approach (CLIME), a recently developed statistical method that is more efficient and performs better than existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool, allowing users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than existing methods, an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method performs comparably to or better than existing methods in network estimation. We applied the proposed partial correlation method to investigate resting-state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting that these connections were likely caused by global effects or common connections to other nodes.
    Based on partial correlation, we find that the most significant
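    The link between the precision matrix and partial correlation used above can be sketched directly. For simplicity this sketch inverts the sample covariance instead of running CLIME (which adds an L1 constraint to obtain a sparse precision matrix), and the three-node "chain network" is synthetic:

```python
import numpy as np

def partial_correlation(X):
    # Partial correlation between nodes i and j, conditioning on all other
    # nodes: rescaled off-diagonal entries of the precision matrix.
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Chain "network" x -> y -> z: x and z are marginally correlated only
# through y, so their partial correlation should be near zero.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + rng.normal(size=5000)
z = y + rng.normal(size=5000)
X = np.column_stack([x, y, z])

marginal_xz = np.corrcoef(X, rowvar=False)[0, 2]
partial_xz = partial_correlation(X)[0, 2]
```

    This is the effect the paper reports at scale: full correlation shows an x-z "connection" that partial correlation correctly removes as an indirect path.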

  12. An Investigation of the High Efficiency Estimation Approach of the Large-Scale Scattered Point Cloud Normal Vector

    Directory of Open Access Journals (Sweden)

    Xianglin Meng

    2018-03-01

    The normal vector estimation of a large-scale scattered point cloud (LSSPC) plays an important role in point-based shape editing. However, normal vector estimation for LSSPC cannot meet the great challenge posed by the sharp increase in point cloud size, mainly because of its low computational efficiency. In this paper, a novel fast method based on bi-linear interpolation is reported for normal vector estimation of LSSPC. We divide the point set into many small cubes to speed up the local point search and construct interpolation nodes on the isosurface expressed by the point cloud. Having calculated the normal vectors of these interpolation nodes, a normal vector bi-linear interpolation of the points in each cube is realized. The proposed approach has the merits of accuracy, simplicity, and high efficiency, because the algorithm only needs to search neighbors of, and calculate normal vectors for, the interpolation nodes, which are usually far fewer than the points in the cloud. Experimental results on several real and simulated point sets show that our method is over three times faster than the Elliptic Gabriel Graph-based method, with an average deviation of less than 0.01 mm.
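    The bi-linear interpolation step at the heart of the method can be sketched for four interpolation nodes: a point's normal is blended from the node normals according to its local coordinates and then renormalised. The node layout and coordinates below are illustrative, not the paper's cube construction:

```python
import math

def interp_normal(n00, n10, n01, n11, u, v):
    # Bi-linear blend of four unit node normals at local coordinates
    # (u, v) in [0, 1]^2, renormalised so the result is again a unit vector.
    n = [(1-u)*(1-v)*n00[k] + u*(1-v)*n10[k] + (1-u)*v*n01[k] + u*v*n11[k]
         for k in range(3)]
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# Blending an x-facing and a z-facing normal halfway gives a 45-degree normal.
blended = interp_normal((1, 0, 0), (0, 0, 1), (1, 0, 0), (0, 0, 1), 0.5, 0.25)
```

    Because only the sparse nodes need a full neighborhood search, every other point's normal costs just this constant-time blend, which is where the reported speedup comes from.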

  13. Large-scale wind power integration and wholesale electricity trading benefits: Estimation via an ex post approach

    International Nuclear Information System (INIS)

    Gil, Hugo A.; Gomez-Quiles, Catalina; Riquelme, Jesus

    2012-01-01

    The integration of large-scale wind power has brought a series of challenges to the power industry, but at the same time a number of benefits are being realized. Among those, the ability of wind power to cause a decline in electricity market prices has been recognized. In quantifying this effect, some models used in recent years are based on simulations of the market supply side and the price clearing process. The accuracy of the estimates depends on the quality of the input data, the veracity of the adopted scenarios, and the rigor of the solution technique. In this work, a series of econometric techniques based on actual ex post wind power and electricity price data are implemented to estimate the impact of region-wide wind power integration on the local electricity market clearing prices and the trading savings that stem from this effect. The model is applied to the case of Spain, where the estimated savings are compared against actual credit and bonus expenses to ratepayers. The implications and extent of these results for current and future renewable energy policy-making are discussed. - Highlights: ► Wholesale electricity market trading benefits of wind power are quantified. ► Actual wind power forecast-based bids and electricity price data from Spain are used. ► Different econometric tools are used and compared for improved estimation accuracy. ► Estimated benefits outweigh the current credit overhead paid to wind farms in Spain. ► An economically efficient benefit surplus allocation framework is proposed.
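    A minimal ex post estimate of the price effect can be sketched as a least-squares regression of the market price on wind output and demand. The synthetic data and every coefficient below are illustrative assumptions, not the Spanish market results or the paper's econometric specification:

```python
import numpy as np

# Hypothetical hourly observations: price responds positively to demand
# and negatively to wind output (the merit-order effect), plus noise.
rng = np.random.default_rng(0)
n = 1000
wind = rng.uniform(0, 10, n)        # GW of wind output
demand = rng.uniform(20, 40, n)     # GW of demand
price = 20 + 1.5 * demand - 2.0 * wind + rng.normal(0, 2, n)

# Ordinary least squares: price = b0 + b1 * demand + b2 * wind
A = np.column_stack([np.ones(n), demand, wind])
coef, *_ = np.linalg.lstsq(A, price, rcond=None)
# coef[2] estimates the price decline per extra GW of wind.
```

    Multiplying the recovered wind coefficient by traded volumes gives the kind of trading-savings figure that the paper compares against the support payments made to wind farms.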

  14. Magma viscosity estimation based on analysis of erupted products. Potential assessment for large-scale pyroclastic eruptions

    International Nuclear Information System (INIS)

    Takeuchi, Shingo

    2010-01-01

    After the formulation of guidelines for volcanic hazards in site evaluation for nuclear installations (e.g. JEAG4625-2009), it is necessary to establish appropriate methods for assessing the potential for large-scale pyroclastic eruptions at long-dormant volcanoes, one of the most hazardous volcanic phenomena for the safety of such installations. In considering volcanic dormancy, magma eruptability is an important concept. Magma eruptability is dominantly controlled by magma viscosity, which can be estimated from petrological analysis of erupted materials. Therefore, viscosity estimates for magmas erupted in past eruptions provide important information for assessing future activity at hazardous volcanoes. To show the importance of magma viscosity in the concept of magma eruptability, this report reviews dike propagation processes from a magma chamber and the nature of magma viscosity. Magma viscosities at the pre-eruptive conditions of magma chambers were compiled from previous petrological studies of past eruptions in Japan. There are only 16 examples of eruptions, at 9 volcanoes, satisfying the data requirements for magma viscosity estimation. Estimated magma viscosities range from 10² to 10⁷ Pa·s for basaltic to rhyolitic magmas. Most examples fall below the dike-propagation limit of magma viscosity (ca. 10⁶ Pa·s) estimated from a dike propagation model. Magmas more viscous (ca. 10⁷ Pa·s) than the dike-propagation limit are considered to lose eruptability, that is, the ability to form dikes and initiate eruptions. However, in some cases, small precursory eruptions of less viscous magmas occurred just before climactic eruptions of the highly viscous magmas, suggesting that precursory dike propagation by the less viscous magmas induced the subsequent eruptions of the highly viscous magmas (ca. 10⁷ Pa·s). (author)

  15. Estimation of Truck Trips on Large-Scale Irrigation Project: A Combinatory Input-Output Commodity-Based Approach

    Directory of Open Access Journals (Sweden)

    Ackchai Sirikijpanichkul

    2015-01-01

    For agriculture-based countries, transportation infrastructure should not only accommodate general traffic but also the transportation of crops and agricultural products during harvest seasons. Most past research has focused on developing truck trip estimation techniques for urban, statewide, or nationwide freight movement, neglecting the importance of rural freight movement, which contributes to pavement deterioration on rural roads, especially during harvest seasons. Recently, the Thai Government initiated a plan to construct a network of reservoirs within the northeastern region, aiming to improve the existing irrigation system, particularly in areas where more effective irrigation is needed. This is expected to bring new opportunities for expanding cultivation areas, increasing economies of scale, and enlarging the market extent of the area. As a consequence, its effects on truck trip generation need to be investigated to assure the service quality of the related transportation infrastructure. This paper proposes a combinatory input-output commodity-based approach to estimate truck trips on the rural highway infrastructure network. The large-scale irrigation project for northeastern Thailand is demonstrated as a case study.
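    The final commodity-to-trips step of a commodity-based approach can be sketched as a simple payload conversion applied to the tonnages produced by the input-output model. The crop names and payload figures below are illustrative assumptions:

```python
import math

# Assumed truck payloads in tonnes per trip (hypothetical values).
PAYLOAD_T = {"rice": 15.0, "sugarcane": 18.0, "cassava": 15.0}

def truck_trips(production_t, payload_t=PAYLOAD_T):
    # Convert commodity tonnage (the output of an input-output model)
    # into loaded truck trips; partial loads still require a full trip.
    return {crop: math.ceil(tonnes / payload_t[crop])
            for crop, tonnes in production_t.items()}
```

    For example, `truck_trips({"rice": 1200.0, "sugarcane": 900.0})` yields 80 rice trips and 50 sugarcane trips; assigning those trips to harvest-season weeks and origin-destination pairs is what loads the rural highway network in the case study.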

  16. Towards a Quantitative Use of Satellite Remote Sensing in Crop Growth Models for Large Scale Agricultural Production Estimate (Invited)

    Science.gov (United States)

    Defourny, P.

    2013-12-01

    The development of better agricultural monitoring capabilities is clearly considered a critical step for strengthening food production information and market transparency, thanks to timely information about crop status, crop area, and yield forecasts. The documentation of global production will help tackle price volatility by allowing local, national, and international operators to make decisions and anticipate market trends with reduced uncertainty. Several operational agricultural monitoring systems currently operate at national and international scales. Most are based on methods derived from pioneering experiences completed some decades ago, and use remote sensing to qualitatively compare one year to others to estimate the risk of deviation from a normal year. The GEO Agricultural Monitoring Community of Practice described the current monitoring capabilities at the national and global levels, and an overall diagram summarized the diverse relationships between satellite EO and agricultural information. There is now a large gap between the current operational large-scale systems and the scientific state of the art in crop remote sensing, probably because the latter has mainly focused on local studies. The poor availability of suitable in-situ and satellite data over extended areas hampers large-scale demonstrations, preventing the much-needed upscaling research effort. For cropland extent, this paper reports a recent research achievement using the full ENVISAT MERIS 300 m archive in the context of the ESA Climate Change Initiative. A flexible combination of classification methods, depending on the region of the world, allows mapping the land cover as well as the global croplands at 300 m for the period 2008-2012. This wall-to-wall product is then compared with the FP7 Geoland-2 results obtained using a Landsat-based sampling strategy over the IGADD countries.
    On the other hand, the vegetation indices and the biophysical variables

  17. Climatological studies on precipitation features and large-scale atmospheric fields on the heavy rainfall days in the eastern part of Japan from the Baiu to midsummer season

    Science.gov (United States)

    Matsumoto, Kengo; Kato, Kuranoshin; Otani, Kazuo

    2017-04-01

    In East Asia, the significant subtropical frontal zone called the Meiyu (in China) / Baiu (in Japan) front appears in early summer (just before midsummer), and huge rainfall totals are brought by the frequent appearance of "heavy rainfall days" (referred to as HRDs hereafter), mainly in its western part. On the other hand, the large-scale fields around the front in eastern Japan are rather different from those in western Japan, yet the total precipitation in eastern Japan is still considerable compared to that in other midlatitude regions. Thus, it is also interesting to examine the rainfall characteristics and large-scale atmospheric fields on HRDs (with more than 50 mm/day) in eastern Japan in the mature stage of the Baiu season (16 June-15 July), together with those in midsummer (1-31 August). Based on this scientific background, further analyses were performed in this study, mainly with daily and hourly precipitation data and the NCEP/NCAR re-analysis data from 1971 to 2010, building on our previous results (e.g., EGU2015). As reported at EGU2014 and 2015, about half of the HRDs at Tokyo (eastern Japan) were related to typhoons even in the Baiu season. Interestingly, half of the HRDs were characterized by a large contribution from moderate rain of less than 10 mm/h. By contrast, the precipitation on HRDs at Tokyo in midsummer was mainly brought by intense rainfall of more than 10 mm/h, in association with typhoons. In the present study, we examined the composite meridional structure of the rainfall area along 140°E. In the pattern associated only with a typhoon in the Baiu season (Pattern A), the heavy rainfall area (more than 50 mm/day), with a large contribution from intense rain (stronger than 10 mm/h), showed a rather wide meridional extension; the area was characterized by persistent, intermittently enhanced rainfall. In the pattern associated with a typhoon and a front (Pattern B), while the contribution ratio of the rainfall

  18. Estimating Tropical Cyclone Precipitation from Station Observations

    Institute of Scientific and Technical Information of China (English)

    REN Fumin; WANG Yongmei; WANG Xiaoling; LI Weijing

    2007-01-01

    In this paper, an objective technique for estimating tropical cyclone (TC) precipitation from station observations is proposed. Based on a comparison between the Original Objective Method (OOM) and the Expert Subjective Method (ESM), the Objective Synoptic Analysis Technique (OSAT) for partitioning TC precipitation was developed by analyzing the western North Pacific (WNP) TC historical track and daily precipitation datasets. As an objective counterpart of the ESM, OSAT overcomes the main problems of OOM by changing two fixed parameters in OOM - the thresholds for the distance of the absolute TC precipitation (D0) and the TC size (D1) - into variable parameters. Case verification for OSAT was also carried out using CMORPH (Climate Prediction Center MORPHing technique) daily precipitation measurements, NOAA's combined satellite precipitation measurement system. This indicates that OSAT is capable of distinguishing simultaneous TC precipitation rain-belts from those associated with different TCs or with middle-latitude weather systems.
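    The distance-threshold idea behind these partitioning methods can be sketched as follows. OSAT makes the thresholds variable, whereas this sketch uses a single fixed D0; the station identifiers, coordinates, and rainfall values are made up for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on a sphere of Earth radius.
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def partition_tc_rain(stations, tc_center, d0_km=500.0):
    # Attribute a station's daily rainfall to the TC if the station lies
    # within d0_km of the TC centre; otherwise leave it unattributed.
    tc_rain, other_rain = {}, {}
    for sid, (lat, lon, rain) in stations.items():
        bucket = tc_rain if haversine_km(lat, lon, *tc_center) <= d0_km else other_rain
        bucket[sid] = rain
    return tc_rain, other_rain

# Hypothetical stations (lat, lon, daily rain in mm) and TC centre.
stations = {"ST_NEAR": (25.0, 130.0, 80.0), "ST_FAR": (25.0, 150.0, 12.0)}
tc_rain, other_rain = partition_tc_rain(stations, tc_center=(25.0, 130.0))
```

    OSAT's refinement is to let D0 and the TC-size threshold D1 vary with the storm and to use the structure of the rain belts, rather than distance alone, when several systems coexist.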

  19. Satellite based radar interferometry to estimate large-scale soil water depletion from clay shrinkage: possibilities and limitations

    NARCIS (Netherlands)

    Brake, te B.; Hanssen, R.F.; Ploeg, van der M.J.; Rooij, de G.H.

    2013-01-01

    Satellite-based radar interferometry is a technique capable of measuring small surface elevation changes at large scales and with a high resolution. In vadose zone hydrology, it has been recognized for a long time that surface elevation changes due to swell and shrinkage of clayey soils can serve as

  20. What causes the differences between national estimates of carbon emissions from forest management and large-scale models?

    NARCIS (Netherlands)

    Groen, T.A.; Verkerk, P.J.; Böttcher, H.; Grassi, G.; Cienciala, E.; Black, K.G.; Fortin, M.J.; Koethke, M.; Lethonen, A.; Nabuurs, G.J.; Petrova, L.; Blujdea, V.

    2013-01-01

    Under the United Nations Framework Convention for Climate Change all Parties have to report on carbon emissions and removals from the forestry sector. Each Party can use its own approach and country specific data for this. Independently, large-scale models exist (e.g. EFISCEN and G4M as used in this

  1. Novel probabilistic and distributed algorithms for guidance, control, and nonlinear estimation of large-scale multi-agent systems

    Science.gov (United States)

    Bandyopadhyay, Saptarshi

    Multi-agent systems are widely used for constructing a desired formation shape, exploring an area, surveillance, coverage, and other cooperative tasks. This dissertation introduces novel algorithms in the three main areas of shape formation, distributed estimation, and attitude control of large-scale multi-agent systems. In the first part of this dissertation, we address the problem of shape formation for thousands to millions of agents. Here, we present two novel algorithms for guiding a large-scale swarm of robotic systems into a desired formation shape in a distributed and scalable manner. These probabilistic swarm guidance algorithms adopt an Eulerian framework, where the physical space is partitioned into bins and the swarm's density distribution over each bin is controlled using tunable Markov chains. In the first algorithm - Probabilistic Swarm Guidance using Inhomogeneous Markov Chains (PSG-IMC) - each agent determines its bin transition probabilities using a time-inhomogeneous Markov chain that is constructed in real-time using feedback from the current swarm distribution. This PSG-IMC algorithm minimizes the expected cost of the transitions required to achieve and maintain the desired formation shape, even when agents are added to or removed from the swarm. The algorithm scales well with a large number of agents and complex formation shapes, and can also be adapted for area exploration applications. In the second algorithm - Probabilistic Swarm Guidance using Optimal Transport (PSG-OT) - each agent determines its bin transition probabilities by solving an optimal transport problem, which is recast as a linear program. In the presence of perfect feedback of the current swarm distribution, this algorithm minimizes the given cost function, guarantees faster convergence, reduces the number of transitions for achieving the desired formation, and is robust to disturbances or damages to the formation. 
We demonstrate the effectiveness of these two proposed swarm
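    The Eulerian idea behind probabilistic swarm guidance (steer the swarm's bin-occupancy distribution with a Markov chain whose stationary distribution is the target shape) can be sketched with a fixed Metropolis-Hastings chain. The bin graph and target below are illustrative; PSG-IMC additionally rebuilds the chain online from feedback on the current swarm distribution:

```python
import numpy as np

def mh_transition_matrix(target, neighbors):
    # Column-stochastic Metropolis-Hastings chain whose stationary
    # distribution is `target`; proposals are uniform over bin neighbours.
    n = len(target)
    M = np.zeros((n, n))
    for i in range(n):
        for j in neighbors[i]:
            q_ij = 1.0 / len(neighbors[i])
            q_ji = 1.0 / len(neighbors[j])
            M[j, i] = q_ij * min(1.0, (target[j] * q_ji) / (target[i] * q_ij))
        M[i, i] = 1.0 - M[:, i].sum()   # stay put with the leftover probability
    return M

# Four bins on a ring; desired swarm density over the bins.
target = np.array([0.4, 0.3, 0.2, 0.1])
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
M = mh_transition_matrix(target, neighbors)

density = np.full(4, 0.25)              # swarm starts uniformly spread
for _ in range(500):
    density = M @ density               # each agent hops independently
```

    Because each agent samples its next bin from its own column of M, the scheme is fully decentralized and indifferent to which individual agent ends up where, which is what lets it scale to very large swarms.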

  2. Centralized manure digestion. Selection of locations and estimation of costs of large-scale manure storage application

    International Nuclear Information System (INIS)

    1995-03-01

    A study was carried out to assess the possibilities and consequences of using existing Dutch large-scale manure silos in centralised anaerobic digestion (CAD) plants for manure and energy-rich organic wastes. Reconstruction of these large-scale manure silos into digesters for a CAD plant is not self-evident, owing to the high height/diameter ratio of the silos and the extra investments required for additional roofing, insulation, mixing, and heating facilities. From an inventory and selection of large-scale manure silos with a storage capacity above 1,500 m³, it appeared that there are 21 locations in The Netherlands that qualify for the realisation of a CAD plant with a processing capacity of 100 m³ of biomass (80% manure, 20% additives) per day. These locations are found in particular in the 'shortage areas' for manure fertilisation in the Dutch provinces of Groningen and Drenthe. Three of these 21 locations with large-scale silos are considered the most suitable for realisation of a large-scale CAD plant. The selection is based on an optimal scale for a CAD plant of 300 m³ of material (80% manure, 20% additives) processed per day and the most suitable consuming markets for the biogas produced at the plant. The three locations are Middelharnis, Veendam, and Klazienaveen. Applying the conditions used in this study, and accounting for all costs of transporting manure, additives, and end-product, including the costs of the storage facilities, a break-even operation might be realised at a minimum income for the additives of approximately 50 Dutch guilders per m³ (including VAT). This income price is considerably lower than the prevailing costs for tipping or processing organic wastes in The Netherlands. The study revealed that a break-even exploitation of a large-scale CAD plant for processing manure with energy-rich additives is possible. (Abstract Truncated)

  3. Radar-Derived Quantitative Precipitation Estimation Based on Precipitation Classification

    Directory of Open Access Journals (Sweden)

    Lili Yang

    2016-01-01

    A method for improving radar-derived quantitative precipitation estimation is proposed. Tropical vertical profiles of reflectivity (VPRs) are first determined from multiple VPRs. Upon identifying a tropical VPR, the event can be further classified as either tropical-stratiform or tropical-convective rainfall by a fuzzy logic (FL) algorithm. Based on the precipitation-type fields, the reflectivity values are converted into rainfall rates using Z-R relationships. To evaluate the performance of this rainfall classification scheme, three experiments were conducted using three months of data and two study cases. In Experiment I, the Weather Surveillance Radar-1988 Doppler (WSR-88D) default Z-R relationship was applied. In Experiment II, the precipitation regime was separated into convective and stratiform rainfall using the FL algorithm, and the corresponding Z-R relationships were used. In Experiment III, the precipitation regime was separated into convective, stratiform, and tropical rainfall, and the corresponding Z-R relationships were applied. The results show that the rainfall rates obtained from all three experiments match the gauge observations closely; Experiment II partly corrected the underestimation present in Experiment I, and Experiment III reduced this underestimation significantly, generating the most accurate radar rain-rate estimates of the three experiments.
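    The regime-dependent Z-R conversion at the heart of this method can be sketched as follows. The coefficient pairs below are widely quoted textbook defaults (Marshall-Palmer stratiform, WSR-88D convective, Rosenfeld tropical), not the relationships fitted in this study, so treat them as illustrative.

```python
# Illustrative Z-R coefficient pairs (Z = a * R**b); these are common textbook
# defaults, not the relationships derived in the paper.
ZR_COEFFS = {
    "stratiform": (200.0, 1.6),   # Marshall-Palmer
    "convective": (300.0, 1.4),   # WSR-88D default
    "tropical":   (250.0, 1.2),   # Rosenfeld tropical
}

def rain_rate(dbz, ptype):
    """Invert Z = a * R**b to get rain rate R (mm/h) from reflectivity (dBZ)."""
    a, b = ZR_COEFFS[ptype]
    z_linear = 10.0 ** (dbz / 10.0)   # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)

# The same 40 dBZ echo maps to a noticeably higher rain rate under the tropical
# relationship, which is why misclassifying tropical rain causes underestimation.
for ptype in ZR_COEFFS:
    print(ptype, round(rain_rate(40.0, ptype), 1))
```

Classifying the precipitation type first, then choosing the matching (a, b) pair per radar pixel, is the essence of Experiments II and III.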

  4. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4% of world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen, so large scale hydrogen production plants will be needed. In this context, the development of low cost, large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and centre of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared, and a state of the art of the currently available electrolysis modules was drawn up. A review of the large scale electrolysis plants installed around the world was also carried out, and the main projects related to large scale electrolysis were listed. The economics of large scale electrolysers were discussed, and the influence of energy prices on the hydrogen production cost by large scale electrolysis was evaluated. (authors)

  5. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    Science.gov (United States)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to a local modeling technique based on a new idea called “Just-In-Time (JIT) modeling”. To apply JIT modeling to large databases online, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. To predict the long-term state of a plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
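    The core JIT idea (retrieve neighboring samples from the stored database at query time, then fit a model on them only) can be sketched as below. This is a minimal lazy-learning illustration under plain k-NN retrieval; the actual LOM adds stepwise input selection and quantization to make retrieval efficient, and ESP-LOM chains such predictions sequentially.

```python
import numpy as np

def jit_predict(X_db, y_db, x_query, k=20):
    """Just-in-time (lazy) local modeling sketch: retrieve the k nearest
    database samples to the query and fit a local linear model on them only."""
    dist = np.linalg.norm(X_db - x_query, axis=1)
    idx = np.argsort(dist)[:k]                      # neighboring-data retrieval
    Xn = np.c_[np.ones(k), X_db[idx]]               # local design matrix with bias
    w, *_ = np.linalg.lstsq(Xn, y_db[idx], rcond=None)
    return float(np.r_[1.0, x_query] @ w)           # evaluate local model at query

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))               # stored plant "database"
y = np.sin(X[:, 0])                                 # nonlinear input-output map
pred = jit_predict(X, y, np.array([1.0]))           # close to sin(1) = 0.841
```

Because the model is rebuilt from neighbors at every query, a globally nonlinear map is captured by a sequence of cheap local fits.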

  6. Estimating the electricity prices, generation costs and CO₂ emissions of large scale wind energy exports from Ireland to Great Britain

    International Nuclear Information System (INIS)

    Cleary, Brendan; Duffy, Aidan; Bach, Bjarne; Vitina, Aisma; O’Connor, Alan; Conlon, Michael

    2016-01-01

    The share of wind generation in the Irish and British electricity markets is set to increase by 2020 due to renewable energy (RE) targets. The United Kingdom (UK) and Ireland have set ambitious targets which require 30% and 40% of electricity demand, respectively, to come from RE, mainly wind, by 2020. Ireland has sufficient indigenous onshore wind energy resources to exceed its RE target, while the UK faces uncertainty in achieving its target. A possible solution for the UK is to import RE directly from large scale onshore and offshore wind energy projects in Ireland; this possibility has recently been explored by both governments but is currently on hold. Thus, the aim of this paper is to estimate the effects of large scale wind energy in the Irish and British electricity markets in terms of wholesale system marginal prices, total generation costs and CO₂ emissions. The results indicate that when the large scale Irish-based wind energy projects are connected directly to the UK there is a decrease of 0.6% and 2% in the Irish and British wholesale system marginal prices, respectively, under the UK National Grid slow progression scenario. - Highlights: • Modelling the Irish and British electricity markets. • Investigating the impacts of large scale wind energy within the markets. • Results indicate a reduction in wholesale system marginal prices in both markets. • Decrease in total generation costs and CO₂ emissions in both markets.

  7. Augmenting Satellite Precipitation Estimation with Lightning Information

    Energy Technology Data Exchange (ETDEWEB)

    Mahrooghy, Majid [Mississippi State University (MSU); Anantharaj, Valentine G [ORNL; Younan, Nicolas H. [Mississippi State University (MSU); Petersen, Walter A. [NASA Marshall Space Flight Center, Huntsville, AL; Hsu, Kuo-Lin [University of California, Irvine; Behrangi, Ali [Jet Propulsion Laboratory, Pasadena, CA; Aanstoos, James [Mississippi State University (MSU)

    2013-01-01

    We have used lightning information to augment the Precipitation Estimation from Remotely Sensed Imagery using an Artificial Neural Network - Cloud Classification System (PERSIANN-CCS). Co-located lightning data are used to segregate cloud patches, segmented from GOES-12 infrared data, into either electrified (EL) or non-electrified (NEL) patches. A set of features is extracted separately for the EL and NEL cloud patches. The features for the EL cloud patches include new features based on the lightning information. The cloud patches are classified and clustered using self-organizing maps (SOM). Then brightness temperature and rain rate (T-R) relationships are derived for the different clusters. Rain rates are estimated for the cloud patches based on their representative T-R relationship. The Equitable Threat Score (ETS) for daily precipitation estimates is improved by almost 12% for the winter season. In the summer, no significant improvements in ETS are noted.
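    The per-class calibration step of such a scheme can be illustrated with synthetic data: after cloud patches are grouped (here, a single hypothetical electrified cluster), a brightness-temperature/rain-rate (T-R) relation is fitted per group. The exponential form and all numbers below are illustrative assumptions; the paper derives its T-R curves per SOM cluster from matched observations.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_tr(T, R):
    """Least-squares fit of log R = log a + b * T for the model R = a * exp(b*T)."""
    b, log_a = np.polyfit(T, np.log(R), 1)
    return np.exp(log_a), b

# Synthetic matched samples for one hypothetical electrified cluster:
# IR brightness temperatures (K) and rain rates generated from a known curve.
T = rng.uniform(190.0, 240.0, 500)
R = 200.0 * np.exp(-0.04 * T) * rng.lognormal(0.0, 0.1, 500)  # noisy "truth"

a, b = fit_tr(T, R)
print(round(b, 2))   # recovers the colder-cloud-tops -> more-rain slope, ~ -0.04
```

Fitting a separate (a, b) per cluster is what lets lightning-informed patch classes map the same brightness temperature to different rain rates.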

  8. Actions and Beliefs : Estimating Distribution-Based Preferences Using a Large Scale Experiment with Probability Questions on Expectations

    NARCIS (Netherlands)

    Bellemare, C.; Kroger, S.; van Soest, A.H.O.

    2005-01-01

    We combine the choice data of proposers and responders in the ultimatum game, their expectations elicited in the form of subjective probability questions, and the choice data of proposers ("dictator") in a dictator game to estimate a structural model of decision making under uncertainty.We use a

  9. Fluctuations in the large-scale atmospheric circulation and ocean conditions associated with the dominant modes of wintertime precipitation variability for the contiguous United States

    International Nuclear Information System (INIS)

    Mitchell, T.P.; Blier, W.

    1994-01-01

    The historical Climatic Division record of monthly- and seasonal-mean wintertime precipitation totals is analyzed to document the dominant patterns of precipitation variability for the contiguous United States. The analysis technique employed is Rotated Principal Component Analysis. Time series for the leading patterns are related to global sea-surface temperatures (SSTs) and to gridded surface and upper-air analyses for the Northern Hemisphere.

  10. The effects of spatial heterogeneity and subsurface lateral transfer on evapotranspiration estimates in large scale Earth system models

    Science.gov (United States)

    Rouholahnejad, E.; Fan, Y.; Kirchner, J. W.; Miralles, D. G.

    2017-12-01

    Most Earth system models (ESMs) average over considerable sub-grid heterogeneity in land surface properties and overlook subsurface lateral flow. This could bias evapotranspiration (ET) estimates and has implications for future temperature predictions, since overestimation of ET implies greater latent heat fluxes and potential underestimation of dry and warm conditions in the context of climate change. Here we quantify the bias in evaporation estimates that may arise from the fact that ESMs average over considerable heterogeneity in surface properties and neglect lateral transfer of water across heterogeneous landscapes at the global scale. We use a Budyko framework to express ET as a function of P and PET and derive simple sub-grid closure relations that quantify how spatial heterogeneity and lateral transfer affect average ET as seen from the atmosphere. We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimation of average ET. Our analysis at the global scale shows that the effects of sub-grid heterogeneity are most pronounced in steep mountainous areas where the topographic gradient is high and where P is inversely correlated with PET across the landscape. In addition, we use Total Water Storage (TWS) anomaly estimates from the Gravity Recovery and Climate Experiment (GRACE) remote sensing product and assimilate them into the Global Land Evaporation Amsterdam Model (GLEAM) to correct for the existing free-drainage lower boundary condition in GLEAM, and we quantify whether, and by how much, accounting for changes in terrestrial storage can improve the simulation of soil moisture and regional ET fluxes at the global scale.
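    The central claim, that applying a concave Budyko-type curve to grid-average P and PET overestimates ET relative to averaging ET over heterogeneous sub-grid cells, can be checked numerically. The Choudhury/Pike curve and the parameter n = 2 below are one common parametric choice, assumed here purely for illustration:

```python
import numpy as np

def budyko_et(P, PET, n=2.0):
    """Choudhury/Pike form of the Budyko curve: ET = P*PET / (P^n + PET^n)^(1/n).
    n = 2 is an illustrative parameter, not a calibrated value."""
    return P * PET / (P**n + PET**n) ** (1.0 / n)

rng = np.random.default_rng(42)
# Heterogeneous sub-grid cells (mm/yr) with P inversely correlated with PET,
# mimicking the steep mountainous areas highlighted in the abstract.
P = rng.uniform(200.0, 2000.0, 1000)
PET = 2200.0 - P + rng.normal(0.0, 50.0, 1000)

et_of_means = budyko_et(P.mean(), PET.mean())   # coarse ESM grid cell: average first
mean_of_ets = budyko_et(P, PET).mean()          # heterogeneous landscape: ET first

print(et_of_means > mean_of_ets)   # True: averaging first overestimates ET
```

The inequality follows from the concavity of the Budyko curve (a Jensen-type argument), and is strongest exactly where P and PET are anti-correlated across the landscape.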

  11. An Improved GRACE Terrestrial Water Storage Assimilation System For Estimating Large-Scale Soil Moisture and Shallow Groundwater

    Science.gov (United States)

    Girotto, M.; De Lannoy, G. J. M.; Reichle, R. H.; Rodell, M.

    2015-12-01

    The Gravity Recovery And Climate Experiment (GRACE) mission is unique because it provides highly accurate column-integrated estimates of terrestrial water storage (TWS) variations. Major limitations of GRACE-based TWS observations are related to their monthly temporal and coarse spatial resolution (around 330 km at the equator), and to the vertical integration of the water storage components. These challenges can be addressed through data assimilation. To date, it is still not obvious how best to assimilate GRACE-TWS observations into a land surface model in order to improve hydrological variables, and many details have yet to be worked out. This presentation discusses specific recent features of the assimilation of gridded GRACE-TWS data into the NASA Goddard Earth Observing System (GEOS-5) Catchment land surface model to improve soil moisture and shallow groundwater estimates at the continental scale. The major recent advancements introduced by the presented work with respect to earlier systems include: 1) the assimilation of a gridded GRACE-TWS data product with scaling factors that are specifically derived for data assimilation purposes only; 2) the assimilation is performed through a 3D assimilation scheme, in which reasonable spatial and temporal error standard deviations and correlations are exploited; 3) the analysis step uses an optimized calculation and application of the analysis increments; 4) a poor-man's adaptive estimation of a spatially variable measurement error. This work shows that, despite their coarse spatial and temporal resolution, the observed column-integrated GRACE-TWS data have potential for improving our understanding of soil moisture and shallow groundwater variations.
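    The essence of such an analysis step, a coarse column-integrated TWS observation updating finer-scale storage components through their ensemble covariances with TWS, can be sketched with a toy ensemble Kalman update. All dimensions, error values, and the single-observation setup are illustrative assumptions; the GEOS-5 system described above is far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(7)
n_ens = 64

# Toy ensemble of the model state: soil moisture and shallow groundwater.
state = np.vstack([rng.normal(0.25, 0.05, n_ens),   # soil moisture
                   rng.normal(0.80, 0.10, n_ens)])  # shallow groundwater
tws = state.sum(axis=0)                             # simulated column-integrated TWS
obs, obs_err = 1.20, 0.05                           # one GRACE TWS obs + its error

# Ensemble Kalman analysis: the covariance of each state component with TWS
# determines how the column-integrated innovation is split vertically.
cov_x_tws = ((state - state.mean(1, keepdims=True)) * (tws - tws.mean())).mean(1)
gain = cov_x_tws / (tws.var() + obs_err**2)
perturbed_obs = obs + rng.normal(0.0, obs_err, n_ens)
analysis = state + gain[:, None] * (perturbed_obs - tws)

# The analysis TWS mean moves toward the observation.
print(abs(analysis.sum(0).mean() - obs) < abs(tws.mean() - obs))
```

Because the groundwater ensemble has the larger variance here, it absorbs most of the increment, illustrating how a vertically integrated observation can still update individual storage components.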

  12. Estimates of occupational safety and health impacts resulting from large-scale production of major photovoltaic technologies

    Energy Technology Data Exchange (ETDEWEB)

    Owens, T.; Ungers, L.; Briggs, T.

    1980-08-01

    The purpose of this study is to estimate, both quantitatively and qualitatively, the worker and societal risks attributable to four photovoltaic cell (solar cell) production processes. Quantitative risk values were determined using statistics from the California semiconductor industry. The qualitative risk assessment was performed using a variety of both governmental and private sources of data. The occupational health statistics derived from the semiconductor industry were used to predict injury and fatality levels associated with photovoltaic cell manufacturing. The use of these statistics to characterize the two silicon processes described herein is defensible from the standpoint that many of the same process steps and materials are used in both the semiconductor and photovoltaic industries. These health statistics are less applicable to the gallium arsenide and cadmium sulfide manufacturing processes, primarily because of differences in the materials utilized. Although such differences tend to discourage any absolute comparisons among the four photovoltaic cell production processes, certain relative comparisons are warranted. To facilitate a risk comparison of the four processes, the number and severity of process-related chemical hazards were assessed. This qualitative hazard assessment addresses both the relative toxicity and the exposure potential of substances in the workplace. In addition to the worker-related hazards, estimates of process-related emissions and wastes are also provided.

  13. Large scale air pollution estimation method combining land use regression and chemical transport modeling in a geostatistical framework.

    Science.gov (United States)

    Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey

    2014-04-15

    In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.

  14. Variability in warm-season atmospheric circulation and precipitation patterns over subtropical South America: relationships between the South Atlantic convergence zone and large-scale organized convection over the La Plata basin

    Science.gov (United States)

    Mattingly, Kyle S.; Mote, Thomas L.

    2017-01-01

    Warm-season precipitation variability over subtropical South America is characterized by an inverse relationship between the South Atlantic convergence zone (SACZ) and precipitation over the central and western La Plata basin of southeastern South America. This study extends the analysis of this "South American Seesaw" precipitation dipole to relationships between the SACZ and large, long-lived mesoscale convective systems (LLCSs) over the La Plata basin. By classifying SACZ events into distinct continental and oceanic categories and building a logistic regression model that relates LLCS activity across the region to continental and oceanic SACZ precipitation, a detailed account of spatial variability in the out-of-phase coupling between the SACZ and large-scale organized convection over the La Plata basin is provided. Enhanced precipitation in the continental SACZ is found to result in increased LLCS activity over northern, northeastern, and western sections of the La Plata basin, in association with poleward atmospheric moisture flux from the Amazon basin toward these regions, and a decrease in the probability of LLCS occurrence over the southeastern La Plata basin. Increased oceanic SACZ precipitation, however, was strongly related to reduced atmospheric moisture and decreased probability of LLCS occurrence over nearly the entire La Plata basin. These results suggest that continental SACZ activity and large-scale organized convection over the northern and eastern sections of the La Plata basin are closely tied to atmospheric moisture transport from the Amazon basin, while the warm coastal Brazil Current may also play an important role as an evaporative moisture source for LLCSs over the central and western La Plata basin.

  15. #nowplaying Madonna: a large-scale evaluation on estimating similarities between music artists and between movies from microblogs.

    Science.gov (United States)

    Schedl, Markus

    2012-01-01

    Different term weighting techniques such as [Formula: see text] or BM25 have been used extensively for manifold text-based information retrieval tasks. Their use for modeling term profiles for named entities, and the subsequent calculation of similarities between these named entities, has been studied to a much smaller extent. The recent trend of microblogging has made available massive amounts of information about almost every topic around the world. Microblogs therefore represent a valuable source for text-based named entity modeling. In this paper, we present a systematic and comprehensive evaluation of different term weighting measures, normalization techniques, query schemes, index term sets, and similarity functions for the task of inferring similarities between named entities, based on data extracted from microblog posts. We analyze several thousand combinations of choices for the above mentioned dimensions, which influence the similarity calculation process, and we investigate how they impact the quality of the similarity estimates. Evaluation is performed using three real-world data sets: two collections of microblogs related to music artists and one related to movies. For the music collections, we present results of genre classification experiments using genre information from allmusic.com as a benchmark. For the movie collection, we present results of multi-class classification experiments using categories from IMDb as a benchmark. We show that microblogs can indeed be exploited to model named entity similarity with remarkable accuracy, provided the correct settings for the analyzed aspects are used. We further compare the results to those obtained when using Web pages as the data source.
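    A minimal version of the entity-similarity pipeline evaluated here (aggregate an entity's posts into a term profile, weight the terms, compare profiles with a similarity function) might look like the following. The tf-idf variant and the toy posts are assumptions for illustration, not the paper's exact weighting schemes or data:

```python
import math
from collections import Counter

def tfidf_profiles(posts_by_entity):
    """Build tf-idf term profiles for named entities from their aggregated
    (toy) microblog posts; one common tf-idf variant among many evaluated."""
    df, tfs = Counter(), {}
    for entity, text in posts_by_entity.items():
        tf = Counter(text.lower().split())
        tfs[entity] = tf
        df.update(tf.keys())                 # document frequency per term
    n = len(posts_by_entity)
    return {e: {t: c * math.log(n / df[t]) for t, c in tf.items()}
            for e, tf in tfs.items()}

def cosine(a, b):
    """Cosine similarity between two sparse term-weight profiles."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

posts = {
    "madonna":   "pop queen tour vogue pop dance",
    "kylie":     "pop dance tour hits",
    "metallica": "metal riff tour thrash metal",
}
profiles = tfidf_profiles(posts)
print(cosine(profiles["madonna"], profiles["kylie"]) >
      cosine(profiles["madonna"], profiles["metallica"]))
```

Note how the idf factor zeroes out "tour", which appears in every profile, so the similarity ordering is driven by the genre-specific terms, the effect the paper's weighting-scheme comparison is probing.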

  16. The effect of the geomagnetic field on cosmic ray energy estimates and large scale anisotropy searches on data from the Pierre Auger Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Abreu, P.; /Lisbon, IST; Aglietta, M.; /IFSI, Turin; Ahn, E.J.; /Fermilab; Albuquerque, I.F.M.; /Sao Paulo U.; Allard, D.; /APC, Paris; Allekotte, I.; /Centro Atomico Bariloche; Allen, J.; /New York U.; Allison, P.; /Ohio State U.; Alvarez Castillo, J.; /Mexico U., ICN; Alvarez-Muniz, J.; /Santiago de Compostela U.; Ambrosio, M.; /Naples U. /INFN, Naples /Nijmegen U., IMAPP

    2011-11-01

    We present a comprehensive study of the influence of the geomagnetic field on the energy estimation of extensive air showers with zenith angles smaller than 60°, detected at the Pierre Auger Observatory. The geomagnetic field induces an azimuthal modulation of the estimated energy of cosmic rays of up to ~2% at large zenith angles. We present a method to account for this modulation of the reconstructed energy, and we analyse the effect of the modulation on large scale anisotropy searches in the arrival direction distributions of cosmic rays. At a given energy, the geomagnetic effect is shown to induce a pseudo-dipolar pattern at the percent level in the declination distribution that needs to be accounted for. In this work, we have identified and quantified a systematic uncertainty affecting the energy determination of cosmic rays detected by the surface detector array of the Pierre Auger Observatory. This systematic uncertainty, induced by the influence of the geomagnetic field on shower development, has a strength that depends on both the zenith and azimuthal angles. Consequently, we have shown that it distorts the estimated cosmic ray event rate at a given energy at the percent level in both the azimuthal and declination distributions, the latter of which mimics an almost dipolar pattern. We have also shown that the induced distortions reach the level of the statistical uncertainties for a number of events N ≈ 32,000 (we note that the full Auger surface detector array collects about 6500 events per year with energies above 3 EeV). Accounting for these effects is thus essential for the correct interpretation of large scale anisotropy measurements that explicitly exploit the declination distribution.

  17. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test, with a view to ensuring the safety of light water reactors, was started in fiscal 1976 under the special account act for power source development promotion measures, by entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents by joint international effort, the Japan-West Germany-U.S. research cooperation program was started in April 1980, and the large-scale reflood test is now included in this program. It consists of two tests: one using a cylindrical core testing apparatus to examine the overall system effect, and one using a plate core testing apparatus to test individual effects. Each apparatus is composed of mock-ups of the pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  18. Large Scale Solar Heating

    DEFF Research Database (Denmark)

    Heller, Alfred

    2001-01-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the simulation tool for design studies and on a local energy planning case. The evaluation was mainly carried out...... model is designed and validated on the Marstal case. Applying the Danish Reference Year, a design tool is presented. The simulation tool is used for proposals for application of alternative designs, including high-performance solar collector types (trough solar collectors, vacuum pipe collectors......). Simulation programs are proposed as a control supporting tool for daily operation and performance prediction of central solar heating plants. Finally, the CSHP technology is put into perspective with respect to alternatives, and a short discussion of the barriers to and breakthrough of the technology is given....

  19. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are described from testing the material resistance to non-ductile fracture. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During the cyclic tests, the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  20. Transport of Radon Gas into a Tunnel at Yucca Mountain - Estimating Large-Scale Fractured Tuff Hydraulic Properties and Implications for the Operation of the Ventilation System

    International Nuclear Information System (INIS)

    Unger, A.; Finsterle, S.; Bodvarsson, G.

    2003-01-01

    Radon gas concentrations have been monitored as part of the operation of a tunnel (the Exploratory Studies Facility-ESF) at Yucca Mountain to ensure worker safety. The objective of this study was to examine the potential use of the radon data to estimate large-scale formation properties of fractured tuffs. This objective was examined by developing a numerical model, based upon the characteristics of the ESF and the Topopah Spring welded (TSw) tuff unit, capable of predicting radon concentrations for prescribed ventilation conditions. The model was used to address two specific issues. First, it was used to estimate the permeability and porosity of the fractures in the TSw at the length scale of the ESF and extending tens of meters into the TSw, which surrounds the ESF. Second, the model was used to understand the mechanism leading to radon concentrations exceeding a specified level within the ESF. The mechanism controlling radon concentrations in the ESF is a function of atmospheric barometric fluctuations being propagated down the ESF along with ventilated air flow and the slight suction induced by the ventilation exhaust fans at the South Portal of the ESF. These pressure fluctuations are dampened in the TSw fracture continuum according to its permeability and porosity. Consequently, as the barometric pressure in the ESF drops rapidly, formation gases from the TSw are pulled into the ESF, resulting in an increase in radon concentrations. Model calibration to both radon concentrations measured in the ESF and gas-phase pressure fluctuations in the TSw yielded concurrent estimates of TSw fracture permeability and porosity of 1 × 10⁻¹¹ m² and 0.00034, respectively. The calibrated model was then used as a design tool to predict the effect of adjusting the current ventilation-system operation strategy for reducing the probability of radon gas concentrations exceeding a specified level.

  1. Incorporating Satellite Precipitation Estimates into a Radar-Gauge Multi-Sensor Precipitation Estimation Algorithm

    Directory of Open Access Journals (Sweden)

    Yuxiang He

    2018-01-01

    This paper presents a new and enhanced fusion module for the Multi-Sensor Precipitation Estimator (MPE) that objectively blends real-time satellite quantitative precipitation estimates (SQPE) with radar and gauge estimates. This module consists of a preprocessor that mitigates systematic bias in SQPE, and a two-way blending routine that statistically fuses adjusted SQPE with radar estimates. The preprocessor not only corrects systematic bias in SQPE, but also improves the spatial distribution of precipitation based on SQPE, making it closely resemble that of radar-based observations. It uses a sophisticated radar-satellite merging technique to blend the preprocessed datasets, and provides a better overall QPE product. The performance of the new satellite-radar-gauge blending module is assessed using independent rain gauge data over a five-year period between 2003 and 2007. The assessment compares the accuracy of the newly developed satellite-radar-gauge (SRG) blended products with that of the radar-gauge products (representing the MPE algorithm currently used in National Weather Service (NWS) operations) over two regions: (I) inside radar effective coverage and (II) immediately outside radar coverage. The outcomes of the evaluation indicate that (a) ingesting SQPE over areas within effective radar coverage improves the quality of QPE by mitigating the errors in radar estimates in region I; and (b) blending radar, gauge, and satellite estimates over region II reduces errors relative to bias-corrected SQPE. In addition, the new module alleviates the discontinuities along the boundaries of radar effective coverage otherwise seen when SQPE is used directly to fill the areas outside effective radar coverage.
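    The two-step structure described above (bias-correct SQPE against gauges, then blend with radar inside coverage and fall back to adjusted SQPE outside) can be sketched as follows. The multiplicative bias factor and the fixed blending weight are illustrative stand-ins for the module's preprocessor and statistical fusion routine:

```python
import numpy as np

def blend_qpe(radar, satellite, gauge_mean_radar, gauge_mean_sat, w_radar=0.7):
    """Bias-correct satellite QPE with a gauge-based multiplicative factor,
    then blend with radar where radar exists (NaN marks no radar coverage)."""
    bias = gauge_mean_radar / gauge_mean_sat        # satellite-vs-gauge bias factor
    sat_adj = satellite * bias                      # preprocessor step
    return np.where(np.isnan(radar),                # outside coverage: satellite only
                    sat_adj,
                    w_radar * radar + (1.0 - w_radar) * sat_adj)

radar = np.array([1.0, np.nan, 2.0])                # middle pixel lacks radar
satellite = np.array([2.0, 3.0, 4.0])               # satellite runs ~2x too wet here
blended = blend_qpe(radar, satellite, gauge_mean_radar=1.0, gauge_mean_sat=2.0)
```

Because the satellite field is rescaled toward the gauge-anchored radar estimates before the two are merged, the transition across the radar coverage boundary stays smooth, which is the discontinuity-alleviation effect the abstract describes.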

  2. A comparative study of outlier detection for large-scale traffic data by one-class SVM and kernel density estimation

    Science.gov (United States)

    Ngan, Henry Y. T.; Yung, Nelson H. C.; Yeh, Anthony G. O.

    2015-02-01

    This paper presents a comparative study of outlier detection (OD) for large-scale traffic data. Traffic data nowadays are massive in scale and collected every second throughout any modern city. In this research, the traffic flow dynamic is collected from one of the busiest 4-armed junctions in Hong Kong over a 31-day sampling period (with 764,027 vehicles in total). The traffic flow dynamic is expressed in a high-dimension spatial-temporal (ST) signal format (i.e. 80 cycles) which has a high degree of similarity within the same signal and across different signals in one direction. A total of 19 traffic directions are identified in this junction, and many ST signals were collected over the 31-day period (i.e. 874 signals). To reduce their dimension, the ST signals first undergo a principal component analysis (PCA) to be represented as (x,y)-coordinates. These PCA (x,y)-coordinates are then assumed to be Gaussian distributed, and the data points are evaluated by (a) a correlation study with three variant coefficients, (b) a one-class support vector machine (SVM) and (c) kernel density estimation (KDE). The correlation study could not give any explicit OD result, while the one-class SVM and KDE provide average detection success rates (DSRs) of 59.61% and 95.20%, respectively.
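    The SVM/KDE comparison can be reproduced in miniature on synthetic 2D points standing in for the PCA (x,y)-coordinates. The data, parameters, and the 5% density threshold below are assumptions for illustration, not the paper's settings:

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=(300, 2))     # Gaussian-like PCA coordinates
outliers = rng.uniform(8.0, 12.0, size=(15, 2))   # anomalous traffic signals
X = np.vstack([inliers, outliers])

# (b) One-class SVM: learns a boundary around the dense region.
svm = OneClassSVM(nu=0.05, gamma="scale").fit(X)
svm_flags = svm.predict(X) == -1                  # -1 marks detected outliers

# (c) KDE: flag the lowest-density 5% of points.
kde = KernelDensity(bandwidth=2.0).fit(X)
log_dens = kde.score_samples(X)
kde_flags = log_dens < np.quantile(log_dens, 0.05)

print(svm_flags[-15:].sum(), kde_flags[-15:].sum())   # planted outliers caught
```

Both detectors are unsupervised: the SVM carves a support region around the bulk of the data, while KDE turns the Gaussian assumption into an explicit low-density threshold.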

  3. Application of the extreme value theory to beam loss estimates in the SPIRAL2 linac based on large scale Monte Carlo computations

    Directory of Open Access Journals (Sweden)

    R. Duperrier

    2006-04-01

    The influence of random perturbations of high intensity accelerator elements on beam losses is considered. This paper presents the error sensitivity study performed for the SPIRAL2 linac in order to define the tolerances for construction. The proposed driver aims to accelerate a 5 mA deuteron beam up to 20 A MeV and a 1 mA ion beam of q/A = 1/3 up to 14.5 A MeV. It is a continuous wave linac, designed for maximum efficiency in the transmission of intense beams and a tunable energy. It consists of an injector (two ECR sources + LEBTs, with the possibility of injecting from several sources, and a radio frequency quadrupole) followed by a superconducting section based on an array of independently phased cavities, where the transverse focusing is performed with warm quadrupoles. The correction scheme and the expected losses are described. The extreme value theory is used to estimate the expected beam losses. The described method couples large scale computations to obtain probability distribution functions. The bootstrap technique is used to provide confidence intervals associated with the beam loss predictions. With such a method, it is possible to quantify the risk of losing a few watts in this high power linac (up to 200 kW).
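    The statistical machinery named in the abstract (extreme value theory for rare large losses, plus bootstrap confidence intervals) can be sketched on synthetic data. The Gumbel-distributed per-run maxima below stand in for the paper's large scale Monte Carlo beam-loss computations:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
# Per-run maximum beam loss (arbitrary units) from simulated Monte Carlo runs.
block_maxima = rng.gumbel(loc=1.0, scale=0.3, size=1000)

# Extreme value theory: block maxima follow a generalized extreme value law.
shape, loc, scale = genextreme.fit(block_maxima)
q99 = genextreme.ppf(0.99, shape, loc=loc, scale=scale)   # 99th-percentile loss

# Bootstrap: refit on resamples to get a confidence interval for q99.
boot = []
for _ in range(100):
    resample = rng.choice(block_maxima, size=block_maxima.size, replace=True)
    c, l, s = genextreme.fit(resample)
    boot.append(genextreme.ppf(0.99, c, loc=l, scale=s))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(q99, 2), round(lo, 2), round(hi, 2))   # point estimate and 95% CI
```

Fitting the tail law to per-run maxima, rather than to all samples, is what makes it possible to attach a probability to losing "a few watts" in rare configurations.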

  4. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination but, unfortunately, will also significantly increase the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied in detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  5. Large Scale Density Estimation of Blue and Fin Whales: Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density

    Science.gov (United States)

    2015-09-30

    titled “Ocean Basin Impact of Ambient Noise on Marine Mammal Detectability, Distribution, and Acoustic Communication ”. Patterns and trends of ocean... mammals in response to potentially negative interactions with human activity requires knowledge of how many animals are present in an area during a...specific time period. Many marine mammal species are relatively hard to sight, making standard visual methods of density estimation difficult and

  6. Soil carbon sequestration due to post-Soviet cropland abandonment: estimates from a large-scale soil organic carbon field inventory.

    Science.gov (United States)

    Wertebach, Tim-Martin; Hölzel, Norbert; Kämpf, Immo; Yurtaev, Andrey; Tupitsin, Sergey; Kiehl, Kathrin; Kamp, Johannes; Kleinebecker, Till

    2017-09-01

    The break-up of the Soviet Union in 1991 triggered cropland abandonment on a continental scale, which in turn led to carbon accumulation on abandoned land across Eurasia. Previous studies have estimated carbon accumulation rates across Russia based on large-scale modelling. Studies that assess carbon sequestration on abandoned land based on robust field sampling are rare. We investigated soil organic carbon (SOC) stocks using a randomized sampling design along a climatic gradient from forest steppe to Sub-Taiga in Western Siberia (Tyumen Province). In total, SOC contents were sampled on 470 plots across different soil and land-use types. The effect of land use on changes in SOC stock was evaluated, and carbon sequestration rates were calculated for different age stages of abandoned cropland. While land-use type had an effect on carbon accumulation in the topsoil (0-5 cm), no independent land-use effects were found for deeper SOC stocks. Topsoil carbon stocks of grasslands and forests were significantly higher than those of soils managed for crops and under abandoned cropland. SOC increased significantly with time since abandonment. The average carbon sequestration rate for soils of abandoned cropland was 0.66 Mg C ha⁻¹ yr⁻¹ (1-20 years old, 0-5 cm soil depth), which is at the lower end of published estimates for Russia and Siberia. There was a tendency towards SOC saturation on abandoned land as sequestration rates were much higher for recently abandoned (1-10 years old, 1.04 Mg C ha⁻¹ yr⁻¹) compared to earlier abandoned crop fields (11-20 years old, 0.26 Mg C ha⁻¹ yr⁻¹). Our study confirms the global significance of abandoned cropland in Russia for carbon sequestration. Our findings also suggest that robust regional surveys based on a large number of samples advance model-based continent-wide SOC prediction. © 2017 John Wiley & Sons Ltd.
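    The age-stage rates quoted above follow from a simple stock-difference calculation; a sketch (the SOC stock values below are invented for illustration; only the 1.04 Mg C ha⁻¹ yr⁻¹ rate comes from the abstract):

```python
def sequestration_rate(soc_start, soc_end, years):
    """Mean annual carbon sequestration rate (Mg C per ha per yr)
    from SOC stocks measured at abandonment and after `years` years."""
    return (soc_end - soc_start) / years

# Hypothetical topsoil SOC stocks (Mg C/ha, 0-5 cm), chosen so the
# 1-10 yr rate matches the 1.04 Mg C/ha/yr reported in the abstract
rate_recent = sequestration_rate(12.0, 22.4, 10)
print(round(rate_recent, 2))
```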

  7. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement ('the Task') and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  8. Connecting Satellite-Based Precipitation Estimates to Users

    Science.gov (United States)

    Huffman, George J.; Bolvin, David T.; Nelkin, Eric

    2018-01-01

    Beginning in 1997, the Merged Precipitation Group at NASA Goddard has distributed gridded global precipitation products built by combining satellite and surface gauge data. This started with the Global Precipitation Climatology Project (GPCP), then the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA), and recently the Integrated Multi-satellitE Retrievals for the Global Precipitation Measurement (GPM) mission (IMERG). This 20+-year (and on-going) activity has yielded an important set of insights and lessons learned for making state-of-the-art precipitation data accessible to the diverse communities of users. Merged-data products critically depend on the input sensors and the retrieval algorithms providing accurate, reliable estimates, but it is also important to provide ancillary information that helps users determine suitability for their application. We typically provide fields of estimated random error, and recently reintroduced the quality index concept at user request. Also at user request we have added a (diagnostic) field of estimated precipitation phase. Over time, increasingly more ancillary fields have been introduced for intermediate products that give expert users insight into the detailed performance of the combination algorithm, such as individual merged microwave and microwave-calibrated infrared estimates, the contributing microwave sensor types, and the relative influence of the infrared estimate.

  9. Opportunities and challenges for evaluating precipitation estimates during GPM mission

    Energy Technology Data Exchange (ETDEWEB)

    Amitai, E. [George Mason Univ. and NASA Goddard Space Flight Center, Greenbelt, MD (United States); NASA Goddard Space Flight Center, Greenbelt, MD (United States); Llort, X.; Sempere-Torres, D. [GRAHI/Univ. Politecnica de Catalunya, Barcelona (Spain)

    2006-10-15

    Data assimilation in conjunction with numerical weather prediction and a variety of hydrologic applications now depend on satellite observations of precipitation. However, providing values of precipitation is not sufficient unless they are accompanied by the associated uncertainty estimates. The main approach of quantifying satellite precipitation uncertainties generally requires establishment of reliable uncertainty estimates for the ground validation rainfall products. This paper discusses several of the relevant validation concepts evolving from the tropical rainfall measuring mission (TRMM) era to the global precipitation measurement mission (GPM) era in the context of determining and reducing uncertainties of ground and space-based radar rainfall estimates. From comparisons of probability distribution functions of rain rates derived from TRMM precipitation radar and co-located ground based radar data - using the new NASA TRMM radar rainfall products (version 6) - this paper provides (1) a brief review of the importance of comparing pdfs of rain rate for statistical and physical verification of space-borne radar estimates of precipitation; (2) a brief review of how well the ground validation estimates compare to the TRMM radar retrieved estimates; and (3) discussion on opportunities and challenges to determine and reduce the uncertainties in space-based and ground-based radar estimates of rain rate distributions. (orig.)

  10. Comparing NEXRAD Operational Precipitation Estimates and Raingage Observations of Intense Precipitation in the Missouri River Basin.

    Science.gov (United States)

    Young, C. B.

    2002-05-01

    Accurate observation of precipitation is critical to the study and modeling of land surface hydrologic processes. NEXRAD radar-based precipitation estimates are increasingly used in field experiments, hydrologic modeling, and water and energy budget studies due to their high spatial and temporal resolution, national coverage, and perceived accuracy. Extensive development and testing of NEXRAD precipitation algorithms have been carried out in the Southern Plains. Previous studies (Young et al. 2000, Young et al. 1999, Smith et al. 1996) indicate that NEXRAD operational products tend to underestimate precipitation at light rain rates. This study investigates the performance of NEXRAD precipitation estimates of high-intensity rainfall, focusing on flood-producing storms in the Missouri River Basin. NEXRAD estimates for these storms are compared with data from multiple raingage networks, including NWS recording and non-recording gages and ALERT raingage data for the Kansas City metropolitan area. Analyses include comparisons of gage and radar data at a wide range of temporal and spatial scales. Particular attention is paid to the October 4th, 1998, storm that produced severe flooding in Kansas City. NOTE: The phrase 'NEXRAD operational products' in this abstract includes precipitation estimates generated using the Stage III and P1 algorithms. Both of these products estimate hourly accumulations on the (approximately) 4 km HRAP grid.

  11. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  12. A test for Improvement of high resolution Quantitative Precipitation Estimation for localized heavy precipitation events

    Science.gov (United States)

    Lee, Jung-Hoon; Roh, Joon-Woo; Park, Jeong-Gyun

    2017-04-01

    Accurate estimation of precipitation is one of the most difficult and important tasks in weather diagnosis and forecasting. On the Korean Peninsula, heavy precipitation is caused by various physical mechanisms, such as short-wave troughs, quasi-stationary moisture convergence zones between differing air masses, and direct or indirect effects of tropical cyclones. In addition, varied geographical and topographical features make the temporal and spatial distribution of precipitation very complicated. In particular, localized heavy rainfall events in South Korea generally arise from mesoscale convective systems embedded in these synoptic-scale disturbances. Even with weather radar data of high temporal and spatial resolution, accurate estimation of rain rate from radar reflectivity is difficult; the Z-R relationship (Marshall and Palmer, 1948) has been the representative approach. In addition, several methods such as support vector machines (SVM), neural networks, fuzzy logic, and kriging have been applied to improve the accuracy of rain rates. These methods yield different quantitative precipitation estimates (QPE), and their accuracy differs across heavy precipitation cases. In this study, to improve the accuracy of QPE for localized heavy precipitation, an ensemble method combining the Z-R relationship with these various techniques was tested. This QPE ensemble method was developed on the concept of exploiting the respective advantages of the precipitation calibration methods. The ensemble members were produced from combinations of different Z-R coefficients and calibration methods.
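    The Z-R relationship mentioned above can be illustrated with the classic Marshall-Palmer coefficients (Z = 200 R^1.6); this is a minimal sketch of a single member's rain-rate retrieval, not the ensemble method developed in the study:

```python
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert the Z-R power law Z = a * R**b (Marshall-Palmer:
    a=200, b=1.6) to get rain rate R in mm/h from reflectivity in dBZ."""
    z = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity (mm^6/m^3)
    return (z / a) ** (1.0 / b)

for dbz in (20, 35, 50):
    print(dbz, "dBZ ->", round(rain_rate_from_dbz(dbz), 2), "mm/h")
```

Ensemble members in the spirit of the abstract would vary `a` and `b` (e.g. convective vs. stratiform coefficient sets) and blend the resulting rain rates.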

  13. Estimating environmental value losses from earth materials excavation and infilling for large-scale airport construction: a case of Dalian Offshore Airport, Dalian, China.

    Science.gov (United States)

    Yan, Hua-Kun; Wang, Nuo; Wu, Nuan; Song, Nan-Qi; Zhu, Dao-Li

    2017-09-01

    Large-scale airport construction removes large quantities of earth materials, resulting in serious environmental pollution and ecosystem damage. However, studies of the environmental concerns raised by the materials used in airport construction are still preliminary, and case studies on the impacts of large-scale offshore airport development are very limited. China's Dalian Offshore Airport is considered here to study the environmental value loss from 240 million m³ of material excavation and 21 km² of artificial island infilling. The findings show that the calculated annual environmental value loss for the development of the Dalian Offshore Airport is approximately US$7.75 million, including US$1.81 million and US$1.47 million of direct economic losses of timber resources and marine biological resources, respectively, and US$1.53 million and US$2.79 million of value losses from damage to forest and marine ecosystem services caused by material excavation and infilling, respectively. The conclusions of this study provide an important foundation for quantitatively analysing the environmental impact of the Dalian Offshore Airport's development and can be used as a reference for similar engineering and environmental impact assessment programs.

  14. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (broadband, multi-utility, ...) and (2) bigger units with large networks and more customers. Research by SINTEF Energy Research shows so far that the approaches to large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  15. Improving multisensor estimation of heavy-to-extreme precipitation via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Kim, Beomgeun; Seo, Dong-Jun; Noh, Seong Jin; Prat, Olivier P.; Nelson, Brian R.

    2018-01-01

    A new technique for merging radar precipitation estimates and rain gauge data is developed and evaluated to improve multisensor quantitative precipitation estimation (QPE), in particular, of heavy-to-extreme precipitation. Unlike the conventional cokriging methods which are susceptible to conditional bias (CB), the proposed technique, referred to herein as conditional bias-penalized cokriging (CBPCK), explicitly minimizes Type-II CB for improved quantitative estimation of heavy-to-extreme precipitation. CBPCK is a bivariate version of extended conditional bias-penalized kriging (ECBPK) developed for gauge-only analysis. To evaluate CBPCK, cross validation and visual examination are carried out using multi-year hourly radar and gauge data in the North Central Texas region in which CBPCK is compared with the variant of the ordinary cokriging (OCK) algorithm used operationally in the National Weather Service Multisensor Precipitation Estimator. The results show that CBPCK significantly reduces Type-II CB for estimation of heavy-to-extreme precipitation, and that the margin of improvement over OCK is larger in areas of higher fractional coverage (FC) of precipitation. When FC > 0.9 and hourly gauge precipitation is > 60 mm, the reduction in root mean squared error (RMSE) by CBPCK over radar-only (RO) is about 12 mm while the reduction in RMSE by OCK over RO is about 7 mm. CBPCK may be used in real-time analysis or in reanalysis of multisensor precipitation for which accurate estimation of heavy-to-extreme precipitation is of particular importance.

  16. The assessment of Global Precipitation Measurement estimates over the Indian subcontinent

    Science.gov (United States)

    Murali Krishna, U. V.; Das, Subrata Kumar; Deshpande, Sachin M.; Doiphode, S. L.; Pandithurai, G.

    2017-08-01

    Accurate and real-time precipitation estimation is a challenging task for current and future spaceborne measurements, and it is essential to understanding the global hydrological cycle. Recently, the Global Precipitation Measurement (GPM) satellites were launched as a next-generation rainfall mission for observing global precipitation characteristics. The purpose of the GPM is to enhance the spatiotemporal resolution of global precipitation estimates. The main objective of the present study is to assess the rainfall products from the GPM, especially the Integrated Multi-satellitE Retrievals for GPM (IMERG) data, by comparison with ground-based observations. Multi-temporal-scale evaluations of rainfall at subdaily, diurnal, monthly, and seasonal scales were performed over the Indian subcontinent. The comparison shows that the IMERG performed better than the Tropical Rainfall Measuring Mission (TRMM)-3B42, although both rainfall products underestimated the rainfall relative to the ground-based measurements. The analyses also reveal that the TRMM-3B42 and IMERG data sets are able to represent the large-scale monsoon rainfall spatial features but have region-specific biases. The IMERG shows significant improvement in low rainfall estimates compared to the TRMM-3B42 for selected regions. In the spatial distribution, the IMERG shows higher rain rates compared to the TRMM-3B42, due to its enhanced spatial and temporal resolutions. In addition, the characteristics of raindrop size distribution (DSD) obtained from the GPM dual-frequency precipitation radar are assessed over a complex mountain terrain site in the Western Ghats, India, using the DSD measured by a Joss-Waldvogel disdrometer.

  17. Estimating Sediment Delivery to The Rio Maranon, Peru Prior to Large-Scale Hydropower Developments Using High Resolution Imagery from Google Earth and a DJI Phantom 3 Drone

    Science.gov (United States)

    Goode, J. R.; Candelaria, T.; Kramer, N. R.; Hill, A. F.

    2016-12-01

    As global energy demands increase, generating hydroelectric power by constructing dams and reservoirs on large river systems is increasingly seen as a renewable alternative to fossil fuels, especially in emerging economies. Many large-scale hydropower projects are located in steep mountainous terrain, where environmental factors have the potential to conspire against the sustainability and success of such projects. As reservoir storage capacity decreases when sediment builds up behind dams, high sediment yields can limit project life expectancy and overall hydropower viability. In addition, episodically delivered sediment from landslides can make quantifying sediment loads difficult. These factors, combined with remote access, limit the critical data needed to effectively evaluate development decisions. In the summer of 2015, we conducted a basic survey to characterize the geomorphology, hydrology and ecology of 620 km of the Rio Maranon, Peru - a major tributary to the Amazon River, which flows north from the semi-arid Peruvian Andes - prior to its dissection by several large hydropower dams. Here we present one component of this larger study: a first-order analysis of potential sediment inputs to the Rio Maranon, Peru. To evaluate sediment delivery and storage in this system, we used high resolution Google Earth imagery to delineate landslides, combined with high resolution imagery from a DJI Phantom 3 Drone, flown at alluvial fan inputs to the river in the field. Because hillslope-derived sediment inputs from headwater tributaries are important to overall ecosystem health in large river systems, our study has the potential to contribute to understanding the impacts of large Andean dams on sediment connectivity to the Amazon basin.

  18. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large-scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h⁻¹ Mpc in the distribution of the visible matter of the universe is provided. The possibility of generating a periodic distribution with the characteristic scale of 120 h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low-temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  19. Improving precipitation estimates over the western United States using GOES-R precipitation data

    Science.gov (United States)

    Karbalaee, N.; Kirstetter, P. E.; Gourley, J. J.

    2017-12-01

    Satellite remote sensing data with fine spatial and temporal resolution are widely used for precipitation estimation in applications such as hydrological modeling, storm prediction, and flash flood monitoring. The Geostationary Operational Environmental Satellites-R series (GOES-R) is the next generation of environmental satellites that provides hydrologic, atmospheric, and climatic information every 30 seconds over the western hemisphere. The high resolution and low latency of GOES-R observations are essential for the monitoring and prediction of floods, specifically in the western United States, where the vantage point of space can complement the degraded weather radar coverage of the NEXRAD network. The GOES-R rainfall rate algorithm will yield deterministic quantitative precipitation estimates (QPE). Accounting for inherent uncertainties will further advance the GOES-R QPEs, since with quantifiable error bars the rainfall estimates can be more readily fused with ground radar products. On the ground, the high-resolution NEXRAD-based precipitation estimation from the Multi-Radar/Multi-Sensor (MRMS) system, which is now operational in the National Weather Service (NWS), is challenged by a lack of suitable coverage of operational weather radars over complex terrain. Distributions of QPE uncertainties associated with the GOES-R deterministic retrievals are derived and analyzed using MRMS over regions with good radar coverage. They will be merged with MRMS-based probabilistic QPEs developed to advance multisensor QPE integration. This research aims at improving precipitation estimation over the CONUS by combining the observations from GOES-R and MRMS to provide consistent, accurate and fine-resolution precipitation rates with uncertainties.

  20. Estimation of precipitable water from surface dew point temperature

    International Nuclear Information System (INIS)

    Abdel Wahab, M.; Sharif, T.A.

    1991-09-01

    The Reitan (1963) regression equation, which is of the form ln w = a + b·T_d, has been examined and tested for estimating precipitable water content from surface dew point temperature at different locations. The study confirms that the slope b of this equation remains constant at 0.0681 °C⁻¹, while the intercept a changes rapidly with latitude. The use of a variable intercept can improve the estimated result by 2%. (author). 6 refs, 4 figs, 3 tabs
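    A minimal sketch of applying the regression above (the slope 0.0681 per °C is taken from the abstract; the intercept is latitude-dependent and the value used here is purely illustrative):

```python
import math

def precipitable_water(dew_point_c, a):
    """Reitan-type regression ln(w) = a + b*T_d, with the slope
    b = 0.0681 per deg C reported in the abstract. The intercept `a`
    is latitude-dependent and must be fitted locally; w comes out in
    whatever units `a` was fitted in."""
    B = 0.0681
    return math.exp(a + B * dew_point_c)

A_ILLUSTRATIVE = 0.11   # hypothetical intercept for one latitude
for td in (0, 10, 20):
    print(td, round(precipitable_water(td, A_ILLUSTRATIVE), 2))
```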

  1. Assessment of satellite-based precipitation estimates over Paraguay

    Science.gov (United States)

    Oreggioni Weiberlen, Fiorella; Báez Benítez, Julián

    2018-04-01

    Satellite-based precipitation estimates represent a potential alternative source of input data in a plethora of meteorological and hydrological applications, especially in regions characterized by a low density of rain gauge stations. Paraguay provides a good example of a case where the use of satellite-based precipitation could be advantageous. This study aims to evaluate version 7 of the Tropical Rainfall Measurement Mission Multi-Satellite Precipitation Analysis (TMPA V7; 3B42 V7) and version 1.0 of the purely satellite-based product of the Climate Prediction Center Morphing Technique (CMORPH RAW) through their comparison with daily in situ precipitation measurements from 1998 to 2012 over Paraguay. The statistical assessment is conducted with several commonly used indexes. Specifically, to evaluate the accuracy of daily precipitation amounts, mean error (ME), root mean square error (RMSE), BIAS, and coefficient of determination (R²) are used, and to analyze the capability to correctly detect different precipitation intensities, false alarm ratio (FAR), frequency bias index (FBI), and probability of detection (POD) are applied to various rainfall rates (0, 0.1, 0.5, 1, 2, 5, 10, 20, 40, 60, and 80 mm/day). Results indicate that TMPA V7 performs better than CMORPH RAW over Paraguay. TMPA V7 shows higher accuracy in the estimation of daily rainfall volumes and greater precision in the detection of wet days (> 0 mm/day). However, both satellite products show a lower ability to appropriately detect high-intensity precipitation events.
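    The detection indexes named above (POD, FAR, FBI) are derived from a 2x2 contingency table of observed versus estimated exceedances; a minimal sketch with toy daily series, assuming the standard definitions (the abstract does not spell them out):

```python
def categorical_scores(obs, est, threshold):
    """POD, FAR and frequency bias index (FBI) for detecting daily
    precipitation at or above `threshold` (mm/day)."""
    hits = sum(o >= threshold and e >= threshold for o, e in zip(obs, est))
    misses = sum(o >= threshold and e < threshold for o, e in zip(obs, est))
    false_alarms = sum(o < threshold and e >= threshold for o, e in zip(obs, est))
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    fbi = (hits + false_alarms) / (hits + misses) if hits + misses else float("nan")
    return pod, far, fbi

gauge = [0.0, 2.0, 15.0, 0.5, 30.0, 0.0, 8.0]   # toy daily gauge totals (mm)
sat   = [0.0, 0.0, 12.0, 3.0, 22.0, 1.5, 9.0]   # toy satellite estimates (mm)
pod, far, fbi = categorical_scores(gauge, sat, threshold=1.0)
print(pod, far, fbi)
```

A perfect product would give POD = 1, FAR = 0 and FBI = 1; FBI above 1 indicates over-forecasting of events at that threshold.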

  2. Prevalence of HIV among MSM in Europe: comparison of self-reported diagnoses from a large scale internet survey and existing national estimates

    Directory of Open Access Journals (Sweden)

    Marcus Ulrich

    2012-11-01

    Full Text Available Abstract Background Country-level comparisons of HIV prevalence among men having sex with men (MSM) are challenging for a variety of reasons, including differences in the definition and measurement of the denominator group, recruitment strategies, and HIV detection methods. To assess their comparability, self-reported data on HIV diagnoses in a 2010 pan-European MSM internet survey (EMIS) were compared with pre-existing estimates of HIV prevalence in MSM from a variety of European countries. Methods The first pan-European survey of MSM recruited more than 180,000 men from 38 countries across Europe and included questions on the year and result of the last HIV test. HIV prevalence as measured in EMIS was compared with national estimates of HIV prevalence based on studies using biological measurements or modelling approaches, to explore the degree of agreement between the different methods. Existing estimates were taken from Dublin Declaration Monitoring Reports or UNAIDS country fact sheets, and were verified by contacting the nominated contact points for HIV surveillance in EU/EEA countries. Results The EMIS self-reported measurements of HIV prevalence were strongly correlated with existing estimates based on biological measurement and on modelling studies using surveillance data (R² = 0.70 and 0.72, respectively). In most countries HIV-positive MSM appeared disproportionately likely to participate in EMIS, and prevalences as measured in EMIS are approximately twice the pre-existing estimates. Conclusions Comparison of diagnosed HIV prevalence as measured in EMIS with pre-existing estimates based on biological measurements using varied sampling frames (e.g. respondent-driven sampling, time and location sampling) demonstrates a high correlation and suggests similar selection biases in both types of studies.
For comparison with modelled estimates the self-selection bias of the Internet survey with increased participation of men diagnosed with HIV has to be

  3. Exploiting Growing Stock Volume Maps for Large Scale Forest Resource Assessment: Cross-Comparisons of ASAR- and PALSAR-Based GSV Estimates with Forest Inventory in Central Siberia

    Directory of Open Access Journals (Sweden)

    Christian Hüttich

    2014-07-01

    Full Text Available Growing stock volume (GSV) is an important biophysical parameter describing the state and dynamics of the Boreal zone. Validation of GSV maps based on satellite remote sensing is challenging due to the lack of consistent ground reference data. The monitoring and assessment of the remote Russian forest resources of Siberia can only be done by integrating remote sensing techniques and interdisciplinary collaboration. In this paper, we assess the information content of GSV estimates in Central Siberian forests obtained at 25 m resolution from ALOS-PALSAR and at 1 km from ENVISAT-ASAR backscatter data. The estimates were cross-compared against forest inventory data, showing a relative RMSE of 34% for the ASAR-based GSV retrievals and 39.4% for the PALSAR-based GSV estimates. Fragmentation analyses using a MODIS-based land cover dataset revealed an increase of retrieval error with increasing fragmentation of the landscape. Cross-comparisons of multiple SAR-based GSV estimates helped to detect inconsistencies in the forest inventory data and can support an update of outdated forest inventory stands.
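    Relative RMSE as quoted above is presumably the RMSE normalised by the mean inventory GSV; a sketch under that assumption, with invented stand values:

```python
import math

def relative_rmse(obs, est):
    """Relative RMSE (%): RMSE of the estimates normalised by the
    mean of the reference (inventory) values. Assumed definition;
    the paper may normalise differently."""
    n = len(obs)
    rmse = math.sqrt(sum((e - o) ** 2 for o, e in zip(obs, est)) / n)
    return 100.0 * rmse / (sum(obs) / n)

inventory = [120.0, 200.0, 150.0, 90.0]   # toy inventory GSV, m^3/ha
mapped    = [100.0, 230.0, 140.0, 120.0]  # toy SAR-retrieved GSV, m^3/ha
print(round(relative_rmse(inventory, mapped), 1))
```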

  4. Pareto-Optimal Estimates of California Precipitation Change

    Science.gov (United States)

    Langenbrunner, Baird; Neelin, J. David

    2017-12-01

    In seeking constraints on global climate model projections under global warming, one commonly finds that different subsets of models perform well under different objective functions, and these trade-offs are difficult to weigh. Here a multiobjective approach is applied to a large set of subensembles generated from the Climate Model Intercomparison Project phase 5 ensemble. We use observations and reanalyses to constrain tropical Pacific sea surface temperatures, upper level zonal winds in the midlatitude Pacific, and California precipitation. An evolutionary algorithm identifies the set of Pareto-optimal subensembles across these three measures, and these subensembles are used to constrain end-of-century California wet season precipitation change. This methodology narrows the range of projections throughout California, increasing confidence in estimates of positive mean precipitation change. Finally, we show how this technique complements and generalizes emergent constraint approaches for restricting uncertainty in end-of-century projections within multimodel ensembles using multiple criteria for observational constraints.

  5. Pareto-optimal estimates that constrain mean California precipitation change

    Science.gov (United States)

    Langenbrunner, B.; Neelin, J. D.

    2017-12-01

    Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Coupled Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
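
    The Pareto-optimal set is simply the set of subensembles not dominated in any of the three error measures. The paper uses an evolutionary algorithm to search the very large subensemble space; the sketch below shows only the dominance criterion itself, applied to hypothetical error triples:

```python
def pareto_optimal(scores):
    """Indices of non-dominated score tuples (lower error is better in every measure)."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [i for i, s in enumerate(scores)
            if not any(dominates(t, s) for j, t in enumerate(scores) if j != i)]

# Hypothetical (SST error, zonal-wind error, precipitation error) per subensemble:
errors = [(0.2, 0.5, 0.3), (0.1, 0.6, 0.4), (0.3, 0.4, 0.2), (0.2, 0.7, 0.5)]
print(pareto_optimal(errors))  # → [0, 1, 2]  (the last subensemble is dominated by the first)
```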

  6. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. The objectives were to improve the performance and reduce the costs of a large-scale solar heating system. As a result of the project, the benefit/cost ratio can be increased by 40% through dimensioning and optimising the system at the design stage. (orig.)

  7. Large-scale depositional characteristics of the Ulleung Basin and its impact on electrical resistivity and Archie-parameters for gas hydrate saturation estimates

    Science.gov (United States)

    Riedel, Michael; Collett, Timothy S.; Kim, H.-S.; Bahk, J.-J.; Kim, J.-H.; Ryu, B.-J.; Kim, G.-Y.

    2013-01-01

    Gas hydrate saturation estimates were obtained from an Archie analysis of the Logging-While-Drilling (LWD) electrical resistivity logs under consideration of the regional geological framework of sediment deposition in the Ulleung Basin, East Sea of Korea. Porosity was determined from the LWD bulk density log and core-derived values of grain density. In situ measurements of pore-fluid salinity as well as formation temperature define a background trend for pore-fluid resistivity at each drill site. The LWD data were used to define sets of empirical Archie constants for different depth intervals of the logged borehole at all sites drilled during the second Ulleung Basin Gas Hydrate Drilling Expedition (UBGH2). A clustering of data with distinctly different trend lines is evident in the cross-plot of porosity and formation factor for all sites drilled during UBGH2. The reason for the clustering is the difference between hemipelagic sediments (mostly covering the top ∼100 mbsf) and mass-transport deposits (MTD) and/or the occurrence of biogenic opal. For sites located in the north-eastern portion of the Ulleung Basin, a set of individual Archie parameters for a shallow (hemipelagic) depth interval and a deeper MTD zone was obtained. The deeper zone typically shows higher resistivities for the same range of porosities seen in the upper zone, reflecting a shift in sediment properties. The presence of large amounts of biogenic opal (up to and often over 50%, as defined by XRD data) was observed especially at Sites UBGH2-2_1 and UBGH2-2_2 (as well as UBGH1-9 from a previous drilling expedition in 2007). The boundary between these two zones can also easily be identified in gamma-ray logs, which show unusually low readings in the opal-rich interval. Only by incorporating different Archie parameters for the different zones was a reasonable estimate of gas hydrate saturation achieved that also matches results from other techniques such as pore-fluid freshening
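
    The saturation estimate rests on Archie's relation, Sw = ((a·Rw)/(φ^m·Rt))^(1/n), with hydrate saturation Sh = 1 − Sw; the zone-dependent constants a, m, n are exactly what the study fits per depth interval. A minimal sketch with illustrative parameter values:

```python
def hydrate_saturation(rt, phi, rw, a=1.0, m=2.0, n=2.0):
    """Archie's relation: Sw = ((a * Rw) / (phi**m * Rt))**(1/n); Sh = 1 - Sw.
    The defaults for a, m, n are illustrative; the study fits them per depth zone."""
    sw = ((a * rw) / (phi ** m * rt)) ** (1.0 / n)
    return 1.0 - min(sw, 1.0)

# Illustrative log readings: Rt = 3 ohm-m, porosity 0.5, pore-water Rw = 0.25 ohm-m:
print(round(hydrate_saturation(3.0, 0.5, 0.25), 3))  # → 0.423
```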

  8. Estimation of the characteristic energy of electron precipitation

    Directory of Open Access Journals (Sweden)

    C. F. del Pozo

    2002-09-01

    Data from simultaneous observations (on 13 February 1996, 9 November 1998, and 12 February 1999) with the IRIS, DASI and EISCAT systems are employed in the study of the energy distribution of the electron precipitation during substorm activity. The estimation of the characteristic energy of the electron precipitation over the common field of view of IRIS and DASI is discussed. In particular, we look closely at the physical basis of the correspondence between the characteristic energy, the flux-averaged energy, as defined below, and the logarithm of the ratio of the green-light intensity to the square of absorption. This study expands and corrects results presented in the paper by Kosch et al. (2001). It is noticed, moreover, that acceleration associated with diffusion processes in the magnetosphere long before precipitation may be controlling the shape of the energy spectrum. We propose and test a "mixed" distribution for the energy-flux spectrum, exponential at the lower energies and Maxwellian or modified power-law at the higher energies, with a threshold energy separating these two regimes. The energy-flux spectrum at Tromsø, in the 1–320 keV range, is derived from EISCAT electron density profiles in the 70–140 km altitude range and is applied in the "calibration" of the optical intensity and absorption distributions, in order to extrapolate the flux and characteristic energy maps.

    Key words. Ionosphere (auroral ionosphere; particle precipitation; particle acceleration)

  9. Estimation of the characteristic energy of electron precipitation

    Directory of Open Access Journals (Sweden)

    C. F. del Pozo

    Data from simultaneous observations (on 13 February 1996, 9 November 1998, and 12 February 1999) with the IRIS, DASI and EISCAT systems are employed in the study of the energy distribution of the electron precipitation during substorm activity. The estimation of the characteristic energy of the electron precipitation over the common field of view of IRIS and DASI is discussed. In particular, we look closely at the physical basis of the correspondence between the characteristic energy, the flux-averaged energy, as defined below, and the logarithm of the ratio of the green-light intensity to the square of absorption. This study expands and corrects results presented in the paper by Kosch et al. (2001). It is noticed, moreover, that acceleration associated with diffusion processes in the magnetosphere long before precipitation may be controlling the shape of the energy spectrum. We propose and test a "mixed" distribution for the energy-flux spectrum, exponential at the lower energies and Maxwellian or modified power-law at the higher energies, with a threshold energy separating these two regimes. The energy-flux spectrum at Tromsø, in the 1–320 keV range, is derived from EISCAT electron density profiles in the 70–140 km altitude range and is applied in the "calibration" of the optical intensity and absorption distributions, in order to extrapolate the flux and characteristic energy maps.

    Key words. Ionosphere (auroral ionosphere; particle precipitation; particle acceleration)

  10. Large-scale structure of the Universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1978-01-01

    The problems discussed at the "Large-scale Structure of the Universe" symposium are considered at a popular level. Described are the cell structure of the galaxy distribution in the Universe and the principles of mathematical modelling of galaxy distributions. Images of cell structures obtained after computer processing are given. Three hypotheses (vortical, entropic and adiabatic), suggesting different processes of galaxy and galaxy-cluster origin, are discussed, and a considerable advantage of the adiabatic hypothesis is recognized. The relict radiation is considered as a method of directly studying the processes taking place in the Universe. The large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the disturbance properties at the pre-galaxy stage. The discussion of problems pertaining to the hot gas contained in galaxy clusters, and to the interactions within galaxy clusters and with the inter-galaxy medium, is recognized as a notable contribution to the development of theoretical and observational cosmology

  11. Uncertainty Estimation using Bootstrapped Kriging Predictions for Precipitation Isoscapes

    Science.gov (United States)

    Ma, C.; Bowen, G. J.; Vander Zanden, H.; Wunder, M.

    2017-12-01

    Isoscapes are spatial models representing the distribution of stable isotope values across landscapes. Isoscapes of hydrogen and oxygen in precipitation are now widely used in a diversity of fields, including geology, biology, hydrology, and atmospheric science. To generate isoscapes, geostatistical methods are typically applied to extend predictions from limited data measurements. Kriging is a popular method in isoscape modeling, but quantifying the uncertainty associated with the resulting isoscapes is challenging. Applications that use precipitation isoscapes to determine sample origin require estimation of uncertainty. Here we present a simple bootstrap method (SBM) to estimate the mean and uncertainty of the kriged isoscape and compare these results with a generalized bootstrap method (GBM) applied in previous studies. We used hydrogen isotopic data from IsoMAP to explore these two approaches for estimating uncertainty. We conducted 10 simulations for each bootstrap method and found that SBM produced successful kriging predictions in more simulations (9/10) than GBM (4/10). The SBM prediction was closer to the original prediction generated without bootstrapping and had less variance than the GBM prediction. SBM was also tested on IsoMAP datasets with different numbers of observation sites; predictions from datasets with fewer than 40 observation sites were more variable than the original prediction. The approaches we used for estimating uncertainty will be compiled in an R package that is under development. We expect that these robust estimates of precipitation isoscape uncertainty can be applied in diagnosing the origin of samples ranging from various types of water to migratory animals, food products, and humans.
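
    The simple bootstrap idea is to resample the observation sites with replacement, refit the interpolator each time, and summarize the spread of predictions at a target location. The sketch below uses inverse-distance weighting as a dependency-free stand-in for kriging, with hypothetical isotope values:

```python
import random

def idw(x, y, sites, values, power=2.0):
    """Inverse-distance-weighted prediction at (x, y) from (xi, yi) sites."""
    num = den = 0.0
    for (xi, yi), v in zip(sites, values):
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

def bootstrap_prediction(x, y, sites, values, n_boot=200, seed=0):
    """Mean and standard deviation of predictions over bootstrap resamples of the sites."""
    rng = random.Random(seed)
    idx = list(range(len(sites)))
    preds = []
    for _ in range(n_boot):
        sample = [rng.choice(idx) for _ in idx]
        preds.append(idw(x, y, [sites[i] for i in sample], [values[i] for i in sample]))
    mean = sum(preds) / n_boot
    var = sum((p - mean) ** 2 for p in preds) / (n_boot - 1)
    return mean, var ** 0.5

# Hypothetical delta-2H values (permil) at four sites:
sites = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vals = [-60.0, -75.0, -55.0, -80.0]
mean, sd = bootstrap_prediction(0.5, 0.5, sites, vals)
print(round(mean, 1), round(sd, 2))
```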

  12. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R and D.

  13. The estimation of probable maximum precipitation: the case of Catalonia.

    Science.gov (United States)

    Casas, M Carmen; Rodríguez, Raül; Nieto, Raquel; Redaño, Angel

    2008-12-01

    A brief overview of the different techniques used to estimate the probable maximum precipitation (PMP) is presented. As a particular case, the 1-day PMP over Catalonia has been calculated and mapped with a high spatial resolution. For this purpose, the annual maximum daily rainfall series from 145 pluviometric stations of the Instituto Nacional de Meteorología (Spanish Weather Service) in Catalonia have been analyzed. In order to obtain values of PMP, an enveloping frequency factor curve based on the actual rainfall data of stations in the region has been developed. This enveloping curve has been used to estimate 1-day PMP values of all the 145 stations. Applying the Cressman method, the spatial analysis of these values has been achieved. Monthly precipitation climatological data, obtained from the application of Geographic Information Systems techniques, have been used as the initial field for the analysis. The 1-day PMP at 1 km² spatial resolution over Catalonia has been objectively determined, varying from 200 to 550 mm. Structures with wavelength longer than approximately 35 km can be identified and, despite their general concordance, the obtained 1-day PMP spatial distribution shows remarkable differences compared to the annual mean precipitation arrangement over Catalonia.
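
    The enveloping frequency-factor approach is usually Hershfield's method: PMP = mean + Km × standard deviation of the annual-maximum daily series, with Km read from an enveloping curve. A minimal sketch with hypothetical data and an assumed Km:

```python
def hershfield_pmp(annual_maxima, km):
    """Hershfield estimate: PMP = mean + Km * std of the annual-maximum daily series."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    std = (sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)) ** 0.5
    return mean + km * std

# Hypothetical annual maximum daily rainfalls (mm) and an assumed enveloping Km = 8:
series = [64.0, 85.0, 52.0, 110.0, 73.0, 95.0, 60.0, 88.0]
print(round(hershfield_pmp(series, km=8.0), 1))  # → 235.0
```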

  14. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    One of the best ways for agriculture to become independent of precipitation shortages is irrigation. In the seventies and eighties of the last century a number of large-scale sprinkler systems were built in Wielkopolska. At the end of the 1970s, 67 sprinkler systems with a total area of 6400 ha were installed in the Poznan province; the average size of a system reached 95 ha. In 1989 there were 98 systems, covering more than 10 130 ha. The study was conducted on 7 large systems with areas ranging from 230 to 520 hectares in 1986-1998. After the introduction of the market economy in the early 1990s and the ownership changes in agriculture, the large-scale sprinklers underwent significant or total devastation. Land of the State Farms was leased or sold by the State Agricultural Property Agency, and the new owners used the existing sprinklers to a very small extent. This involved a change in crop structure and demand structure and an increase in operating costs; there was also a threefold increase in electricity prices. In practice, the operation of large-scale irrigation encountered barriers of all kinds: limitations of system solutions, supply difficulties, and high levels of equipment failure, none of which encouraged rational use of the available sprinklers. A survey of the local area showed the current status of the remaining irrigation infrastructure. The scheme adopted for the restructuring of Polish agriculture was not the best solution, causing massive destruction of the assets previously invested in the sprinkler systems.

  15. Results of large scale wind climatologically estimations

    Directory of Open Access Journals (Sweden)

    Andrea Kircsi

    2008-05-01

    The aim of this article is to describe the particular field of climatology which analyzes air-movement characteristics regarding the utilization of wind for energy generation. The article describes features of the wind energy potential available in Hungary compared to wind conditions in other areas of the northern quarter-sphere, in order to assist the development of wind energy use in Hungary. Information on the wind climate gives a solid basis for the financial and economic decisions of stakeholders in the field of wind energy utilization.

  16. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model to simulate the runoff from a small catchment in Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate...
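
    In outline, GLUE samples parameter sets from a prior, scores each simulation with an informal likelihood (for example the Nash-Sutcliffe efficiency), and retains the "behavioural" sets above a threshold to form prediction bounds. A toy sketch with a stand-in one-parameter runoff model and hypothetical data:

```python
import random

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, used here as the informal GLUE likelihood."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    sso = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sso

def glue(model, rain, obs, prior, n=1000, threshold=0.5, seed=1):
    """Keep (parameter, likelihood) pairs whose NSE exceeds the behavioural threshold."""
    rng = random.Random(seed)
    behavioural = []
    for _ in range(n):
        theta = prior(rng)
        score = nse(model(theta, rain), obs)
        if score > threshold:
            behavioural.append((theta, score))
    return behavioural

# Stand-in model (runoff = coefficient * rainfall) and hypothetical observations:
model = lambda c, r: [c * x for x in r]
rain = [10.0, 0.0, 25.0, 5.0, 40.0]
obs = [3.2, 0.1, 7.9, 1.4, 12.5]            # consistent with c near 0.31
sets = glue(model, rain, obs, prior=lambda rng: rng.uniform(0.0, 1.0))
print(len(sets) > 0)  # → True
```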

  17. Improving Frozen Precipitation Density Estimation in Land Surface Modeling

    Science.gov (United States)

    Sparrow, K.; Fall, G. M.

    2017-12-01

    The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A common current practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the conterminous US from version 3.22 of the Global Historical Climatology Network - Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR means to remove the elevation trend from the data, and the detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model, and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. The difference in
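
    The elevation-detrending step (fit a linear SLR-vs-elevation trend by least squares, then krige the residuals) can be sketched as follows, with hypothetical station values:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    return a, my - a * mx

def detrend_slr(elevations, slr_means):
    """Remove the linear elevation trend; the residuals are what gets kriged."""
    a, b = linear_fit(elevations, slr_means)
    return [s - (a * e + b) for e, s in zip(elevations, slr_means)], a, b

# Hypothetical station elevations (m) and mean SLR values:
residuals, slope, intercept = detrend_slr([100.0, 500.0, 1000.0, 2000.0],
                                          [9.0, 11.0, 13.0, 18.0])
print(round(slope * 1000, 2))  # SLR increase per km of elevation → 4.7
```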

  18. GPS Estimates of Integrated Precipitable Water Aid Weather Forecasters

    Science.gov (United States)

    Moore, Angelyn W.; Gutman, Seth I.; Holub, Kirk; Bock, Yehuda; Danielson, David; Laber, Jayme; Small, Ivory

    2013-01-01

    Global Positioning System (GPS) meteorology provides enhanced-density, low-latency (30-min resolution), integrated precipitable water (IPW) estimates to NOAA NWS (National Oceanic and Atmospheric Administration National Weather Service) Weather Forecast Offices (WFOs) to provide improved model and satellite data verification capability and more accurate forecasts of extreme weather such as flooding. An early activity of this project was to increase the number of stations contributing to the NOAA Earth System Research Laboratory (ESRL) GPS meteorology observing network in Southern California by about 27 stations. Following this, the Los Angeles/Oxnard and San Diego WFOs began using the enhanced GPS-based IPW measurements provided by ESRL in the 2012 and 2013 monsoon seasons. Forecasters found GPS IPW to be an effective tool in evaluating model performance, and in monitoring monsoon development between weather model runs for improved flood forecasting. GPS stations are multi-purpose, and routine processing for position solutions also yields estimates of tropospheric zenith delays, which can be converted into mm-accuracy PWV (precipitable water vapor) using in situ pressure and temperature measurements; this is the basis of GPS meteorology. NOAA ESRL has implemented this concept with a nationwide distribution of more than 300 "GPSMet" stations providing IPW estimates at sub-hourly resolution, currently used in operational weather models in the U.S.
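
    The zenith-delay-to-PWV conversion uses a dimensionless factor Π that depends on the weighted mean temperature Tm of the atmosphere. A sketch using the commonly cited Bevis et al. refractivity constants (illustrative of the method, not of NOAA's operational implementation):

```python
def pwv_from_zwd(zwd_mm, tm_kelvin):
    """PWV = Pi * ZWD, with Pi = 1e8 / (rho_w * Rv * (k3/Tm + k2')).
    The 1e8 absorbs the refractivity (1e-6) and K/hPa-to-K/Pa (1e-2) unit factors."""
    rho_w = 1000.0       # kg m^-3, density of liquid water
    rv = 461.5           # J kg^-1 K^-1, specific gas constant of water vapour
    k2p = 0.221          # K hPa^-1 (k2', Bevis et al. 1994)
    k3 = 3.739e5         # K^2 hPa^-1
    pi_factor = 1.0e8 / (rho_w * rv * (k3 / tm_kelvin + k2p))
    return pi_factor * zwd_mm

# A 150 mm zenith wet delay with a weighted mean temperature of 275 K:
print(round(pwv_from_zwd(150.0, 275.0), 1))  # → 23.9
```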

  19. Antecedent precipitation index determined from CST estimates of rainfall

    Science.gov (United States)

    Martin, David W.

    1992-01-01

    This paper deals with an experimental calculation of a satellite-based antecedent precipitation index (API). The index is derived from daily rain images produced from infrared images using an improved version of GSFC's Convective/Stratiform Technique (CST). API is a measure of soil moisture, based on the notion that the amount of moisture in the soil at a given time is related to precipitation at earlier times. Four different CST programs, as well as the Geostationary Operational Environmental Satellite (GOES) Precipitation Index developed by Arkin in 1979, are compared on experimental results for the Mississippi Valley during the month of July. Rain images are shown for the best CST code and the ARK program, and comparisons are made as to the accuracy and detail of the results for the two codes. This project demonstrates the feasibility of running the CST on a synoptic scale; the Mississippi Valley case is well suited for testing the feasibility of monitoring soil moisture by means of CST. Preliminary comparisons of CST and ARK indicate significant differences in estimates of rain amount and distribution.
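
    The API itself is the standard recursive index API_t = k·API_{t-1} + P_t, where the decay constant k (assumed here; values around 0.85-0.98 are typical) controls how quickly the soil-moisture memory fades:

```python
def antecedent_precipitation_index(daily_rain, k=0.9, api0=0.0):
    """API_t = k * API_(t-1) + P_t: an exponentially decaying memory of past rain."""
    api = api0
    series = []
    for p in daily_rain:
        api = k * api + p
        series.append(api)
    return series

# Five days of rain (mm): the index decays between events.
print([round(a, 2) for a in antecedent_precipitation_index([10.0, 0.0, 0.0, 5.0, 0.0])])
# → [10.0, 9.0, 8.1, 12.29, 11.06]
```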

  20. Large scale biomimetic membrane arrays

    DEFF Research Database (Denmark)

    Hansen, Jesper Søndergaard; Perry, Mark; Vogel, Jörg

    2009-01-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO2 laser micro-structured 8 x 8 aperture partition arrays with average aperture diameters of 301 +/- 5 mu m. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnological and physiological relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 x 24 and hexagonal 24 x 27 aperture arrays, respectively. The results presented show that the design is suitable for further developments of sensitive biosensor assays...

  1. Conference on Large Scale Optimization

    CERN Document Server

    Hearn, D; Pardalos, P

    1994-01-01

    On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real-world applications and numerical results. Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abro...

  2. Large scale nuclear structure studies

    International Nuclear Information System (INIS)

    Faessler, A.

    1985-01-01

    Results of large-scale nuclear structure studies are reported. The starting point is the Hartree-Fock-Bogoliubov solution with angular momentum and proton and neutron number projection after variation. This model, for number- and spin-projected two-quasiparticle excitations with realistic forces, yields results in sd-shell nuclei as good as those of the 'exact' shell-model calculations. Here the authors present results for the pf-shell nucleus 46Ti and for the A=130 mass region, where they studied 58 different nuclei with the same single-particle energies and the same effective force derived from a meson-exchange potential. They carried out a Hartree-Fock-Bogoliubov variation after mean-field projection in realistic model spaces. In this way, they determine for each yrast state the optimal mean Hartree-Fock-Bogoliubov field. They apply this method to 130Ce and 128Ba using the same effective nucleon-nucleon interaction. (Auth.)

  3. Large-scale river regulation

    International Nuclear Information System (INIS)

    Petts, G.

    1994-01-01

    Recent concern over human impacts on the environment has tended to focus on climatic change, desertification, destruction of tropical rain forests, and pollution. Yet large-scale water projects such as dams, reservoirs, and inter-basin transfers are among the most dramatic and extensive ways in which our environment has been, and continues to be, transformed by human action. Water running to the sea is perceived as a lost resource, floods are viewed as major hazards, and wetlands are seen as wastelands. River regulation, involving the redistribution of water in time and space, is a key concept in socio-economic development. To achieve water and food security, to develop drylands, and to prevent desertification and drought are primary aims for many countries. A second key concept is ecological sustainability. Yet the ecology of rivers and their floodplains is dependent on the natural hydrological regime, and its related biochemical and geomorphological dynamics. (Author)

  4. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent
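
    The two water-balance screens described in (ii) and (iii) amount to simple checks on per-basin long-term means; a sketch following the abstract's logic, with illustrative values:

```python
def screen_basin(p_mm, q_mm, pet_mm):
    """Flag physically implausible long-term water balances.
    p, q, pet: mean annual precipitation, discharge and potential evaporation (mm)."""
    flags = []
    if q_mm > p_mm:
        flags.append("runoff coefficient > 1 (possible precipitation undercatch)")
    if p_mm - q_mm > pet_mm:
        flags.append("losses exceed the potential-evaporation limit")
    return flags

print(screen_basin(p_mm=600.0, q_mm=750.0, pet_mm=500.0))  # snow-undercatch case
print(screen_basin(p_mm=900.0, q_mm=100.0, pet_mm=600.0))  # losses-above-PET case
print(screen_basin(p_mm=800.0, q_mm=300.0, pet_mm=600.0))  # consistent basin, no flags
```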

  5. Concepts for Large Scale Hydrogen Production

    OpenAIRE

    Jakobsen, Daniel; Åtland, Vegar

    2016-01-01

    The objective of this thesis is to perform a techno-economic analysis of large-scale, carbon-lean hydrogen production in Norway, in order to evaluate various production methods and estimate a breakeven price level. Norway possesses vast energy resources and the export of oil and gas is vital to the country's economy. The results of this thesis indicate that hydrogen represents a viable, carbon-lean opportunity to utilize these resources, which can prove key in the future of Norwegian energy e...

  6. Global Precipitation Measurement (GPM) Core Observatory Falling Snow Estimates

    Science.gov (United States)

    Skofronick Jackson, G.; Kulie, M.; Milani, L.; Munchak, S. J.; Wood, N.; Levizzani, V.

    2017-12-01

    Retrievals of falling snow from space represent an important data set for understanding and linking the Earth's atmospheric, hydrological, and energy cycles. Estimates of falling snow must be captured to obtain the true global precipitation water cycle, snowfall accumulations are required for hydrological studies, and without knowledge of the frozen particles in clouds one cannot adequately understand the energy and radiation budgets. This work focuses on comparing the first stable falling snow retrieval products (released May 2017) for the Global Precipitation Measurement (GPM) Core Observatory (GPM-CO), which was launched in February 2014 and carries both an active dual-frequency (Ku- and Ka-band) precipitation radar (DPR) and a passive microwave radiometer (the GPM Microwave Imager, GMI). Five separate GPM-CO falling snow retrieval algorithm products are analyzed, including those from the DPR Matched (Ka+Ku) Scan, DPR Normal Scan (Ku), DPR High Sensitivity Scan (Ka), combined DPR+GMI, and GMI. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new, the different on-orbit instruments don't capture all snow rates equally, and retrieval algorithms differ. Thus, a detailed comparison among the GPM-CO products elucidates advantages and disadvantages of the retrievals. GPM and CloudSat global snowfall evaluation exercises are natural investigative pathways to explore, but caution must be undertaken when analyzing these datasets for comparative purposes. This work includes outlining the challenges associated with comparing GPM-CO to CloudSat satellite snow estimates due to the different sampling, algorithms, and instrument capabilities. We will highlight some factors and assumptions that can be altered or statistically normalized and applied in an effort to make comparisons between GPM and CloudSat global satellite falling snow products as equitable as possible.

  7. Combining Radar and Daily Precipitation Data to Estimate Meaningful Sub-daily Precipitation Extremes

    Science.gov (United States)

    Pegram, G. G. S.; Bardossy, A.

    2016-12-01

    Short-duration extreme rainfalls are important for design. The purpose of this presentation is not to improve the day-by-day estimation of precipitation, but to obtain reasonable statistics for the sub-daily extremes at gauge locations. We are interested specifically in daily and sub-daily extreme values of precipitation at gauge locations. We do not employ the common procedure of using the time series of a control station to determine the missing data values in a target; we are interested in individual rare events, not sequences. The idea is to use radar to disaggregate daily totals to sub-daily amounts. In South Africa, an S-band radar at Bethlehem operated relatively continuously from 1998 to 2003; its scan at 1.5 km above ground [CAPPI] overlapped a dense (10 km spacing) set of 45 pluviometers recording in the same 6-year period. Using this valuable set of data, we are only interested in rare extremes, therefore small to medium values of rainfall depth were neglected, leaving 12 days of ranked daily maxima in each set per year, whose sum typically comprised about 50% of each annual rainfall total. The method presented here uses radar for disaggregating daily gauge totals into sub-daily intervals down to 15 minutes in order to extract the maxima of sub-hourly through to daily rainfall at each of 37 selected radar pixels [1 km square in plan] which contained one of the 45 pluviometers not masked out by the radar footprint. The pluviometer data were aggregated to daily totals, to act as if they were daily read gauges; their only other task was to help in the cross-validation exercise. The extrema were obtained as quantiles by ordering the 12 daily maxima of each interval per year. The unusual and novel goal was not to obtain the reproduction of the precipitation matching in space and time, but to obtain frequency distributions of the gauge and radar extremes, by matching their ranks, which we found to be stable and meaningful in cross-validation tests. We provide and
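
    The core statistical step is to match the ranked radar-derived extremes to the ranked gauge-derived extremes, rather than pairing them event by event; a minimal sketch with hypothetical maxima:

```python
def rank_match(radar_maxima, gauge_maxima):
    """Pair the k-th largest radar extreme with the k-th largest gauge extreme,
    matching frequency distributions rather than individual events in time."""
    return list(zip(sorted(radar_maxima, reverse=True),
                    sorted(gauge_maxima, reverse=True)))

# Hypothetical annual 15-min maxima (mm) from a radar pixel and a daily-read gauge:
radar = [12.0, 30.0, 18.0, 25.0]
gauge = [28.0, 14.0, 33.0, 20.0]
print(rank_match(radar, gauge))
# → [(30.0, 33.0), (25.0, 28.0), (18.0, 20.0), (12.0, 14.0)]
```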

  8. Reviving large-scale projects

    International Nuclear Information System (INIS)

    Desiront, A.

    2003-01-01

    For the past decade, most large-scale hydro development projects in northern Quebec have been put on hold due to land disputes with First Nations. Hydroelectric projects have recently been revived following an agreement signed with Aboriginal communities in the province who recognized the need to find new sources of revenue for future generations. Many Cree are working on the project to harness the waters of the Eastmain River located in the middle of their territory. The work involves building an 890 foot long dam, 30 dikes enclosing a 603 square-km reservoir, a spillway, and a power house with 3 generating units with a total capacity of 480 MW of power for start-up in 2007. The project will require the use of 2,400 workers in total. The Cree Construction and Development Company is working on relations between Quebec's 14,000 Crees and the James Bay Energy Corporation, the subsidiary of Hydro-Quebec which is developing the project. Approximately 10 per cent of the $735-million project has been designated for the environmental component. Inspectors ensure that the project complies fully with environmental protection guidelines. Total development costs for Eastmain-1 are in the order of $2 billion of which $735 million will cover work on site and the remainder will cover generating units, transportation and financial charges. Under the treaty known as the Peace of the Braves, signed in February 2002, the Quebec government and Hydro-Quebec will pay the Cree $70 million annually for 50 years for the right to exploit hydro, mining and forest resources within their territory. The project comes at a time when electricity export volumes to the New England states are down due to growth in Quebec's domestic demand. Hydropower is a renewable and non-polluting source of energy that is one of the most acceptable forms of energy where the Kyoto Protocol is concerned. 
It was emphasized that large-scale hydro-electric projects are needed to provide sufficient energy to meet both

  9. Large Scale Glazed Concrete Panels

    DEFF Research Database (Denmark)

    Bache, Anja Margrethe

    2010-01-01

    Today, there is a lot of focus on the aesthetic potential of concrete surfaces, both globally and locally. World-famous architects such as Herzog & de Meuron, Zaha Hadid, Richard Meier and David Chipperfield challenge the exposure of concrete in their architecture. At home, this trend can be seen...... in the crinkly façade of DR-Byen (the domicile of the Danish Broadcasting Company) by architect Jean Nouvel and Zaha Hadid’s Ordrupgård’s black curved smooth concrete surfaces. Furthermore, one can point to initiatives such as “Synlig beton” (visible concrete) that can be seen on the website www.......synligbeton.dk and spæncom’s aesthetic relief effects by the designer Line Kramhøft (www.spaencom.com). It is my hope that the research-development project “Lasting large scale glazed concrete formwork,” which I am working on at DTU, Department of Architectural Engineering, will be able to complement these. It is a project where I...

  10. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters at the scale of thousands of processors, used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community

  11. Large scale cross hole testing

    International Nuclear Information System (INIS)

    Ball, J.K.; Black, J.H.; Doe, T.

    1991-05-01

    As part of the Site Characterisation and Validation programme, the results of the large-scale cross-hole testing have been used to document hydraulic connections across the SCV block, to test conceptual models of fracture zones, and to obtain hydrogeological properties of the major hydrogeological features. The SCV block is highly heterogeneous. This heterogeneity is not smoothed out even over scales of hundreds of meters. Results of the interpretation validate the hypothesis of the major fracture zones, A, B and H; not much evidence of minor fracture zones is found. The uncertainty in the flow path through the fractured rock causes severe problems in interpretation. Derived values of hydraulic conductivity were found to lie within a narrow range of two to three orders of magnitude. The test design did not allow fracture zones to be tested individually; this could be improved by specifically testing the high hydraulic conductivity regions. The Piezomac and single-hole equipment worked well. Few, if any, of the tests ran long enough to approach equilibrium. Many observation boreholes showed no response. This could be either because there is no hydraulic connection, or because there is a connection but a response is not seen within the time scale of the pumping test. The fractional dimension analysis yielded credible results, and the sinusoidal testing procedure provided an effective means of identifying the dominant hydraulic connections. (10 refs.) (au)

  12. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction. While there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities which lead to the formation of large eddies is also key to understanding the behavior of large-scale fires. Here, modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively parallel computational resources are likely to be necessary in order to adequately address the complex coupled phenomena to the level of detail that is necessary.

  13. Similarities and Improvements of GPM Dual-Frequency Precipitation Radar (DPR) upon TRMM Precipitation Radar (PR) in Global Precipitation Rate Estimation, Type Classification and Vertical Profiling

    Directory of Open Access Journals (Sweden)

    Jinyu Gao

    2017-11-01

    Full Text Available Spaceborne precipitation radars are powerful tools used to acquire adequate and high-quality precipitation estimates with high spatial resolution for a variety of applications in hydrological research. The Global Precipitation Measurement (GPM) mission, which deployed the first spaceborne Ka- and Ku-band dual-frequency radar (DPR), was launched in February 2014 as the upgraded successor of the Tropical Rainfall Measuring Mission (TRMM). This study matches the swath data of TRMM PR and GPM DPR Level 2 products during their overlapping periods at the global scale to investigate their similarities and the improvements of GPM DPR over TRMM PR concerning precipitation amount estimation and type classification. Results show that PR and DPR agree very well with each other in the global distribution of precipitation, while DPR improves the detectability of precipitation events significantly, particularly for light precipitation. The occurrences of total precipitation and of light precipitation (rain rates < 1 mm/h) detected by GPM DPR are ~1.7 and ~2.53 times more than those of PR. With regard to type classification, the dual-frequency (Ka/Ku) and single-frequency (Ku) methods performed similarly. The results are consistent in both the inner (the central 25 beams) and outer (beams 1–12 and 38–49) swaths of DPR. GPM DPR improves precipitation type classification remarkably, reducing the misclassification of clouds and noise signals as precipitation type “other” from 10.14% for TRMM PR to 0.5%. Generally, GPM DPR exhibits the same type division for around 82.89% (71.02%) of the stratiform (convective) precipitation events recognized by TRMM PR. With regard to the freezing-level height and bright-band (BB) height, both radars correspond with each other very well, contributing to the consistency in stratiform precipitation classification. Both heights show clear latitudinal dependence. Results in this study shall contribute to future development of spaceborne

  14. Large-scale galaxy bias

    Science.gov (United States)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  15. Large-scale galaxy bias

    Science.gov (United States)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as following. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  16. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    Science.gov (United States)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, lack of consistent methods, and generally for their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement, via a generic audit and one in-depth site-specific review, as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm-representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially

  17. Application of Statistical Methods of Rain Rate Estimation to Data From The TRMM Precipitation Radar

    Science.gov (United States)

    Meneghini, R.; Jones, J. A.; Iguchi, T.; Okamoto, K.; Liao, L.; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    The TRMM Precipitation Radar is well suited to statistical methods in that the measurements over any given region are sparsely sampled in time. Moreover, the instantaneous rain rate estimates are often of limited accuracy at high rain rates because of attenuation effects, and at light rain rates because of receiver sensitivity. For the estimation of time-averaged rain characteristics over an area, both errors are relevant. By enlarging the space-time region over which the data are collected, the sampling error can be reduced. However, the bias and distortion of the estimated rain distribution generally will remain if estimates at the high and low rain rates are not corrected. In this paper we use the TRMM PR data to investigate the behavior of two statistical methods whose purpose is to estimate the rain rate over large space-time domains. Examination of large-scale rain characteristics provides a useful starting point. The high correlation between the mean and standard deviation of rain rate implies that the conditional distribution of this quantity can be approximated by a one-parameter distribution. This property is used to explore the behavior of the area-time-integral (ATI) methods, where the fractional area above a threshold is related to the mean rain rate. In the usual application of the ATI method a correlation is established between these quantities. However, if a particular form of the rain rate distribution is assumed, and if the ratio of the mean to the standard deviation is known, then not only the mean but the full distribution can be extracted from a measurement of fractional area above a threshold. The second method is an extension of this idea, where the distribution is estimated from data over a range of rain rates chosen in an intermediate range where the effects of attenuation and poor sensitivity can be neglected. The advantage of estimating the distribution itself rather than the mean value is that it yields the fraction of rain contributed by
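    The ATI idea described above can be made concrete with a small sketch. Assuming, purely for illustration, that conditional rain rates are lognormal with a known coefficient of variation (the paper only states that a one-parameter distribution applies, not which one), the fractional area above a threshold pins down the remaining parameter and hence the mean:

```python
import math
from statistics import NormalDist

def mean_rain_from_ati(frac_area, threshold, cv):
    """Area-time-integral sketch: recover the mean conditional rain rate
    from the fractional area above a threshold, assuming a lognormal
    rain-rate distribution with known cv = std/mean. The lognormal
    choice and this function name are illustrative assumptions."""
    nd = NormalDist()
    sigma2 = math.log(1.0 + cv * cv)            # lognormal shape from cv
    sigma = math.sqrt(sigma2)
    # P(R > threshold) = frac_area  =>  solve for the log-scale location mu
    mu = math.log(threshold) - sigma * nd.inv_cdf(1.0 - frac_area)
    return math.exp(mu + 0.5 * sigma2)          # mean of the lognormal

# 30% of the area exceeds 5 mm/h, with cv = 1.5
rate = mean_rain_from_ati(frac_area=0.3, threshold=5.0, cv=1.5)
```

Because the full distribution (mu, sigma) is recovered, any other functional of the rain-rate distribution could be computed the same way, which is the advantage the abstract points to.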

  18. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
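    The SQP framework the abstract builds on can be illustrated with a toy dense iteration for a single equality constraint, solving each QP subproblem's KKT system directly. This is only a sketch of the problem class: Eldersveld's algorithm targets large sparse problems and replaces the exact Hessian used here with a quasi-Newton reduced-Hessian approximation; the function names are invented for the example.

```python
import numpy as np

def sqp_equality(f_grad, f_hess, c, c_jac, c_hess, x0, lam0, iters=20):
    """Minimal SQP for min f(x) s.t. c(x) = 0 (one equality constraint).
    Each iteration solves the Newton-KKT system of the QP subproblem:
        [ H  A^T ] [dx  ]   [ -(g + lam*A) ]
        [ A   0  ] [dlam] = [ -c(x)        ]
    where H is the Hessian of the Lagrangian and A the constraint Jacobian."""
    x, lam = np.asarray(x0, dtype=float), float(lam0)
    for _ in range(iters):
        g, A = f_grad(x), c_jac(x)
        H = f_hess(x) + lam * c_hess(x)        # exact Lagrangian Hessian
        n = x.size
        K = np.block([[H, A.reshape(n, 1)],
                      [A.reshape(1, n), np.zeros((1, 1))]])
        rhs = np.concatenate([-g - lam * A, [-c(x)]])
        step = np.linalg.solve(K, rhs)
        x, lam = x + step[:n], lam + step[n]
    return x, lam

# min x0 + x1  subject to  x0^2 + x1^2 = 2   (solution: x = (-1, -1))
x, lam = sqp_equality(
    f_grad=lambda x: np.array([1.0, 1.0]),
    f_hess=lambda x: np.zeros((2, 2)),
    c=lambda x: x[0] ** 2 + x[1] ** 2 - 2.0,
    c_jac=lambda x: 2.0 * x,
    c_hess=lambda x: 2.0 * np.eye(2),
    x0=[-2.0, -0.5], lam0=1.0)
```

For large-scale problems, the dense KKT solve above is exactly what becomes impractical, motivating the sparse, reduced-gradient machinery the abstract describes.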

  19. Global Precipitation Measurement. Report 7; Bridging from TRMM to GPM to 3-Hourly Precipitation Estimates

    Science.gov (United States)

    Shepherd, J. Marshall; Smith, Eric A.; Adams, W. James (Editor)

    2002-01-01

    Historically, multi-decadal measurements of precipitation from surface-based rain gauges have been available over continents. However oceans remained largely unobserved prior to the beginning of the satellite era. Only after the launch of the first Defense Meteorological Satellite Program (DMSP) satellite in 1987 carrying a well-calibrated and multi-frequency passive microwave radiometer called Special Sensor Microwave/Imager (SSM/I) have systematic and accurate precipitation measurements over oceans become available on a regular basis; see Smith et al. (1994, 1998). Recognizing that satellite-based data are a foremost tool for measuring precipitation, NASA initiated a new research program to measure precipitation from space under its Mission to Planet Earth program in the 1990s. As a result, the Tropical Rainfall Measuring Mission (TRMM), a collaborative mission between NASA and NASDA, was launched in 1997 to measure tropical and subtropical rain. See Simpson et al. (1996) and Kummerow et al. (2000). Motivated by the success of TRMM, and recognizing the need for more comprehensive global precipitation measurements, NASA and NASDA have now planned a new mission, i.e., the Global Precipitation Measurement (GPM) mission. The primary goal of GPM is to extend TRMM's rainfall time series while making substantial improvements in precipitation observations, specifically in terms of measurement accuracy, sampling frequency, Earth coverage, and spatial resolution. This report addresses four fundamental questions related to the transition from current to future global precipitation observations as denoted by the TRMM and GPM eras, respectively.

  20. Enhancement of regional wet deposition estimates based on modeled precipitation inputs

    Science.gov (United States)

    James A. Lynch; Jeffery W. Grimm; Edward S. Corbett

    1996-01-01

    Application of a variety of two-dimensional interpolation algorithms to precipitation chemistry data gathered at scattered monitoring sites, for the purpose of estimating precipitation-borne ionic inputs for specific points or regions, has failed to produce accurate estimates. The accuracy of these estimates is particularly poor in areas of high topographic relief....

  1. Contributions of Precipitation and Soil Moisture Observations to the Skill of Soil Moisture Estimates in a Land Data Assimilation System

    Science.gov (United States)

    Reichle, Rolf H.; Liu, Qing; Bindlish, Rajat; Cosh, Michael H.; Crow, Wade T.; deJeu, Richard; DeLannoy, Gabrielle J. M.; Huffman, George J.; Jackson, Thomas J.

    2011-01-01

    The contributions of precipitation and soil moisture observations to the skill of soil moisture estimates from a land data assimilation system are assessed. Relative to baseline estimates from the Modern Era Retrospective-analysis for Research and Applications (MERRA), the study investigates soil moisture skill derived from (i) model forcing corrections based on large-scale, gauge- and satellite-based precipitation observations and (ii) assimilation of surface soil moisture retrievals from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E). Soil moisture skill is measured against in situ observations in the continental United States at 44 single-profile sites within the Soil Climate Analysis Network (SCAN) for which skillful AMSR-E retrievals are available and at four CalVal watersheds with high-quality distributed sensor networks that measure soil moisture at the scale of land model and satellite estimates. The average skill (in terms of the anomaly time series correlation coefficient R) of AMSR-E retrievals is R=0.39 versus SCAN and R=0.53 versus CalVal measurements. The skill of MERRA surface and root-zone soil moisture is R=0.42 and R=0.46, respectively, versus SCAN measurements, and MERRA surface moisture skill is R=0.56 versus CalVal measurements. Adding information from either precipitation observations or soil moisture retrievals increases surface soil moisture skill levels by DeltaR=0.06-0.08, and root-zone soil moisture skill levels by DeltaR=0.05-0.07. Adding information from both sources increases surface soil moisture skill levels by DeltaR=0.13, and root-zone soil moisture skill by DeltaR=0.11, demonstrating that precipitation corrections and assimilation of satellite soil moisture retrievals contribute similar and largely independent amounts of information.
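    The skill metric used throughout the abstract, the anomaly time-series correlation coefficient R, can be sketched as follows. This is a generic illustration with synthetic data, assuming a common climatology is subtracted from both series; the study derives its climatologies and anomalies from multi-year records, not from this toy setup.

```python
import numpy as np

def anomaly_r(estimate, in_situ, climatology):
    """Anomaly time-series correlation coefficient R: remove a shared
    climatology (seasonal cycle) from both series, then correlate the
    residual anomalies."""
    a = np.asarray(estimate) - climatology
    b = np.asarray(in_situ) - climatology
    return np.corrcoef(a, b)[0, 1]

t = np.arange(100)
clim = 0.25 + 0.05 * np.sin(2 * np.pi * t / 100)   # synthetic seasonal cycle
rng = np.random.default_rng(0)
truth = clim + 0.03 * rng.standard_normal(100)      # "in situ" soil moisture
model = truth + 0.02 * rng.standard_normal(100)     # model/satellite estimate
r = anomaly_r(model, truth, clim)                   # skill of the estimate
```

Removing the climatology matters: without it, the shared seasonal cycle would inflate R and mask how well the estimate tracks day-to-day variations.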

  2. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  3. Comparison of direct and precipitation methods for the estimation of ...

    African Journals Online (AJOL)

    Background: There is increase in use of direct assays for analysis of high and low density lipoprotein cholesterol by clinical laboratories despite differences in performance characteristics with conventional precipitation methods. Calculation of low density lipoprotein cholesterol in precipitation methods is based on total ...

  4. Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling

    Directory of Open Access Journals (Sweden)

    Hyojin Lee

    2015-01-01

    Full Text Available Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares estimation of missing precipitation data through Kth nearest neighborhood (KNN) regression to the five different kernel estimations, and their performance in simulating streamflow using the Soil and Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher-quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
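    The kernel-estimation idea can be sketched with one of the five kernels named above. This is a generic Nadaraya-Watson-style weighted estimate from neighboring stations, assuming Euclidean station coordinates and a fixed bandwidth; the paper's exact estimator and bandwidth selection may differ.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, one of the five kernels compared in the study."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def kernel_estimate(target_xy, station_xy, station_precip, bandwidth):
    """Kernel-weighted estimate of a missing precipitation value at
    target_xy from surrounding stations: weights decay with distance
    and vanish beyond the bandwidth."""
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    w = epanechnikov(d / bandwidth)
    if w.sum() == 0.0:
        return np.nan                     # no station within the bandwidth
    return np.sum(w * station_precip) / w.sum()

stations = np.array([[0.0, 1.0], [1.0, 0.0], [3.0, 3.0]])   # station coords
precip = np.array([10.0, 12.0, 40.0])                        # mm that day
est = kernel_estimate(np.array([0.0, 0.0]), stations, precip, bandwidth=2.0)
```

Here the distant third station falls outside the bandwidth and gets zero weight, so the estimate is a distance-weighted blend of the two nearby stations only.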

  5. Storm induced large scale TIDs observed in GPS derived TEC

    Directory of Open Access Journals (Sweden)

    C. Borries

    2009-04-01

    Full Text Available This work is a first statistical analysis of large-scale traveling ionospheric disturbances (LSTIDs) in Europe using total electron content (TEC) data derived from GNSS measurements. The GNSS receiver network in Europe is dense enough to map the ionospheric perturbation TEC with high horizontal resolution. The derived perturbation TEC maps are analysed to study the effect of space weather events on the ionosphere over Europe.

    Equatorward propagating storm-induced wave packets have been identified during several geomagnetic storms. Characteristic parameters such as velocity, wavelength and direction were estimated from the perturbation TEC maps. Showing a mean wavelength of 2000 km, a mean period of 59 min and a mean phase speed of 684 ms−1, the perturbations are attributed to LSTIDs. The comparison to LSTIDs observed over Japan shows an equal wavelength but a considerably faster phase speed. This might be attributed to differences in the distance to the auroral region or the inclination/declination of the geomagnetic field lines.

    The observed correlation between the LSTID amplitudes and the Auroral Electrojet (AE) index indicates that most of the wave-like perturbations are excited by Joule heating. Particle precipitation effects could not be separated.

  7. Merging Radar Quantitative Precipitation Estimates (QPEs) from the High-resolution NEXRAD Reanalysis over CONUS with Rain-gauge Observations

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Nickl, E.; Seo, D. J.; Kim, B.; Zhang, J.; Qi, Y.

    2015-12-01

    The processing of radar-only precipitation via the reanalysis of the National Mosaic and Multi-Sensor QPE (NMQ/Q2) system, based on the WSR-88D Next-Generation Radar (NEXRAD) network over the Continental United States (CONUS), is complete for the period 2002 to 2011. While this constitutes a unique opportunity to study precipitation processes at higher resolution than conventionally possible (1-km, 5-min), the long-term radar-only product needs to be merged with in-situ information in order to be suitable for hydrological, meteorological and climatological applications. The radar-gauge merging is performed using rain gauge information at daily (Global Historical Climatology Network-Daily: GHCN-D), hourly (Hydrometeorological Automated Data System: HADS), and 5-min (Automated Surface Observing Systems: ASOS; Climate Reference Network: CRN) resolution. The challenges related to incorporating networks of differing resolution and quality to generate long-term, large-scale gridded estimates of precipitation are enormous. In that perspective, we are implementing techniques for merging the rain gauge datasets and the radar-only estimates, such as Inverse Distance Weighting (IDW), Simple Kriging (SK), Ordinary Kriging (OK), and Conditional Bias-Penalized Kriging (CBPK). An evaluation of the different radar-gauge merging techniques is presented, and we provide an estimate of uncertainty for the gridded estimates. In addition, comparisons with a suite of lower-resolution QPEs derived from ground-based radar measurements (Stage IV) are provided in order to give a detailed picture of the improvements and remaining challenges.
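    The simplest of the merging techniques listed above, IDW, can be sketched as a residual correction: interpolate the gauge-minus-radar differences to the target pixel and add them to the radar-only estimate. This is an illustrative sketch only (the function names and the residual-correction formulation are assumptions); the kriging variants (SK, OK, CBPK) replace the inverse-distance weights with weights derived from a spatial covariance model.

```python
import numpy as np

def idw_weights(target, points, power=2.0, eps=1e-12):
    """Normalized inverse-distance weights of `points` for `target`."""
    d = np.linalg.norm(points - target, axis=1)
    w = 1.0 / (d + eps) ** power      # eps guards against a zero distance
    return w / w.sum()

def merge_radar_gauge(radar_value, target, gauge_xy, gauge_obs, radar_at_gauges):
    """Residual-correction merge: IDW-interpolate the gauge-minus-radar
    residuals to the target pixel and add them to the radar estimate."""
    residuals = gauge_obs - radar_at_gauges
    w = idw_weights(target, gauge_xy)
    return radar_value + np.dot(w, residuals)

gauges = np.array([[0.0, 0.0], [2.0, 0.0]])      # gauge locations
obs = np.array([5.0, 7.0])                        # gauge accumulations (mm)
radar_at_g = np.array([4.0, 8.0])                 # radar at those pixels (mm)
merged = merge_radar_gauge(6.0, np.array([0.5, 0.0]), gauges, obs, radar_at_g)
```

The nearer gauge (which the radar underestimated) dominates the correction, pulling the merged value above the radar-only 6.0 mm.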

  8. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  9. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  10. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...

  11. A spatial approach to the modelling and estimation of areal precipitation

    Energy Technology Data Exchange (ETDEWEB)

    Skaugen, T

    1996-12-31

    In hydroelectric power technology it is important that the mean precipitation that falls in an area can be calculated. This doctoral thesis studies how the morphology of rainfall, described by the spatial statistical parameters, can be used to improve interpolation and estimation procedures. It attempts to formulate a theory which includes the relations between the size of the catchment and the size of the precipitation events in the modelling of areal precipitation. The problem of estimating and modelling areal precipitation can be formulated as the problem of estimating an inhomogeneously distributed flux of a certain spatial extent being measured at points in a randomly placed domain. The information contained in the different morphology of precipitation types is used to improve estimation procedures of areal precipitation, by interpolation (kriging) or by constructing areal reduction factors. A new approach to precipitation modelling is introduced where the analysis of the spatial coverage of precipitation at different intensities plays a key role in the formulation of a stochastic model for extreme areal precipitation and in deriving the probability density function of areal precipitation. 127 refs., 30 figs., 13 tabs.
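
    The areal reduction factor mentioned above relates point extremes to areal-mean extremes. Below is a minimal empirical sketch, assuming a matrix of annual maxima from several gauges in the catchment; the quantile-ratio definition and names are a simplification for illustration, not the thesis's stochastic model.

    ```python
    import numpy as np

    def areal_reduction_factor(point_series, T=10):
        """Empirical areal reduction factor: ratio of the T-year areal-mean
        annual maximum to the typical T-year point annual maximum.
        point_series has shape (years, n_gauges) of annual maxima."""
        x = np.asarray(point_series, float)
        p = 1.0 - 1.0 / T                               # non-exceedance probability
        areal = np.quantile(x.mean(axis=1), p)          # T-year areal-mean extreme
        point = np.quantile(x, p, axis=0).mean()        # average T-year point extreme
        return areal / point
    ```

    Because averaging over gauges smooths out localized bursts, the factor is typically below one for convective rainfall and close to one for widespread frontal events.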

  12. Sensitivity technologies for large scale simulation

    International Nuclear Information System (INIS)

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. 
The hybrid automatic differentiation method was applied to a first
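
    The adjoint approach described above yields sensitivities of a scalar output with one extra linear solve, independent of the number of parameters. A minimal sketch for a steady linear system A(p)u = b with objective J = cᵀu, a toy stand-in for the PDE-constrained settings in the report:

    ```python
    import numpy as np

    def adjoint_sensitivity(A, dA_dp, b, c):
        """Adjoint sensitivity of J = c^T u subject to A(p) u = b.
        Differentiating the constraint gives du/dp = -A^{-1} (dA/dp) u, so
        dJ/dp = -lam^T (dA/dp) u where the adjoint lam solves A^T lam = c.
        One adjoint solve covers every parameter in dA_dp."""
        u = np.linalg.solve(A, b)          # forward (state) solve
        lam = np.linalg.solve(A.T, c)      # single adjoint solve
        return np.array([-lam @ (dA @ u) for dA in dA_dp])
    ```

    For a diagonal example A = diag(2, 4), b = c = (1, 1), the analytic sensitivity of J with respect to A[0,0] is -1/A[0,0]² = -0.25, which the adjoint formula reproduces.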

  13. Case Study: Commercialization of sweet sorghum juice clarification for large-scale syrup manufacture

    Science.gov (United States)

    The precipitation and burning of insoluble granules of starch from sweet sorghum juice on heating coils prevented the large scale manufacture of syrup at a new industrial plant in Missouri, USA. To remove insoluble starch granules, a series of small and large-scale experiments were conducted at the...

  14. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continues to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  15. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    The recent trend of large scales simulations of fusion plasma and processing plasmas is briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas and some of these techniques are now applied to analyses of processing plasmas. (author)

  16. Superconducting materials for large scale applications

    International Nuclear Information System (INIS)

    Dew-Hughes, D.

    1975-01-01

    Applications of superconductors capable of carrying large current densities in large-scale electrical devices are examined. Discussions are included on critical current density, superconducting materials available, and future prospects for improved superconducting materials. (JRD)

  17. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso: a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, materials science, and to its usage in large-scale parallel computing.

  18. Large-scale regions of antimatter

    International Nuclear Information System (INIS)

    Grobov, A. V.; Rubin, S. G.

    2015-01-01

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  19. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru [National Research Nuclear University MEPhI (Russian Federation)

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  20. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  1. Evaluating Satellite Products for Precipitation Estimation in Mountain Regions: A Case Study for Nepal

    Directory of Open Access Journals (Sweden)

    Tarendra Lakhankar

    2013-08-01

    Precipitation in mountain regions is often highly variable and poorly observed, limiting abilities to manage water resource challenges. Here, we evaluate remote sensing and ground station-based gridded precipitation products over Nepal against weather station precipitation observations on a monthly timescale. We find that the Tropical Rainfall Measuring Mission (TRMM) 3B-43 precipitation product exhibits little mean bias and reasonable skill in representing precipitation over Nepal. Compared to station observations, the TRMM precipitation product showed an overall Nash-Sutcliffe efficiency of 0.49, which is similar to the skill of the gridded station-based product Asian Precipitation-Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE). The other satellite precipitation products considered (Global Satellite Mapping of Precipitation (GSMaP), the Climate Prediction Center Morphing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS)) were less skillful, as judged by Nash-Sutcliffe efficiency, and, on average, substantially underestimated precipitation compared to station observations, despite their in some cases higher nominal spatial resolution compared to TRMM. None of the products fully captured the dependence of mean precipitation on elevation seen in the station observations. Overall, the TRMM product is promising for use in water resources applications.
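
    The Nash-Sutcliffe efficiency used to rank the products compares the squared product-minus-station errors against the variance of the station observations. A minimal sketch:

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
        1 means a perfect match; 0 means no more skill than the observed mean;
        negative values mean worse than the mean."""
        obs = np.asarray(obs, float)
        sim = np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    ```

    A product that simply reproduced the station climatological mean every month would score exactly zero, which is why 0.49 represents genuine but imperfect skill.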

  2. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting occupy an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy and the political sphere. In this very position, large-scale research and policy consulting lack the institutional guarantees and rational background guarantees that are characteristic of their sociological environment. Thus large-scale research can neither deal with the production of innovative goods under consideration of profitability, nor can it hope for full recognition by the basis-oriented scientific community. Policy consulting neither holds the political system's competence to make decisions, nor can it judge successfully by the critical standards of the established social sciences, at least as far as the present situation is concerned. This intermediary position of large-scale research and policy consulting supports, in three points, the thesis that this is a new form of institutionalization of science: (1) external control, (2) the organizational form, and (3) the theoretical conception of large-scale research and policy consulting. (orig.)

  3. Large-scale derived flood frequency analysis based on continuous simulation

    Science.gov (United States)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into the catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only the whole of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large scale approach overcomes the several
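
    With 10,000 years of simulated discharge available, flood quantiles can be read directly from the empirical distribution of annual maxima rather than extrapolated from a fitted extreme-value law. A minimal sketch (the function name is illustrative):

    ```python
    import numpy as np

    def flood_quantile(annual_maxima, return_period_years):
        """Empirical T-year flood from a long simulated annual-maximum series:
        the discharge with non-exceedance probability 1 - 1/T."""
        p = 1.0 - 1.0 / return_period_years
        return np.quantile(np.asarray(annual_maxima, float), p)
    ```

    For example, the 100-year flood is simply the 99th percentile of the simulated annual maxima; with 10,000 simulated years even the 1000-year flood is still backed by about ten exceedances in the sample.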

  4. Large-scale Homogenization of Bulk Materials in Mammoth Silos

    NARCIS (Netherlands)

    Schott, D.L.

    2004-01-01

    This doctoral thesis concerns the large-scale homogenization of bulk materials in mammoth silos. The objective of this research was to determine the best stacking and reclaiming method for homogenization in mammoth silos. For this purpose a simulation program was developed to estimate the

  5. Precipitation Estimation Using L-Band and C-Band Soil Moisture Retrievals

    Science.gov (United States)

    Koster, Randal D.; Brocca, Luca; Crow, Wade T.; Burgin, Mariko S.; De Lannoy, Gabrielle J. M.

    2016-01-01

    An established methodology for estimating precipitation amounts from satellite-based soil moisture retrievals is applied to L-band products from the Soil Moisture Active Passive (SMAP) and Soil Moisture and Ocean Salinity (SMOS) satellite missions and to a C-band product from the Advanced Scatterometer (ASCAT) mission. The precipitation estimates so obtained are evaluated against in situ (gauge-based) precipitation observations from across the globe. The precipitation estimation skill achieved using the L-band SMAP and SMOS data sets is higher than that obtained with the C-band product, as might be expected given that L-band is sensitive to a thicker layer of soil and thereby provides more information on the response of soil moisture to precipitation. The square of the correlation coefficient between the SMAP-based precipitation estimates and the observations (for aggregations to approximately 100 km and 5 days) is on average about 0.6 in areas of high rain gauge density. Satellite missions specifically designed to monitor soil moisture thus do provide significant information on precipitation variability, information that could contribute to efforts in global precipitation estimation.
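
    The underlying methodology treats the soil as a natural rain gauge: precipitation is recovered by inverting a soil water balance, as the soil moisture tendency plus estimated losses. A minimal sketch of this idea with hypothetical parameter values (the operational algorithm and its calibration are considerably more involved):

    ```python
    import numpy as np

    def soil_moisture_to_rain(sm, dt=1.0, Z=80.0, a=10.0, b=1.5):
        """Estimate rainfall per interval from a relative soil moisture
        series sm (0-1) by inverting a bucket water balance:
        P ~ Z * dS + losses(S) * dt.
        Z (mm water capacity), a, b are hypothetical calibration values."""
        sm = np.asarray(sm, float)
        sm_mid = 0.5 * (sm[:-1] + sm[1:])            # moisture between time steps
        p = Z * np.diff(sm) + a * sm_mid ** b * dt   # storage change + drainage
        return np.maximum(p, 0.0)                    # negative estimates -> no rain
    ```

    A wetting step from 0.2 to 0.3 with these parameters implies roughly 9 mm of rain; drying steps are clipped to zero because evaporation and drainage, not negative rainfall, remove water.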

  6. Estimating mountain basin-mean precipitation from streamflow using Bayesian inference

    Science.gov (United States)

    Henn, Brian; Clark, Martyn P.; Kavetski, Dmitri; Lundquist, Jessica D.

    2015-10-01

    Estimating basin-mean precipitation in complex terrain is difficult due to uncertainty in the topographical representativeness of precipitation gauges relative to the basin. To address this issue, we use Bayesian methodology coupled with a multimodel framework to infer basin-mean precipitation from streamflow observations, and we apply this approach to snow-dominated basins in the Sierra Nevada of California. Using streamflow observations, forcing data from lower-elevation stations, the Bayesian Total Error Analysis (BATEA) methodology and the Framework for Understanding Structural Errors (FUSE), we infer basin-mean precipitation, and compare it to basin-mean precipitation estimated using topographically informed interpolation from gauges (PRISM, the Parameter-elevation Regression on Independent Slopes Model). The BATEA-inferred spatial patterns of precipitation show agreement with PRISM in terms of the rank of basins from wet to dry but differ in absolute values. In some of the basins, these differences may reflect biases in PRISM, because some implied PRISM runoff ratios may be inconsistent with the regional climate. We also infer annual time series of basin precipitation using a two-step calibration approach. Assessment of the precision and robustness of the BATEA approach suggests that uncertainty in the BATEA-inferred precipitation is primarily related to uncertainties in hydrologic model structure. Despite these limitations, time series of inferred annual precipitation under different model and parameter assumptions are strongly correlated with one another, suggesting that this approach is capable of resolving year-to-year variability in basin-mean precipitation.
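
    The inference idea can be illustrated with a toy model in which basin-mean precipitation is a gauge-based estimate times an unknown multiplier, and streamflow constrains that multiplier through a fixed runoff ratio. This is a deliberately crude stand-in for the BATEA/FUSE machinery, with a flat prior evaluated on a grid:

    ```python
    import numpy as np

    def infer_precip_multiplier(gauge_p, obs_q, runoff_ratio=0.7, sigma=1.0):
        """Toy Bayesian inference of a basin-mean precipitation multiplier m:
        modeled flow q = runoff_ratio * m * gauge_p, Gaussian likelihood,
        flat prior over a grid of candidate multipliers."""
        m_grid = np.linspace(0.5, 3.0, 251)
        gauge_p = np.asarray(gauge_p, float)
        obs_q = np.asarray(obs_q, float)
        # residuals of observed flows for each candidate multiplier
        resid = obs_q[None, :] - runoff_ratio * m_grid[:, None] * gauge_p[None, :]
        loglik = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
        post = np.exp(loglik - loglik.max())         # unnormalised posterior
        post /= post.sum()                           # normalise over the grid
        return m_grid, post
    ```

    The posterior width then reflects how strongly streamflow constrains basin precipitation; in the paper this uncertainty is dominated by hydrologic model structure rather than by the data alone.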

  7. Impact of time displaced precipitation estimates for on-line updated models

    DEFF Research Database (Denmark)

    Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen

    2012-01-01

    When an online runoff model is updated from system measurements the requirements to the precipitation estimates change. Using rain gauge data as precipitation input there will be a displacement between the time where the rain intensity hits the gauge and the time where the rain hits the actual...

  8. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is an observed phenomenon whereby galaxies located in the same region have similar properties, such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos (“one-halo conformity”). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, further revealed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity: they have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and now happen to reside within a ~5 Mpc sphere. Second, galaxies in the strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity, as the strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  9. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    The subject of large scale networks is approached from the perspective of the network planner. An analysis of the long-term planning problems is presented, with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. Next, the fundamental technological resources in network technologies are analysed for scalability; here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its...

  10. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases.

  11. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed in the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October.   Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need to address energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  12. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200.000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo...

  13. The Expanded Large Scale Gap Test

    Science.gov (United States)

    1987-03-01

    NSWC TR 86-32, The Expanded Large Scale Gap Test, by T. P. Liddiard and D. Price, Research and Technology Department, March 1987. Approved for public... (...arises, to reduce the spread in the LSGT 50% gap value.) The worst charges, such as those with the highest or lowest densities, the largest re-pressed...

  14. Configuration management in large scale infrastructure development

    NARCIS (Netherlands)

    Rijn, T.P.J. van; Belt, H. van de; Los, R.H.

    2000-01-01

    Large Scale Infrastructure (LSI) development projects, such as the construction of roads, railways and other civil engineering (water)works, are tendered differently today than a decade ago. Traditional workflow requested quotes from construction companies for construction works where the works to be

  15. Large-scale Motion of Solar Filaments

    Indian Academy of Sciences (India)

    tribpo

    Large-scale Motion of Solar Filaments. Pavel Ambrož, Astronomical Institute of the Acad. Sci. of the Czech Republic, CZ-25165 Ondrejov, The Czech Republic. e-mail: pambroz@asu.cas.cz. Alfred Schroll, Kanzelhöehe Solar Observatory of the University of Graz, A-9521 Treffen, Austria. e-mail: schroll@solobskh.ac.at.

  16. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  17. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    , which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  18. The origin of large scale cosmic structure

    International Nuclear Information System (INIS)

    Jones, B.J.T.; Palmer, P.L.

    1985-01-01

    The paper concerns the origin of large scale cosmic structure. The evolution of density perturbations, the nonlinear regime (Zel'dovich's solution and others), the Gott and Rees clustering hierarchy, the spectrum of condensations, and biassed galaxy formation, are all discussed. (UK)

  19. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  20. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  1. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to de- velop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topol- ogy discovery and data

  2. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  3. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    \\catcode`\\@=11 \\ialign{m @th#1hfil ##hfil \\crcr#2\\crcr\\sim\\crcr}}} \\catcode`\\@=12 Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (>~1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in >~{{1} /{4}} of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  4. Stability of large scale interconnected dynamical systems

    International Nuclear Information System (INIS)

    Akpan, E.P.

    1993-07-01

    Large scale systems modelled by a system of ordinary differential equations are considered and necessary and sufficient conditions are obtained for the uniform asymptotic connective stability of the systems using the method of cone-valued Lyapunov functions. It is shown that this model significantly improves the existing models. (author). 9 refs

  5. Rainfall estimation in SWAT: An alternative method to simulate orographic precipitation

    Science.gov (United States)

    Galván, L.; Olías, M.; Izquierdo, T.; Cerón, J. C.; Fernández de Villarán, R.

    2014-02-01

    The input of water from precipitation is one of the most important aspects of a hydrologic model because it controls the basin's water budget. The model should reproduce the amount and distribution of rainfall in the basin, both spatially and temporally. SWAT (Soil and Water Assessment Tool) is one of the most widely used hydrologic models. In this paper the rainfall estimation in SWAT is revised, focusing on the treatment of orographic precipitation. SWAT was applied to the Odiel river basin (SW Spain), with an area of 2300 km2. Results show that SWAT does not realistically reflect the spatial distribution of rainfall in the basin. In relation to orographic precipitation, SWAT estimates the daily precipitation in elevation bands by adding a constant amount to the recorded precipitation at the rain gauge, which depends on the increase in precipitation with altitude and the difference between the mean elevation of each band and the elevation of the recording gauge. This does not reflect rainfall in the subbasin because the increase in precipitation with altitude is not actually constant, but depends on the amount of rainfall. An alternative methodology to represent the temporal distribution of orographic precipitation is proposed. After simulation, the deviation of runoff volume using the SWAT elevation bands was appreciably higher than that obtained with the proposed methodology.
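
    The two schemes contrasted in the abstract can be sketched as follows. The first function paraphrases the constant additive band adjustment (argument names are illustrative, not SWAT source code); the second applies a rainfall-dependent, proportional adjustment in the spirit of the proposed alternative:

    ```python
    def swat_band_precip(p_gauge, z_band, z_gauge, plaps, wet_days):
        """SWAT-style additive adjustment: on wet days, add a constant amount
        per elevation band, derived from a precipitation lapse rate plaps
        (mm per km per year) spread over the year's wet days."""
        if p_gauge <= 0.0:
            return 0.0
        return max(p_gauge + (z_band - z_gauge) / 1000.0 * plaps / wet_days, 0.0)

    def proportional_band_precip(p_gauge, z_band, z_gauge, grad_per_km):
        """Rainfall-dependent alternative: scale the adjustment with the daily
        amount, so wetter days receive a larger orographic increment."""
        return max(p_gauge * (1.0 + grad_per_km * (z_band - z_gauge) / 1000.0), 0.0)
    ```

    With the additive scheme a drizzle day and a storm day in the same band receive the same increment, which is the behaviour the paper criticises; the proportional scheme ties the increment to the recorded amount.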

  6. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). 
Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
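
The screen-and-spread procedure described above can be sketched as follows (a simplified scalar illustration; the actual GPCP analysis operates on gridded zonal means, separately over ocean and land):

```python
def bias_error_estimate(base_mean, product_means):
    """Keep products within +/-50% of the base (GPCP) estimate, then take
    the standard deviation s of the retained products as the estimated
    systematic (bias) error. The relative bias error is s/m, with m the
    base mean precipitation."""
    kept = [p for p in product_means if 0.5 * base_mean <= p <= 1.5 * base_mean]
    n = len(kept)
    mean = sum(kept) / n
    s = (sum((p - mean) ** 2 for p in kept) / n) ** 0.5
    return s, s / base_mean
```

For example, with a base estimate of 100 mm and products of 90, 110, 100 and 200 mm, the 200 mm product is screened out and the spread of the remaining three gives the bias error.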

  7. The impact of reflectivity correction and accounting for raindrop size distribution variability to improve precipitation estimation by weather radar for an extreme low-land mesoscale convective system

    Science.gov (United States)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2014-11-01

    disdrometer information, the best results were obtained when no differentiation between precipitation types (convective, stratiform and undefined) was made, increasing the event accumulations to more than 80% of those observed by gauges. For the randomly optimized procedure, radar precipitation estimates improve further and closely resemble observations when one differentiates between precipitation types. However, the optimal parameter sets are very different from those derived from disdrometer observations. It is therefore questionable whether single disdrometer observations are suitable for large-scale quantitative precipitation estimation, especially if the disdrometer is located relatively far from the main rain event, as was the case in this study. In conclusion, this study shows the benefit of applying detailed error correction methods to improve the quality of the weather radar product, but also confirms the need for caution in using locally obtained disdrometer measurements.

  8. Long-Term Precipitation Analysis and Estimation of Precipitation Concentration Index Using Three Support Vector Machine Methods

    Directory of Open Access Journals (Sweden)

    Milan Gocic

    2016-01-01

    Full Text Available The monthly precipitation data from 29 stations in Serbia during the period 1946–2012 were considered. Precipitation trends were calculated using the linear regression method. Three CLINO periods (1961–1990, 1971–2000, and 1981–2010) in three subregions were analysed. The CLINO 1981–2010 period had a significant increasing trend. The spatial pattern of the precipitation concentration index (PCI) was presented. For the purpose of PCI prediction, three Support Vector Machine (SVM) models, namely, SVM coupled with the discrete wavelet transform (SVM-Wavelet), with the firefly algorithm (SVM-FFA), and with the radial basis function (SVM-RBF), were developed and used. The estimation and prediction results of these models were compared with each other using three statistical indicators: root mean square error, coefficient of determination, and coefficient of efficiency. The experimental results showed that an improvement in predictive accuracy and capability of generalization can be achieved by the SVM-Wavelet approach. Moreover, the results indicated that the proposed SVM-Wavelet model can adequately predict the PCI.
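
The PCI itself is not defined in the abstract; the standard definition (Oliver, 1980) over the 12 monthly totals of a year is PCI = 100 Σp_i² / (Σp_i)², with ~8.3 for perfectly uniform rainfall and 100 when all rain falls in one month:

```python
def precipitation_concentration_index(monthly_totals_mm):
    """PCI (Oliver, 1980) for one year of 12 monthly precipitation totals:
    100 * sum(p_i^2) / (sum(p_i))^2. Values near 8.3 indicate a uniform
    regime; values above ~20 indicate strong seasonal concentration."""
    total = sum(monthly_totals_mm)
    return 100.0 * sum(p * p for p in monthly_totals_mm) / (total * total)
```
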

  9. Large-scale nuclear energy from the thorium cycle

    International Nuclear Information System (INIS)

    Lewis, W.B.; Duret, M.F.; Craig, D.S.; Veeder, J.I.; Bain, A.S.

    1973-02-01

    The thorium fuel cycle in CANDU (Canada Deuterium Uranium) reactors challenges breeders and fusion as the simplest means of meeting the world's large-scale demands for energy for centuries. Thorium oxide fuel allows high power density with excellent neutron economy. The combination of thorium fuel with organic coolant promises easy maintenance and high availability of the whole plant. The total fuelling cost including charges on the inventory is estimated to be attractively low. (author) [fr

  10. Evolutionary leap in large-scale flood risk assessment needed

    OpenAIRE

    Vorogushyn, Sergiy; Bates, Paul D.; de Bruijn, Karin; Castellarin, Attilio; Kreibich, Heidi; Priest, Sally J.; Schröter, Kai; Bagli, Stefano; Blöschl, Günter; Domeneghetti, Alessio; Gouldby, Ben; Klijn, Frans; Lammersen, Rita; Neal, Jeffrey C.; Ridder, Nina

    2018-01-01

    Current approaches for assessing large-scale flood risks contravene the fundamental principles of the flood risk system functioning because they largely ignore basic interactions and feedbacks between atmosphere, catchments, river-floodplain systems and socio-economic processes. As a consequence, risk analyses are uncertain and might be biased. However, reliable risk estimates are required for prioritizing national investments in flood risk mitigation or for appraisal and management of insura...

  11. Emerging large-scale solar heating applications

    International Nuclear Information System (INIS)

    Wong, W.P.; McClung, J.L.

    2009-01-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  12. Emerging large-scale solar heating applications

    Energy Technology Data Exchange (ETDEWEB)

    Wong, W.P.; McClung, J.L. [Science Applications International Corporation (SAIC Canada), Ottawa, Ontario (Canada)

    2009-07-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  13. Challenges for Large Scale Structure Theory

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    I will describe some of the outstanding questions in Cosmology where answers could be provided by observations of the Large Scale Structure of the Universe at late times. I will discuss some of the theoretical challenges which will have to be overcome to extract this information from the observations. I will describe some of the theoretical tools that might be useful to achieve this goal.

  14. Methods for Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1980-05-01

    STANFORD, CALIFORNIA 94305 METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright...typical iteration can be partitioned so that where B is an m X m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  15. Large scale inhomogeneities and the cosmological principle

    International Nuclear Information System (INIS)

    Lukacs, B.; Meszaros, A.

    1984-12-01

    The compatibility of cosmologic principles and possible large scale inhomogeneities of the Universe is discussed. It seems that the strongest symmetry principle which is still compatible with reasonable inhomogeneities, is a full conformal symmetry in the 3-space defined by the cosmological velocity field, but even in such a case, the standard model is isolated from the inhomogeneous ones when the whole evolution is considered. (author)

  16. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61x0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylmethacrylate; (2) gas dynamics and heat transport through a large scale ventilation system consisting of a 0.61x0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamics and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10 μm) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably due to the fire module of the code being a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  17. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  18. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    12 pages, 2 figures This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  19. LAVA: Large scale Automated Vulnerability Addition

    Science.gov (United States)

    2016-05-23

    LAVA: Large-scale Automated Vulnerability Addition Brendan Dolan-Gavitt, Patrick Hulin, Tim Leek, Frederick Ulrich, Ryan Whelan (Authors listed...released, and thus rapidly become stale. We can expect tools to have been trained to detect bugs that have been released. Given the commercial price tag...low TCN) and dead (low liveness) program data is a powerful one for vulnerability injection. The DUAs it identifies are internal program quantities

  20. Large-Scale Transit Signal Priority Implementation

    OpenAIRE

    Lee, Kevin S.; Lozner, Bailey

    2018-01-01

    In 2016, the District Department of Transportation (DDOT) deployed Transit Signal Priority (TSP) at 195 intersections in highly urbanized areas of Washington, DC. In collaboration with a broader regional implementation, and in partnership with the Washington Metropolitan Area Transit Authority (WMATA), DDOT set out to apply a systems engineering–driven process to identify, design, test, and accept a large-scale TSP system. This presentation will highlight project successes and lessons learned.

  1. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km2) was affected by over 250 landslides, classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area and triggered by the pluviometric event of 19 June 1996. In recent decades, landslide hazard and risk analysis has been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions at a large-scale investigation (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory test analysis; inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. This approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease in safety conditions for the simulation related to a 75-year return-time rainfall event, corresponding to an estimated cumulated daily intensity of 280–330 mm. This value can be considered the hydrological triggering
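
The infinite slope model mentioned above has a simple closed form. One common formulation (assumed here; the paper's exact parameterisation may differ) uses a relative water table height m = h_w/z, so rising pore pressure from precipitation lowers the factor of safety:

```python
import math


def infinite_slope_fs(c_eff_kpa, phi_deg, gamma_kn_m3, z_m, m, beta_deg, gamma_w=9.81):
    """Factor of safety for an infinite slope with partial saturation:
    FS = [c' + (gamma - m*gamma_w) * z * cos^2(beta) * tan(phi')]
         / [gamma * z * sin(beta) * cos(beta)]
    where m = h_w / z is the water table height relative to soil depth."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    num = c_eff_kpa + (gamma_kn_m3 - m * gamma_w) * z_m * math.cos(beta) ** 2 * math.tan(phi)
    den = gamma_kn_m3 * z_m * math.sin(beta) * math.cos(beta)
    return num / den
```

With illustrative values (c' = 5 kPa, phi' = 30 deg, gamma = 18 kN/m3, z = 2 m, slope 35 deg), the slope is marginally stable when dry (FS slightly above 1) and fails when fully saturated (FS well below 1), which is the mechanism behind the rainfall-triggered source areas in the study.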

  2. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  3. REAL - Ensemble radar precipitation estimation for hydrology in a mountainous region

    OpenAIRE

    Germann, Urs; Berenguer Ferrer, Marc; Sempere Torres, Daniel; Zappa, Massimiliano

    2009-01-01

    An elegant solution to characterise the residual errors in radar precipitation estimates is to generate an ensemble of precipitation fields. The paper proposes a radar ensemble generator designed for usage in the Alps using LU decomposition (REAL), and presents first results from a real-time implementation coupling the radar ensemble with a semi-distributed rainfall–runoff model for flash flood modelling in a steep Alpine catchment. Each member of the radar ensemble is a possible realisati...
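
The heart of such a generator is decomposing the radar-error covariance matrix and multiplying the factor by white noise to obtain spatially correlated perturbation fields; each perturbed field is one ensemble member. A minimal sketch using the Cholesky factorisation (the symmetric special case of the LU decomposition the abstract names):

```python
import math
import random


def cholesky(cov):
    """Lower-triangular factor L with L L^T = cov (textbook Cholesky,
    valid for symmetric positive definite covariance matrices)."""
    n = len(cov)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(cov[i][i] - s)
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    return L


def correlated_perturbation(L, rng):
    """One perturbation field: L @ white_noise, which has covariance
    L L^T = cov by construction."""
    n = len(L)
    w = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(L[i][j] * w[j] for j in range(i + 1)) for i in range(n)]
```

This is only the covariance-imposing core; the operational REAL generator additionally models the space-time structure of radar errors in the Alps, which is beyond this sketch.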

  4. GPM SLH: Convective Latent Heating Estimated with GPM Dual-frequency Precipitation Radar Data

    Science.gov (United States)

    Takayabu, Y. N.; Hamada, A.; Yokoyama, C.; Ikuta, Y.; Shige, S.; Yamaji, M.; Kubota, T.

    2017-12-01

    Three-dimensional diabatic heating distributions play essential roles in determining the large-scale circulation, as well as in generating the mesoscale circulation associated with tropical convection (e.g. Hartmann et al., 1984; Houze et al., 1982). For mid-latitude systems also, diabatic heating contributes to generating PV anomalies, resulting, for example, in explosive intensification of mid-latitude storms (Boettcher and Wernli, 2011). Previously, with TRMM PR data, we developed a Spectral Latent Heating algorithm (SLH; Shige et al. 2004, etc.) for the 36N-36S region. It was based on the spectral LH tables produced from a simulation utilizing the Goddard Cloud Ensemble Model forced with the TOGA-COARE data. With GPM DPR, the observation region is extended to 65N-65S. Here, we introduce a new version of the SLH algorithm which is applicable also to mid-latitude precipitation. A new global GPM SLH ver. 5 product was released as one of the NASA/JAXA GPM standard products on July 11, 2017. For the GPM SLH mid-latitude algorithm, we employ the Japan Meteorological Agency (JMA)'s high resolution (horizontally 2 km) Local Forecast Model (LFM) to construct the LUTs. In collaboration with JMA's forecast group, forecast data for 8 extratropical cyclone cases were collected and utilized. For mid-latitude precipitation, we have to deal with large temperature gradients and complex relationships between the freezing level and cloud base levels. LUTs are constructed for LH, Q1-QR, and Q2 (Yanai et al. 1973), for six different precipitation types: convective and shallow stratiform LUTs are made against precipitation top heights. For deep stratiform and other precipitation, LUTs are made against maximum precipitation to handle the unknown cloud bases. Finally, three-dimensional convective latent heating is retrieved, utilizing the LUTs and precipitation profile data from GPM 2AKu. As a consistency check, we confirm that the retrieved LH closely resembles the simulated LH. We also confirm the good continuity of

  5. Potential climatic impacts and reliability of large-scale offshore wind farms

    International Nuclear Information System (INIS)

    Wang Chien; Prinn, Ronald G

    2011-01-01

    The vast availability of wind power has fueled substantial interest in this renewable energy source as a potential near-zero greenhouse gas emission technology for meeting future world energy needs while addressing the climate change issue. However, in order to provide even a fraction of the estimated future energy needs, a large-scale deployment of wind turbines (several million) is required. The consequent environmental impacts and the inherent reliability of such a large-scale usage of intermittent wind power would have to be carefully assessed, in addition to the need to lower the high current unit wind power costs. Our previous study (Wang and Prinn 2010 Atmos. Chem. Phys. 10 2053) using a three-dimensional climate model suggested that a large deployment of wind turbines over land to meet about 10% of predicted world energy needs in 2100 could lead to a significant temperature increase in the lower atmosphere over the installed regions. A global-scale perturbation to the general circulation patterns as well as to the cloud and precipitation distribution was also predicted. In the follow-up study reported here, we conducted a set of six additional model simulations using an improved climate model to further address the potential environmental and intermittency issues of large-scale deployment of offshore wind turbines for differing installation areas and spatial densities. In contrast to the previous land installation results, the offshore wind turbine installations are found to cause a surface cooling over the installed offshore regions. This cooling is due principally to the enhanced latent heat flux from the sea surface to the lower atmosphere, driven by an increase in turbulent mixing caused by the wind turbines, which was not entirely offset by the concurrent reduction of mean wind kinetic energy. We found that the perturbation of the large-scale deployment of offshore wind turbines to the global climate is relatively small compared to the case of land

  6. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes a method for evaluating the faultless function of large scale integration circuits (LSI) and very large scale integration circuits (VLSI). The article presents a comparative analysis of the factors which determine the faultless function of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless function of LSI and VLSI. The main part describes a proposed algorithm and program for analysis of the fault rate in LSI and VLSI circuits.

  7. GPM Precipitation Estimates over the Walnut Gulch Experimental Watershed/LTAR site in Southeastern Arizona

    Science.gov (United States)

    Goodrich, D. C.; Tan, J.; Petersen, W. A.; Unkrich, C. C.; Demaria, E. M.; Hazenberg, P.; Lakshmi, V.

    2017-12-01

    Precipitation profiles from the GPM Core Observatory Dual-frequency Precipitation Radar (DPR) form part of the a priori database used in the GPM Goddard Profiling (GPROF) algorithm's passive microwave radiometer retrievals of rainfall. The GPROF retrievals are in turn used as high quality precipitation estimates in gridded products such as IMERG. Due to the variability in and high surface emissivity of land surfaces, GPROF performs precipitation retrievals as a function of surface classes. As such, different surface types may possess different error characteristics, especially over arid regions where high quality ground measurements are often lacking. Importantly, the emissive properties of land also result in GPROF rainfall estimates being driven primarily by the higher frequency radiometer channels (e.g., > 89 GHz), where precipitation signals are most sensitive to coupling between the ice phase and rainfall production. In this study, we evaluate the rainfall estimates from the Ku channel of the DPR as well as GPROF estimates from various passive microwave sensors. Our evaluation is conducted at the level of individual satellite pixels (5 to 15 km in diameter), against a dense network of weighing rain gauges (90 in 150 km2) in the USDA-ARS Walnut Gulch Experimental Watershed and Long-Term Agroecosystem Research (LTAR) site in southeastern Arizona. The multiple gauges in each satellite pixel and precise accumulations around the overpass time allow a spatially and temporally representative comparison between the satellite estimates and the ground reference. Over Walnut Gulch, both the Ku and GPROF estimates struggle to delineate rain from no-rain. Probabilities of detection are relatively high, but false alarm ratios are also high. The rain intensities possess a negative bias across nearly all sensors. It is likely that storm types, arid conditions and the highly variable precipitation regime present a challenge to both rainfall retrieval algorithms. An array of
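
The probability of detection (POD) and false alarm ratio (FAR) cited above come from a standard 2x2 rain/no-rain contingency table built from pixel-gauge pairs; a minimal sketch (the 0.1 mm rain/no-rain threshold is an illustrative assumption):

```python
def detection_scores(satellite_mm, gauge_mm, threshold=0.1):
    """POD and FAR for rain/no-rain delineation from paired satellite and
    gauge accumulations: POD = hits/(hits+misses), FAR = FA/(hits+FA)."""
    hits = misses = false_alarms = 0
    for sat, obs in zip(satellite_mm, gauge_mm):
        sat_rain = sat >= threshold
        obs_rain = obs >= threshold
        if sat_rain and obs_rain:
            hits += 1
        elif obs_rain:
            misses += 1
        elif sat_rain:
            false_alarms += 1
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far
```

A "high POD but also high FAR" result, as reported over Walnut Gulch, means the satellite catches most raining pixels but also flags rain where the gauges saw none.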

  8. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Nusser, Adi [Physics Department and the Asher Space Science Institute-Technion, Haifa 32000 (Israel); Branchini, Enzo [Department of Physics, Universita Roma Tre, Via della Vasca Navale 84, 00146 Rome (Italy); Davis, Marc, E-mail: adi@physics.technion.ac.il, E-mail: branchin@fis.uniroma3.it, E-mail: mdavis@berkeley.edu [Departments of Astronomy and Physics, University of California, Berkeley, CA 94720 (United States)

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.
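
The conversion from a measured proper motion to a 2D transverse velocity is the standard relation v_t [km/s] = 4.74 μ d, with μ in arcsec/yr and d in pc; in the units natural for galaxies (μas/yr and Mpc) the 1e-6 and 1e6 factors cancel, so the prefactor is unchanged:

```python
def transverse_velocity_km_s(mu_uas_per_yr, distance_mpc):
    """Transverse velocity from proper motion: v_t = 4.74 * mu * d.
    mu in micro-arcsec/yr and d in Mpc give v_t directly in km/s,
    since the arcsec->micro-arcsec and pc->Mpc factors cancel."""
    return 4.74 * mu_uas_per_yr * distance_mpc
```

This shows the scale of the challenge: a galaxy at 100 Mpc moving transversely at a typical peculiar velocity of a few hundred km/s has a proper motion of only about 1 μas/yr, at the edge of what astrometry like Gaia's can reach for bright, compact galaxies.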

  9. Surface Runoff Estimation Using SMOS Observations, Rain-gauge Measurements and Satellite Precipitation Estimations. Comparison with Model Predictions

    Science.gov (United States)

    Garcia Leal, Julio A.; Lopez-Baeza, Ernesto; Khodayar, Samiro; Estrela, Teodoro; Fidalgo, Arancha; Gabaldo, Onofre; Kuligowski, Robert; Herrera, Eddy

    Surface runoff is defined as the amount of water that originates from precipitation, does not infiltrate due to soil saturation and therefore flows over the surface. A good estimation of runoff is useful for the design of drainage systems, structures for flood control and soil utilisation. For runoff estimation there exist different methods such as (i) the rational method, (ii) the isochrone method, (iii) the triangular hydrograph, (iv) the non-dimensional SCS hydrograph, (v) the Temez hydrograph, (vi) the kinematic wave model, represented by the dynamic and kinematic equations for a uniform precipitation regime, and (vii) the SCS-CN (Soil Conservation Service Curve Number) model. This work presents a way of estimating precipitation runoff through the SCS-CN model, using SMOS (Soil Moisture and Ocean Salinity) mission soil moisture observations and rain-gauge measurements, as well as satellite precipitation estimations. The area of application is the Jucar River Basin Authority area, where one of the objectives is to develop the SCS-CN model in a spatially distributed way. The results were compared to simulations performed with the 7-km COSMO-CLM (COnsortium for Small-scale MOdelling, COSMO model in CLimate Mode) model. The use of SMOS soil moisture as input to the COSMO-CLM model will certainly improve model simulations.
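
The SCS-CN model named above has a compact standard form: potential retention S is derived from the curve number CN, an initial abstraction Ia = 0.2 S is subtracted, and runoff is Q = (P - Ia)^2 / (P - Ia + S) for P > Ia (in mm; the 0.2 coefficient is the conventional default):

```python
def scs_cn_runoff_mm(precip_mm, cn, lambda_ia=0.2):
    """SCS-CN direct runoff (metric form):
    S = 25400/CN - 254 (mm), Ia = lambda * S,
    Q = (P - Ia)^2 / (P - Ia + S) if P > Ia, else 0."""
    s = 25400.0 / cn - 254.0
    ia = lambda_ia * s
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)
```

In a spatially distributed application, CN would vary per cell with land use and with antecedent wetness, which is where SMOS soil moisture observations can inform the model.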

  10. Estimation of the impact of climate change-induced extreme precipitation events on floods

    Science.gov (United States)

    Hlavčová, Kamila; Lapin, Milan; Valent, Peter; Szolgay, Ján; Kohnová, Silvia; Rončák, Peter

    2015-09-01

    In order to estimate possible changes in the flood regime in the mountainous regions of Slovakia, a simple physically-based concept for climate change-induced changes in extreme 5-day precipitation totals is proposed in the paper. It utilizes regionally downscaled scenarios of the long-term monthly means of the air temperature, specific air humidity and precipitation projected for Central Slovakia by two regional (RCM) and two global circulation models (GCM). A simplified physically-based model for the calculation of short-term precipitation totals under changing air temperatures, which is used to drive a conceptual rainfall-runoff model, was proposed. In the paper a case study of this approach in the upper Hron river basin in Central Slovakia is presented. From the 1981-2010 period, the 20 most extreme 5-day precipitation totals in the basin were selected; only events with continual precipitation over all 5 days were considered. These 5-day precipitation totals were modified according to the RCM and GCM-based scenarios for the future time horizons of 2025, 2050 and 2075. For modelling runoff under the changed 5-day precipitation totals, a conceptual rainfall-runoff model developed at the Slovak University of Technology was used. Changes in extreme mean daily discharges due to climate change were compared with the original flood events and discussed.
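
The abstract does not give the paper's simplified physically-based model explicitly. As a generic stand-in for how a 5-day total might be rescaled with projected warming, a Clausius-Clapeyron-like scaling of about 7% per kelvin (an assumption, not the paper's calibrated model) looks like:

```python
def scale_extreme_precip(p5_mm, delta_t_k, cc_rate=0.07):
    """Illustrative Clausius-Clapeyron-like scaling of a 5-day
    precipitation total: atmospheric moisture-holding capacity grows by
    ~7% per kelvin of warming, compounded over delta_t_k kelvin.
    This is a generic sketch, not the model proposed in the paper."""
    return p5_mm * (1.0 + cc_rate) ** delta_t_k
```

Under this assumption, a 100 mm 5-day total would become roughly 107 mm under 1 K of warming and about 114 mm under 2 K, which is the kind of perturbed input the rainfall-runoff model would then be driven with.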

  11. Optical interconnect for large-scale systems

    Science.gov (United States)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  12. Adaptive visualization for large-scale graph

    International Nuclear Information System (INIS)

    Nakamura, Hiroko; Shinano, Yuji; Ohzahata, Satoshi

    2010-01-01

    We propose an adaptive visualization technique for representing a large-scale hierarchical dataset within limited display space. A hierarchical dataset has nodes and links showing the parent-child relationships between the nodes. These nodes and links are described using graphics primitives. When the number of these primitives is large, it is difficult to recognize the structure of the hierarchical data because many primitives overlap within a limited region. To overcome this difficulty, we propose an adaptive visualization technique for hierarchical datasets. The proposed technique selects an appropriate graph style according to the nodal density in each area. (author)
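
The density-driven style selection can be sketched as follows (the threshold and style names are illustrative, not from the paper): compute nodes per unit of display area and switch to a more aggregated rendering when the region becomes too crowded to read.

```python
def choose_graph_style(num_nodes, area_px, dense_threshold=0.001):
    """Pick a rendering style from nodal density (nodes per pixel):
    a detailed node-link view when sparse, an aggregated view when
    dense. Threshold is an illustrative assumption."""
    density = num_nodes / area_px
    return "aggregated" if density > dense_threshold else "node-link"
```
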

  13. Neutrinos and large-scale structure

    International Nuclear Information System (INIS)

    Eisenstein, Daniel J.

    2015-01-01

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments measuring the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we have now securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  14. Puzzles of large scale structure and gravitation

    International Nuclear Information System (INIS)

    Sidharth, B.G.

    2006-01-01

    We consider the puzzle of cosmic voids bounded by two-dimensional structures of galactic clusters, as well as a puzzle pointed out by Weinberg: How can the mass of a typical elementary particle depend on a cosmic parameter like the Hubble constant? An answer to the first puzzle is proposed in terms of 'scaled' quantum-mechanics-like behaviour which appears at large scales. The second puzzle can be answered by showing that the gravitational mass of an elementary particle has a Machian character (see Ahmed N., Cantorian small world, Mach's principle and the universal mass network. Chaos, Solitons and Fractals 2004;21(4)).

  15. Neutrinos and large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Eisenstein, Daniel J. [Daniel J. Eisenstein, Harvard-Smithsonian Center for Astrophysics, 60 Garden St., MS #20, Cambridge, MA 02138 (United States)

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments measuring the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we have now securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  16. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  17. Large scale phononic metamaterials for seismic isolation

    International Nuclear Information System (INIS)

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-01-01

    In this work, we numerically examine structures that could be characterized as large-scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite-difference time-domain method is used in our calculations to obtain the band structures of the proposed metamaterials.

  18. Evaluating the applicability of four recent satellite–gauge combined precipitation estimates for extreme precipitation and streamflow predictions over the upper Yellow river basin in China

    Science.gov (United States)

    This study aimed to statistically and hydrologically assess the performance of four recent and widely used satellite–gauge combined precipitation estimates (SGPEs), namely CRT, BLD, 3B42CDR, and 3B42, for extreme precipitation and streamflow scenarios over the upper Yellow river basin (UYRB) in ch...

  19. Improving Satellite Quantitative Precipitation Estimation Using GOES-Retrieved Cloud Optical Depth

    Energy Technology Data Exchange (ETDEWEB)

    Stenz, Ronald; Dong, Xiquan; Xi, Baike; Feng, Zhe; Kuligowski, Robert J.

    2016-02-01

    To address significant gaps in ground-based radar coverage and rain gauge networks in the U.S., geostationary satellite quantitative precipitation estimates (QPEs) such as the Self-Calibrating Multivariate Precipitation Retrievals (SCaMPR) can be used to fill in both the spatial and temporal gaps of ground-based measurements. Additionally, with the launch of GOES-R, the temporal resolution of satellite QPEs may be comparable to that of Weather Surveillance Radar-1988 Doppler (WSR-88D) volume scans as GOES images will be available every five minutes. However, while satellite QPEs have strengths in spatial coverage and temporal resolution, they face limitations particularly during convective events. Deep Convective Systems (DCSs) have large cloud shields with similar brightness temperatures (BTs) over nearly the entire system, but widely varying precipitation rates beneath these clouds. Geostationary satellite QPEs relying on the indirect relationship between BTs and precipitation rates often suffer from large errors because anvil regions (little/no precipitation) cannot be distinguished from rain cores (heavy precipitation) using only BTs. However, a combination of BTs and optical depth (τ) has been found to reduce overestimates of precipitation in anvil regions (Stenz et al. 2014). A new rain mask algorithm incorporating both τ and BTs has been developed, and its application to the existing SCaMPR algorithm was evaluated. The performance of the modified SCaMPR was evaluated using traditional skill scores and a more detailed analysis of performance in individual DCS components by utilizing the Feng et al. (2012) classification algorithm. SCaMPR estimates with the new rain mask applied benefited from significantly reduced overestimates of precipitation in anvil regions and overall improvements in skill scores.
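
    The rain-mask idea described above, distinguishing rain cores from non-precipitating anvil by requiring optical depth in addition to cold brightness temperatures, can be sketched as a simple per-pixel test. The thresholds below are illustrative placeholders, not the values used in SCaMPR.

```python
import numpy as np

def rain_mask(bt_k, tau, bt_max=235.0, tau_min=20.0):
    """Flag pixels as potentially raining only if the cloud top is cold
    (low brightness temperature) AND the cloud is optically thick.
    Threshold values here are hypothetical, for illustration only."""
    return (bt_k <= bt_max) & (tau >= tau_min)

bt = np.array([210.0, 215.0, 220.0])   # brightness temperatures (K)
tau = np.array([5.0, 40.0, 60.0])      # retrieved cloud optical depth
print(rain_mask(bt, tau))  # first pixel is cold but thin (anvil) -> False
```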

  20. Responses of Cloud Type Distributions to the Large-Scale Dynamical Circulation: Water Budget-Related Dynamical Phase Space and Dynamical Regimes

    Science.gov (United States)

    Wong, Sun; Del Genio, Anthony; Wang, Tao; Kahn, Brian; Fetzer, Eric J.; L'Ecuyer, Tristan S.

    2015-01-01

    Goals: Water budget-related dynamical phase space; Connect large-scale dynamical conditions to atmospheric water budget (including precipitation); Connect atmospheric water budget to cloud type distributions.

  1. Dipolar modulation of Large-Scale Structure

    Science.gov (United States)

    Yoon, Mijin

    For the last two decades, we have seen a drastic development of modern cosmology based on various observations such as the cosmic microwave background (CMB), type Ia supernovae, and baryonic acoustic oscillations (BAO). This observational evidence has led to a great deal of consensus on the so-called LambdaCDM cosmological model and tight constraints on the cosmological parameters constituting the model. On the other hand, the advancement of cosmology relies on the cosmological principle: the universe is isotropic and homogeneous on large scales. Testing these fundamental assumptions is crucial and will soon become possible given the planned observations ahead. Dipolar modulation is the largest angular anisotropy of the sky, quantified by its direction and amplitude. We measure a large dipolar modulation in the CMB, which mainly originates from our solar system's motion relative to the CMB rest frame. However, we have not yet acquired consistent measurements of dipolar modulations in large-scale structure (LSS), as they require large sky coverage and a large number of well-identified objects. In this thesis, we explore the measurement of dipolar modulation in number counts of LSS objects as a test of statistical isotropy. This thesis is based on two papers that were published in peer-reviewed journals. In Chapter 2 [Yoon et al., 2014], we measured a dipolar modulation in number counts of WISE sources matched with 2MASS. In Chapter 3 [Yoon & Huterer, 2015], we investigated the requirements for detection of the kinematic dipole in future surveys.

  2. Internationalization Measures in Large Scale Research Projects

    Science.gov (United States)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

    Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As the global competition among universities for the recruitment of the brightest minds has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities have little experience in how to conduct these measures and make internationalization a cost-efficient and useful activity. Furthermore, such undertakings permanently have to be justified to the project PIs as important, valuable tools to improve the capacity of the project and the research location. There is a variety of measures suited to supporting universities in international recruitment. These include, e.g., institutional partnerships, research marketing, a welcome culture, support for science mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful measures if interfaced in an effective way. On this poster we display a number of internationalization measures for various target groups, and identify interfaces between project management, university administration, researchers and international partners to work together, exchange information and improve processes in order to be able to recruit, support and retain the brightest minds for the project.

  3. Large scale integration of photovoltaics in cities

    International Nuclear Information System (INIS)

    Strzalka, Aneta; Alam, Nazmul; Duminil, Eric; Coors, Volker; Eicker, Ursula

    2012-01-01

    Highlights: ► We implement the photovoltaics on a large scale. ► We use three-dimensional modelling for accurate photovoltaic simulations. ► We consider the shadowing effect in the photovoltaic simulation. ► We validate the simulated results using detailed hourly measured data. - Abstract: For a large scale implementation of photovoltaics (PV) in the urban environment, building integration is a major issue. This includes installations on roof or facade surfaces with orientations that are not ideal for maximum energy production. To evaluate the performance of PV systems in urban settings and compare it with the building user’s electricity consumption, three-dimensional geometry modelling was combined with photovoltaic system simulations. As an example, the modern residential district of Scharnhauser Park (SHP) near Stuttgart/Germany was used to calculate the potential of photovoltaic energy and to evaluate the local own consumption of the energy produced. For most buildings of the district only annual electrical consumption data was available and only selected buildings have electronic metering equipment. The available roof area for one of these multi-family case study buildings was used for a detailed hourly simulation of the PV power production, which was then compared to the hourly measured electricity consumption. The results were extrapolated to all buildings of the analyzed area by normalizing them to the annual consumption data. The PV systems can produce 35% of the quarter’s total electricity consumption and half of this generated electricity is directly used within the buildings.

  4. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960's and early 1970's, an interest in testing and operating RF cavities at 1.8K motivated the development and construction of four large (300 Watt) 1.8K refrigeration systems. In the past decade, the development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved Niobium-Titanium superconductors have once again created interest in large-scale 1.8K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 Watt 1.8K system which incorporates a new technology, cold compressors, to obtain the low vapor pressure for low-temperature cooling. CEBAF proposes to use cold compressors to obtain 5 kW at 2.0K. Magnetic refrigerators of 10 Watt capacity or higher at 1.8K are now being developed. The state of the art of large-scale refrigeration in the range under 4K will be reviewed. 28 refs., 4 figs., 7 tabs

  5. Large-scale Intelligent Transportation Systems simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and Traffic Management Centers (TMC). The TMC has probe-vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles) and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  6. LARGE SCALE METHOD FOR THE PRODUCTION AND PURIFICATION OF CURIUM

    Science.gov (United States)

    Higgins, G.H.; Crane, W.W.T.

    1959-05-19

    A large-scale process for the production and purification of Cm-242 is described. Aluminum slugs containing Am are irradiated and declad in a NaOH--NaNO3 solution at 85 to 100 deg C. The resulting slurry is filtered and washed with NaOH, NH4OH, and H2O. Recovery of Cm from the filtrate and washings is effected by an Fe(OH)3 precipitation. The precipitates are then combined and dissolved in HCl, and refractory oxides are centrifuged out. These oxides are then fused with Na2CO3 and dissolved in HCl. The solution is evaporated and LiCl solution added. The Cm, rare earths, and anionic impurities are adsorbed on a strong-base anion exchange resin. Impurities are eluted with LiCl--HCl solution; the rare earths and Cm are eluted by HCl. Other ion exchange steps further purify the Cm. The Cm is then precipitated as the fluoride and used in this form or further purified and processed. (T.R.H.)

  7. Estimating Reservoir Inflow Using RADAR Forecasted Precipitation and Adaptive Neuro Fuzzy Inference System

    Science.gov (United States)

    Yi, J.; Choi, C.

    2014-12-01

    Rainfall observation and forecasting using remote sensing such as RADAR (Radio Detection and Ranging) and satellite images are widely used to delineate the increased damage caused by rapid weather changes like regional storms and flash floods. The flood runoff was calculated using an adaptive neuro-fuzzy inference system (a data-driven model) with MAPLE (McGill Algorithm for Precipitation Nowcasting by Lagrangian Extrapolation) forecasted precipitation data as the input variables. The flood estimation results obtained with the neuro-fuzzy technique and RADAR-forecasted precipitation data were evaluated by comparing them with the actual data. The adaptive neuro-fuzzy method was applied to the Chungju Reservoir basin in Korea. Six rainfall events during the flood seasons of 2010 and 2011 were used as input data. The reservoir inflow estimation results were compared according to the rainfall data used for training, checking and testing in the model setup process. The results of the 15 models with different combinations of the input variables were compared and analyzed. Using a relatively larger clustering radius and the biggest flood on record as training data produced better flood estimates in this study. The model using the MAPLE forecasted precipitation data showed better inflow estimation results for the Chungju Reservoir.

  8. Radar rainfall estimation of stratiform winter precipitation in the Belgian Ardennes

    NARCIS (Netherlands)

    Hazenberg, P.; Leijnse, H.; Uijlenhoet, R.

    2011-01-01

    Radars are known for their ability to obtain a wealth of information about spatial storm field characteristics. Unfortunately, rainfall estimates obtained by this instrument are known to be affected by multiple sources of error. Especially for stratiform precipitation systems, the quality of radar

  9. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    survival and recruitment estimates from the French CES scheme to assess the relative contributions of survival and recruitment to overall population changes. He develops a novel approach to modelling survival rates from such multi-site data by using within-year recaptures to provide a covariate of between-year recapture rates. This provided parsimonious models of variation in recapture probabilities between sites and years. The approach provides promising results for the four species investigated and can potentially be extended to similar data from other CES/MAPS schemes. The final paper, by Blandine Doligez, David Thomson and Arie van Noordwijk (Doligez et al., 2004), illustrates how large-scale studies of population dynamics can be important for evaluating the effects of conservation measures. Their study is concerned with the reintroduction of White Stork populations to the Netherlands, where a re-introduction programme started in 1969 had resulted in a breeding population of 396 pairs by 2000. They demonstrate the need to consider a wide range of models in order to account for potential age, time, cohort and "trap-happiness" effects. As the data are based on resightings, such trap-happiness must reflect some form of heterogeneity in resighting probabilities. Perhaps surprisingly, the provision of supplementary food did not influence survival, but it may have had an indirect effect via the alteration of migratory behaviour. Spatially explicit modelling of data gathered at many sites inevitably results in starting models with very large numbers of parameters. The problem is often complicated further by having relatively sparse data at each site, even where the total amount of data gathered is very large. Both Julliard (2004) and Doligez et al. (2004) give explicit examples of problems caused by needing to handle very large numbers of parameters and show how they overcame them for their particular data sets. Such problems involve both the choice of appropriate

  10. Improving real-time estimation of heavy-to-extreme precipitation using rain gauge data via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Seo, Dong-Jun; Siddique, Ridwan; Zhang, Yu; Kim, Dongsoo

    2014-11-01

    A new technique for gauge-only precipitation analysis for improved estimation of heavy-to-extreme precipitation is described and evaluated. The technique is based on a novel extension of classical optimal linear estimation theory in which, in addition to error variance, Type-II conditional bias (CB) is explicitly minimized. When cast in the form of well-known kriging, the methodology yields a new kriging estimator, referred to as CB-penalized kriging (CBPK). CBPK, however, tends to yield negative estimates in areas of no or light precipitation. To address this, an extension of CBPK, referred to herein as extended conditional bias penalized kriging (ECBPK), has been developed which combines the CBPK estimate with a trivial estimate of zero precipitation. To evaluate ECBPK, we carried out real-world and synthetic experiments in which ECBPK and the gauge-only precipitation analysis procedure used in the NWS's Multisensor Precipitation Estimator (MPE) were compared for estimation of point precipitation and mean areal precipitation (MAP), respectively. The results indicate that ECBPK improves hourly gauge-only estimation of heavy-to-extreme precipitation significantly. The improvement is particularly large for estimation of MAP for a range of combinations of basin size and rain gauge network density. This paper describes the technique, summarizes the results and shares ideas for future research.

  11. Radar-derived quantitative precipitation estimation in complex terrain over the eastern Tibetan Plateau

    Science.gov (United States)

    Gou, Yabin; Ma, Yingzhao; Chen, Haonan; Wen, Yixin

    2018-05-01

    Quantitative precipitation estimation (QPE) is one of the important applications of weather radars. However, in complex terrain such as Tibetan Plateau, it is a challenging task to obtain an optimal Z-R relation due to the complex spatial and temporal variability in precipitation microphysics. This paper develops two radar QPE schemes respectively based on Reflectivity Threshold (RT) and Storm Cell Identification and Tracking (SCIT) algorithms using observations from 11 Doppler weather radars and 3264 rain gauges over the Eastern Tibetan Plateau (ETP). These two QPE methodologies are evaluated extensively using four precipitation events that are characterized by different meteorological features. Precipitation characteristics of independent storm cells associated with these four events, as well as the storm-scale differences, are investigated using short-term vertical profile of reflectivity (VPR) clusters. Evaluation results show that the SCIT-based rainfall approach performs better than the simple RT-based method for all precipitation events in terms of score comparison using validation gauge measurements as references. It is also found that the SCIT-based approach can effectively mitigate the local error of radar QPE and represent the precipitation spatiotemporal variability better than the RT-based scheme.
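
    For context, a radar QPE scheme like the RT- or SCIT-based ones above ultimately converts reflectivity to rain rate through a Z-R power law. A minimal sketch using the classic Marshall-Palmer coefficients (the study instead tunes the relation per storm cell):

```python
def rain_rate(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b to get rain rate R (mm/h) from reflectivity.

    a=200, b=1.6 are the textbook Marshall-Palmer values; an adaptive
    scheme such as the SCIT-based one would vary them per storm cell.
    """
    z_linear = 10.0 ** (dbz / 10.0)   # convert dBZ to linear Z (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)

print(round(rain_rate(40.0), 1))  # 40 dBZ -> roughly 11.5 mm/h
```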

  12. Estimation of precipitable water at different locations using surface dew-point

    Science.gov (United States)

    Abdel Wahab, M.; Sharif, T. A.

    1995-09-01

    The Reitan (1963) regression equation of the form ln w = a + b·Td has been examined and tested to estimate precipitable water vapor content from the surface dew-point temperature Td at different locations. The results of this study indicate that the slope b of the above equation has a constant value of 0.0681, while the intercept a changes rapidly with latitude. The use of the variable-intercept technique can improve the estimated result by about 2%.
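
    The regression can be applied directly once the two coefficients are known. In the sketch below, b is the constant slope 0.0681 reported in the abstract, while the latitude-dependent intercept a is a made-up placeholder:

```python
import math

def precipitable_water(td_c, a=0.26, b=0.0681):
    """Reitan-type estimate: ln w = a + b*Td, so w = exp(a + b*Td).

    b = 0.0681 is the constant slope reported in the study; the
    intercept a varies with latitude, and a = 0.26 here is purely
    an illustrative placeholder for one location.
    """
    return math.exp(a + b * td_c)

# Warmer (moister) surface air implies more precipitable water:
print(precipitable_water(20.0) > precipitable_water(10.0))  # -> True
```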

  13. Radiations: large scale monitoring in Japan

    International Nuclear Information System (INIS)

    Linton, M.; Khalatbari, A.

    2011-01-01

    As the consequences of radioactive leaks on their health are a matter of concern for Japanese people, a large-scale epidemiological study has been launched by Fukushima Medical University. It concerns the two million inhabitants of Fukushima Prefecture. At the national level, and with the support of public funds, medical care and follow-up as well as systematic controls are foreseen, notably to check the thyroids of 360,000 young people under 18 years of age and of 20,000 pregnant women in Fukushima Prefecture. Some measurements have already been performed on young children. Despite the sometimes rather low readings, and because they know that some parts of the area are at least as contaminated as the region around Chernobyl, some people are reluctant to go back home.

  14. Large-scale digitizer system, analog converters

    International Nuclear Information System (INIS)

    Althaus, R.F.; Lee, K.L.; Kirsten, F.A.; Wagner, L.J.

    1976-10-01

    Analog to digital converter circuits that are based on the sharing of common resources, including those which are critical to the linearity and stability of the individual channels, are described. Simplicity of circuit composition is valued over other more costly approaches. These are intended to be applied in a large-scale processing and digitizing system for use with high-energy physics detectors such as drift-chambers or phototube-scintillator arrays. Signal distribution techniques are of paramount importance in maintaining adequate signal-to-noise ratio. Noise in both amplitude and time-jitter senses is held sufficiently low so that conversions with 10-bit charge resolution and 12-bit time resolution are achieved

  15. Engineering management of large scale systems

    Science.gov (United States)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high-technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with systems theory, management science, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost-effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large-scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  16. Grid sensitivity capability for large scale structures

    Science.gov (United States)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  17. Large-scale Rectangular Ruler Automated Verification Device

    Science.gov (United States)

    Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie

    2018-03-01

    This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive car, and an automatic data acquisition system. The mechanical design of the device covers the optical axis, the drive unit, the fixture and the wheels. The control system design covers hardware and software: the hardware is mainly based on a single-chip microcomputer, and the software implements the photoelectric autocollimator readout and the automatic data acquisition process. The device can automatically acquire vertical measurement data. The reliability of the device was verified by experimental comparison. The results meet the requirements of the right-angle test procedure.

  18. Large Scale Landform Mapping Using Lidar DEM

    Directory of Open Access Journals (Sweden)

    Türkay Gökgöz

    2015-08-01

    In this study, LIDAR DEM data was used to obtain a primary landform map in accordance with a well-known methodology. This primary landform map was generalized using the Focal Statistics tool (Majority), considering the minimum-area condition in cartographic generalization, in order to obtain landform maps at 1:1000 and 1:5000 scales. Both the primary and the generalized landform maps were verified visually against a hillshaded DEM and an orthophoto. As a result, these maps provide satisfactory visualizations of the landforms. In order to show the effect of generalization, the area of each landform in both the primary and the generalized maps was computed. Consequently, landform maps at large scales can be obtained with the proposed methodology, including generalization, using LIDAR DEM.
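
    The generalization step, replacing each cell's landform class with the most frequent class in its neighborhood, can be reproduced outside a GIS with a few lines of array code. This is a plain re-implementation of a focal majority filter, not the study's exact tool chain:

```python
from collections import Counter
import numpy as np

def majority_filter(grid, size=3):
    """Focal majority: assign each cell the most common class value in
    its size x size neighborhood (edges are padded by replication)."""
    pad = size // 2
    padded = np.pad(grid, pad, mode="edge")
    out = np.empty_like(grid)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            window = padded[i:i + size, j:j + size].ravel()
            out[i, j] = Counter(window.tolist()).most_common(1)[0][0]
    return out

landform = np.array([[1, 1, 2],
                     [1, 2, 2],
                     [1, 1, 1]])   # toy landform-class grid
print(majority_filter(landform))  # isolated class-2 cell is smoothed away
```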

  19. Constructing sites on a large scale

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2011-01-01

    Since the 1990s, the regional scale has regained importance in urban and landscape design. In parallel, the focus in design tasks has shifted from master plans for urban extension to strategic urban transformation projects. A prominent example of a contemporary spatial development approach...... for setting the design brief in a large scale urban landscape in Norway, the Jaeren region around the city of Stavanger. In this paper, we first outline the methodological challenges and then present and discuss the proposed method based on our teaching experiences. On this basis, we discuss aspects...... is the IBA Emscher Park in the Ruhr area in Germany. Over a 10 years period (1988-1998), more than a 100 local transformation projects contributed to the transformation from an industrial to a post-industrial region. The current paradigm of planning by projects reinforces the role of the design disciplines...

  20. Large scale study of tooth enamel

    International Nuclear Information System (INIS)

    Bodart, F.; Deconninck, G.; Martin, M.T.

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and environment of the human body and can be considered the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces in a large-scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analyzed using PIXE, backscattering and nuclear reaction techniques. The results were analyzed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed, and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population. (author)

  1. Testing Einstein's Gravity on Large Scales

    Science.gov (United States)

    Prescod-Weinstein, Chandra

    2011-01-01

    A little over a decade has passed since two teams studying high redshift Type Ia supernovae announced the discovery that the expansion of the universe was accelerating. After all this time, we're still not sure how cosmic acceleration fits into the theory that tells us about the large-scale universe: General Relativity (GR). As part of our search for answers, we have been forced to question GR itself. But how will we test our ideas? We are fortunate enough to be entering the era of precision cosmology, where the standard model of gravity can be subjected to more rigorous testing. Various techniques will be employed over the next decade or two in the effort to better understand cosmic acceleration and the theory behind it. In this talk, I will describe cosmic acceleration, current proposals to explain it, and weak gravitational lensing, an observational effect that allows us to do the necessary precision cosmology.

  2. Large-Scale Astrophysical Visualization on Smartphones

    Science.gov (United States)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays, digital sky surveys and long-duration, high-resolution numerical simulations using high-performance computing and grid systems produce multidimensional astrophysical datasets on the order of several petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover, educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., the formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom-designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  3. Evaluation of spatial and spatiotemporal estimation methods in simulation of precipitation variability patterns

    Science.gov (United States)

    Bayat, Bardia; Zahraie, Banafsheh; Taghavi, Farahnaz; Nasseri, Mohsen

    2013-08-01

    Identification of spatial and spatiotemporal precipitation variations plays an important role in different hydrological applications such as missing data estimation. In this paper, the results of Bayesian maximum entropy (BME) and ordinary kriging (OK) are compared for modeling spatial and spatiotemporal variations of annual precipitation with and without incorporating elevation variations. The study area of this research is Namak Lake watershed located in the central part of Iran with an area of approximately 90,000 km2. The BME and OK methods have been used to model the spatial and spatiotemporal variations of precipitation in this watershed, and their performances have been evaluated using cross-validation statistics. The results of the case study have shown the superiority of BME over OK in both spatial and spatiotemporal modes. The results have shown that BME estimates are less biased and more accurate than OK. The improvements in the BME estimates are mostly related to incorporating hard and soft data in the estimation process, which resulted in more detailed and reliable results. Estimation error variance for BME results is less than OK estimations in the study area in both spatial and spatiotemporal modes.
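
The cross-validation comparison described above can be illustrated with a minimal ordinary kriging baseline. This is a toy sketch, not the study's BME or OK implementation: the exponential covariance model, its sill and range parameters, and the synthetic gauge data are illustrative assumptions.

```python
import numpy as np

def ok_weights(coords, target, sill=1.0, crange=50.0):
    """Solve the ordinary kriging system for one target location,
    using an exponential covariance model C(h) = sill * exp(-h/crange)."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = sill * np.exp(-d / crange)
    A[n, n] = 0.0                                   # Lagrange multiplier row/column
    d0 = np.linalg.norm(coords - target, axis=-1)
    b = np.append(sill * np.exp(-d0 / crange), 1.0)
    return np.linalg.solve(A, b)[:n]

def loo_cv(coords, values, **kw):
    """Leave-one-out cross-validation: return bias and RMSE of OK estimates."""
    errs = []
    for i in range(len(values)):
        keep = np.arange(len(values)) != i
        w = ok_weights(coords[keep], coords[i], **kw)
        errs.append(w @ values[keep] - values[i])
    errs = np.asarray(errs)
    return errs.mean(), np.sqrt((errs ** 2).mean())

g = np.random.default_rng(0)
pts = g.uniform(0, 100, size=(25, 2))               # synthetic gauge locations
vals = np.sin(pts[:, 0] / 30.0) + 0.1 * g.standard_normal(25)
bias, rmse = loo_cv(pts, vals)
```

The same leave-one-out loop, run for two competing estimators, yields exactly the kind of bias and error-variance comparison the abstract reports.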

  4. Hydrological Storage Length Scales Represented by Remote Sensing Estimates of Soil Moisture and Precipitation

    Science.gov (United States)

    Akbar, Ruzbeh; Short Gianotti, Daniel; McColl, Kaighin A.; Haghighi, Erfan; Salvucci, Guido D.; Entekhabi, Dara

    2018-03-01

    The soil water content profile is often well correlated with the soil moisture state near the surface. They share mutual information such that analysis of surface-only soil moisture is, at times and in conjunction with precipitation information, reflective of deeper soil fluxes and dynamics. This study examines the characteristic length scale, or effective depth Δz, of a simple active hydrological control volume. The volume is described only by precipitation inputs and soil water dynamics evident in surface-only soil moisture observations. To proceed, first an observation-based technique is presented to estimate the soil moisture loss function based on analysis of soil moisture dry-downs and its successive negative increments. Then, the length scale Δz is obtained via an optimization process wherein the root-mean-squared (RMS) differences between surface soil moisture observations and its predictions based on water balance are minimized. The process is entirely observation-driven. The surface soil moisture estimates are obtained from the NASA Soil Moisture Active Passive (SMAP) mission and precipitation from the gauge-corrected Climate Prediction Center daily global precipitation product. The length scale Δz exhibits a clear east-west gradient across the contiguous United States (CONUS), such that large Δz depths (>200 mm) are estimated in wetter regions with larger mean precipitation. The median Δz across CONUS is 135 mm. The spatial variance of Δz is predominantly explained and influenced by precipitation characteristics. Soil properties, especially texture in the form of sand fraction, as well as the mean soil moisture state have a lesser influence on the length scale.
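
The optimization step described above can be sketched in a few lines: a toy water balance with an assumed linear loss function (the paper estimates the loss function from dry-downs, which is not reproduced here), and a grid search for the effective depth Δz that minimizes the RMS prediction error.

```python
import numpy as np

def predict_sm(s0, precip, dz, loss=4.0):
    """One-step water balance prediction of surface soil moisture:
    s[t+1] = s[t] + (P[t] - loss * s[t]) / dz, with a toy linear loss term."""
    s = np.empty(len(precip) + 1)
    s[0] = s0
    for t, p in enumerate(precip):
        s[t + 1] = np.clip(s[t] + (p - loss * s[t]) / dz, 0.0, 1.0)
    return s

def fit_dz(obs, precip, candidates):
    """Grid search: pick the effective depth dz (mm) minimizing the RMS
    difference between observed and water-balance-predicted moisture."""
    rms = [np.sqrt(np.mean((predict_sm(obs[0], precip, dz)[1:] - obs[1:]) ** 2))
           for dz in candidates]
    return candidates[int(np.argmin(rms))]

g = np.random.default_rng(1)
P = g.exponential(2.0, size=200)            # daily precipitation inputs (mm)
truth = predict_sm(0.3, P, dz=135.0)        # synthetic "observations" with dz = 135 mm
grid = np.arange(50.0, 300.0, 5.0)
best = fit_dz(truth, P, grid)               # recovers a depth near 135 mm
```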

  5. Local likelihood estimation of complex tail dependence structures in high dimensions, applied to US precipitation extremes

    KAUST Repository

    Camilo, Daniela Castro

    2017-10-02

    In order to model the complex non-stationary dependence structure of precipitation extremes over the entire contiguous U.S., we propose a flexible local approach based on factor copula models. Our sub-asymptotic spatial modeling framework yields non-trivial tail dependence structures, with a weakening dependence strength as events become more extreme, a feature commonly observed with precipitation data but not accounted for in classical asymptotic extreme-value models. To estimate the local extremal behavior, we fit the proposed model in small regional neighborhoods to high threshold exceedances, under the assumption of local stationarity. This allows us to gain in flexibility, while making inference for such a large and complex dataset feasible. Adopting a local censored likelihood approach, inference is made on a fine spatial grid, and local estimation is performed taking advantage of distributed computing resources and of the embarrassingly parallel nature of this estimation procedure. The local model is efficiently fitted at all grid points, and uncertainty is measured using a block bootstrap procedure. An extensive simulation study shows that our approach is able to adequately capture complex, non-stationary dependencies, while our study of U.S. winter precipitation data reveals interesting differences in local tail structures over space, which has important implications on regional risk assessment of extreme precipitation events. A comparison between past and current data suggests that extremes in certain areas might be slightly wider in extent nowadays than during the first half of the twentieth century.
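
The block bootstrap used above for uncertainty quantification can be sketched as follows; this is a generic moving-block bootstrap on a synthetic series, not the paper's spatial implementation, and the block length and statistic are illustrative choices.

```python
import numpy as np

def block_bootstrap(x, block_len, n_boot, stat=np.mean, seed=0):
    """Moving-block bootstrap: resample contiguous blocks (preserving
    short-range dependence) and return the bootstrap distribution of `stat`."""
    g = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    out = np.empty(n_boot)
    for b in range(n_boot):
        starts = g.integers(0, n - block_len + 1, size=n_blocks)
        sample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        out[b] = stat(sample)
    return out

g = np.random.default_rng(42)
series = g.gamma(2.0, 5.0, size=500)            # toy precipitation-like series
dist = block_bootstrap(series, block_len=20, n_boot=500)
lo, hi = np.percentile(dist, [2.5, 97.5])       # 95% interval for the mean
```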

  6. Local likelihood estimation of complex tail dependence structures in high dimensions, applied to US precipitation extremes

    KAUST Repository

    Camilo, Daniela Castro; Huser, Raphaël

    2017-01-01

    In order to model the complex non-stationary dependence structure of precipitation extremes over the entire contiguous U.S., we propose a flexible local approach based on factor copula models. Our sub-asymptotic spatial modeling framework yields non-trivial tail dependence structures, with a weakening dependence strength as events become more extreme, a feature commonly observed with precipitation data but not accounted for in classical asymptotic extreme-value models. To estimate the local extremal behavior, we fit the proposed model in small regional neighborhoods to high threshold exceedances, under the assumption of local stationarity. This allows us to gain in flexibility, while making inference for such a large and complex dataset feasible. Adopting a local censored likelihood approach, inference is made on a fine spatial grid, and local estimation is performed taking advantage of distributed computing resources and of the embarrassingly parallel nature of this estimation procedure. The local model is efficiently fitted at all grid points, and uncertainty is measured using a block bootstrap procedure. An extensive simulation study shows that our approach is able to adequately capture complex, non-stationary dependencies, while our study of U.S. winter precipitation data reveals interesting differences in local tail structures over space, which has important implications on regional risk assessment of extreme precipitation events. A comparison between past and current data suggests that extremes in certain areas might be slightly wider in extent nowadays than during the first half of the twentieth century.

  7. Cosmological parameters from large scale structure - geometric versus shape information

    CERN Document Server

    Hamann, Jan; Lesgourgues, Julien; Rampf, Cornelius; Wong, Yvonne Y Y

    2010-01-01

    The matter power spectrum as derived from large scale structure (LSS) surveys contains two important and distinct pieces of information: an overall smooth shape and the imprint of baryon acoustic oscillations (BAO). We investigate the separate impact of these two types of information on cosmological parameter estimation, and show that for the simplest cosmological models, the broad-band shape information currently contained in the SDSS DR7 halo power spectrum (HPS) is by far superseded by geometric information derived from the baryonic features. An immediate corollary is that contrary to popular beliefs, the upper limit on the neutrino mass m_\

  8. Precipitation areal-reduction factor estimation using an annual-maxima centered approach

    Science.gov (United States)

    Asquith, W.H.; Famiglietti, J.S.

    2000-01-01

    The adjustment of precipitation depth of a point storm to an effective (mean) depth over a watershed is important for characterizing rainfall-runoff relations and for cost-effective designs of hydraulic structures when design storms are considered. A design storm is the precipitation point depth having a specified duration and frequency (recurrence interval). Effective depths are often computed by multiplying point depths by areal-reduction factors (ARF). ARF range from 0 to 1, vary according to storm characteristics, such as recurrence interval, and are a function of watershed characteristics, such as watershed size, shape, and geographic location. This paper presents a new approach for estimating ARF and includes applications for the 1-day design storm in Austin, Dallas, and Houston, Texas. The approach, termed 'annual-maxima centered,' specifically considers the distribution of concurrent precipitation surrounding an annual-precipitation maximum, which is a feature not seen in other approaches. The approach does not require the prior spatial averaging of precipitation, explicit determination of spatial correlation coefficients, nor explicit definition of a representative area of a particular storm in the analysis. The annual-maxima centered approach was designed to exploit the wide availability of dense precipitation gauge data in many regions of the world. The approach produces ARF that decrease more rapidly than those from TP-29. Furthermore, the ARF from the approach decay rapidly with increasing recurrence interval of the annual-precipitation maxima. (C) 2000 Elsevier Science B.V.
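
A minimal sketch of the annual-maxima centered idea, under strong simplifying assumptions (a dense synthetic gauge network, 1-day depths, no frequency analysis), might look like this; it is not the paper's full procedure.

```python
import numpy as np

def arf_annual_maxima_centered(daily, years):
    """Toy annual-maxima centered ARF: for each year, locate the gauge with
    the annual-maximum 1-day depth, then divide the mean of the depths
    observed at all gauges on that same day by that maximum depth.
    `daily` has shape (n_days, n_gauges); `years` labels each day."""
    ratios = []
    for y in np.unique(years):
        block = daily[years == y]                    # all days in year y
        day, gauge = np.unravel_index(block.argmax(), block.shape)
        concurrent = block[day]                      # depths at every gauge that day
        ratios.append(concurrent.mean() / concurrent[gauge])
    return float(np.mean(ratios))

g = np.random.default_rng(3)
rain = g.gamma(0.3, 8.0, size=(3650, 15))            # 10 years x 15 gauges, synthetic
yrs = np.repeat(np.arange(10), 365)
arf = arf_annual_maxima_centered(rain, yrs)          # lies between 0 and 1
```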

  9. Stochastic characterization of regional circulation patterns for climate model diagnosis and estimation of local precipitation

    International Nuclear Information System (INIS)

    Zorita, E.; Hughes, J.P.

    1993-01-01

    Two statistical approaches for linking large-scale atmospheric circulation patterns and daily local rainfall are described and applied to several GCM (general circulation model) climate simulations. The ultimate objective is to simulate local precipitation associated with alternative climates. The index stations are located near the West and East North American coasts. The first method is based on CART (classification and regression tree) analysis. It finds the classification of observed daily SLP (sea level pressure) fields into weather types that are most strongly associated with the presence/absence of rainfall at a set of index stations. The best results were obtained for winter rainfall on the West Coast, where a set of physically reasonable weather types could be identified, whereas for the East Coast the rainfall process seemed to be spatially less coherent. The GCM simulations were validated against observations in terms of probability of occurrence and survival time of these weather states. Some discrepancies were found but there was no systematic bias, indicating that this behavior depends on the particular dynamics of each model. This classification method was then used for the generation of daily rainfall time series from the daily SLP fields of historical observations and of the GCM simulations. Whereas the mean rainfall and probability distributions were rather well replicated, the simulated dry periods were in all cases shorter than in the rainfall observations. The second rainfall generator is based on the analog method and uses information on the evolution of the SLP field over several previous days. It was found to perform reasonably well, although some downward bias in the simulated rainfall persistence was still present. Rainfall changes in a 2xCO2 climate were investigated by applying both methods to the output of a greenhouse-gas experiment. The simulated precipitation changes were small. (orig.)
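
The analog method mentioned above can be sketched minimally: match a query SLP field to its most similar historical field and reuse that day's observed rainfall. This toy version uses a single-day Euclidean match, whereas the paper's generator also uses the evolution of the SLP field over several previous days.

```python
import numpy as np

def analog_rainfall(slp_history, rain_history, slp_query):
    """Analog method sketch: for each query SLP field, find the most similar
    historical SLP field (Euclidean distance over grid points) and return
    the rainfall observed on that historical day."""
    d = np.linalg.norm(slp_history[None, :, :] - slp_query[:, None, :], axis=-1)
    idx = d.argmin(axis=1)                     # index of best analog per query
    return rain_history[idx]

g = np.random.default_rng(7)
slp_hist = g.standard_normal((300, 20))        # 300 days x 20 grid points (toy)
rain_hist = g.gamma(0.5, 4.0, size=300)        # rainfall observed on those days
queries = slp_hist[:5] + 0.01 * g.standard_normal((5, 20))
sim = analog_rainfall(slp_hist, rain_hist, queries)
```

Because the queries are tiny perturbations of the first five historical fields, the generator returns those days' rainfall, illustrating the resampling nature of the method.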

  10. An integrated model for assessing both crop productivity and agricultural water resources at a large scale

    Science.gov (United States)

    Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2012-12-01

    Agricultural production utilizes regional resources (e.g. river water and groundwater) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate changes and increasing demand due to population growth and economic development would intensively affect the availability of water resources for agricultural production. While many studies have assessed the impacts of climate change on agriculture, few studies dynamically account for changes in both water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Moreover, irrigation management in response to subseasonal variability in weather and crop condition varies by region and crop. To deal with such variations, we used the Markov chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimation. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consists of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model was based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoffs simulated by the land surface sub-model were input to the river routing sub-model of the H08 model. A part of the regional water resources available for agriculture, simulated by the H08 model, was input as irrigation water to the land surface sub-model. The timing and amount of irrigation water were simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed. Additionally, the model accurately reproduced the trends and interannual variations of crop yields.
To demonstrate the usefulness of the integrated model, we compared two types of impact assessment of

  11. An "Ensemble Approach" to Modernizing Extreme Precipitation Estimation for Dam Safety Decision-Making

    Science.gov (United States)

    Cifelli, R.; Mahoney, K. M.; Webb, R. S.; McCormick, B.

    2017-12-01

    To ensure structural and operational safety of dams and other water management infrastructure, water resources managers and engineers require information about the potential for heavy precipitation. The methods and data used to estimate extreme rainfall amounts for managing risk are based on 40-year-old science and in need of improvement. The need to evaluate new approaches based on the best science available has led the states of Colorado and New Mexico to engage a body of scientists and engineers in an innovative "ensemble approach" to updating extreme precipitation estimates. NOAA is at the forefront of one of three technical approaches that make up the "ensemble study"; the three approaches are conducted concurrently and in collaboration with each other. One approach is the conventional deterministic, "storm-based" method, another is a risk-based regional precipitation frequency estimation tool, and the third is an experimental approach utilizing NOAA's state-of-the-art High Resolution Rapid Refresh (HRRR) physically-based dynamical weather prediction model. The goal of the overall project is to use the individual strengths of these different methods to define an updated and broadly acceptable state of the practice for evaluation and design of dam spillways. This talk will highlight the NOAA research and NOAA's role in the overarching goal to better understand and characterize extreme precipitation estimation uncertainty. The research led by NOAA explores a novel high-resolution dataset and post-processing techniques using a super-ensemble of hourly forecasts from the HRRR model. We also investigate how this rich dataset may be combined with statistical methods to optimally cast the data in probabilistic frameworks. NOAA expertise in the physical processes that drive extreme precipitation is also employed to develop careful testing and improved understanding of the limitations of older estimation methods and assumptions.
The process of decision making in the

  12. Large-scale stochasticity in Hamiltonian systems

    International Nuclear Information System (INIS)

    Escande, D.F.

    1982-01-01

    Large scale stochasticity (L.S.S.) in Hamiltonian systems is defined on the paradigm Hamiltonian H(v,x,t) = v^2/2 - M cos x - P cos k(x-t), which describes the motion of one particle in two electrostatic waves. A renormalization transformation T_r is described which acts as a microscope that focusses on a given KAM (Kolmogorov-Arnold-Moser) torus in phase space. Though approximate, T_r yields the threshold of L.S.S. in H with an error of 5-10%. The universal behaviour of KAM tori is predicted: for instance the scale invariance of KAM tori and the critical exponent of the Lyapunov exponent of Cantori. The Fourier expansion of KAM tori is computed and several conjectures by L. Kadanoff and S. Shenker are proved. Chirikov's standard mapping for stochastic layers is derived in a simpler way and the width of the layers is computed. A simpler renormalization scheme for these layers is defined. A Mathieu equation for describing the stability of a discrete family of cycles is derived. When combined with T_r, it allows one to prove the link between KAM tori and nearby cycles, conjectured by J. Greene, and, in particular, to compute the mean residue of a torus. The fractal diagrams defined by G. Schmidt are computed. A sketch of a methodology for computing the L.S.S. threshold in any two-degree-of-freedom Hamiltonian system is given. (Auth.)
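
Chirikov's standard mapping referred to above can be iterated directly; this sketch simply generates an orbit for a given stochasticity parameter K and is not connected to the renormalization analysis of the abstract.

```python
import numpy as np

def standard_map(p0, x0, K, n):
    """Iterate Chirikov's standard map
        p[t+1] = p[t] + K * sin(x[t]),   x[t+1] = (x[t] + p[t+1]) mod 2*pi,
    the discrete model for stochastic layers mentioned in the abstract."""
    p = np.empty(n + 1)
    x = np.empty(n + 1)
    p[0], x[0] = p0, x0
    for t in range(n):
        p[t + 1] = p[t] + K * np.sin(x[t])
        x[t + 1] = (x[t] + p[t + 1]) % (2 * np.pi)
    return p, x

# an orbit near the last KAM torus breakup (K_c is approximately 0.97)
p, x = standard_map(0.5, 1.0, K=0.9, n=1000)
```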

  13. Large scale molecular simulations of nanotoxicity.

    Science.gov (United States)

    Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong

    2014-01-01

    The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, on how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large scale molecular simulations. The carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence showing that nanotoxicity can have implications in de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting enzyme MMP-9, and graphene is illustrated to disrupt bacteria cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and also suggested therapeutic potential by using the cytotoxicity of nanoparticles against cancer or bacteria cells. © 2014 Wiley Periodicals, Inc.

  14. Large-scale tides in general relativity

    Energy Technology Data Exchange (ETDEWEB)

    Ip, Hiu Yan; Schmidt, Fabian, E-mail: iphys@mpa-garching.mpg.de, E-mail: fabians@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-02-01

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the 'separate universe' paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  15. Food appropriation through large scale land acquisitions

    International Nuclear Information System (INIS)

    Cristina Rulli, Maria; D’Odorico, Paolo

    2014-01-01

    The increasing demand for agricultural products and the uncertainty of international food markets have recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crops yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300–550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190–370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations. (letter)

  16. Large Scale EOF Analysis of Climate Data

    Science.gov (United States)

    Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.

    2016-12-01

    We present a distributed approach towards extracting EOFs from 3D climate data. We implement the method in Apache Spark, and process multi-TB sized datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CFSR, a 2.2 terabyte-sized data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean, at 6 hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare to the EOFs computed simply on the surface temperature field. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation including the ENSO and PDO that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that exist below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to the analysis of 3D atmospheric climate data sets, including multiple variables. Because the atmosphere changes on a quicker time scale than the ocean, we expect that the results will demonstrate an even greater advantage to computing 3D EOFs in lieu of 2D EOFs.
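
A single-node sketch of area-weighted EOF extraction (the abstract's computation runs distributed in Apache Spark over far larger data) can be written with a plain SVD; the synthetic field and latitude weighting below are illustrative.

```python
import numpy as np

def eofs(data, lats, n_modes=3):
    """Compute EOFs of an (n_time, n_lat) anomaly field via SVD, weighting
    each column by sqrt(cos(latitude)) so that variance contributions are
    proportional to grid-cell area."""
    w = np.sqrt(np.cos(np.deg2rad(lats)))
    anom = (data - data.mean(axis=0)) * w          # area-weighted anomalies
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    pcs = u[:, :n_modes] * s[:n_modes]             # principal component time series
    patterns = vt[:n_modes] / w                    # unweighted spatial patterns
    var_frac = s[:n_modes] ** 2 / np.sum(s ** 2)   # explained variance fractions
    return pcs, patterns, var_frac

g = np.random.default_rng(5)
lat = np.linspace(-60, 60, 40)
t = np.arange(240)                                 # 20 years of monthly steps
field = np.outer(np.sin(2 * np.pi * t / 12), np.cos(np.deg2rad(lat) * 3))
field += 0.1 * g.standard_normal((240, 40))        # one dominant mode plus noise
pcs, pats, vf = eofs(field, lat)
```

The distributed version in the abstract performs the same factorization, but the SVD of the tall-skinny anomaly matrix is computed across the cluster rather than in one call.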

  17. Mirror dark matter and large scale structure

    International Nuclear Information System (INIS)

    Ignatiev, A.Yu.; Volkas, R.R.

    2003-01-01

    Mirror matter is a dark matter candidate. In this paper, we reexamine the linear regime of density perturbation growth in a universe containing mirror dark matter. Taking adiabatic scale-invariant perturbations as the input, we confirm that the resulting processed power spectrum is richer than for the more familiar cases of cold, warm and hot dark matter. The new features include a maximum at a certain scale λ_max, collisional damping below a smaller characteristic scale λ_S', with oscillatory perturbations between the two. These scales are functions of the fundamental parameters of the theory. In particular, they decrease for decreasing x, the ratio of the mirror plasma temperature to that of the ordinary. For x ∼ 0.2, the scale λ_max becomes galactic. Mirror dark matter therefore leads to bottom-up large scale structure formation, similar to conventional cold dark matter, for x ≲ 0.2. Indeed, the smaller the value of x, the closer mirror dark matter resembles standard cold dark matter during the linear regime. The differences pertain to scales smaller than λ_S' in the linear regime, and generally in the nonlinear regime because mirror dark matter is chemically complex and to some extent dissipative. Lyman-α forest data and the early reionization epoch established by WMAP may hold the key to distinguishing mirror dark matter from WIMP-style cold dark matter

  18. Radar rainfall estimation of stratiform winter precipitation in the Belgian Ardennes

    Science.gov (United States)

    Hazenberg, P.; Leijnse, H.; Uijlenhoet, R.

    2011-02-01

    Radars are known for their ability to obtain a wealth of information about spatial storm field characteristics. Unfortunately, rainfall estimates obtained by this instrument are known to be affected by multiple sources of error. Especially for stratiform precipitation systems, the quality of radar rainfall estimates starts to decrease at relatively close ranges. In the current study, the hydrological potential of weather radar is analyzed during a winter half-year for the hilly region of the Belgian Ardennes. A correction algorithm is proposed which corrects the radar data for errors related to attenuation, ground clutter, anomalous propagation, the vertical profile of reflectivity (VPR), and advection. No final bias correction with respect to rain gauge data was implemented because such an adjustment would not add to a better understanding of the quality of the radar data. The impact of the different corrections is assessed using rainfall information sampled by 42 hourly rain gauges. The largest improvement in the quality of the radar data is obtained by correcting for ground clutter. The impact of VPR correction and advection depends on the spatial variability and velocity of the precipitation system. Overall during the winter period, the radar underestimates the amount of precipitation as compared to the rain gauges. Remaining differences between both instruments can be attributed to spatial and temporal variability in the type of precipitation, which has not been taken into account.
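
One of the conversion steps implied above, turning corrected reflectivity into rain rate, is conventionally done with a Z-R power law; the Marshall-Palmer coefficients used here are a common default, not necessarily the ones applied in this study of stratiform winter precipitation.

```python
def reflectivity_to_rain_rate(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate (mm/h) with a
    Z = a * R**b power law; a=200, b=1.6 is the classic Marshall-Palmer
    fit, often re-tuned for stratiform or winter precipitation."""
    z = 10.0 ** (dbz / 10.0)         # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z / a) ** (1.0 / b)

# 30 dBZ corresponds to roughly 2.7 mm/h under Marshall-Palmer
rate = reflectivity_to_rain_rate(30.0)
```

Because the exponent is applied after all reflectivity corrections (clutter, attenuation, VPR), residual biases in Z propagate nonlinearly into the rainfall estimate, which is one reason the abstract assesses each correction separately.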

  19. Kriging and local polynomial methods for blending satellite-derived and gauge precipitation estimates to support hydrologic early warning systems

    Science.gov (United States)

    Verdin, Andrew; Funk, Christopher C.; Rajagopalan, Balaji; Kleiber, William

    2016-01-01

    Robust estimates of precipitation in space and time are important for efficient natural resource management and for mitigating natural hazards. This is particularly true in regions with developing infrastructure and regions that are frequently exposed to extreme events. Gauge observations of rainfall are sparse but capture the precipitation process with high fidelity. Due to its high resolution and complete spatial coverage, satellite-derived rainfall data are an attractive alternative in data-sparse regions and are often used to support hydrometeorological early warning systems. Satellite-derived precipitation data, however, tend to underrepresent extreme precipitation events. Thus, it is often desirable to blend spatially extensive satellite-derived rainfall estimates with high-fidelity rain gauge observations to obtain more accurate precipitation estimates. In this research, we use two different methods, namely, ordinary kriging and k-nearest neighbor local polynomials, to blend rain gauge observations with the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates in data-sparse Central America and Colombia. The utility of these methods in producing blended precipitation estimates at pentadal (five-day) and monthly time scales is demonstrated. We find that these blending methods significantly improve the satellite-derived estimates and are competitive in their ability to capture extreme precipitation.
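
A minimal residual-correction sketch conveys the blending idea: interpolate gauge-minus-satellite residuals onto the grid and add them to the satellite field. The paper uses ordinary kriging and k-nearest neighbor local polynomials; the inverse-distance weighting below is a simpler stand-in, and all data are synthetic.

```python
import numpy as np

def blend(sat_grid, grid_xy, gauge_xy, gauge_val, sat_at_gauge, k=4):
    """Blend satellite and gauge precipitation: compute gauge-minus-satellite
    residuals, spread each grid cell's correction as the inverse-distance
    weighted mean of the k nearest residuals, and add it to the satellite field."""
    res = gauge_val - sat_at_gauge
    d = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=-1)
    corr = np.empty(len(grid_xy))
    for i in range(len(grid_xy)):
        nn = np.argsort(d[i])[:k]                  # k nearest gauges to cell i
        w = 1.0 / (d[i, nn] + 1e-9)
        corr[i] = np.sum(w * res[nn]) / w.sum()
    return np.maximum(sat_grid + corr, 0.0)        # precipitation stays nonnegative

g = np.random.default_rng(9)
grid = g.uniform(0, 100, size=(200, 2))            # grid cell coordinates
sat = g.gamma(2.0, 10.0, size=200)                 # satellite field (mm)
gx = grid[:10]                                     # gauges co-located with 10 cells
gv = sat[:10] + 5.0                                # gauges read 5 mm above satellite
blended = blend(sat, grid, gx, gv, sat[:10])       # recovers the +5 mm correction
```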

  20. Meteorological impact assessment of possible large scale irrigation in Southwest Saudi Arabia

    NARCIS (Netherlands)

    Maat, ter H.W.; Hutjes, R.W.A.; Ohba, R.; Ueda, H.; Bisselink, B.; Bauer, T.

    2006-01-01

    On continental to regional scales, feedbacks between landuse and landcover change and climate have been widely documented over the past 10–15 years. In the present study we explore the possibility that vegetation changes over much smaller areas may also affect local precipitation regimes. Large scale

  1. Dose monitoring in large-scale flowing aqueous media

    International Nuclear Information System (INIS)

    Kuruca, C.N.

    1995-01-01

    The Miami Electron Beam Research Facility (EBRF) has been in operation for six years. The EBRF houses a 1.5 MV, 75 kW DC scanned electron beam. Experiments have been conducted to evaluate the effectiveness of high-energy electron irradiation in the removal of toxic organic chemicals from contaminated water and the disinfection of various wastewater streams. The large-scale plant operates at approximately 450 L/min (120 gal/min). The radiation dose absorbed by the flowing aqueous streams is estimated by measuring the difference in water temperature before and after it passes in front of the beam. Temperature measurements are made using resistance temperature devices (RTDs) and recorded by computer along with other operating parameters. Estimated dose is obtained from the measured temperature differences using the specific heat of water. This presentation will discuss experience with this measurement system, its application to different water presentation devices, sources of error, and the advantages and disadvantages of its use in large-scale process applications
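
    The calorimetric dose estimate described here reduces to a one-line formula: absorbed dose in Gy (J/kg) equals the specific heat of water times the measured temperature rise. A minimal sketch of that calculation (my reading of the abstract, not the facility's actual software):

    ```python
    # Calorimetric dose estimate for a flowing water stream:
    # D [Gy = J/kg] = c_p * dT, with c_p the specific heat of water.

    CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

    def absorbed_dose_gray(t_in_c: float, t_out_c: float) -> float:
        """Estimate absorbed dose (Gy) from inlet/outlet temperatures (deg C)."""
        delta_t = t_out_c - t_in_c
        if delta_t < 0:
            raise ValueError("outlet cooler than inlet; check RTD readings")
        return CP_WATER * delta_t

    # A 0.1 K temperature rise corresponds to roughly 419 Gy:
    print(round(absorbed_dose_gray(20.0, 20.1), 1))
    ```

    The small temperature differences involved are one reason RTD accuracy matters: at ~4.2 kGy per degree, a 0.01 K sensor error translates into roughly 42 Gy of dose error.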

  2. The impact of reflectivity correction and conversion methods to improve precipitation estimation by weather radar for an extreme low-land Mesoscale Convective System

    Science.gov (United States)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2014-05-01

    , stratiform and undefined). These are then used to obtain coherent parameter sets for the radar reflectivity-rainfall rate (Z-R) and radar reflectivity-attenuation (Z-k) relationships, specifically applicable to this event. By applying a single parameter set to correct for both sources of error, the quality of the rainfall product improves further, leading to >80% of the observed accumulations. However, differentiating between precipitation types yields no better results than using the operational relationships. This leads to the question: how representative are local disdrometer observations for correcting large-scale weather radar measurements? To tackle this question, a Monte Carlo approach was used to generate >10000 sets of the normalized drop size distribution parameters and to assess their impact on the estimated precipitation amounts. Results show that a large number of parameter sets yield improved radar precipitation estimates that closely resemble the observations. However, these optimal sets differ considerably from those obtained from the local disdrometer measurements.
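
    The Z-R inversion at the heart of this procedure can be sketched as follows. The Marshall-Palmer defaults (a = 200, b = 1.6) and the Monte Carlo parameter ranges are illustrative assumptions on my part, not the values fitted in the study.

    ```python
    # Invert the power-law reflectivity-rainfall relation Z = a * R**b,
    # then sample many (a, b) pairs in the spirit of the abstract's
    # >10000-member drop size distribution ensemble.
    import random

    def rain_rate(dbz: float, a: float = 200.0, b: float = 1.6) -> float:
        """Rain rate R (mm/h) from reflectivity in dBZ via Z = a * R**b."""
        z_linear = 10.0 ** (dbz / 10.0)  # dBZ -> linear Z (mm^6 m^-3)
        return (z_linear / a) ** (1.0 / b)

    # Monte Carlo over plausible (a, b) parameter sets (ranges are invented):
    random.seed(0)
    ensemble = [rain_rate(40.0, a=random.uniform(100, 400), b=random.uniform(1.2, 2.0))
                for _ in range(10000)]

    # 40 dBZ maps to ~11.5 mm/h under Marshall-Palmer, but the ensemble
    # spread shows how strongly the estimate depends on the DSD parameters:
    print(min(ensemble) < rain_rate(40.0) < max(ensemble))  # True
    ```

    The wide spread of rain rates for a single reflectivity value is exactly why the choice of parameter set matters so much for event accumulations.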

  3. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  4. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with a large number of high-capacity nodes and transmission links, shared by a large number of users...

  5. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large scale dimensional metrology Enables practitioners to study distributed large scale dimensional metrology independently Includes specific examples of the development of new system prototypes

  6. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  7. Preliminary design study of a large scale graphite oxidation loop

    International Nuclear Information System (INIS)

    Epel, L.G.; Majeski, S.J.; Schweitzer, D.G.; Sheehan, T.V.

    1979-08-01

    A preliminary design study of a large scale graphite oxidation loop was performed in order to assess feasibility and to estimate capital costs. The nominal design operates at 50 atmospheres helium and 1800 F with a graphite specimen 30 inches long and 10 inches in diameter. It was determined that a simple single walled design was not practical at this time because of a lack of commercially available thick walled high temperature alloys. Two alternative concepts, at reduced operating pressure, were investigated. Both were found to be readily fabricable to operate at 1800 F and capital cost estimates for these are included. A design concept, which is outside the scope of this study, was briefly considered

  8. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution, we characterize the segment of interest as a set of outliers with a certain probability based on the estimated ... a microscope, and we show how the method can handle transparent particles with significant glare points. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and to MRI of the midsagittal plane for gray and white matter separation and segmentation.
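
    The segmentation-as-outlier-detection idea can be sketched in a few lines. This is my reading of the abstract, not the authors' code: fit a Gaussian to a background intensity sample and flag as foreground any pixel that is improbable under that model at a chosen significance level.

    ```python
    # Segmentation as outlier detection: pixels far from the estimated
    # background distribution (here Gaussian, z-score test) are foreground.
    import statistics

    def segment(pixels, background_sample, z_crit=3.0):
        """Return a 0/1 mask: 1 where a pixel is an outlier vs. the background."""
        mu = statistics.fmean(background_sample)
        sigma = statistics.stdev(background_sample)
        return [1 if abs(p - mu) > z_crit * sigma else 0 for p in pixels]

    background = [100, 102, 98, 101, 99, 103, 97, 100]  # sampled background patch
    # Bright particles (180, 250) are flagged; background-like pixels are not:
    print(segment([100, 180, 99, 250], background))  # [0, 1, 0, 1]
    ```

    A full large-scale hypothesis testing treatment would additionally control the expected number of false discoveries across all pixels, rather than using a fixed z cutoff.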

  9. Sample-Starved Large Scale Network Analysis

    Science.gov (United States)

    2016-05-05

    As reported in our journal publication: G. Marjanovic and A. O. Hero, "l0 Sparse Inverse Covariance Estimation," IEEE Trans. on Signal Processing, vol. 63, no. 12, pp. 3218-3231, May 2015.

  10. Estimating Loess Plateau Average Annual Precipitation with Multiple Linear Regression Kriging and Geographically Weighted Regression Kriging

    Directory of Open Access Journals (Sweden)

    Qiutong Jin

    2016-06-01

    Full Text Available Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science. In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK and geographically weighted regression Kriging (GWRK methods were employed using precipitation data from the period 1980–2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM, normalized difference vegetation index (NDVI, solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps with a 500 m spatial resolution interpolated by MLRK and GWRK had a high accuracy and captured detailed spatial distribution data; however, MLRK produced a lower prediction error and a higher variance explanation than GWRK, although the differences were small, in contrast to conclusions from similar studies.
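
    Regression kriging as described here splits the prediction into a regression trend on auxiliary covariates plus spatially interpolated residuals. The following toy sketch uses one covariate (elevation) and a nearest-neighbour stand-in for the kriging step; the station data are invented for illustration.

    ```python
    # Regression kriging sketch: trend = OLS on a covariate, residuals
    # interpolated spatially (nearest neighbour here instead of kriging).

    def ols_fit(x, y):
        """Ordinary least squares for y = b0 + b1*x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
             sum((xi - mx) ** 2 for xi in x)
        return my - b1 * mx, b1

    def regression_krige(elev, precip, xy, target_elev, target_xy):
        b0, b1 = ols_fit(elev, precip)                      # trend model
        resid = [p - (b0 + b1 * e) for p, e in zip(precip, elev)]
        # nearest-neighbour stand-in for kriging the residuals:
        d2 = [(x - target_xy[0]) ** 2 + (y - target_xy[1]) ** 2 for x, y in xy]
        return b0 + b1 * target_elev + resid[d2.index(min(d2))]

    stations = dict(elev=[100.0, 500.0, 900.0], precip=[400.0, 520.0, 640.0],
                    xy=[(0, 0), (5, 0), (10, 0)])
    # Perfectly linear data: residuals are zero and the prediction follows
    # the elevation trend (370 + 0.3 * 700 = 580 mm).
    print(round(regression_krige(stations["elev"], stations["precip"],
                                 stations["xy"], 700.0, (7, 0)), 1))
    ```

    MLRK extends the trend to several predictors; GWRK additionally lets the regression coefficients vary in space, which is where the two methods in the abstract diverge.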

  11. Estimating Probable Maximum Precipitation by Considering Combined Effect of Typhoon and Southwesterly Air Flow

    Directory of Open Access Journals (Sweden)

    Cheng-Chin Liu

    2016-01-01

    Full Text Available Typhoon Morakot hit southern Taiwan in 2009, bringing 48 hr of heavy rainfall [close to the Probable Maximum Precipitation (PMP)] to the Tsengwen Reservoir catchment. This extreme rainfall event resulted from the combined (co-movement) effect of two climate systems (i.e., typhoon and southwesterly air flow). Based on the traditional PMP estimation method (i.e., the storm transposition method, STM), two PMP estimation approaches that consider the combined effect are proposed in this work: the Amplification Index (AI) and Independent System (IS) approaches. The AI approach assumes that the southwesterly air flow precipitation in a typhoon event could reach its maximum value. The IS approach assumes that the typhoon and southwesterly air flow are independent weather systems. Based on these assumptions, calculation procedures for the two approaches were constructed for a case study on the Tsengwen Reservoir catchment. The results show that the PMP estimates for 6- to 60-hr durations using the two approaches are approximately 30% larger than the PMP estimates using the traditional STM without considering the combined effect. This work pioneers a PMP estimation method that considers the combined effect of a typhoon and southwesterly air flow. Further studies on this issue are essential and encouraged.

  12. Large scale dynamics of protoplanetary discs

    Science.gov (United States)

    Béthune, William

    2017-08-01

    Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other hand, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be solved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the disk and the magnetic field is mediated by electric charges, each of which is outnumbered by several billion neutral molecules. The imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences on planetary formation. As a second step, I study the launching of disk winds via a global model of stratified disk embedded in a warm atmosphere. 
This model is the first to compute non-ideal effects from

  13. Large-Scale Spacecraft Fire Safety Tests

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; hide

    2014-01-01

    An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical-scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests

  14. Large-scale fuel cycle centres

    International Nuclear Information System (INIS)

    Smiley, S.H.; Black, K.M.

    1977-01-01

    The US Nuclear Regulatory Commission (NRC) has considered the nuclear energy centre concept for fuel cycle plants in the Nuclear Energy Centre Site Survey 1975 (NECSS-75) Rep. No. NUREG-0001, an important study mandated by the US Congress in the Energy Reorganization Act of 1974 which created the NRC. For this study, the NRC defined fuel cycle centres as consisting of fuel reprocessing and mixed-oxide fuel fabrication plants, and optional high-level waste and transuranic waste management facilities. A range of fuel cycle centre sizes corresponded to the fuel throughput of power plants with a total capacity of 50,000-300,000MW(e). The types of fuel cycle facilities located at the fuel cycle centre permit the assessment of the role of fuel cycle centres in enhancing the safeguard of strategic special nuclear materials - plutonium and mixed oxides. Siting fuel cycle centres presents a smaller problem than siting reactors. A single reprocessing plant of the scale projected for use in the USA (1500-2000t/a) can reprocess fuel from reactors producing 50,000-65,000MW(e). Only two or three fuel cycle centres of the upper limit size considered in the NECSS-75 would be required in the USA by the year 2000. The NECSS-75 fuel cycle centre evaluation showed that large-scale fuel cycle centres present no real technical siting difficulties from a radiological effluent and safety standpoint. Some construction economies may be achievable with fuel cycle centres, which offer opportunities to improve waste-management systems. Combined centres consisting of reactors and fuel reprocessing and mixed-oxide fuel fabrication plants were also studied in the NECSS. Such centres can eliminate shipment not only of Pu but also mixed-oxide fuel. Increased fuel cycle costs result from implementation of combined centres unless the fuel reprocessing plants are commercial-sized. Development of Pu-burning reactors could reduce any economic penalties of combined centres. 
The need for effective fissile

  15. Precipitation Estimation Using Combined Radar/Radiometer Measurements Within the GPM Framework

    Science.gov (United States)

    Hou, Arthur

    2012-01-01

    satellite of JAXA, (3) the Multi-Frequency Microwave Scanning Radiometer (MADRAS) and the multi-channel microwave humidity sounder (SAPHIR) on the French-Indian Megha-Tropiques satellite, (4) the Microwave Humidity Sounder (MHS) on the National Oceanic and Atmospheric Administration (NOAA)-19, (5) MHS instruments on MetOp satellites launched by the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), (6) the Advanced Technology Microwave Sounder (ATMS) on the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP), and (7) ATMS instruments on the NOAA-NASA Joint Polar Satellite System (JPSS) satellites. Data from Chinese and Russian microwave radiometers may also become available through international collaboration under the auspices of the Committee on Earth Observation Satellites (CEOS) and Group on Earth Observations (GEO). The current generation of global rainfall products combines observations from a network of uncoordinated satellite missions using a variety of merging techniques. GPM will provide next-generation precipitation products characterized by: (1) more accurate instantaneous precipitation estimates (especially for light rain and cold-season solid precipitation), (2) intercalibrated microwave brightness temperatures from constellation radiometers within a consistent framework, and (3) unified precipitation retrievals from constellation radiometers using a common a priori hydrometeor database constrained by combined radar/radiometer measurements provided by the GPM Core Observatory.

  16. Large-scale preparation of plasmid DNA.

    Science.gov (United States)

    Heilig, J S; Elbing, K L; Brent, R

    2001-05-01

    Although the need for large quantities of plasmid DNA has diminished as techniques for manipulating small quantities of DNA have improved, occasionally large amounts of high-quality plasmid DNA are desired. This unit describes the preparation of milligram quantities of highly purified plasmid DNA. The first part of the unit describes three methods for preparing crude lysates enriched in plasmid DNA from bacterial cells grown in liquid culture: alkaline lysis, boiling, and Triton lysis. The second part describes four methods for purifying plasmid DNA in such lysates away from contaminating RNA and protein: CsCl/ethidium bromide density gradient centrifugation, polyethylene glycol (PEG) precipitation, anion-exchange chromatography, and size-exclusion chromatography.

  17. Large scale CMB anomalies from thawing cosmic strings

    Energy Technology Data Exchange (ETDEWEB)

    Ringeval, Christophe [Centre for Cosmology, Particle Physics and Phenomenology, Institute of Mathematics and Physics, Louvain University, 2 Chemin du Cyclotron, 1348 Louvain-la-Neuve (Belgium); Yamauchi, Daisuke; Yokoyama, Jun'ichi [Research Center for the Early Universe (RESCEU), Graduate School of Science, The University of Tokyo, Tokyo 113-0033 (Japan); Bouchet, François R., E-mail: christophe.ringeval@uclouvain.be, E-mail: yamauchi@resceu.s.u-tokyo.ac.jp, E-mail: yokoyama@resceu.s.u-tokyo.ac.jp, E-mail: bouchet@iap.fr [Institut d'Astrophysique de Paris, UMR 7095-CNRS, Université Pierre et Marie Curie, 98bis boulevard Arago, 75014 Paris (France)

    2016-02-01

    Cosmic strings formed during inflation are expected to be either diluted over super-Hubble distances, i.e., invisible today, or to have crossed our past light cone very recently. We discuss the latter situation in which a few strings imprint their signature in the Cosmic Microwave Background (CMB) Anisotropies after recombination. Being almost frozen in the Hubble flow, these strings are quasi static and evade almost all of the previously derived constraints on their tension while being able to source large scale anisotropies in the CMB sky. Using a local variance estimator on thousands of numerically simulated Nambu-Goto all sky maps, we compute the expected signal and show that it can mimic a dipole modulation at large angular scales while being negligible at small angles. Interestingly, such a scenario generically produces one cold spot from the thawing of a cosmic string loop. Mixed with anisotropies of inflationary origin, we find that a few strings of tension GU = O(1) × 10^(−6) match the amplitude of the dipole modulation reported in the Planck satellite measurements and could be at the origin of other large scale anomalies.

  18. Evaluating the MSG satellite Multi-Sensor Precipitation Estimate for extreme rainfall monitoring over northern Tunisia

    Directory of Open Access Journals (Sweden)

    Saoussen Dhib

    2017-06-01

    Full Text Available Knowledge and evaluation of extreme precipitation is important for water resources and flood risk management, soil and land degradation, and other environmental issues. Due to the high potential threat to local infrastructure, such as buildings, roads and power supplies, heavy precipitation can have an important social and economic impact on society. At present, satellite derived precipitation estimates are becoming more readily available. This paper aims to investigate the potential use of the Meteosat Second Generation (MSG) Multi-Sensor Precipitation Estimate (MPE) for extreme rainfall assessment in Tunisia. The MSGMPE data combine microwave rain rate estimations with SEVIRI thermal infrared channel data, using an EUMETSAT production chain in near real time mode. The MPE data can therefore be used in a now-casting mode, and are potentially useful for extreme weather early warning and monitoring. Daily precipitation observed across an in situ gauge network in the north of Tunisia were used during the period 2007–2009 for validation of the MPE extreme event data. As a first test of the MSGMPE product's performance, very light to moderate rainfall classes, occurring between January and October 2007, were evaluated. Extreme rainfall events were then selected, using a threshold criterion for large rainfall depth (>50 mm/day) occurring at least at one ground station. Spatial interpolation methods were applied to generate rainfall maps for the drier summer season (from May to October) and the wet winter season (from November to April). Interpolated gauge rainfall maps were then compared to MSGMPE data available from the EUMETSAT UMARF archive or from the GEONETCast direct dissemination system. The summation of the MPE data at 5 and/or 15 min time intervals over a 24 h period provided a basis for comparison. The MSGMPE product was not very effective in the detection of very light and light rain events. Better results were obtained for the slightly
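
    The aggregation step described above (summing sub-hourly MPE data into daily depths and applying the 50 mm/day threshold) can be sketched as follows. Treating each 15-min slot as a rain rate in mm/h is my assumption for illustration, not a statement about the product's actual units.

    ```python
    # Aggregate 15-min rain-rate slots into a daily total and flag extremes.

    SLOT_HOURS = 0.25  # one 15-min slot expressed in hours

    def daily_total_mm(rates_mm_per_h):
        """Sum 96 15-min rain-rate slots (mm/h) into a daily depth (mm)."""
        if len(rates_mm_per_h) != 96:
            raise ValueError("expected 96 slots for a 24 h day")
        return sum(r * SLOT_HOURS for r in rates_mm_per_h)

    def is_extreme(rates_mm_per_h, threshold_mm=50.0):
        """Apply the abstract's >50 mm/day extreme-rainfall criterion."""
        return daily_total_mm(rates_mm_per_h) > threshold_mm

    # A 6 h burst at 10 mm/h over an otherwise dry day gives 60 mm, which
    # exceeds the 50 mm/day threshold:
    day = [10.0] * 24 + [0.0] * 72
    print(daily_total_mm(day), is_extreme(day))
    ```

    In the study this comparison is done per gauge location, with the event flagged if any station exceeds the threshold.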

  19. Near-real-time Estimation and Forecast of Total Precipitable Water in Europe

    Science.gov (United States)

    Bartholy, J.; Kern, A.; Barcza, Z.; Pongracz, R.; Ihasz, I.; Kovacs, R.; Ferencz, C.

    2013-12-01

    Information about the amount and spatial distribution of atmospheric water vapor (or total precipitable water) is essential for understanding weather and the environment including the greenhouse effect, the climate system with its feedbacks and the hydrological cycle. Numerical weather prediction (NWP) models need accurate estimations of water vapor content to provide realistic forecasts including representation of clouds and precipitation. In the present study we introduce our research activity for the estimation and forecast of atmospheric water vapor in Central Europe using both observations and models. The Eötvös Loránd University (Hungary) operates a polar orbiting satellite receiving station in Budapest since 2002. This station receives Earth observation data from polar orbiting satellites including MODerate resolution Imaging Spectroradiometer (MODIS) Direct Broadcast (DB) data stream from satellites Terra and Aqua. The received DB MODIS data are automatically processed using freely distributed software packages. Using the IMAPP Level2 software total precipitable water is calculated operationally using two different methods. Quality of the TPW estimations is a crucial question for further application of the results, thus validation of the remotely sensed total precipitable water fields is presented using radiosonde data. In a current research project in Hungary we aim to compare different estimations of atmospheric water vapor content. Within the frame of the project we use a NWP model (DBCRAS; Direct Broadcast CIMSS Regional Assimilation System numerical weather prediction software developed by the University of Wisconsin, Madison) to forecast TPW. DBCRAS uses near real time Level2 products from the MODIS data processing chain. From the wide range of the derived Level2 products the MODIS TPW parameter found within the so-called mod07 results (Atmospheric Profiles Product) and the cloud top pressure and cloud effective emissivity parameters from the so

  20. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding the gas flow processes through clay barriers in schemes of radioactive waste disposal, the Lasgit in situ experiment was planned and is currently in progress. The modelling of the experiment will permit a better understanding of the responses, confirmation of hypothesized mechanisms and processes, and lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed along the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed injecting gas to some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings as gas penetrates after the gas entry pressure is achieved and may produce deformations which in turn lead to permeability increments. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. 
The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug

  2. Large-scale assembly of colloidal particles

    Science.gov (United States)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear-align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films made by doctor blade coating are covered in this study. The first topic describes an invention in large-area, low-cost reflective color displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. 
Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  3. Integrating weather and geotechnical monitoring data for assessing the stability of large scale surface mining operations

    Directory of Open Access Journals (Sweden)

    Steiakakis Chrysanthos

    2016-01-01

    Full Text Available The geotechnical challenges for safe slope design in large scale surface mining operations are enormous. Sometimes one degree of slope inclination can significantly reduce the overburden to ore ratio and therefore dramatically improve the economics of the operation, while large scale slope failures may have a significant impact on human lives. Furthermore, adverse weather conditions, such as high precipitation rates, may unfavorably affect the already delicate balance between operations and safety. Geotechnical, weather and production parameters should be systematically monitored and evaluated in order to safely operate such pits. Appropriate data management, processing and storage are critical to ensure timely and informed decisions.

  4. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    Science.gov (United States)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

    In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 in 69 pixels over the Urmia Lake basin in northwest Iran. Different analytical approaches and indexes are used to examine PERSIANN's precision in detecting and estimating rainfall rate. The residuals are decomposed into Hit, Miss and FA (false alarm) estimation biases, while the systematic and random error components are also decomposed and analyzed seasonally and categorically. A new interpretation of estimation accuracy, termed "reliability of PERSIANN estimations", is introduced, and the behavior of existing categorical/statistical measures and error components is analyzed seasonally over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following: - The analyzed contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of systematic error. - A low level of reliability is observed in PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than it has bias due to rainfall uncertainties. The level of systematic error also increases considerably in heavier rainfalls. It is also important to note that PERSIANN error characteristics vary by season with the conditions and rainfall patterns of that season, which shows the necessity of a seasonally distinct approach for the calibration of
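    The Hit/Miss/False-Alarm residual decomposition and the contingency-table indexes referred to above can be sketched in a few lines. This is a generic verification routine, not the study's implementation; the 1 mm/day rain-day threshold is an assumption:

```python
import numpy as np

def verify_precip(sat, gauge, thresh=1.0):
    """Contingency-table detection scores and Hit/Miss/False-Alarm bias
    components for paired daily satellite vs. gauge precipitation (mm/day).
    A 'rain day' is any day with precipitation >= thresh (assumed value)."""
    sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
    est_rain, obs_rain = sat >= thresh, gauge >= thresh

    hits = int(np.sum(est_rain & obs_rain))
    misses = int(np.sum(~est_rain & obs_rain))
    false_alarms = int(np.sum(est_rain & ~obs_rain))

    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false-alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index

    resid = sat - gauge                           # estimation residual
    hit_bias = resid[est_rain & obs_rain].sum()   # error on detected rain
    miss_bias = resid[~est_rain & obs_rain].sum() # rain missed (negative)
    fa_bias = resid[est_rain & ~obs_rain].sum()   # spurious rain (positive)
    return {"POD": pod, "FAR": far, "CSI": csi,
            "hit_bias": hit_bias, "miss_bias": miss_bias, "fa_bias": fa_bias}
```

Seasonal or rain-rate-categorical analyses then amount to applying the same routine to subsets of the paired series.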

  5. European Wintertime Windstorms and its Links to Large-Scale Variability Modes

    Science.gov (United States)

    Befort, D. J.; Wild, S.; Walz, M. A.; Knight, J. R.; Lockwood, J. F.; Thornton, H. E.; Hermanson, L.; Bett, P.; Weisheimer, A.; Leckebusch, G. C.

    2017-12-01

    Winter storms associated with extreme wind speeds and heavy precipitation are the most costly natural hazard in several European countries. Improved understanding and seasonal forecast skill of winter storms will thus help society, policy-makers and the (re)insurance industry to be better prepared for such events. We first assess the ability of three seasonal forecast ensemble suites -- ECMWF System3, ECMWF System4 and GloSea5 -- to represent extra-tropical windstorms over the Northern Hemisphere. Our results show significant skill for inter-annual variability of windstorm frequency over parts of Europe in two of these forecast suites (ECMWF-S4 and GloSea5), indicating the potential use of current seasonal forecast systems. In a regression model we further derive windstorm variability from the forecasted NAO of the seasonal model suites, thus assessing the suitability of the NAO as the only predictor. We find that the NAO, as the main large-scale mode over Europe, can explain some of the achieved skill and is therefore an important source of variability in the seasonal models. However, our results show that the regression model fails to reproduce the skill level of the directly forecast windstorm frequency over large areas of central Europe. This suggests that the seasonal models also capture sources of windstorm variability/predictability other than the NAO. In order to investigate which other large-scale variability modes steer the interannual variability of windstorms, we develop a statistical model using a Poisson GLM. We find that the Scandinavian Pattern (SCA) in fact explains a larger amount of variability for Central Europe during the 20th century than the NAO. This statistical model is able to skilfully reproduce the interannual variability of windstorm frequency, especially for the British Isles and Central Europe, with correlations up to 0.8.

  6. Errors and parameter estimation in precipitation-runoff modeling: 1. Theory

    Science.gov (United States)

    Troutman, Brent M.

    1985-01-01

    Errors in complex conceptual precipitation-runoff models may be analyzed by placing them into a statistical framework. This amounts to treating the errors as random variables and defining the probabilistic structure of the errors. By using such a framework, a large array of techniques, many of which have been presented in the statistical literature, becomes available to the modeler for quantifying and analyzing the various sources of error. A number of these techniques are reviewed in this paper, with special attention to the peculiarities of hydrologic models. Known methodologies for parameter estimation (calibration) are particularly applicable for obtaining physically meaningful estimates and for explaining how bias in runoff prediction caused by model error and input error may contribute to bias in parameter estimation.
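    The point that input error can bias calibrated parameters admits a compact illustration. The sketch below uses a hypothetical one-parameter runoff-coefficient model Q = c·P with synthetic data, not one of the paper's conceptual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" system: runoff is 40% of rainfall, plus measurement noise.
c_true = 0.4
P_true = rng.gamma(2.0, 5.0, size=500)                 # true areal rainfall (mm)
Q_obs = c_true * P_true + rng.normal(0.0, 0.5, 500)    # observed runoff (mm)

# The modeler's rainfall input suffers a 20% systematic under-catch.
P_input = 0.8 * P_true

def calibrate(P, Q):
    """Least-squares estimate of c in the model Q = c * P."""
    return float(np.sum(P * Q) / np.sum(P * P))

c_hat_biased = calibrate(P_input, Q_obs)  # input bias pushed into c (~0.5)
c_hat_clean = calibrate(P_true, Q_obs)    # ~0.4
```

Here the 20% rainfall under-catch is absorbed entirely by the calibrated coefficient, which lands near 0.5 rather than the true 0.4: a bias in the input produces a compensating bias in the parameter estimate.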

  7. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters

  8. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results (a) confirmed, in a general way, the procedures for application to pulsed burning, (b) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur and (c) indicated that steam can terminate continuous burning. Future actions recommended include (a) modification of the code to perform continuous-burn analyses, which is demonstrated, (b) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (c) changes to the models for estimating burn parameters

  9. Coordinated SLNR based Precoding in Large-Scale Heterogeneous Networks

    KAUST Repository

    Boukhedimi, Ikram; Kammoun, Abla; Alouini, Mohamed-Slim

    2017-01-01

    This work focuses on the downlink of large-scale two-tier heterogeneous networks composed of a macro-cell overlaid by micro-cell networks. Our interest is in the design of coordinated beamforming techniques that mitigate inter-cell interference. In particular, we consider the case in which the coordinating base stations (BSs) have imperfect knowledge of the channel state information. Under this setting, we propose a regularized SLNR based precoding design in which the regularization factor is used to provide better resilience to channel estimation errors. Based on tools from random matrix theory, we provide an analytical characterization of the SINR and SLNR performance. These results are then exploited to propose a proper setting of the regularization factor. Simulation results are finally provided to validate our findings and to confirm the performance of the proposed precoding scheme.
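    A regularized SLNR precoder of this general form can be sketched as follows. This is a small fixed-dimension illustration in which the regularization factor `alpha` stands in for noise power plus a robustness margin against CSI error; it is not the paper's large-system, random-matrix-optimized design:

```python
import numpy as np

def slnr_precoder(H, alpha):
    """Regularized SLNR beamformers.

    H     : (K, M) complex matrix; row k is the channel of user k.
    alpha : regularization factor (noise power plus a margin adding
            robustness against channel-estimation errors).

    The SLNR-maximizing beamformer for user k is proportional to
    (sum_{j != k} h_j h_j^H + alpha * I)^{-1} h_k.
    """
    K, M = H.shape
    W = np.zeros((K, M), dtype=complex)
    for k in range(K):
        leak = sum(np.outer(H[j], H[j].conj()) for j in range(K) if j != k)
        w = np.linalg.solve(leak + alpha * np.eye(M), H[k])
        W[k] = w / np.linalg.norm(w)   # unit-norm beamformer
    return W

def slnr(H, W, k, noise=1.0):
    """Signal-to-leakage-plus-noise ratio of user k (unit-norm beamformers)."""
    sig = abs(np.vdot(H[k], W[k])) ** 2
    leak = sum(abs(np.vdot(H[j], W[k])) ** 2 for j in range(len(H)) if j != k)
    return sig / (leak + noise)
```

With `alpha` equal to the noise power, this beamformer is the exact maximizer of the SLNR quotient, so it never does worse than a matched-filter (MRT) beamformer in SLNR terms.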

  10. Coordinated SLNR based Precoding in Large-Scale Heterogeneous Networks

    KAUST Repository

    Boukhedimi, Ikram

    2017-03-06

    This work focuses on the downlink of large-scale two-tier heterogeneous networks composed of a macro-cell overlaid by micro-cell networks. Our interest is in the design of coordinated beamforming techniques that mitigate inter-cell interference. In particular, we consider the case in which the coordinating base stations (BSs) have imperfect knowledge of the channel state information. Under this setting, we propose a regularized SLNR based precoding design in which the regularization factor is used to provide better resilience to channel estimation errors. Based on tools from random matrix theory, we provide an analytical characterization of the SINR and SLNR performance. These results are then exploited to propose a proper setting of the regularization factor. Simulation results are finally provided to validate our findings and to confirm the performance of the proposed precoding scheme.

  11. Large scale indenter test program to measure sub gouge displacements

    Energy Technology Data Exchange (ETDEWEB)

    Been, Ken; Lopez, Juan [Golder Associates Inc, Houston, TX (United States); Sancio, Rodolfo [MMI Engineering Inc., Houston, TX (United States)

    2011-07-01

    The production of submarine pipelines in an offshore environment covered with ice is very challenging. Several precautions must be taken such as burying the pipelines to protect them from ice movement caused by gouging. The estimation of the subgouge displacements is a key factor in pipeline design for ice gouged environments. This paper investigated a method to measure subgouge displacements. An experimental program was implemented in an open field to produce large scale idealized gouges on engineered soil beds (sand and clay). The horizontal force required to produce the gouge, the subgouge displacements in the soil and the strain imposed by these displacements were monitored on a buried model pipeline. The results showed that for a given keel, the gouge depth was inversely proportional to undrained shear strength in clay. The subgouge displacements measured did not show a relationship with the gouge depth, width or soil density in sand and clay tests.

  12. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    the potential of the method proposed and the possibility to use individual-based GPS units for travel surveys in real-life large-scale multi-modal networks. Congestion is known to highly influence the way we act in the transportation network (and organise our lives), because of longer travel times...... of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straight-forward task in real......, but the reliability of the travel time also has a large impact on our travel choices. Consequently, in order to improve the realism of transport models, correct understanding and representation of two values that are related to the value of time (VoT) are essential: (i) the value of congestion (VoC), as the Vo...

  13. Thermal power generation projects "Large Scale Solar Heating"; EU-Thermie-Projekte "Large Scale Solar Heating"

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the "Large Scale Solar Heating" programme for a Europe-wide development of the subject technology. The demonstration programme developed from it was judged favourably by the experts but was not immediately (1996) accepted for financial support. In November 1997 the EU Commission provided 1.5 million ECU, which allowed the realisation of an updated project proposal. By mid-1997 a smaller project had already been approved, which had been applied for under the lead of Chalmers Industriteknik (CIT) in Sweden and mainly serves technology transfer. (orig.)

  14. Impact of large-scale tides on cosmological distortions via redshift-space power spectrum

    Science.gov (United States)

    Akitsu, Kazuyuki; Takada, Masahiro

    2018-03-01

    Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way that depends on the alignment between the tide, the wave vector of the small-scale modes and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to a large-scale tide. We then investigate the impact of the large-scale tide on estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical error, and show that the degradation in parameter accuracy is recovered if we can employ the prior on the rms amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained to an accuracy better than the CDM prediction, if the effects up to larger wave numbers in the nonlinear regime can be included.

  15. Global estimate of lichen and bryophyte contributions to forest precipitation interception

    Science.gov (United States)

    Van Stan, John; Porada, Philipp; Kleidon, Axel

    2017-04-01

    Interception of precipitation by forest canopies plays an important role in its partitioning to evaporation, transpiration and runoff. Field observations show that arboreal lichens and bryophytes can substantially enhance forests' precipitation storage and evaporation. However, representations of canopy interception in global land surface models currently ignore arboreal lichen and bryophyte contributions. This study uses the lichen and bryophyte model (LiBry) to provide the first process-based modelling approach estimating these organisms' contributions to canopy water storage and evaporation. The global mean forest water storage capacity increased significantly, from 0.87 mm to 1.33 mm, with the inclusion of arboreal poikilohydric organisms. Global forest canopy evaporation of intercepted precipitation was also greatly enhanced, by 44%. The ratio of total to bare-canopy global evaporation exceeded 2 in many forested regions. This altered global patterns in canopy water storage, evaporation, and ultimately the proportion of rainfall evaporated. A sensitivity analysis was also performed. Results indicate that rainfall interception is of larger magnitude than previously reported by global land surface modelling work because of the important role of lichens and bryophytes in rainfall interception.
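    The mechanism by which a larger canopy storage capacity raises evaporated interception can be illustrated with a minimal single-bucket model. This is a sketch, not the LiBry model; the rainfall and potential-evaporation series below are invented, and only the two storage capacities (0.87 mm and 1.33 mm) come from the abstract:

```python
def intercept(rain, pet, s_max):
    """Single-bucket canopy interception: rain fills a store of capacity
    s_max (mm); stored water evaporates at up to the potential rate pet;
    overflow becomes throughfall. Returns (evaporated, throughfall)."""
    store = evap = thru = 0.0
    for r, e in zip(rain, pet):
        store += r
        if store > s_max:             # canopy saturated -> throughfall
            thru += store - s_max
            store = s_max
        loss = min(store, e)          # wet-canopy evaporation
        evap += loss
        store -= loss
    return evap, thru

# Invented storm sequence (mm per step); capacities from the abstract:
rain = [2.0, 0.0, 3.0, 0.0, 1.0, 0.0]
pet = [0.5] * 6
evap_bare, _ = intercept(rain, pet, 0.87)  # bare canopy
evap_epi, _ = intercept(rain, pet, 1.33)   # with lichens/bryophytes
```

The larger store keeps more water on the canopy between storms, so more of it evaporates instead of reaching the ground as throughfall.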

  16. The Relative Performance of High Resolution Quantitative Precipitation Estimates in the Russian River Basin

    Science.gov (United States)

    Bytheway, J. L.; Biswas, S.; Cifelli, R.; Hughes, M.

    2017-12-01

    The Russian River carves a 110 mile path through Mendocino and Sonoma counties in western California, providing water for thousands of residents and acres of agriculture as well as a home for several species of endangered fish. The Russian River basin receives almost all of its precipitation during the October through March wet season, and the systems bringing this precipitation are often impacted by atmospheric river events as well as the complex topography of the region. This study will examine the performance of several high resolution (hourly) precipitation products and forecasts over the 2015-2016 and 2016-2017 wet seasons. Comparisons of event total rainfall as well as hourly rainfall will be performed using 1) rain gauges operated by the National Oceanic and Atmospheric Administration (NOAA) Physical Sciences Division (PSD), 2) products from the Multi-Radar/Multi-Sensor (MRMS) QPE dataset, and 3) quantitative precipitation forecasts from the High Resolution Rapid Refresh (HRRR) model at 1, 3, 6, and 12 hour lead times. Further attention will be given to cases or locations representing large disparities between the estimates.

  17. Global precipitations and climate change. Proceedings

    International Nuclear Information System (INIS)

    Desbois, M.; Desalmand, F.

    1994-01-01

    The workshop reviewed the present status of knowledge concerning the past and present evolution of the distribution of precipitation at the global scale, related to climate evolution at different time scales. This review was intended to assess the availability and quality of data which could help, through validation and initialization of model studies, to improve our understanding of the processes determining these precipitation changes. On the other hand, the modelling specialists presented their actual use of precipitation data. Exchanges of views between the modelling and observing communities were thus made possible, leading to a set of recommendations for future studies. Sessions were then devoted to specific themes: 1) paleoclimatology, 2) data collection, history and statistics, programmes, 3) methodologies and accuracy of large-scale estimation of precipitation from conventional data, 4) estimation of precipitation from satellite data, 5) modelling studies. (orig.)

  18. Characterizing unknown systematics in large scale structure surveys

    International Nuclear Information System (INIS)

    Agarwal, Nishant; Ho, Shirley; Myers, Adam D.; Seo, Hee-Jong; Ross, Ashley J.; Bahcall, Neta; Brinkmann, Jonathan; Eisenstein, Daniel J.; Muna, Demitri; Palanque-Delabrouille, Nathalie; Yèche, Christophe; Pâris, Isabelle; Petitjean, Patrick; Schneider, Donald P.; Streblyanska, Alina; Weaver, Benjamin A.

    2014-01-01

    Photometric large scale structure (LSS) surveys probe the largest volumes in the Universe, but are inevitably limited by systematic uncertainties. Imperfect photometric calibration leads to biases in our measurements of the density fields of LSS tracers such as galaxies and quasars, and, as a result, in cosmological parameter estimation. Earlier studies have proposed using cross-correlations between different redshift slices or cross-correlations between different surveys to reduce the effects of such systematics. In this paper we develop a method to characterize unknown systematics. We demonstrate that while we do not have sufficient information to correct for unknown systematics in the data, we can obtain an estimate of their magnitude. We define a parameter to estimate contamination from unknown systematics using cross-correlations between different redshift slices and propose discarding bins in the angular power spectrum that lie outside a certain contamination tolerance level. We show that this method improves estimates of the bias using simulated data and further apply it to photometric luminous red galaxies in the Sloan Digital Sky Survey as a case study.
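    The idea of using cross-correlations between disjoint redshift slices to estimate contamination can be demonstrated in a toy pixel-space setting. The paper works with angular power spectra; all amplitudes below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000  # pixels

# Density fluctuations in two disjoint "redshift slices": the true
# signals are uncorrelated, but both are contaminated by the same
# systematic template (e.g. a calibration pattern) with amplitude eps.
signal_1 = rng.standard_normal(n)
signal_2 = rng.standard_normal(n)
systematic = rng.standard_normal(n)
eps = 0.3

obs_1 = signal_1 + eps * systematic
obs_2 = signal_2 + eps * systematic

# Because the cosmological signals in disjoint slices should not
# correlate, the measured cross-covariance estimates eps^2 * var(sys):
cross = float(np.mean(obs_1 * obs_2))
eps_hat = float(np.sqrt(max(cross, 0.0)))  # contamination amplitude estimate
```

Any significant cross-correlation between slices that should be independent is thus a direct handle on the magnitude of the unknown systematic, even when its template is not known.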

  19. Characterizing unknown systematics in large scale structure surveys

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Nishant; Ho, Shirley [McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213 (United States); Myers, Adam D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Seo, Hee-Jong [Berkeley Center for Cosmological Physics, LBL and Department of Physics, University of California, Berkeley, CA 94720 (United States); Ross, Ashley J. [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth, PO1 3FX (United Kingdom); Bahcall, Neta [Princeton University Observatory, Peyton Hall, Princeton, NJ 08544 (United States); Brinkmann, Jonathan [Apache Point Observatory, P.O. Box 59, Sunspot, NM 88349 (United States); Eisenstein, Daniel J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Muna, Demitri [Department of Astronomy, Ohio State University, Columbus, OH 43210 (United States); Palanque-Delabrouille, Nathalie; Yèche, Christophe [CEA, Centre de Saclay, Irfu/SPP, F-91191 Gif-sur-Yvette (France); Pâris, Isabelle [Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago (Chile); Petitjean, Patrick [Université Paris 6 et CNRS, Institut d' Astrophysique de Paris, 98bis blvd. Arago, 75014 Paris (France); Schneider, Donald P. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Streblyanska, Alina [Instituto de Astrofisica de Canarias (IAC), E-38200 La Laguna, Tenerife (Spain); Weaver, Benjamin A., E-mail: nishanta@andrew.cmu.edu [Center for Cosmology and Particle Physics, New York University, New York, NY 10003 (United States)

    2014-04-01

    Photometric large scale structure (LSS) surveys probe the largest volumes in the Universe, but are inevitably limited by systematic uncertainties. Imperfect photometric calibration leads to biases in our measurements of the density fields of LSS tracers such as galaxies and quasars, and as a result in cosmological parameter estimation. Earlier studies have proposed using cross-correlations between different redshift slices or cross-correlations between different surveys to reduce the effects of such systematics. In this paper we develop a method to characterize unknown systematics. We demonstrate that while we do not have sufficient information to correct for unknown systematics in the data, we can obtain an estimate of their magnitude. We define a parameter to estimate contamination from unknown systematics using cross-correlations between different redshift slices and propose discarding bins in the angular power spectrum that lie outside a certain contamination tolerance level. We show that this method improves estimates of the bias using simulated data and further apply it to photometric luminous red galaxies in the Sloan Digital Sky Survey as a case study.

  20. Large Scale Testing of Drystone Retaining Structures

    OpenAIRE

    Mundell, Chris

    2009-01-01

    Drystone walls have been used extensively around the world as earth retaining structures wherever suitable stone is found. Commonly about 0.6m thick (irrespective of height), there are about 9000km of drystone retaining walls on the UK road network alone, mostly built in the 19th and early 20th centuries, with an estimated replacement value in excess of £1 billion[1]. Drystone wall design is traditionally empirical, based on local knowledge of what has worked in the past. Methods vary from re...

  1. How the Internet Will Help Large-Scale Assessment Reinvent Itself

    Directory of Open Access Journals (Sweden)

    Randy Elliot Bennett

    2001-02-01

    Full Text Available Large-scale assessment in the United States is undergoing enormous pressure to change. That pressure stems from many causes. Depending upon the type of test, the issues precipitating change include an outmoded cognitive-scientific basis for test design; a mismatch with curriculum; the differential performance of population groups; a lack of information to help individuals improve; and inefficiency. These issues provide a strong motivation to reconceptualize both the substance and the business of large-scale assessment. At the same time, advances in technology, measurement, and cognitive science are providing the means to make that reconceptualization a reality. The thesis of this paper is that the largest facilitating factor will be technological, in particular the Internet. In the same way that it is already helping to revolutionize commerce, education, and even social interaction, the Internet will help revolutionize the business and substance of large-scale assessment.

  2. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage: administrators carry a heavy workload, and much time must be spent on the management and maintenance of such systems. The nodes of a large-scale cluster are also easily confused: with thousands of nodes housed in large machine rooms, administrators can easily mix up machines. How, then, can accurate management be carried out effectively on a large-scale cluster system? This article introduces ELFms for large-scale cluster systems, and further proposes how to realize automatic management of such systems. (authors)

  3. Revising the potential of large-scale Jatropha oil production in Tanzania: An economic land evaluation assessment

    International Nuclear Information System (INIS)

    Segerstedt, Anna; Bobert, Jans

    2013-01-01

    Following up the rather sobering results of the biofuels boom in Tanzania, we analyze the preconditions that would make large-scale oil production from the feedstock Jatropha curcas viable. We do this by employing an economic land evaluation approach; first, we estimate the physical land suitability and the necessary inputs to reach certain amounts of yields. Subsequently, we estimate costs and benefits for different input-output levels. Finally, to incorporate the increased awareness of sustainability in the export sector, we introduce also certification criteria. Using data from an experimental farm in Kilosa, we find that high yields are crucial for the economic feasibility and that they can only be obtained on good soils at high input rates. Costs of compliance with certification criteria depend on site specific characteristics such as land suitability and precipitation. In general, both domestic production and (certified) exports are too expensive to be able to compete with conventional diesel/rapeseed oil from the EU. Even though the crop may have potential for large scale production as a niche product, there is still a lot of risk involved and more experimental research is needed. - Highlights: ► We use an economic land evaluation analysis to reassess the potential of large-scale Jatropha oil. ► High yields are possible only at high input rates and for good soil qualities. ► Production costs are still too high to break even on the domestic and export market. ► More research is needed to stabilize yields and improve the oil content. ► Focus should be on broadening our knowledge-base rather than promoting new Jatropha investments

  4. Precipitation estimates and comparison of satellite rainfall data to in situ rain gauge observations to further develop the watershed-modeling capabilities for the Lower Mekong River Basin

    Science.gov (United States)

    Dandridge, C.; Lakshmi, V.; Sutton, J. R. P.; Bolten, J. D.

    2017-12-01

    This study focuses on the lower region of the Mekong River Basin (MRB), an area including Burma, Cambodia, Vietnam, Laos, and Thailand. This region is home to expansive agriculture that relies heavily on annual precipitation over the basin for its prosperity. Annual precipitation amounts are regulated by the global monsoon system and therefore vary throughout the year. This research will lead to improved prediction of floods and management of floodwaters for the MRB. We compare different satellite estimates of precipitation to each other and to in-situ precipitation estimates for the Mekong River Basin. These comparisons will help us determine which satellite precipitation estimates are better at predicting precipitation in the MRB and will help further our understanding of watershed-modeling capabilities for the basin. In this study we use: 1) NOAA's PERSIANN daily 0.25° precipitation estimate Climate Data Record (CDR), 2) NASA's Tropical Rainfall Measuring Mission (TRMM) daily 0.25° estimate, 3) NASA's Global Precipitation Measurement (GPM) daily 0.1° estimate, and 4) daily precipitation estimates from 488 in-situ stations located in the lower MRB. The PERSIANN CDR precipitation estimate provides the longest data record because it is available from 1983 to present. The TRMM precipitation estimate is available from 2000 to present and the GPM precipitation estimates are available from 2015 to present. It is for this reason that we provide several comparisons between our precipitation estimates. Comparisons were done between each satellite product and the in-situ precipitation estimates based on geographical location and date, using the entire available data record for each satellite product, for daily, monthly, and yearly precipitation estimates. We found that monthly PERSIANN precipitation estimates were able to explain up to 90% of the variability in station precipitation, depending on station location.
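    The station-versus-satellite comparison reduces to aggregating paired daily series to a common time scale and computing the explained variance. A generic sketch of those two steps (not the study's exact workflow):

```python
import numpy as np

def monthly_totals(daily, days_per_month):
    """Aggregate a daily precipitation series into monthly totals."""
    daily = np.asarray(daily, float)
    edges = np.cumsum([0] + list(days_per_month))
    return np.array([daily[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])

def explained_variance(sat, gauge):
    """Squared Pearson correlation: the fraction of station-precipitation
    variability 'explained' by the satellite estimate."""
    r = np.corrcoef(np.asarray(sat, float), np.asarray(gauge, float))[0, 1]
    return r ** 2
```

Repeating `explained_variance` per station over the full overlapping record is how station-by-station figures such as "up to 90% of the variability" are obtained.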

  5. Recent Progress on the Second Generation CMORPH: LEO-IR Based Precipitation Estimates and Cloud Motion Vector

    Science.gov (United States)

    Xie, Pingping; Joyce, Robert; Wu, Shaorong

    2015-04-01

As reported at the EGU General Assembly of 2014, a prototype system was developed for the second generation CMORPH to produce global analyses of 30-min precipitation on a 0.05° lat/lon grid over the entire globe, from pole to pole, through integration of information from satellite observations as well as numerical model simulations. The second generation CMORPH is built upon the Kalman filter based CMORPH algorithm of Joyce and Xie (2011). Inputs to the system include rainfall and snowfall rate retrievals from passive microwave (PMW) measurements aboard all available low earth orbit (LEO) satellites, precipitation estimates derived from infrared (IR) observations of geostationary (GEO) as well as LEO platforms, and precipitation simulations from numerical global models. Key to the success of the 2nd generation CMORPH, among a couple of other elements, are the development of a LEO-IR based precipitation estimate to fill in the polar gaps and objectively analyzed cloud motion vectors to capture cloud movements of various spatial scales over the entire globe. In this presentation, we report our recent work on the refinement of these two important algorithm components. The prototype algorithm for the LEO IR precipitation estimation is refined to achieve improved quantitative accuracy and consistency with PMW retrievals. AVHRR IR TBB data from all LEO satellites are first remapped to a 0.05° lat/lon grid over the entire globe at a 30-min interval. Temporally and spatially co-located data pairs of the LEO TBB and inter-calibrated combined satellite PMW retrievals (MWCOMB) are then collected to construct PDF tables. Precipitation at a grid box is derived from the TBB by matching the PDF tables for the TBB and the MWCOMB. This procedure is implemented for different seasons, latitude bands, and underlying surface types to account for variations in the cloud-precipitation relationship. Meanwhile, a sub-system is developed to construct analyzed fields of
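The PDF-table matching described above is a form of quantile mapping: a brightness temperature is assigned the rain rate that occupies the same cumulative-probability position, with colder cloud tops mapping to heavier rain. A minimal sketch under that assumption; the tiny lookup sample is invented, not a CMORPH table:

```python
# Quantile (PDF) matching from IR brightness temperature (TBB) to rain rate.
# Sample TBB/rain values are illustrative assumptions.
import numpy as np

def pdf_match(tbb_value, tbb_sample, pmw_rain_sample):
    """Map a TBB value to the rain rate at the same cumulative probability.
    Colder cloud tops (lower TBB) map to higher rain rates."""
    tbb_sorted = np.sort(tbb_sample)               # ascending TBB
    rain_sorted = np.sort(pmw_rain_sample)[::-1]   # descending rain rate
    idx = np.searchsorted(tbb_sorted, tbb_value)
    idx = min(idx, len(rain_sorted) - 1)
    return rain_sorted[idx]

tbb = np.array([190, 200, 210, 220, 230, 240])     # K, cold to warm
rain = np.array([12.0, 8.0, 5.0, 2.0, 1.0, 0.0])   # mm/h from PMW retrievals
print(pdf_match(195, tbb, rain))                   # cold cloud -> heavy rain
```

The operational version would hold one such table per season, latitude band, and surface type, as the abstract notes.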

  6. Extreme weather events in southern Germany - Climatological risk and development of a large-scale identification procedure

    Science.gov (United States)

    Matthies, A.; Leckebusch, G. C.; Rohlfing, G.; Ulbrich, U.

    2009-04-01

Extreme weather events such as thunderstorms, hail, and heavy rain or snowfall can pose a threat to human life and to considerable tangible assets. Yet there is a lack of knowledge about the present-day climatological risk, its economic effects, and its changes due to rising greenhouse gas concentrations. Therefore, parts of the economy particularly sensitive to extreme weather events, such as insurance companies and airports, require regional risk analyses, early warning, and prediction systems to cope with such events. Such an attempt is made for southern Germany, in close cooperation with stakeholders. Comparing ERA40 and station data with impact records of Munich Re and Munich Airport, the 90th percentile was found to be a suitable threshold for extreme, impact-relevant precipitation events. Different methods for the classification of the causing synoptic situations have been tested on ERA40 reanalyses. An objective scheme for the classification of Lamb's circulation weather types (CWTs) has proved to be most suitable for correct classification of the large-scale flow conditions. Certain CWTs turned out to be prone to heavy precipitation, while others carry a very low risk of such events. Other large-scale parameters are tested in connection with CWTs to find a combination with the highest skill for identifying extreme precipitation events in climate model data (ECHAM5 and CLM). For example, vorticity advection at 700 hPa shows good results but presupposes knowledge of regional orographic particularities. Therefore, ongoing work is focused on additional testing of parameters that indicate deviations from a basic state of the atmosphere, such as the Eady growth rate or the newly developed Dynamic State Index. Evaluation results will be used to estimate the skill of the regional climate model CLM in simulating the frequency and intensity of the extreme weather events. 
Data from the A1B scenario (2000-2050) will be examined for a possible climate change
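The 90th-percentile threshold used above can be illustrated with a short sketch. The daily series is synthetic, and the 1 mm wet-day cutoff is an assumption of this example, not taken from the study:

```python
# Flag extreme precipitation days using the 90th percentile of wet-day
# totals. The daily series is synthetic (gamma-distributed), for
# illustration only.
import numpy as np

rng = np.random.default_rng(0)
daily_precip = rng.gamma(shape=0.6, scale=6.0, size=3650)  # mm/day, ~10 yr
wet_days = daily_precip[daily_precip >= 1.0]               # wet-day subset
p90 = np.percentile(wet_days, 90)                          # impact threshold
extreme_days = daily_precip[daily_precip > p90]            # flagged events
print(p90 > 1.0, extreme_days.size < wet_days.size)
```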

  7. EPFM verification by a large scale test

    International Nuclear Information System (INIS)

    Okamura, H.; Yagawa, G.; Hidaka, T.; Sato, M.; Urabe, Y.; Iida, M.

    1993-01-01

The Step B test was carried out as part of the elastic-plastic fracture mechanics (EPFM) studies in the Japanese PTS integrity research project. In the Step B test, a bending load was applied to a large flat specimen under thermal shock, while the tensile load was kept constant during the test. In the previous analysis, the estimated stable crack growth at the deepest point of the crack was 3 times larger than the experimental value. To reduce this difference from the standpoint of FEM modeling, a more refined FEM mesh was introduced. According to the new analysis, the difference decreased considerably. That is, the stable crack growth evaluation was improved by adopting a refined FEM model near the crack tip, and the remaining difference was of almost the same order as that in the NKS4-1 test analysis by MPA. 8 refs., 17 figs., 5 tabs

  8. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

Full Text Available Hydrological drought is increasingly studied using large-scale models. It is, however, unclear whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e., drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an

  9. Large scale chromatographic separations using continuous displacement chromatography (CDC)

    International Nuclear Information System (INIS)

    Taniguchi, V.T.; Doty, A.W.; Byers, C.H.

    1988-01-01

    A process for large scale chromatographic separations using a continuous chromatography technique is described. The process combines the advantages of large scale batch fixed column displacement chromatography with conventional analytical or elution continuous annular chromatography (CAC) to enable large scale displacement chromatography to be performed on a continuous basis (CDC). Such large scale, continuous displacement chromatography separations have not been reported in the literature. The process is demonstrated with the ion exchange separation of a binary lanthanide (Nd/Pr) mixture. The process is, however, applicable to any displacement chromatography separation that can be performed using conventional batch, fixed column chromatography

  10. Global and exponential attractors of the three dimensional viscous primitive equations of large-scale moist atmosphere

    OpenAIRE

    You, Bo; Li, Fang

    2016-01-01

    This paper is concerned with the long-time behavior of solutions for the three dimensional viscous primitive equations of large-scale moist atmosphere. We prove the existence of a global attractor for the three dimensional viscous primitive equations of large-scale moist atmosphere by asymptotic a priori estimate and construct an exponential attractor by using the smoothing property of the semigroup generated by the three dimensional viscous primitive equations of large-scale moist atmosphere...

  11. Extreme Precipitation Estimation with Typhoon Morakot Using Frequency and Spatial Analysis

    Directory of Open Access Journals (Sweden)

    Hone-Jay Chu

    2011-01-01

Full Text Available Typhoon Morakot lashed Taiwan and produced copious amounts of precipitation in 2009. From the point of view of hydrological statistics, the impact of the precipitation from Typhoon Morakot can be analyzed and discussed using frequency analysis. The frequency curve, which is fitted mathematically to historical observed data, can be used to estimate the probability of exceedance for runoff events of a certain magnitude. The study integrates frequency analysis and spatial analysis to assess the effect of the Typhoon Morakot event on rainfall frequency in the Gaoping River basin of southern Taiwan. First, extreme rainfall data are collected at sixteen stations for durations of 1, 3, 6, 12, and 24 hours, and an appropriate probability distribution is selected to analyze the impact of the extreme hydrological event. Spatial rainfall patterns for a return period of 200 yr with 24-hr duration, with and without Typhoon Morakot, are then estimated. Results show that, for long durations, the rainfall amounts from the frequency analysis differ significantly with and without the event. Furthermore, the spatial analysis shows that extreme rainfall for a return period of 200 yr is highly dependent on topography and is smaller in the southwest than in the east. The results not only demonstrate the distinct effect of Typhoon Morakot on frequency analysis, but also provide a reference for future planning of hydrological engineering.
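The fit-then-extrapolate step of such a frequency analysis can be sketched as follows. The annual maxima are invented, and the Gumbel distribution (method-of-moments fit) is used here only as a stand-in for whatever distribution the study actually selected:

```python
# Fit a Gumbel distribution to annual-maximum 24-hr rainfall and read off
# the 200-yr return level. Sample data and the Gumbel choice are
# illustrative assumptions.
import math

def gumbel_return_level(annual_maxima, return_period):
    """Method-of-moments Gumbel fit; return level for a given return period."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi       # scale parameter
    mu = mean - 0.5772 * beta                   # location (Euler-Mascheroni)
    p = 1.0 - 1.0 / return_period               # non-exceedance probability
    return mu - beta * math.log(-math.log(p))

amax = [320, 450, 280, 610, 390, 500, 350, 700, 420, 480]  # mm per 24 hr
print(round(gumbel_return_level(amax, 200), 1))
```

Comparing return levels computed with and without the Morakot event in the sample reproduces the kind of with/without comparison the abstract describes.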

  12. Depth-area-duration characteristics of storm rainfall in Texas using Multi-Sensor Precipitation Estimates

    Science.gov (United States)

    McEnery, J. A.; Jitkajornwanich, K.

    2012-12-01

This presentation will describe the methodology and overall system development by which a benchmark dataset of precipitation information has been used to characterize depth-area-duration relations in heavy rain storms occurring over regions of Texas. Over the past two years, project investigators along with the National Weather Service (NWS) West Gulf River Forecast Center (WGRFC) have developed and operated a gateway data system to ingest, store, and disseminate NWS multi-sensor precipitation estimates (MPE). As a pilot project of the Integrated Water Resources Science and Services (IWRSS) initiative, this testbed uses a Structured Query Language (SQL) server to maintain a full archive of current and historic MPE values within the WGRFC service area. These time series values are made available for public access as web services in the standard WaterML format. Having this volume of information maintained in a comprehensive database now allows the use of relational analysis capabilities within SQL to leverage these multi-sensor precipitation values and produce a valuable derivative product. The area of focus for this study is North Texas, using values that originated from the WGRFC, one of three River Forecast Centers currently represented in the holdings of this data system. Over the past two decades, NEXRAD radar has dramatically improved the ability to record rainfall. The resulting hourly MPE values, distributed over an approximately 4 km by 4 km grid, are considered by the NWS to be the "best estimate" of rainfall. The data server provides an accepted standard interface for internet access to the largest time-series dataset of NEXRAD-based MPE values ever assembled. An automated script has been written to search and extract storms over the 18-year period of record from the contents of this massive historical precipitation database. Not only can it extract site-specific storms, but also duration-specific storms and
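The relational storm-extraction idea can be sketched with an in-memory SQL aggregation. The table name, column names, and the three rows below are hypothetical stand-ins, not the archive's actual schema or data:

```python
# Aggregate hourly MPE grid-cell values into per-cell storm totals with SQL.
# Schema and rows are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mpe (cell_id TEXT, obs_hour TEXT, precip_mm REAL)")
conn.executemany("INSERT INTO mpe VALUES (?, ?, ?)", [
    ("x10y22", "2010-07-01T01", 12.5),
    ("x10y22", "2010-07-01T02", 30.0),
    ("x10y23", "2010-07-01T01",  4.2),
])
# Per-cell storm totals over the event window, heaviest first
rows = conn.execute(
    """SELECT cell_id, SUM(precip_mm) AS total
       FROM mpe GROUP BY cell_id ORDER BY total DESC"""
).fetchall()
print(rows)
```

A real query would add time-window and spatial predicates to pull out site-specific and duration-specific storms, as the abstract describes.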

  13. Analysis for preliminary evaluation of discrete fracture flow and large-scale permeability in sedimentary rocks

    International Nuclear Information System (INIS)

    Kanehiro, B.Y.; Lai, C.H.; Stow, S.H.

    1987-05-01

Conceptual models for sedimentary rock settings that could be used in future evaluation and suitability studies are being examined through the DOE Repository Technology Program. One area of concern for the hydrologic aspects of these models is discrete fracture flow analysis as related to the estimation of the size of the representative elementary volume, evaluation of the appropriateness of continuum assumptions, and estimation of the large-scale permeabilities of sedimentary rocks. A basis for preliminary analysis of flow in fracture systems of the types that might be expected to occur in low-permeability sedimentary rocks is presented. The approach used involves numerical modeling of discrete fracture flow for the configuration of a large-scale hydrologic field test directed at estimation of the size of the representative elementary volume and large-scale permeability. Analysis of fracture data on the basis of this configuration is expected to provide a preliminary indication of the scale at which continuum assumptions can be made

  14. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

In future power systems with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising w...

  15. Large-scale matrix-handling subroutines 'ATLAS'

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Fujita, Keiichi; Matsuura, Toshihiko; Tahara, Nobuo

    1978-03-01

    Subroutine package ''ATLAS'' has been developed for handling large-scale matrices. The package is composed of four kinds of subroutines, i.e., basic arithmetic routines, routines for solving linear simultaneous equations and for solving general eigenvalue problems and utility routines. The subroutines are useful in large scale plasma-fluid simulations. (auth.)

  16. Large-scale Agricultural Land Acquisitions in West Africa | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    This project will examine large-scale agricultural land acquisitions in nine West African countries -Burkina Faso, Guinea-Bissau, Guinea, Benin, Mali, Togo, Senegal, Niger, and Côte d'Ivoire. ... They will use the results to increase public awareness and knowledge about the consequences of large-scale land acquisitions.

  17. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Administrator

    structure and chemical purity of 99⋅1% by inductively coupled plasma optical emission spectroscopy on a large scale. Keywords. Sol–gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method. 1. Introduction. Zirconia has attracted the attention of many scientists because of its tremendous thermal, mechanical ...

  18. Radar rainfall estimation for the identification of debris-flow precipitation thresholds

    Science.gov (United States)

    Marra, Francesco; Nikolopoulos, Efthymios I.; Creutin, Jean-Dominique; Borga, Marco

    2014-05-01

Identification of rainfall thresholds for the prediction of debris-flow occurrence is a common approach for warning procedures. Traditionally, the debris-flow triggering rainfall is derived from the closest available raingauge. However, the spatial and temporal variability of intense rainfall in mountainous areas, where debris flows take place, may lead to large uncertainty in point-based estimates. Nikolopoulos et al. (2014) have shown that this uncertainty translates into a systematic underestimation of the rainfall thresholds, leading to a steep degradation of the performance of the rainfall threshold for identification of debris-flow occurrence under operational conditions. A potential solution to this limitation lies in the use of rainfall estimates from weather radar. Thanks to their high spatial and temporal resolutions, these estimates offer the advantage of providing rainfall information over the actual debris-flow location. The aim of this study is to analyze the value of radar precipitation estimates for the identification of debris-flow precipitation thresholds. Seven rainfall events that triggered debris flows in the Adige river basin (Eastern Italian Alps) are analyzed using data from a dense raingauge network and a C-band weather radar. Radar data are processed using a set of correction algorithms specifically developed for weather radar rainfall application in mountainous areas. Rainfall thresholds for the triggering of debris flows are identified in the form of average intensity-duration power law curves, using a frequentist approach applied to both radar rainfall estimates and raingauge data. The sampling uncertainty associated with the derivation of the thresholds is assessed by using a bootstrap technique (Peruccacci et al. 2012). Results show that radar-based rainfall thresholds largely exceed those obtained by using raingauge data. 
Moreover, the differences between the two thresholds may be related to the spatial characteristics (i.e., spatial
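An intensity-duration power-law threshold I = a·D^b is typically fitted as a straight line in log-log space. A minimal sketch of that fit; the triggering-event sample below is synthetic, not the study's data:

```python
# Fit the intensity-duration threshold I = a * D**b by least squares in
# log-log space. Duration/intensity pairs are synthetic examples.
import numpy as np

def fit_power_law(durations_h, intensities_mmh):
    """Return (a, b) of I = a * D**b, fitted as a line in log-log space."""
    logD = np.log(durations_h)
    logI = np.log(intensities_mmh)
    b, log_a = np.polyfit(logD, logI, 1)   # slope b, intercept log(a)
    return float(np.exp(log_a)), float(b)

D = np.array([1.0, 3.0, 6.0, 12.0, 24.0])   # duration, h
I = np.array([20.0, 11.0, 7.5, 5.0, 3.4])   # average intensity, mm/h
a, b = fit_power_law(D, I)
print(round(a, 1), round(b, 2))
```

Bootstrapping, as in the study, would repeat this fit on resampled event sets to quantify the sampling uncertainty of a and b.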

  19. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  20. Simulation of fatigue crack growth under large scale yielding conditions

    Science.gov (United States)

    Schweizer, Christoph; Seifert, Thomas; Riedel, Hermann

    2010-07-01

A simple mechanism-based model for fatigue crack growth assumes a linear correlation between the cyclic crack-tip opening displacement (ΔCTOD) and the crack growth increment (da/dN). The objective of this work is to compare analytical estimates of ΔCTOD with results of numerical calculations under large scale yielding conditions, and to verify the physical basis of the model by comparing the predicted and the measured evolution of the crack length in a 10%-chromium steel. The material is described by a rate-independent cyclic plasticity model with power-law hardening and Masing behavior. During the tension-going part of the cycle, nodes at the crack tip are released such that the crack growth increment corresponds approximately to the crack-tip opening. The finite element analysis, performed in ABAQUS, is continued for as many cycles as needed to reach a stabilized value of ΔCTOD. The analytical model contains an interpolation formula for the J-integral, which is generalized to account for cyclic loading and crack closure. The simulated and estimated ΔCTOD are reasonably consistent. The predicted crack length evolution is found to be in good agreement with the behavior of microcracks observed in a 10%-chromium steel.

  1. LARGE-SCALE INDICATIVE MAPPING OF SOIL RUNOFF

    Directory of Open Access Journals (Sweden)

    E. Panidi

    2017-11-01

Full Text Available In our study we estimate relationships between quantitative parameters of relief, the soil runoff regime, and the spatial distribution of radioactive pollutants in the soil. The study is conducted on a test arable area located in the basin of the upper Oka River (Orel region, Russia). Previously we collected a large number of soil samples, which make it possible to investigate the redistribution of Chernobyl-origin cesium-137 in the soil material and, as a consequence, the soil runoff magnitude at the sampling points. Here we describe and discuss the technique applied to large-scale mapping of the soil runoff. The technique is based upon measurement of cesium-137 radioactivity in the different relief structures. Key stages are the allocation of soil sampling points (very high resolution space imagery was used as supporting data); soil sample collection and analysis; calibration of the mathematical model (using the estimated background value of cesium-137 radioactivity); and automated compilation of the predictive map of the studied territory (a digital elevation model is used for this purpose, and cesium-137 radioactivity can be predicted from quantitative parameters of the relief). The maps can be used as supporting data for precision agriculture and for recultivation or melioration purposes.
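The calibrate-then-predict idea above can be sketched as a simple regression of measured cesium-137 activity on a relief parameter, then applied to unsampled DEM cells. The slope values, activities, and the linear model itself are invented assumptions of this sketch, not the study's calibrated model:

```python
# Calibrate a linear relation between a relief parameter (terrain slope)
# and measured cesium-137 activity, then predict activity over DEM cells.
# All values and the linear form are illustrative assumptions.
import numpy as np

slope_deg = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])   # sampled relief
cs137 = np.array([48.0, 45.0, 40.0, 34.0, 30.0, 25.0])  # kBq/m^2 measured

coef = np.polyfit(slope_deg, cs137, 1)      # calibrate linear model
dem_slopes = np.array([3.0, 5.0, 9.0])      # unsampled DEM cells
predicted = np.polyval(coef, dem_slopes)    # predicted activity per cell
print(np.round(predicted, 1))
```

Lower predicted activity on steeper cells corresponds to stronger soil runoff, which is what the predictive map encodes.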

  2. Results of large scale thyroid dose reconstruction in Ukraine

    International Nuclear Information System (INIS)

    Likhtarev, I.; Sobolev, B.; Kairo, I.; Tabachny, L.; Jacob, P.; Proehl, G.; Goulko, G.

    1996-01-01

In 1993, the Ukrainian Ministry on Chernobyl Affairs initiated a large scale reconstruction of thyroid exposures to radioiodine after the Chernobyl accident. The objective was to provide the state policy on social compensation with a scientific background. About 7000 settlements from five contaminated regions have received certificates of thyroid exposure since then. The certificates contain estimates of the average thyroid dose from 131I for seven age groups. The primary dose estimates used about 150000 direct measurements of 131I activity in the thyroid glands of inhabitants of the Chernigiv, Kiev, Zhytomyr, and Vinnytsa regions. Parameters of the assumed intake function were related to environmental and questionnaire data. The dose reconstruction for the remaining territory was based on empirical relations between intake function parameters and the 137Cs deposition, specified by the distance and the direction to the Chernobyl Nuclear Power Plant. The relations were first derived for territories with direct measurements and then extended to other areas using daily iodine releases and atmospheric transport routes. The results of the dose reconstruction made it possible to delineate zones on the territory of Ukraine according to the average levels of thyroid exposure. These zones underlay a policy of post-accident health care and social compensation. Another important application of the thyroid dose reconstruction is the assessment of the radiation risk of thyroid cancer among people exposed during childhood due to the Chernobyl accident.

  3. Estimating and forecasting the precipitable water vapor from GOES satellite data at high altitude sites

    Science.gov (United States)

    Marín, Julio C.; Pozo, Diana; Curé, Michel

    2015-01-01

In this work, we describe a method to estimate the precipitable water vapor (PWV) from Geostationary Operational Environmental Satellite (GOES) data at high altitude sites. The method was applied at the Atacama Pathfinder Experiment (APEX) and Cerro Toco sites, located above 5000 m altitude on the Chajnantor plateau in the north of Chile. It was validated using GOES-12 satellite data over the range 0-1.2 mm, since submillimeter/millimeter astronomical observations are only useful within this PWV range. The PWV estimated from GOES and the Final Analyses (FNL) at APEX for 2007 and 2009 shows root mean square error values of 0.23 mm and 0.36 mm over the ranges 0-0.4 mm and 0.4-1.2 mm, respectively. However, absolute relative errors of 51% and 33% were found over these PWV ranges, respectively. We recommend using high-resolution thermodynamic profiles from the Global Forecast System (GFS) model to estimate the PWV from GOES data, since they are available every three hours and at an earlier time than the FNL data. The estimated PWV from GOES/GFS agrees better with the observed PWV at both sites during nighttime; the largest errors occur during daytime. Short-term PWV forecasts were implemented at both sites by applying a simple persistence method to the PWV estimated from GOES/GFS. The 12 h and 24 h PWV forecasts evaluated from August to October 2009 indicate that 25% of them show very good agreement with observations, whereas 50% show reasonably good agreement. Transmission uncertainties calculated for PWV estimates and forecasts over the studied sites are larger over the range 0-0.4 mm than over the range 0.4-1.2 mm. Thus, the method can be used over the latter interval with more confidence.
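The persistence forecast used above simply carries the latest PWV estimate forward and scores it against later observations. A minimal sketch with an invented PWV series:

```python
# Persistence forecast of PWV scored with RMSE: series[t] is used as the
# forecast for series[t + lead]. The PWV series (mm) is illustrative.
import math

def persistence_rmse(series, lead):
    """RMSE of forecasting series[t + lead] with series[t]."""
    errs = [(series[t + lead] - series[t]) ** 2
            for t in range(len(series) - lead)]
    return math.sqrt(sum(errs) / len(errs))

pwv = [0.30, 0.32, 0.35, 0.40, 0.38, 0.36, 0.41, 0.45]  # mm, hourly-ish
print(round(persistence_rmse(pwv, 1), 3))
```

Evaluating this at 12 h and 24 h leads over a season gives the forecast-agreement statistics the abstract quotes.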

  4. Evaluation of precipitation estimates over CONUS derived from satellite, radar, and rain gauge datasets (2002-2012)

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.

    2014-10-01

We use a suite of quantitative precipitation estimates (QPEs) derived from satellite, radar, and surface observations to derive precipitation characteristics over CONUS for the period 2002-2012. This comparison effort includes satellite multi-sensor datasets (bias-adjusted TMPA 3B42, near-real-time 3B42RT), radar estimates (NCEP Stage IV), and rain gauge observations. The remotely sensed precipitation datasets are compared with surface observations from the Global Historical Climatology Network (GHCN-Daily) and from PRISM (Parameter-elevation Regressions on Independent Slopes Model). The comparisons are performed at annual, seasonal, and daily scales over the River Forecast Centers (RFCs) of CONUS. Annual average rain rates show satisfying agreement with GHCN-D for all products over CONUS (± 6%). However, differences at the RFC scale are larger, in particular for the near-real-time 3B42RT precipitation estimates (-33 to +49%). At annual and seasonal scales, the bias-adjusted 3B42 showed substantial improvement over its near-real-time counterpart 3B42RT. However, large biases remained for 3B42 over the Western US for higher average accumulations (≥ 5 mm day-1) with respect to GHCN-D surface observations. At the daily scale, 3B42RT performed poorly in capturing extreme daily precipitation (> 4 in day-1) over the Northwest. Furthermore, the conditional analysis and the contingency analysis conducted here illustrate the challenge of retrieving extreme precipitation from remote sensing estimates.
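The percentage figures above are relative biases of a product's mean rain rate against a gauge reference. A minimal sketch of that metric; the two annual series are invented for illustration:

```python
# Relative bias (%) of a QPE product's mean annual rain rate against a
# gauge reference, the metric behind numbers like "-33 to +49%".
def relative_bias_percent(product, reference):
    """100 * (mean(product) - mean(reference)) / mean(reference)."""
    mp = sum(product) / len(product)
    mr = sum(reference) / len(reference)
    return 100.0 * (mp - mr) / mr

gauge_annual = [900.0, 1100.0, 1000.0]   # mm/yr, gauge reference
qpe_annual   = [950.0, 1180.0, 1070.0]   # mm/yr, co-located QPE product
print(round(relative_bias_percent(qpe_annual, gauge_annual), 1))
```

Computing this per RFC rather than over all of CONUS reveals the larger regional biases the study reports.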

  5. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    Science.gov (United States)

    Tan, Elcin

A new physically based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecasting (WRF-ARW) model. A persistent moisture flux convergence pattern, called the Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that the Pineapple Express causes extreme precipitation over the basin of interest. The average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for the 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first-ranked flood event, the 1997 case, whereas the WRF model is validated for all 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 combinations of microphysics, atmospheric boundary layer, and cumulus parameterization schemes. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and the YSU boundary layer (TGY), based on the 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. 
Moreover, the effect of climate change on precipitation is discussed, emphasizing temperature increase, in order to determine the

  6. Reconstructing Information in Large-Scale Structure via Logarithmic Mapping

    Science.gov (United States)

    Szapudi, Istvan

We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse error bar) does not increase. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear dark matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point where it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. 
Our aim will be to work out

  7. Parameter estimation using the genetic algorithm and its impact on quantitative precipitation forecast

    Directory of Open Access Journals (Sweden)

    Y. H. Lee

    2006-12-01

    In this study, optimal parameter estimations are performed for both physical and computational parameters in a mesoscale meteorological model, and their impacts on quantitative precipitation forecasting (QPF) are assessed for a heavy rainfall case that occurred over the Korean Peninsula in June 2005. Experiments are carried out using the PSU/NCAR MM5 model and the genetic algorithm (GA) for two parameters: the reduction rate of the convective available potential energy in the Kain-Fritsch (KF) scheme for cumulus parameterization, and the Asselin filter parameter for numerical stability. The fitness function is defined based on a QPF skill score. It turns out that each optimized parameter significantly improves the QPF skill. Such improvement is maximized when the two optimized parameters are used simultaneously. Our results indicate that optimization of computational as well as physical parameters, and their adequate application, is essential in improving model performance.
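The record above tunes two model parameters by maximizing a QPF skill score with a genetic algorithm. The following is a minimal GA sketch of that idea; the fitness function here is a synthetic stand-in (peaking at hypothetical values 0.9 and 0.1), since in the actual study each evaluation requires a full MM5 forecast run.

```python
import random

random.seed(0)  # reproducibility of this illustration

def fitness(params):
    """Synthetic stand-in for a QPF skill score; peaks at (0.9, 0.1).
    The peak location is an assumption for illustration only."""
    cape_rate, asselin = params
    return -((cape_rate - 0.9) ** 2 + (asselin - 0.1) ** 2)

def evolve(pop_size=20, generations=50, bounds=((0.0, 1.0), (0.0, 0.3))):
    # Random initial population within the parameter bounds
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # midpoint crossover
            for i, (lo, hi) in enumerate(bounds):        # bounded Gaussian mutation
                if random.random() < 0.2:
                    child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.05)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # converges toward the fitness peak
```

The same loop structure applies to the real problem; only the fitness evaluation (running the forecast model and scoring its precipitation) changes.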

  8. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set then are quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more closely than the chosen reference data. In aggregate, the simulations of land-surface latent and
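The validation metric described above, an aggregate root-mean-square error over space and time, plus the same statistic between alternative validation data sets as an observational-uncertainty estimate, can be sketched with toy gridded fields (all values here are synthetic):

```python
import numpy as np

# Toy gridded fields (time x lat x lon) standing in for a model simulation
# and two alternative validation data sets; magnitudes are arbitrary.
rng = np.random.default_rng(42)
reference = rng.normal(loc=15.0, scale=5.0, size=(12, 10, 10))          # e.g. deg C
simulation = reference + rng.normal(0.0, 2.0, size=reference.shape)     # model error
alt_dataset = reference + rng.normal(0.0, 1.0, size=reference.shape)    # obs spread

def rmse(a, b):
    """Aggregate root-mean-square difference over all space-time points."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

model_error = rmse(simulation, reference)       # simulated process vs reference
obs_uncertainty = rmse(alt_dataset, reference)  # spread among validation data sets
print(f"model RMSE = {model_error:.2f}, observational uncertainty = {obs_uncertainty:.2f}")
```

A model whose RMSE against the reference is comparable to the spread among the validation data sets cannot be meaningfully distinguished from the observations themselves.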

  9. An operational weather radar-based Quantitative Precipitation Estimation and its application in catchment water resources modeling

    DEFF Research Database (Denmark)

    He, Xin; Vejen, Flemming; Stisen, Simon

    2011-01-01

    of precipitation compared with rain-gauge-based methods, thus providing the basis for better water resources assessments. The radar QPE algorithm called ARNE is a distance-dependent areal estimation method that merges radar data with ground surface observations. The method was applied to the Skjern River catchment...... in western Denmark where alternative precipitation estimates were also used as input to an integrated hydrologic model. The hydrologic responses from the model were analyzed by comparing radar- and ground-based precipitation input scenarios. Results showed that radar QPE products are able to generate...... reliable simulations of stream flow and water balance. The potential of using radar-based precipitation was found to be especially high at a smaller scale, where the impact of spatial resolution was evident from the stream discharge results. Also, groundwater recharge was shown to be sensitive...
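The abstract describes ARNE as a distance-dependent areal estimation method merging radar data with ground observations. A minimal sketch of the distance-dependent idea follows; the exponential weighting and the half-range parameter are assumptions for illustration, not the actual ARNE formulation:

```python
import math

def merged_precip(radar_mm, gauge_mm, dist_km, half_range_km=15.0):
    """Blend a radar QPE value with the nearest gauge observation:
    the gauge dominates nearby, the radar dominates far from any gauge.
    Exponential distance weighting is an illustrative assumption."""
    w_gauge = math.exp(-dist_km / half_range_km)
    return w_gauge * gauge_mm + (1.0 - w_gauge) * radar_mm

print(merged_precip(4.0, 6.0, 0.0))    # at the gauge location: 6.0
print(merged_precip(4.0, 6.0, 100.0))  # far from gauges: close to the radar value
```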

  10. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    Science.gov (United States)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium size watersheds, less attention has been paid, so far, to large scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30 000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve the risk assessment and, in turn, the design of structural and non-structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of the space-time properties of rainfall fields is a key issue in performing a reliable flood risk analysis based on alternative precipitation scenarios to be fed into a new generation of large scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task that involves overcoming the drawbacks of the existing modelling schemes (devised for smaller spatial scales) while keeping their desirable properties. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas.
Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale

  11. Using damage data to estimate the risk from summer convective precipitation extremes

    Science.gov (United States)

    Schroeer, Katharina; Tye, Mari

    2017-04-01

    model to test whether the relationship between extreme rainfall events and damages is robust enough to estimate a potential underrepresentation of high intensity rainfall events in ungauged areas. Risk-relevant factors of socio-economic vulnerability, land cover, streamflow data, and weather type information are included to improve and sharpen the analysis. Within this study, we first aim to identify which rainfall events are most damaging and which factors affect the damages - seen as a proxy for the vulnerability - related to summer convective rainfall extremes in different catchment types. Secondly, we aim to detect potentially unreported damaging rainfall events and estimate the likelihood of such cases. We anticipate this damage perspective on summertime extreme convective precipitation to be beneficial for risk assessment, uncertainty management, and decision making with respect to weather and climate extremes on the regional-to-local level.

  12. Improving Radar Quantitative Precipitation Estimation over Complex Terrain in the San Francisco Bay Area

    Science.gov (United States)

    Cifelli, R.; Chen, H.; Chandrasekar, V.

    2017-12-01

    A recent study by the State of California's Department of Water Resources has emphasized that the San Francisco Bay Area is at risk of catastrophic flooding. Therefore, accurate quantitative precipitation estimation (QPE) and forecast (QPF) are critical for protecting life and property in this region. Compared to rain gauges and meteorological satellites, ground-based radar has shown great advantages for high-resolution precipitation observations in both the space and time domains. In addition, polarization diversity shows great potential to characterize precipitation microphysics through identification of different hydrometeor types and their size and shape information. Currently, all the radars comprising the U.S. National Weather Service (NWS) Weather Surveillance Radar-1988 Doppler (WSR-88D) network are operating in dual-polarization mode. Enhancement of QPE is one of the main considerations of the dual-polarization upgrade. The San Francisco Bay Area is covered by two S-band WSR-88D radars, namely, KMUX and KDAX. However, in complex terrain like the Bay Area, it is still challenging to obtain an optimal rainfall algorithm for a given set of dual-polarization measurements. In addition, the accuracy of rain rate estimates is contingent on additional factors such as bright band contamination, vertical profile of reflectivity (VPR) correction, and partial beam blockages. This presentation aims to improve radar QPE for the Bay Area using advanced dual-polarization rainfall methodologies. The benefit brought by the dual-polarization upgrade of the operational radar network is assessed. In addition, a pilot study of gap-filling X-band radar performance is conducted in support of regional QPE system development. This paper also presents a detailed comparison between the dual-polarization radar-derived rainfall products and various operational products including the NSSL's Multi-Radar/Multi-Sensor (MRMS) system.
Quantitative evaluation of various rainfall products is achieved
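The two families of rainfall estimators contrasted in records like this one, a reflectivity-only Z-R relation and a dual-polarization relation that also uses differential reflectivity, can be sketched as follows. The Z-R coefficients are the classic Marshall-Palmer values; the dual-polarization coefficients below only illustrate the generic functional form R = a·Zh^b·10^(c·Zdr) and are placeholders, not values from this study.

```python
def r_from_z(dbz, a=200.0, b=1.6):
    """Marshall-Palmer style Z-R relation, Z = a * R**b,
    with Z in linear units (mm^6 m^-3) and R in mm/h."""
    z_linear = 10.0 ** (dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

def r_from_z_zdr(dbz, zdr_db, a=0.0067, b=0.927, c=-0.343):
    """Dual-polarization estimator of the generic form
    R = a * Zh**b * 10**(c * Zdr). Coefficients are placeholders
    showing the shape of such relations, not tuned values."""
    zh = 10.0 ** (dbz / 10.0)
    return a * zh ** b * 10.0 ** (c * zdr_db)

# Higher Zdr (larger, more oblate drops) lowers the rain rate inferred
# from the same reflectivity.
print(r_from_z(30.0))            # roughly 2.7 mm/h
print(r_from_z_zdr(30.0, 0.0))
print(r_from_z_zdr(30.0, 1.0))
```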

  13. Atmospheric water vapor transport: Estimation of continental precipitation recycling and parameterization of a simple climate model. M.S. Thesis

    Science.gov (United States)

    Brubaker, Kaye L.; Entekhabi, Dara; Eagleson, Peter S.

    1991-01-01

    The advective transport of atmospheric water vapor and its role in global hydrology and the water balance of continental regions are discussed and explored. The data set consists of ten years of global wind and humidity observations interpolated onto a regular grid by objective analysis. Atmospheric water vapor fluxes across the boundaries of selected continental regions are displayed graphically. The water vapor flux data are used to investigate the sources of continental precipitation. The total amount of water that precipitates on large continental regions is supplied by two mechanisms: (1) advection from surrounding areas external to the region; and (2) evaporation and transpiration from the land surface recycling of precipitation over the continental area. The degree to which regional precipitation is supplied by recycled moisture is a potentially significant climate feedback mechanism and land surface-atmosphere interaction, which may contribute to the persistence and intensification of droughts. A simplified model of the atmospheric moisture over continents and simultaneous estimates of regional precipitation are employed to estimate, for several large continental regions, the fraction of precipitation that is locally derived. In a separate, but related, study estimates of ocean to land water vapor transport are used to parameterize an existing simple climate model, containing both land and ocean surfaces, that is intended to mimic the dynamics of continental climates.
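The central quantity in the thesis abstract above, the fraction of regional precipitation supplied by local evapotranspiration versus advected moisture, can be illustrated with a deliberately simplified well-mixed mass balance. This is a sketch of the idea only, not the thesis's actual recycling model:

```python
def recycling_ratio(evap_flux, moisture_inflow):
    """Bulk recycled fraction of precipitation under a well-mixed
    assumption: precipitation draws on locally evaporated moisture
    (evap_flux) and advected inflow (moisture_inflow) in proportion
    to their share of the regional moisture supply. Both arguments
    must be in the same units (e.g. kg of water per second over the
    region). A strong simplification of the actual formulation."""
    return evap_flux / (evap_flux + moisture_inflow)

# Hypothetical region: local evapotranspiration supplies one quarter
# of the total moisture input.
print(recycling_ratio(evap_flux=1.0, moisture_inflow=3.0))  # 0.25
```

Large regions tend to have higher ratios, since a larger area contributes evapotranspiration relative to moisture advected across its boundary.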

  14. A Novel Architecture of Large-scale Communication in IOT

    Science.gov (United States)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have examined the large-scale communication architecture of the IOT in detail. In fact, the non-uniform technologies between IPv6 and access points have led to a lack of broad principles for large-scale communication architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communication in the IOT.

  15. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  16. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small-scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark and large-scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparisons between results obtained from...... small and large scale model tests show no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5m as the water sunk into the voids between the stones on the crest. For low overtopping scale effects...

  17. Identification and uncertainty estimation of vertical reflectivity profiles using a Lagrangian approach to support quantitative precipitation measurements by weather radar

    Science.gov (United States)

    Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R.

    2013-09-01

    This paper presents a novel approach to estimate the vertical profile of reflectivity (VPR) from volumetric weather radar data using both a traditional Eulerian as well as a newly proposed Lagrangian implementation. For this latter implementation, the recently developed Rotational Carpenter Square Cluster Algorithm (RoCaSCA) is used to delineate precipitation regions at different reflectivity levels. A piecewise linear VPR is estimated for precipitation that is either stratiform or neither stratiform nor convective. As a second aspect of this paper, a novel approach is presented which is able to account for the impact of VPR uncertainty on the estimated radar rainfall variability. Results show that implementation of the VPR identification and correction procedure has a positive impact on quantitative precipitation estimates from radar. Unfortunately, visibility problems severely limit the impact of the Lagrangian implementation beyond distances of 100 km. However, by combining this procedure with the global Eulerian VPR estimation procedure for a given rainfall type (stratiform, and neither stratiform nor convective), the quality of the quantitative precipitation estimates increases up to a distance of 150 km. Analyses of the impact of VPR uncertainty show that this aspect accounts for a large fraction of the differences between weather radar rainfall estimates and rain gauge measurements.
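A VPR correction of the kind described above adjusts a reflectivity measurement taken aloft down to surface level using the identified profile. A minimal sketch with a piecewise-linear profile follows; the profile shape (a bright-band bump near 2 km, strong decrease above) and all numbers are hypothetical:

```python
import numpy as np

def correct_to_surface(dbz_aloft, beam_height_m, vpr_heights_m, vpr_dbz_offset):
    """Correct a reflectivity measurement aloft to surface level using a
    piecewise-linear VPR expressed as a dBZ offset relative to the surface.
    np.interp evaluates the piecewise-linear profile at the beam height."""
    offset = np.interp(beam_height_m, vpr_heights_m, vpr_dbz_offset)
    return dbz_aloft - offset

# Hypothetical stratiform-like profile: enhancement in the bright band
# around 2000 m, rapid decrease of reflectivity above it.
heights = [0, 1500, 2000, 2500, 4000]      # m above the surface
offsets = [0.0, 0.5, 3.0, -2.0, -10.0]     # dBZ relative to the surface value

# A 25 dBZ measurement at 3 km, where the profile is ~4.7 dB below the
# surface value, maps to a higher surface-level estimate.
corrected = correct_to_surface(25.0, 3000.0, heights, offsets)
print(corrected)
```

At long range the beam samples ever higher in the profile, which is why VPR errors grow with distance from the radar.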

  18. Inflation and large scale structure formation after COBE

    International Nuclear Information System (INIS)

    Schaefer, R.K.; Shafi, Q.

    1992-06-01

    The simplest realizations of the new inflationary scenario typically give rise to primordial density fluctuations which deviate logarithmically from the scale free Harrison-Zeldovich spectrum. We consider a number of such examples and, in each case we normalize the amplitude of the fluctuations with the recent COBE measurement of the microwave background anisotropy. The predictions for the bulk velocities as well as anisotropies on smaller (1-2 degrees) angular scales are compared with the Harrison-Zeldovich case. Deviations from the latter range from a few to about 15 percent. We also estimate the redshift beyond which the quasars would not be expected to be seen. The inflationary quasar cutoff redshifts can vary by as much as 25% from the Harrison-Zeldovich case. We find that the inflationary scenario provides a good starting point for a theory of large scale structure in the universe provided the dark matter is a combination of cold plus (10-30%) hot components. (author). 27 refs, 1 fig., 1 tab

  19. Large Scale Obscuration and Related Climate Effects Workshop: Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Zak, B.D.; Russell, N.A.; Church, H.W.; Einfeld, W.; Yoon, D.; Behl, Y.K. [eds.

    1994-05-01

    A Workshop on Large Scale Obscuration and Related Climate Effects was held 29-31 January 1992, in Albuquerque, New Mexico. The objectives of the workshop were: to determine through the use of expert judgement the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for the prioritization of further work on newly available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants, and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).

  20. Locating inefficient links in a large-scale transportation network

    Science.gov (United States)

    Sun, Li; Liu, Like; Xu, Zhongzhi; Jie, Yang; Wei, Dong; Wang, Pu

    2015-02-01

    Based on data from a geographical information system (GIS) and daily commuting origin-destination (OD) matrices, we estimated the distribution of traffic flow in the San Francisco road network and studied Braess's paradox in a large-scale transportation network with realistic travel demand. We measured the variation of total travel time ΔT when a road segment is closed, and found that |ΔT| follows a power-law distribution whether ΔT > 0 or ΔT < 0. This implies that most roads have a negligible effect on the efficiency of the road network, while the failure of a few crucial links would result in severe travel delays, and closure of a few inefficient links would counter-intuitively reduce travel costs considerably. Generating three theoretical networks, we discovered that the heterogeneously distributed travel demand may be the origin of the observed power-law distributions of |ΔT|. Finally, a genetic algorithm was used to pinpoint inefficient link clusters in the road network. We found that closing specific road clusters would further improve the transportation efficiency.
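The quantity measured above, the change in demand-weighted total travel time when one link is closed, can be sketched on a toy network with fixed link costs. Note one simplification: with flow-independent costs a closure can only give ΔT ≥ 0; reproducing Braess's paradox (ΔT < 0) would require congestion-dependent link costs, as in the study. The graph and demand below are invented for illustration.

```python
import heapq

def shortest_time(graph, src, dst):
    """Dijkstra over a dict-of-dicts graph with fixed link travel times."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")  # dst unreachable

def delta_T(graph, od_demand, closed_edge):
    """Change in demand-weighted total travel time when one link is closed."""
    u0, v0 = closed_edge
    reduced = {u: {v: w for v, w in nb.items() if (u, v) != (u0, v0)}
               for u, nb in graph.items()}
    before = sum(q * shortest_time(graph, s, t) for (s, t), q in od_demand.items())
    after = sum(q * shortest_time(reduced, s, t) for (s, t), q in od_demand.items())
    return after - before

graph = {"A": {"B": 1.0, "C": 2.0}, "B": {"D": 1.5}, "C": {"D": 1.0}, "D": {}}
demand = {("A", "D"): 100.0}
print(delta_T(graph, demand, ("A", "B")))  # closing the fast route costs 50.0
print(delta_T(graph, demand, ("C", "D")))  # closing an unused link costs 0.0
```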

  1. Applicability of vector processing to large-scale nuclear codes

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Harada, Hiroo; Matsuura, Toshihiko; Okuda, Motoi; Ohta, Fumio; Umeya, Makoto.

    1982-03-01

    To meet the growing trend of computational requirements in JAERI, introduction of a high-speed computer with vector processing capability (a vector processor) is desirable in the near future. To make effective use of a vector processor, appropriate optimization of nuclear codes to pipelined-vector architecture is vital, which will pose new problems concerning code development and maintenance. In this report, vector processing efficiency is assessed with respect to large-scale nuclear codes by examining the following items: 1) The present feature of computational load in JAERI is analyzed by compiling the computer utilization statistics. 2) Vector processing efficiency is estimated for the ten heavily-used nuclear codes by analyzing their dynamic behaviors run on a scalar machine. 3) Vector processing efficiency is measured for the other five nuclear codes by using the current vector processors, FACOM 230-75 APU and CRAY-1. 4) Effectiveness of applying a high-speed vector processor to nuclear codes is evaluated by taking account of the characteristics in JAERI jobs. Problems of vector processors are also discussed from the viewpoints of code performance and ease of use. (author)

  2. Biotechnological lignite conversion - a large-scale concept

    Energy Technology Data Exchange (ETDEWEB)

    Reich-Walber, M.; Meyrahn, H.; Felgener, G.W. [Rheinbraun AG, Koeln (Germany). Fuel Technology and Lab. Dept.

    1997-12-31

    Concerning the research on biotechnological lignite upgrading, Rheinbraun's overall objective is the large-scale production of liquid and gaseous products for the energy and chemical/refinery sectors. The presentation outlines Rheinbraun's technical concept for electricity production on the basis of biotechnologically solubilized lignite. A first rough cost estimate based on the assumptions described in the paper in detail and compared with the latest power plant generation shows the general cost efficiency of this technology despite the additional costs in respect of coal solubilization. The main reasons are low-cost process techniques for coal conversion on the one hand and cost reductions mainly in power plant technology (more efficient combustion processes and simplified gas clean-up) but also in coal transport (easy fuel handling) on the other hand. Moreover, it is hoped that an extended range of products will make it possible to widen the fields of lignite application. The presentation also points out that there is still a huge gap between this scenario and reality by limited microbiological knowledge. To close this gap Rheinbraun started a research project supported by the North-Rhine Westphalian government in 1995. Several leading biotechnological companies and institutes in Germany and the United States are involved in the project. The latest results of the current project will be presented in the paper. This includes fundamental research activities in the field of microbial coal conversion as well as investigations into bioreactor design and product treatment (dewatering, deashing and desulphurization). (orig.)

  3. Real-Time Large Scale 3d Reconstruction by Fusing Kinect and Imu Data

    Science.gov (United States)

    Huai, J.; Zhang, Y.; Yilmaz, A.

    2015-08-01

    Kinect-style RGB-D cameras have been used to build large scale dense 3D maps for indoor environments. These maps can serve many purposes such as robot navigation and augmented reality. However, generating dense 3D maps of large scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) Large scale consistent 3D reconstruction is realized by volume shifting and loop closure; (ii) The coarse-to-fine iterative closest point (ICP) algorithm, the SIFT odometry, and IMU odometry are combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides an incremental motion estimate. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles, which can reduce long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8 Hz on average while integrating color images into the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures, and has smaller drift than the state-of-the-art systems in large scale reconstruction.
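The ICP-then-SIFT-then-IMU fallback logic described in the abstract can be sketched as a simple cascade. The estimator functions below are stubs standing in for the real modules (a stub returns None to signal failure); the frame dictionary and all values are hypothetical.

```python
def icp_estimate(frame):
    """Stub: pose increment from ICP, or None when ICP diverges
    (e.g. in planar, low-constraint areas)."""
    return frame.get("icp")

def sift_estimate(frame):
    """Stub: feature-based incremental motion, or None when features
    are inadequate (abrupt motion, textureless scene)."""
    return frame.get("sift")

def imu_estimate(frame):
    """Stub: pose increment from integrating IMU measurements;
    always available, but drifts over time."""
    return frame["imu"]

def estimate_motion(frame):
    """Try each estimator in order of accuracy; fall through on failure."""
    for source in (icp_estimate, sift_estimate, imu_estimate):
        pose = source(frame)
        if pose is not None:
            return pose, source.__name__
    raise RuntimeError("unreachable: the IMU always returns an estimate")

# A frame where ICP has failed but SIFT odometry succeeded.
result = estimate_motion({"icp": None, "sift": (0.1, 0.0, 0.0), "imu": (0.2, 0.0, 0.0)})
print(result)  # ((0.1, 0.0, 0.0), 'sift_estimate')
```

The cascade is what makes the system "immune to tracking failures": the IMU branch always produces an estimate, at the cost of drift.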

  4. The Signature of Southern Hemisphere Atmospheric Circulation Patterns in Antarctic Precipitation.

    Science.gov (United States)

    Marshall, Gareth J; Thompson, David W J; van den Broeke, Michiel R

    2017-11-28

    We provide the first comprehensive analysis of the relationships between large-scale patterns of Southern Hemisphere climate variability and the detailed structure of Antarctic precipitation. We examine linkages between the high spatial resolution precipitation from a regional atmospheric model and four patterns of large-scale Southern Hemisphere climate variability: the southern baroclinic annular mode, the southern annular mode, and the two Pacific-South American teleconnection patterns. Variations in all four patterns influence the spatial configuration of precipitation over Antarctica, consistent with their signatures in high-latitude meridional moisture fluxes. They impact not only the mean but also the incidence of extreme precipitation events. Current coupled-climate models are able to reproduce all four patterns of atmospheric variability but struggle to correctly replicate their regional impacts on Antarctic climate. Thus, linking these patterns directly to Antarctic precipitation variability may allow a better estimate of future changes in precipitation than using model output alone.

  5. Quantitative precipitation estimation in complex orography using quasi-vertical profiles of dual polarization radar variables

    Science.gov (United States)

    Montopoli, Mario; Roberto, Nicoletta; Adirosi, Elisa; Gorgucci, Eugenio; Baldini, Luca

    2017-04-01

    Weather radars are nowadays a unique tool for quantitatively estimating rain precipitation near the surface. This is an important task for many applications: for example, feeding hydrological models, mitigating the impact of severe storms at the ground by using radar information in modern warning tools, and aiding validation studies of satellite-based rain products. With respect to the latter application, several ground validation studies of Global Precipitation Measurement (GPM) mission products have recently highlighted the importance of accurate QPE from ground-based weather radars. To date, many studies have analyzed the performance of various QPE algorithms making use of actual and synthetic experiments, possibly trained by measurements of particle size distributions and electromagnetic models. Most of these studies support the use of dual polarization variables not only to ensure a good level of radar data quality but also as a direct input to the rain estimation equations. Among others, one of the most important limiting factors in radar QPE accuracy is the vertical variability of the particle size distribution, which affects, to different degrees, all the acquired radar variables as well as rain rates. This is particularly impactful in mountainous areas, where the altitude of radar sampling is likely several hundreds of meters above the surface. In this work, we analyze the impact of vertical profile variations of rain precipitation on several dual polarization radar QPE algorithms when they are tested in a complex orography scenario. So far, in weather radar studies, more emphasis has been given to extrapolation strategies that make use of the signature of the vertical profiles in terms of radar co-polar reflectivity.
This may limit the use of the radar vertical profiles when dual polarization QPE algorithms are considered because in that case all the radar variables used in the rain estimation process should be consistently extrapolated at the surface

  6. Oxygen and Hydrogen Isotopes of Precipitation in a Rocky Mountainous Area of Beijing to Distinguish and Estimate Spring Recharge

    Directory of Open Access Journals (Sweden)

    Ziqiang Liu

    2018-05-01

    Stable isotopes of oxygen and hydrogen were used to estimate seasonal contributions of precipitation to natural spring recharge in Beijing's mountainous area. Isotopic compositions were shown to be more positive in the dry season and more negative in the wet season, due to the seasonal patterns in the amount of precipitation. The local meteoric water line (LMWL) was δ2H = 7.0 δ18O − 2.3 for the dry season and δ2H = 5.9 δ18O − 10.4 for the wet season. The LMWL in the two seasons had a lower slope and intercept than the Global Meteoric Water Line (p < 0.01). The slope and intercept of the LMWL in the wet season were lower than those in the dry season because of the effect of precipitation amount during the wet season (p < 0.01). Mean precipitation-amount effects of −15‰ and −2‰ per 100 mm change in the amount of precipitation for δ2H and δ18O, respectively, were obtained from the monthly total precipitation and its average isotopic value. The isotopic composition of precipitation decreased as precipitation duration increased. Little change in the isotopic composition of the natural spring was found. By employing isotope conservation of mass, it could be derived that, on average, approximately 7.2% of the natural spring recharge came from dry season precipitation and the remaining 92.8% came from wet season precipitation.
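The seasonal partitioning above follows from a two-endmember isotope mass balance: the spring's composition is a mixture of dry-season and wet-season precipitation signatures. A minimal sketch, with hypothetical δ18O endmember values (the abstract does not give the actual numbers):

```python
def dry_season_fraction(delta_spring, delta_dry, delta_wet):
    """Two-endmember isotope mass balance:
    delta_spring = f * delta_dry + (1 - f) * delta_wet, solved for f."""
    return (delta_spring - delta_wet) / (delta_dry - delta_wet)

# Hypothetical delta-18O values (permil), chosen for illustration only.
f_dry = dry_season_fraction(delta_spring=-8.6, delta_dry=-5.0, delta_wet=-8.88)
print(f"dry-season contribution: {f_dry:.1%}")
```

Because the spring's composition sits close to the wet-season endmember, the dry-season fraction comes out small, which is the same qualitative conclusion the study reaches.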

  7. Large-scale land transformations in Indonesia: The role of ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... enable timely responses to the impacts of large-scale land transformations in Central Kalimantan ... In partnership with UNESCO's Organization for Women in Science for the ... New funding opportunity for gender equality and climate change.

  8. Resolute large scale mining company contribution to health services of

    African Journals Online (AJOL)

    Resolute large scale mining company contribution to health services of Lusu ... in terms of socio-economic, health, education, employment, safe drinking water, ... The data were analyzed using the Statistical Package for the Social Sciences (SPSS).

  9. Personalized Opportunistic Computing for CMS at Large Scale

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    **Douglas Thain** is an Associate Professor of Computer Science and Engineering at the University of Notre Dame, where he designs large scale distributed computing systems to power the needs of advanced science and...

  10. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Corey Piccioni

    fuel/energy, climate, and finance has occurred and one of the most ... this wave of large-scale land acquisitions. In fact, esti- ... Environmental Rights Action/Friends of the Earth, Nigeria ... map the differentiated impacts (gender, ethnicity, ...

  11. Large-scale linear programs in planning and prediction.

    Science.gov (United States)

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  12. Bottom-Up Accountability Initiatives and Large-Scale Land ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... Security can help increase accountability for large-scale land acquisitions in ... to build decent economic livelihoods and participate meaningfully in decisions ... its 2017 call for proposals to establish Cyber Policy Centres in the Global South.

  13. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of large-scale magnetic field. For this purpose, we perform nonhelical magnetohydrodynamic (MHD) simulation, and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at scale 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to the energy transfers from the velocity field at the forcing scales.

  14. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26--27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  15. Large-Scale 3D Printing: The Way Forward

    Science.gov (United States)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, where numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  16. No Large Scale Curvature Perturbations during Waterfall of Hybrid Inflation

    OpenAIRE

    Abolhasani, Ali Akbar; Firouzjahi, Hassan

    2010-01-01

    In this paper the possibility of generating large scale curvature perturbations induced from the entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. We show that whether or not appreciable amounts of large scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical back-reactions to terminate inflation. If one considers only the clas...

  17. Benefits of transactive memory systems in large-scale development

    OpenAIRE

    Aivars, Sablis

    2016-01-01

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise...

  18. Capabilities of the Large-Scale Sediment Transport Facility

    Science.gov (United States)

    2016-04-01

    This technical note (ERDC/CHL CHETN-I-88, April 2016; approved for public release, distribution unlimited) describes the Large-Scale Sediment Transport Facility (LSTF) and recent upgrades to its measurement systems, including pump flow meters, sediment trap weigh tanks, and beach-profiling lidar. The purpose of these upgrades was to increase... A detailed discussion of the original LSTF features and capabilities can be...

  19. Comparative Analysis of Different Protocols to Manage Large Scale Networks

    OpenAIRE

    Anil Rao Pimplapure; Dr Jayant Dubey; Prashant Sen

    2013-01-01

    In recent years, the number, complexity, and size of large-scale networks have increased. The best example of a large-scale network is the Internet, and more recently, data centers in cloud environments. Managing such networks involves several tasks, such as traffic monitoring, security, and performance optimization, which place a heavy burden on network administrators. This research studies different protocols, i.e., conventional protocols such as the Simple Network Management Protocol and newer Gossip-bas...

  20. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost, in particular, limits its use. As computer models have grown in size, such as in the number of degrees of freedom, the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  1. The concurrent multiplicative-additive approach for gauge-radar/satellite multisensor precipitation estimates

    Science.gov (United States)

    Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.

    2010-12-01

    Objective analysis schemes (OAS), also called "successive correction methods" or "observation nudging", have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based raingauge networks. However, in contrast to the more complex geostatistical approaches, the OAS techniques for this use are not optimized. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be soundly done. Here, we propose a new procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through a successive correction of residuals resulting from a Gaussian kernel smoother applied on spatial samples. The CMA-OAS, first, poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level. The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly to the first guess, and the relaxation shape is obtained from the sequential
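    The successive-correction engine at the heart of any OAS can be sketched in a few lines. The following is a minimal, hypothetical illustration of that inner loop only (Gaussian kernel, shrinking length scales, gauges colocated with grid cells), not the CMA-OAS bias decomposition itself:

    ```python
    import numpy as np

    def successive_correction(background, grid_xy, obs, obs_idx,
                              length_scales=(50.0, 20.0, 8.0)):
        """Objective-analysis successive correction with a Gaussian kernel.

        background    : first-guess field (e.g. radar rainfall), shape (n_grid,)
        grid_xy       : grid coordinates, shape (n_grid, 2)
        obs, obs_idx  : gauge values and the grid cell index of each gauge
        length_scales : kernel widths, narrowed at each correction pass
        """
        analysis = background.astype(float).copy()
        obs_xy = grid_xy[obs_idx]
        for L in length_scales:
            resid = obs - analysis[obs_idx]              # innovations at gauges
            d2 = ((grid_xy[:, None, :] - obs_xy[None, :, :]) ** 2).sum(-1)
            w = np.exp(-d2 / (2.0 * L * L))              # (n_grid, n_obs) weights
            wsum = w.sum(axis=1)
            analysis += np.where(wsum > 1e-12,
                                 w @ resid / np.maximum(wsum, 1e-12), 0.0)
        return analysis
    ```

    Each pass nudges the field toward the gauge residuals over a narrower neighborhood, so away from observations the analysis relaxes smoothly back to the first guess.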

  2. Impact of Precipitating Ice Hydrometeors on Longwave Radiative Effect Estimated by a Global Cloud-System Resolving Model

    Science.gov (United States)

    Chen, Ying-Wen; Seiki, Tatsuya; Kodama, Chihiro; Satoh, Masaki; Noda, Akira T.

    2018-02-01

    Satellite observation and general circulation model (GCM) studies suggest that precipitating ice makes nonnegligible contributions to the radiation balance of the Earth. However, in most GCMs, precipitating ice is diagnosed and its radiative effects are not taken into account. Here we examine the longwave radiative impact of precipitating ice using a global nonhydrostatic atmospheric model with a double-moment cloud microphysics scheme. An off-line radiation model is employed to determine cloud radiative effects according to the amount and altitude of each type of ice hydrometeor. Results show that the snow radiative effect reaches 2 W m-2 in the tropics, which is about half the value estimated by previous studies. This effect is strongly dependent on the vertical separation of ice categories and is partially generated by differences in terminal velocities, which are not represented in GCMs with diagnostic precipitating ice. Results from sensitivity experiments that artificially change the categories and altitudes of precipitating ice show that the simulated longwave heating profile and longwave radiation field are sensitive to the treatment of precipitating ice in models. This study emphasizes the importance of incorporating appropriate treatments for the radiative effects of precipitating ice in cloud and radiation schemes in GCMs in order to capture the cloud radiative effects of upper level clouds.

  3. GAS MIXING ANALYSIS IN A LARGE-SCALED SALTSTONE FACILITY

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S

    2008-05-28

    Computational fluid dynamics (CFD) methods have been used to estimate the flow patterns, mainly driven by temperature gradients, inside the vapor space of a large-scale Saltstone vault facility at the Savannah River Site (SRS). The purpose of this work is to examine the gas motions inside the vapor space under the current vault configurations by taking a three-dimensional transient momentum-energy coupled approach for the vapor space domain of the vault. The modeling calculations were based on prototypic vault geometry and expected normal operating conditions as defined by Waste Solidification Engineering. The modeling analysis was focused on the air flow patterns near the ventilated corner zones of the vapor space inside the Saltstone vault. The turbulence behavior and natural convection mechanism used in the present model were benchmarked against the literature information and theoretical results. The verified model was applied to the Saltstone vault geometry for the transient assessment of the air flow patterns inside the vapor space of the vault region using the potential operating conditions. The baseline model considered two cases for the estimation of the flow patterns within the vapor space. One is the reference nominal case. The other is for a negative temperature gradient between the roof inner and top grout surface temperatures, intended as the potential bounding condition. The flow patterns of the vapor space calculated by the CFD model demonstrate that ambient air comes into the vapor space of the vault through the lower-end ventilation hole, and it gets heated up by a Bénard-cell type circulation before leaving the vault via the higher-end ventilation hole. The calculated results are consistent with the literature information. Detailed results and the cases considered in the calculations are discussed here.

  4. ESTIMATION OF PHASE DELAY DUE TO PRECIPITABLE WATER FOR DINSAR-BASED LAND DEFORMATION MONITORING

    Directory of Open Access Journals (Sweden)

    J. Susaki

    2017-09-01

    In this paper, we present a method for using the estimated precipitable water (PW) to mitigate atmospheric phase delay in order to improve the accuracy of land-deformation assessment with differential interferometric synthetic aperture radar (DInSAR). The phase difference obtained from multi-temporal synthetic aperture radar images contains errors of several types, and the atmospheric phase delay can be an obstacle to estimating surface subsidence. In this study, we calculate PW from external meteorological data. Firstly, we interpolate the data with regard to their spatial and temporal resolutions. Then, assuming a range direction between a target pixel and the sensor, we derive the cumulative amount of differential PW at the height of the slant range vector at pixels along that direction. The atmospheric phase delay of each interferogram is acquired by taking a residual after a preliminary determination of the linear deformation velocity and digital elevation model (DEM) error, and by applying high-pass temporal and low-pass spatial filters. Next, we estimate a regression model that connects the cumulative amount of PW and the atmospheric phase delay. Finally, we subtract the contribution of the atmospheric phase delay from the phase difference of the interferogram, and determine the linear deformation velocity and DEM error. The experimental results show a consistent relationship between the cumulative amount of differential PW and the atmospheric phase delay. An improvement in land-deformation accuracy is observed at a point at which the deformation is relatively large. Although further investigation is necessary, we conclude at this stage that the proposed approach has the potential to improve the accuracy of the DInSAR technique.
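    The regression step linking the cumulative differential PW to the atmospheric phase delay can be sketched as an ordinary least-squares fit. The linear form and the function name below are illustrative assumptions, not the authors' exact model:

    ```python
    import numpy as np

    def fit_delay_model(pw, phase_delay):
        """Fit phase_delay ~ slope * pw + intercept by least squares."""
        A = np.column_stack([pw, np.ones_like(pw)])
        slope, intercept = np.linalg.lstsq(A, phase_delay, rcond=None)[0]
        return slope, intercept
    ```

    The fitted contribution `slope * pw + intercept` would then be subtracted from each interferogram's phase difference before re-estimating the deformation velocity and DEM error.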

  5. Explore the Usefulness of Person-Fit Analysis on Large-Scale Assessment

    Science.gov (United States)

    Cui, Ying; Mousavi, Amin

    2015-01-01

    The current study applied the person-fit statistic, l[subscript z], to data from a Canadian provincial achievement test to explore the usefulness of conducting person-fit analysis on large-scale assessments. Item parameter estimates were compared before and after the misfitting student responses, as identified by l[subscript z], were removed. The…
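    The l[subscript z] statistic referenced above is the standardized log-likelihood of an examinee's response pattern under the fitted item response model. A minimal sketch for a Rasch/2PL model (the item parameters and function name are illustrative):

    ```python
    import numpy as np

    def lz_person_fit(responses, theta, b, a=None):
        """Standardized log-likelihood person-fit statistic l_z.

        responses : 0/1 item responses
        theta     : ability estimate
        b, a      : item difficulties and discriminations (a=1 gives Rasch)
        """
        u = np.asarray(responses, float)
        b = np.asarray(b, float)
        a = np.ones_like(b) if a is None else np.asarray(a, float)
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))       # P(correct | theta)
        l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
        expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
        variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
        return (l0 - expected) / np.sqrt(variance)
    ```

    Large negative values flag misfitting (aberrant) response patterns, which is the basis for the removal step described in the abstract.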

  6. Non-parametric co-clustering of large scale sparse bipartite networks on the GPU

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Mørup, Morten; Hansen, Lars Kai

    2011-01-01

    of row and column clusters from a hypothesis space of an infinite number of clusters. To reach large scale applications of co-clustering we exploit that parameter inference for co-clustering is well suited for parallel computing. We develop a generic GPU framework for efficient inference on large scale...... sparse bipartite networks and achieve a speedup of two orders of magnitude compared to estimation based on conventional CPUs. In terms of scalability we find for networks with more than 100 million links that reliable inference can be achieved in less than an hour on a single GPU. To efficiently manage...

  7. Analysis of environmental impact assessment for large-scale X-ray medical equipments

    International Nuclear Information System (INIS)

    Fu Jin; Pei Chengkai

    2011-01-01

    Based on an Environmental Impact Assessment (EIA) project, this paper elaborates the basic analysis essentials of EIA for the sales project of large-scale X-ray medical equipment, and provides the analysis procedure of environmental impact and dose estimation methods under normal and accident conditions. The key points of EIA for the sales project of large-scale X-ray medical equipment include the determination of pollution factors and management limit values according to the project's actual situation, and the utilization of various methods of assessment and prediction, such as analogy, actual measurement and calculation, to analyze, monitor, calculate and predict the pollution under normal and accident conditions. (authors)

  8. Investigation of Weather Radar Quantitative Precipitation Estimation Methodologies in Complex Orography

    Directory of Open Access Journals (Sweden)

    Mario Montopoli

    2017-02-01

    Near-surface quantitative precipitation estimation (QPE) from weather radar measurements is an important task for feeding hydrological models, limiting the impact of severe rain events at the ground, as well as aiding validation studies of satellite-based rain products. To date, several works have analyzed the performance of various QPE algorithms using actual and synthetic experiments, possibly trained by measurements of particle size distributions and electromagnetic models. Most of these studies support the use of dual polarization radar variables not only to ensure a good level of data quality but also as a direct input to rain estimation equations. One of the most important limiting factors in radar QPE accuracy is the vertical variability of the particle size distribution, which affects all the acquired radar variables as well as estimated rain rates at different levels. This is particularly impactful in mountainous areas, where the sampled altitudes are likely several hundred meters above the surface. In this work, we analyze the impact of the vertical profile variations of rain precipitation on several dual polarization radar QPE algorithms when they are tested in a complex orography scenario. So far, in weather radar studies, more emphasis has been given to the extrapolation strategies that use the signature of the vertical profiles in terms of radar co-polar reflectivity. This may limit the use of the radar vertical profiles when dual polarization QPE algorithms are considered. In that case, all the radar variables used in the rain estimation process should be consistently extrapolated at the surface to try and maintain the correlations among them. To avoid facing such complexity, especially with a view to operational implementation, we propose looking at the features of the vertical profile of rain (VPR), i.e., after performing the rain estimation. This procedure allows characterization of a single variable (i.e., rain when dealing with
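    At their core, radar QPE algorithms invert a power-law relation between a radar variable and rain rate. A minimal single-polarization sketch using the classic Marshall-Palmer Z = 200 R^1.6 relation (dual-polarization estimators add variables such as differential reflectivity and specific differential phase, but follow the same pattern):

    ```python
    import numpy as np

    def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
        """Invert Z = a * R**b (Marshall-Palmer defaults) to get rain rate.

        dbz : reflectivity in dBZ; returns R in mm/h.
        """
        z_linear = 10.0 ** (np.asarray(dbz, float) / 10.0)  # mm^6 m^-3
        return (z_linear / a) ** (1.0 / b)
    ```

    In a complex-orography setting, the point of the VPR approach in the abstract is to apply a correction to the rain field produced by such an estimator, rather than extrapolating every radar variable to the surface separately.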

  9. Investigation on the integral output power model of a large-scale wind farm

    Institute of Scientific and Technical Information of China (English)

    BAO Nengsheng; MA Xiuqian; NI Weidou

    2007-01-01

    The integral output power model of a large-scale wind farm is needed when estimating the wind farm's output over a future period of time. The actual wind speed power model and calculation method of a wind farm made up of many wind turbine units are discussed. After analyzing the incoming wind flow characteristics and their energy distributions, and after considering the multi-effects among the wind turbine units and certain assumptions, the incoming wind flow model of multi-units is built. The calculation algorithms and steps of the integral output power model of a large-scale wind farm are provided. Finally, an actual power output of the wind farm is calculated and analyzed using practical wind speed measurement data. The characteristics of a large-scale wind farm are also discussed.

  10. Phylogenetic distribution of large-scale genome patchiness

    Directory of Open Access Journals (Sweden)

    Hackenberg Michael

    2008-04-01

    Background: The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness) has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to directly analyze patchiness on the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results: The local variations in the scaling exponent of the Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments of selected genome regions, computationally identified, isochore-like regions were identified as the biological source for the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies from eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris), birds (Gallus gallus), fishes (Danio rerio), invertebrates (Drosophila melanogaster and Caenorhabditis elegans), plants (Arabidopsis thaliana) and yeasts (Saccharomyces cerevisiae). We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion: Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
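    The Detrended Fluctuation Analysis scaling exponent used above can be estimated as follows. This is a minimal sketch (first-order detrending, illustrative window sizes) mapping a numeric series to the slope of log F(n) versus log n:

    ```python
    import numpy as np

    def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
        """DFA-1 scaling exponent: slope of log F(n) vs log n."""
        y = np.cumsum(np.asarray(x, float) - np.mean(x))  # integrated profile
        fluctuations = []
        for n in scales:
            n_seg = len(y) // n
            seg_var = []
            t = np.arange(n)
            for s in range(n_seg):
                seg = y[s * n:(s + 1) * n]
                trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
                seg_var.append(np.mean((seg - trend) ** 2))
            fluctuations.append(np.sqrt(np.mean(seg_var)))
        slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
        return slope
    ```

    Uncorrelated noise yields an exponent near 0.5, while long-range correlated (patchy) sequences yield larger values; the paper tracks local variations of this exponent along chromosomes.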

  11. Processing of next generation weather radar-multisensor precipitation estimates and quantitative precipitation forecast data for the DuPage County streamflow simulation system

    Science.gov (United States)

    Bera, Maitreyee; Ortel, Terry W.

    2018-01-12

    The U.S. Geological Survey, in cooperation with DuPage County Stormwater Management Department, is testing a near real-time streamflow simulation system that assists in the management and operation of reservoirs and other flood-control structures in the Salt Creek and West Branch DuPage River drainage basins in DuPage County, Illinois. As part of this effort, the U.S. Geological Survey maintains a database of hourly meteorological and hydrologic data for use in this near real-time streamflow simulation system. Among these data are next generation weather radar-multisensor precipitation estimates and quantitative precipitation forecast data, which are retrieved from the North Central River Forecasting Center of the National Weather Service. The DuPage County streamflow simulation system uses these quantitative precipitation forecast data to create streamflow predictions for the two simulated drainage basins. This report discusses in detail how these data are processed for inclusion in the Watershed Data Management files used in the streamflow simulation system for the Salt Creek and West Branch DuPage River drainage basins.

  12. Sample preparation for large-scale bioanalytical studies based on liquid chromatographic techniques.

    Science.gov (United States)

    Medvedovici, Andrei; Bacalum, Elena; David, Victor

    2018-01-01

    Quality of the analytical data obtained for large-scale and long term bioanalytical studies based on liquid chromatography depends on a number of experimental factors including the choice of sample preparation method. This review discusses this tedious part of bioanalytical studies, applied to large-scale samples and using liquid chromatography coupled with different detector types as core analytical technique. The main sample preparation methods included in this paper are protein precipitation, liquid-liquid extraction, solid-phase extraction, derivatization and their versions. They are discussed by analytical performances, fields of applications, advantages and disadvantages. The cited literature covers mainly the analytical achievements during the last decade, although several previous papers became more valuable in time and they are included in this review. Copyright © 2017 John Wiley & Sons, Ltd.

  13. Introducing uncertainty of radar-rainfall estimates to the verification of mesoscale model precipitation forecasts

    Directory of Open Access Journals (Sweden)

    M. P. Mittermaier

    2008-05-01

    A simple measure of the uncertainty associated with using radar-derived rainfall estimates as "truth" has been introduced to the Numerical Weather Prediction (NWP) verification process to assess the effect on forecast skill and errors. Deterministic precipitation forecasts from the mesoscale version of the UK Met Office Unified Model for a two-day high-impact event and for a month were verified at the daily and six-hourly time scales using a spatially based intensity-scale method and various traditional skill scores such as the Equitable Threat Score (ETS) and log-odds ratio. Radar-rainfall accumulations from the UK Nimrod radar composite were used.

    The results show that the inclusion of uncertainty has some effect, shifting the forecast errors and skill. The study also allowed for the comparison of results from the intensity-scale method and traditional skill scores. It showed that the two methods complement each other, one detailing the scale and rainfall accumulation thresholds where the errors occur, the other showing how skillful the forecast is. It was also found that for the six-hourly forecasts the error distributions remain similar with forecast lead time but skill decreases. This highlights the difference between forecast error and forecast skill, and that they are not necessarily the same.
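    Both traditional scores named above derive from the 2x2 forecast contingency table. A minimal sketch (the counts in the test are illustrative, not from the paper):

    ```python
    import math

    def verification_scores(hits, false_alarms, misses, correct_negatives):
        """Equitable Threat Score and log-odds ratio from a 2x2 table."""
        n = hits + false_alarms + misses + correct_negatives
        # hits expected from a random forecast with the same marginals
        hits_random = (hits + misses) * (hits + false_alarms) / n
        ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
        log_odds = math.log((hits * correct_negatives) / (false_alarms * misses))
        return ets, log_odds
    ```

    ETS is 1 for a perfect forecast and 0 for a random one; the log-odds ratio grows with the association between forecast and observed exceedances.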

  14. Quantitative Precipitation Estimation over Ocean Using Bayesian Approach from Microwave Observations during the Typhoon Season

    Directory of Open Access Journals (Sweden)

    Jen-Chi Hu

    2009-01-01

    We have developed a new Bayesian approach to retrieve oceanic rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), with an emphasis on typhoon cases in the West Pacific. Retrieved rain rates are validated with measurements of rain gauges located on Japanese islands. To demonstrate improvement, retrievals are also compared with those from the TRMM Precipitation Radar (PR), the Goddard Profiling Algorithm (GPROF), and a multi-channel linear regression statistical method (MLRS). We have found that, qualitatively, all methods retrieved similar horizontal distributions in terms of locations of eyes and rain bands of typhoons. Quantitatively, our new Bayesian retrievals have the best linearity and the smallest root mean square (RMS) error against rain gauge data for 16 typhoon overpasses in 2004. The correlation coefficient and RMS of our retrievals are 0.95 and ~2 mm hr-1, respectively. In particular, at heavy rain rates, our Bayesian retrievals outperform those retrieved from GPROF and MLRS. Overall, the new Bayesian approach accurately retrieves surface rain rate for typhoon cases. Accurate rain rate estimates from this method can be assimilated in models to improve forecasts and prevent potential damage in Taiwan during typhoon seasons.
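    A generic Bayesian retrieval of this kind weights a database of candidate profiles by the Gaussian likelihood of the observed brightness temperatures and returns the posterior-mean rain rate. The sketch below shows that generic scheme with illustrative names and channel noise, not the paper's exact algorithm:

    ```python
    import numpy as np

    def bayesian_rain_retrieval(tb_obs, tb_db, rain_db, channel_sigma):
        """Posterior-mean rain rate from a (Tb, rain) retrieval database.

        tb_obs        : observed brightness temperatures, shape (n_chan,)
        tb_db         : database brightness temperatures, shape (n_db, n_chan)
        rain_db       : rain rate of each database entry, shape (n_db,)
        channel_sigma : assumed observation noise per channel
        """
        d2 = (((tb_obs - tb_db) / channel_sigma) ** 2).sum(axis=1)
        weights = np.exp(-0.5 * (d2 - d2.min()))  # shift for numerical stability
        return float((weights * rain_db).sum() / weights.sum())
    ```

    Entries whose simulated brightness temperatures match the observation dominate the weighted average, which is what lets the method stay accurate at heavy rain rates where regression approaches saturate.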

  15. On-line estimation of the dissolved zinc concentration during ZnS precipitation in a CSTR

    NARCIS (Netherlands)

    Grootscholten, T.I.M.; Keesman, K.J.; Lens, P.N.L.

    2007-01-01

    In this paper a method is presented to estimate the reaction term of zinc sulphide precipitation and the zinc concentration in a CSTR, using the read-out signal of a sulphide-selective electrode. The reaction between zinc and sulphide is described by a non-linear model and therefore

  16. Where Does the Irrigation Water Go? An Estimate of the Contribution of Irrigation to Precipitation Using MERRA

    Science.gov (United States)

    Wei, Jiangfeng; Dirmeyer, Paul A.; Wisser, Dominik; Bosilovich, Michael G.; Mocko, David M.

    2013-01-01

    Irrigation is an important human activity that may impact local and regional climate, but current climate model simulations and data assimilation systems generally do not explicitly include it. The European Centre for Medium-Range Weather Forecasts (ECMWF) Interim Re-Analysis (ERA-Interim) shows more irrigation signal in surface evapotranspiration (ET) than the Modern-Era Retrospective Analysis for Research and Applications (MERRA) because ERA-Interim adjusts soil moisture according to the observed surface temperature and humidity while MERRA has no explicit consideration of irrigation at the surface. But, when compared with the results from a hydrological model with detailed considerations of agriculture, the ET from both reanalyses show large deficiencies in capturing the impact of irrigation. Here, a back-trajectory method is used to estimate the contribution of irrigation to precipitation over local and surrounding regions, using MERRA with observation-based corrections and added irrigation-caused ET increase from the hydrological model. Results show substantial contributions of irrigation to precipitation over heavily irrigated regions in Asia, but the precipitation increase is much less than the ET increase over most areas, indicating that irrigation could lead to water deficits over these regions. For the same increase in ET, precipitation increases are larger over wetter areas where convection is more easily triggered, but the percentage increase in precipitation is similar for different areas. There are substantial regional differences in the patterns of irrigation impact, but, for all the studied regions, the highest percentage contribution to precipitation is over local land.

  17. Comparing the impact of time displaced and biased precipitation estimates for online updated urban runoff models

    DEFF Research Database (Denmark)

    Borup, Morten; Mikkelsen, Peter Steen; Borup, Morten

    2013-01-01

    When an online runoff model is updated from system measurements, the requirements of the precipitation input change. Using rain gauge data as precipitation input there will be a displacement between the time when the rain hits the gauge and the time where the rain hits the actual catchment, due...

  18. Modelling and on-line estimation of zinc sulphide precipitation in

    NARCIS (Netherlands)

    Grootscholten, T.I.M.; Keesman, K.J.; Lens, P.N.L.

    2008-01-01

    In this paper the ZnS precipitation in a continuously stirred tank reactor (CSTR) is modelled using mass balances. The dynamic analysis of the model reveals that the ZnS precipitation exhibits two-time-scale behaviour with inherent numerical stability problems, which therefore needs special

  19. The Effectiveness of Using Limited Gauge Measurements for Bias Adjustment of Satellite-Based Precipitation Estimation over Saudi Arabia

    Science.gov (United States)

    Alharbi, Raied; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan

    2018-01-01

    Precipitation is a key input variable for hydrological and climate studies. Rain gauges are capable of providing reliable precipitation measurements at the point scale. However, the uncertainty of rain measurements increases when the rain gauge network is sparse. Satellite-based precipitation estimates appear to be an alternative source of precipitation measurements, but they are influenced by systematic bias. In this study, a method for removing the bias from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) over a region where the rain gauge network is sparse is investigated. The method consists of monthly empirical quantile mapping, climate classification, and inverse-distance weighting. Daily PERSIANN-CCS is selected to test the capability of the method for removing the bias over Saudi Arabia during the period of 2010 to 2016. The first six years (2010-2015) are used for calibration and 2016 is used for validation. The results show that the yearly correlation coefficient was enhanced by 12% and the yearly mean bias was reduced by 93% during the validation year. The root mean square error was reduced by 73% during the validation year. The correlation coefficient, the mean bias, and the root mean square error show that the proposed method removes the bias in PERSIANN-CCS effectively and that it can be applied to other regions where the rain gauge network is sparse.
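    The empirical quantile-mapping step of such a bias-correction scheme can be sketched as follows; the function name and the interpolation choice are illustrative assumptions, and the climate-classification and spatial-weighting steps of the full method are omitted:

    ```python
    import numpy as np

    def quantile_map(satellite_values, satellite_climatology, gauge_climatology):
        """Map satellite values onto the gauge distribution's quantiles.

        Each value's non-exceedance probability is looked up in the empirical
        CDF of the satellite climatology, then inverted through the gauge
        climatology's empirical quantile function.
        """
        sat_sorted = np.sort(satellite_climatology)
        gauge_sorted = np.sort(gauge_climatology)
        p = np.searchsorted(sat_sorted, satellite_values, side="right")
        p = np.clip(p / len(sat_sorted), 0.0, 1.0)
        return np.quantile(gauge_sorted, p)
    ```

    Applied month by month, this removes the systematic distributional bias while preserving the satellite product's spatial and temporal patterns.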

  20. Biofuel Development and Large-Scale Land Deals in Sub-Saharan Africa

    OpenAIRE

    Giorgia Giovannetti; Elisa Ticci

    2013-01-01

    Africa's biofuel potential over the last ten years has increasingly attracted foreign investors’ attention. We estimate the determinants of foreign investors land demand for biofuel production in SSA, using Poisson specifications of the gravity model. Our estimates suggest that land availability, abundance of water resources and weak land governance are significant determinants of large-scale land acquisitions for biofuel production. This in turn suggests that this type of investment is mainl...

  1. Statistical evaluation of the performance of gridded monthly precipitation products from reanalysis data, satellite estimates, and merged analyses over China

    Science.gov (United States)

    Deng, Xueliang; Nie, Suping; Deng, Weitao; Cao, Weihua

    2018-04-01

    In this study, we compared the following four different gridded monthly precipitation products: the National Centers for Environmental Prediction version 2 (NCEP-2) reanalysis data, the satellite-based Climate Prediction Center Morphing technique (CMORPH) data, the merged satellite-gauge Global Precipitation Climatology Project (GPCP) data, and the merged satellite-gauge-model data from the Beijing Climate Center Merged Estimation of Precipitation (BMEP). We evaluated the performances of these products using monthly precipitation observations spanning the period of January 2003 to December 2013 from a dense, national, rain gauge network in China. Our assessment involved several statistical techniques, including spatial pattern, temporal variation, bias, root-mean-square error (RMSE), and correlation coefficient (CC) analysis. The results show that NCEP-2, GPCP, and BMEP generally overestimate monthly precipitation at the national scale and CMORPH underestimates it. However, all of the datasets successfully characterized the northwest to southeast increase in the monthly precipitation over China. Because they include precipitation gauge information from the Global Telecommunication System (GTS) network, GPCP and BMEP have much smaller biases, lower RMSEs, and higher CCs than NCEP-2 and CMORPH. When the seasonal and regional variations are considered, NCEP-2 has a larger error over southern China during the summer. CMORPH poorly reproduces the magnitude of the precipitation over southeastern China and the temporal correlation over western and northwestern China during all seasons. BMEP has a lower RMSE and higher CC than GPCP over eastern and southern China, where the station network is dense. In contrast, BMEP has a lower CC than GPCP over western and northwestern China, where the gauge network is relatively sparse.
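
    The verification statistics used above (bias, RMSE, and correlation coefficient) are standard; a self-contained sketch for one grid cell's paired monthly series (toy numbers, not the study's data):

```python
import math

def verify(obs, est):
    """Bias, RMSE and Pearson correlation between observed and
    estimated monthly precipitation (paired lists of equal length)."""
    n = len(obs)
    bias = sum(e - o for o, e in zip(obs, est)) / n
    rmse = math.sqrt(sum((e - o) ** 2 for o, e in zip(obs, est)) / n)
    mo, me = sum(obs) / n, sum(est) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(obs, est))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    se = math.sqrt(sum((e - me) ** 2 for e in est))
    return bias, rmse, cov / (so * se)

# toy series: a product that slightly overestimates observed rainfall
obs = [10.0, 20.0, 30.0, 40.0]
est = [12.0, 21.0, 33.0, 42.0]
print(verify(obs, est))  # bias ~2.0, RMSE ~2.12, CC ~0.998
```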

  2. PKI security in large-scale healthcare networks.

    Science.gov (United States)

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years many PKI (Public Key Infrastructure) schemes have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKI infrastructures face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI infrastructure addresses the trust issues that arise in a large-scale healthcare network comprising multi-domain PKI infrastructures.

  3. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    , the impact of large-scale agriculture and outgrower schemes on productivity, household welfare and wages in developing countries is highly contentious. Chapter 1 of this thesis provides an introduction to the study, while also reviewing the key debate in the contemporary land ‘grabbing’ and historical large...... sugarcane outgrower scheme on household income and asset stocks. Chapter 5 examines the wages and working conditions in ‘formal’ large-scale and ‘informal’ small-scale irrigated agriculture. The results in Chapter 2 show that moisture stress, the use of untested planting materials, and conflict over land...... commands a higher wage than ‘formal’ large-scale agriculture, while rather different wage determination mechanisms exist in the two sectors. Human capital characteristics (education and experience) partly explain the differences in wages within the formal sector, but play no significant role...

  4. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling and blasting method is used to prepare hard rocks for excavation. As mining operations approach settlements, the negative effect of large-scale blasts increases. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-digital converter recording to a laptop. The registration results of surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The seismic effect was evaluated against the permissible value of vibration velocity. For cases exceeding permissible values, recommendations were developed to reduce the level of seismic impact.

  5. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large scale integration of wind power. Electricity generated from the intermittent wind in large proportion may impact the control of power system balance and thus deviations in the power system...... frequency in small or islanded power systems or tie line power flows in interconnected power systems. Therefore, the large scale integration of wind power into the power system strongly concerns the secure and stable grid operation. To ensure the stable power system operation, the evolving power system has...... to be analysed with improved analytical tools and techniques. This paper proposes techniques for the active power balance control in future power systems with the large scale wind power integration, where power balancing model provides the hour-ahead dispatch plan with reduced planning horizon and the real time...

  6. The role of large scale motions on passive scalar transport

    Science.gov (United States)

    Dharmarathne, Suranga; Araya, Guillermo; Tutkun, Murat; Leonardi, Stefano; Castillo, Luciano

    2014-11-01

    We study direct numerical simulation (DNS) of turbulent channel flow at Reτ = 394 to investigate the effect of large scale motions on the fluctuating temperature field, which forms a passive scalar field. A statistical description of the large scale features of the turbulent channel flow is obtained using two-point correlations of velocity components. Two-point correlations of the fluctuating temperature field are also examined in order to identify possible similarities between the velocity and temperature fields. The two-point cross-correlations between the velocity and temperature fluctuations are further analyzed to establish connections between these two fields. In addition, we use proper orthogonal decomposition (POD) to extract the most dominant modes of the fields and discuss the coupling of large scale features of turbulence and the temperature field.

  7. Large-Scale Structure and Hyperuniformity of Amorphous Ices

    Science.gov (United States)

    Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto

    2017-09-01

    We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.

  8. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  9. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies and the role of large scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space as obtained by machine learning techniques, particularly supervised kernel principal component analysis. In such reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activity. We investigate for individual sites the exceedance probability in which large scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).

  10. Report of the LASCAR forum: Large scale reprocessing plant safeguards

    International Nuclear Information System (INIS)

    1992-01-01

    This report has been prepared to provide information on the studies which were carried out from 1988 to 1992 under the auspices of the multinational forum known as Large Scale Reprocessing Plant Safeguards (LASCAR) on safeguards for four large scale reprocessing plants operated or planned to be operated in the 1990s. The report summarizes all of the essential results of these studies. The participants in LASCAR were from France, Germany, Japan, the United Kingdom, the United States of America, the Commission of the European Communities - Euratom, and the International Atomic Energy Agency

  11. Large-scale structure observables in general relativity

    International Nuclear Information System (INIS)

    Jeong, Donghui; Schmidt, Fabian

    2015-01-01

    We review recent studies that rigorously define several key observables of the large-scale structure of the Universe in a general relativistic context. Specifically, we consider (i) redshift perturbation of cosmic clock events; (ii) distortion of cosmic rulers, including weak lensing shear and magnification; and (iii) observed number density of tracers of the large-scale structure. We provide covariant and gauge-invariant expressions of these observables. Our expressions are given for a linearly perturbed flat Friedmann–Robertson–Walker metric including scalar, vector, and tensor metric perturbations. While we restrict ourselves to linear order in perturbation theory, the approach can be straightforwardly generalized to higher order. (paper)

  12. Topology Optimization of Large Scale Stokes Flow Problems

    DEFF Research Database (Denmark)

    Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan

    2008-01-01

    This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1.125.000 elements in 2D and 128.000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.

  13. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available This paper investigates top flange fatigue damage in large-scale wind turbine generators. A finite element model of the top flange connection system is established with the finite element analysis software MSC.Marc/Mentat and its fatigue strain is analyzed. Load simulation of the flange fatigue working condition is carried out with the Bladed software, and the flange fatigue load spectrum is acquired with the rain-flow counting method. Finally, fatigue analysis of the top flange is performed with the fatigue analysis software MSC.Fatigue and the Palmgren-Miner linear cumulative damage theory. The results provide a new approach to flange fatigue analysis of large-scale wind turbine generators and possess practical engineering value.
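
    The final step, Palmgren-Miner linear damage accumulation over the rain-flow cycle counts, reduces to a simple sum. A minimal sketch with an invented power-law S-N curve (all numbers hypothetical, for illustration only):

```python
def miner_damage(cycles, sn_curve):
    """Palmgren-Miner rule: damage = sum of n_i / N_i, where n_i is
    the counted number of cycles at stress range S_i (from rain-flow
    counting) and N_i = C / S_i**m is the allowable cycle count from
    a power-law S-N curve. Failure is predicted at damage >= 1."""
    C, m = sn_curve
    return sum(n / (C / s ** m) for s, n in cycles)

# hypothetical rain-flow histogram: (stress range [MPa], cycle count)
histogram = [(100.0, 1e4), (50.0, 1e6)]
# hypothetical S-N curve N = C / S^m with C = 1e15, m = 4
damage = miner_damage(histogram, (1e15, 4.0))
print(damage)  # ~0.00725, i.e. well below the failure threshold of 1
```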

  14. Generation of large-scale vortices in compressible helical turbulence

    International Nuclear Information System (INIS)

    Chkhetiani, O.G.; Gvaramadze, V.V.

    1989-01-01

    We consider the generation of large-scale vortices in a compressible self-gravitating turbulent medium. A closed equation describing the evolution of large-scale vortices in helical turbulence with finite correlation time is obtained. This equation has a form similar to the hydromagnetic dynamo equation, which allows us to call the vortex generation effect the vortex dynamo. It is possible that principally the same mechanism is responsible both for the amplification and maintenance of density waves and magnetic fields in gaseous disks of spiral galaxies. (author). 29 refs

  15. Large-scale CO2 storage — Is it feasible?

    Directory of Open Access Journals (Sweden)

    Johansen H.

    2013-06-01

    Full Text Available CCS is generally estimated to have to account for about 20% of the reduction of CO2 emissions to the atmosphere. This paper focuses on the technical aspects of CO2 storage, even if the CCS challenge is equally dependent upon finding viable international solutions to a wide range of economic, political and cultural issues. It has already been demonstrated that it is technically possible to store adequate amounts of CO2 in the subsurface (Sleipner, InSalah, Snøhvit). The large-scale storage challenge (several Gigatons of CO2 per year) is more an issue of minimizing cost without compromising safety, and of making international regulations. The storage challenge may be split into 4 main parts: 1) finding reservoirs with adequate storage capacity, 2) make sure that the sealing capacity above the reservoir is sufficient, 3) build the infrastructure for transport, drilling and injection, and 4) set up and perform the necessary monitoring activities. More than 150 years of worldwide experience from the production of oil and gas is an important source of competence for CO2 storage. The storage challenge is however different in three important aspects: 1) the storage activity results in pressure increase in the subsurface, 2) there is no production of fluids that give important feedback on reservoir performance, and 3) the monitoring requirement will have to extend for a much longer time into the future than what is needed during oil and gas production. An important property of CO2 is that its behaviour in the subsurface is significantly different from that of oil and gas. CO2 in contact with water is reactive and corrosive, and may impose great damage on both man-made and natural materials, if proper precautions are not executed. On the other hand, the long-term effect of most of these reactions is that a large amount of CO2 will become immobilized and permanently stored as solid carbonate minerals. The reduced opportunity for direct monitoring of fluid samples

  16. Large-scale CO2 storage — Is it feasible?

    Science.gov (United States)

    Johansen, H.

    2013-06-01

    CCS is generally estimated to have to account for about 20% of the reduction of CO2 emissions to the atmosphere. This paper focuses on the technical aspects of CO2 storage, even if the CCS challenge is equally dependent upon finding viable international solutions to a wide range of economic, political and cultural issues. It has already been demonstrated that it is technically possible to store adequate amounts of CO2 in the subsurface (Sleipner, InSalah, Snøhvit). The large-scale storage challenge (several Gigatons of CO2 per year) is more an issue of minimizing cost without compromising safety, and of making international regulations. The storage challenge may be split into 4 main parts: 1) finding reservoirs with adequate storage capacity, 2) make sure that the sealing capacity above the reservoir is sufficient, 3) build the infrastructure for transport, drilling and injection, and 4) set up and perform the necessary monitoring activities. More than 150 years of worldwide experience from the production of oil and gas is an important source of competence for CO2 storage. The storage challenge is however different in three important aspects: 1) the storage activity results in pressure increase in the subsurface, 2) there is no production of fluids that give important feedback on reservoir performance, and 3) the monitoring requirement will have to extend for a much longer time into the future than what is needed during oil and gas production. An important property of CO2 is that its behaviour in the subsurface is significantly different from that of oil and gas. CO2 in contact with water is reactive and corrosive, and may impose great damage on both man-made and natural materials, if proper precautions are not executed. On the other hand, the long-term effect of most of these reactions is that a large amount of CO2 will become immobilized and permanently stored as solid carbonate minerals. The reduced opportunity for direct monitoring of fluid samples close to the

  17. Spatial estimation of mean temperature and precipitation in areas of scarce meteorological information

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, J.D. [Universidad Autonoma Chapingo, Chapingo (Mexico)]. E-mail: dgomez@correo.chapingo.mx; Etchevers, J.D. [Instituto de Recursos Naturales, Colegio de Postgraduados, Montecillo, Edo. de Mexico (Mexico); Monterroso, A.I. [departamento de Suelos, Universidad Autonoma Chapingo, Chapingo (Mexico); Gay, G. [Centro de Ciencias de la Atmosfera, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico); Campo, J. [Instituto de Ecologia, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico); Martinez, M. [Instituto de Recursos Naturales, Montecillo, Edo. de Mexico (Mexico)

    2008-01-15

    In regions of complex relief and scarce meteorological information, it is difficult to apply numerical interpolation techniques and models to produce the reliable maps of climatic variables that are essential for the study of natural resources with the new tools of geographic information systems. This paper presents a method for estimating annual and monthly mean values of temperature and precipitation, taking elements from simple interpolation methods and complementing them with some characteristics of more sophisticated methods. To determine temperature, simple linear regression equations were generated associating temperature with the altitude of weather stations in the study region, which had previously been subdivided in accordance with humidity conditions; these equations were then applied to the area's digital elevation model to obtain temperatures. The estimation of precipitation was based on the graphic method, through analysis of the meteorological systems that affect the regions of the study area throughout the year and consideration of the influence of mountain ridges on the movement of prevailing winds. Weather stations with data in nearby regions were analyzed according to their position in the landscape, exposure to humid winds, and false color associated with vegetation types. Weather station sites were used to reference the amount of rainfall; interpolation was attained using analogies with false-color satellite images, to which a digital elevation model was incorporated to find similar conditions within the study area.
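
    The temperature step — fitting a linear temperature-altitude (lapse-rate) regression to station data and applying it to a digital elevation model — can be sketched as follows (a minimal illustration; the station values and grid elevations are invented):

```python
def fit_lapse(stations):
    """Least-squares line T = a + b*z through (altitude, mean T) pairs."""
    n = len(stations)
    zm = sum(z for z, _ in stations) / n
    tm = sum(t for _, t in stations) / n
    b = sum((z - zm) * (t - tm) for z, t in stations) / \
        sum((z - zm) ** 2 for z, _ in stations)
    a = tm - b * zm
    return a, b

# hypothetical stations: (altitude [m], mean annual temperature [deg C])
stations = [(0, 26.0), (1000, 19.5), (2000, 13.0)]
a, b = fit_lapse(stations)

# apply the regression to a toy "DEM" of grid-cell elevations
dem = [500, 1500]
grid_T = [a + b * z for z in dem]
print(b, grid_T)  # lapse rate ~ -6.5 degC/km; grid T ~ [22.75, 16.25]
```

    In the paper this fit is done per humidity-based sub-region before being applied to the DEM.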

  18. Estimating drizzle drop size and precipitation rate using two-colour lidar measurements

    Directory of Open Access Journals (Sweden)

    C. D. Westbrook

    2010-06-01

    Full Text Available A method to estimate the size and liquid water content of drizzle drops using lidar measurements at two wavelengths is described. The method exploits the differential absorption of infrared light by liquid water at 905 nm and 1.5 μm, which leads to a different backscatter cross section for water drops larger than ≈50 μm. The ratio of backscatter measured from drizzle samples below cloud base at these two wavelengths (the colour ratio) provides a measure of the median volume drop diameter D0. This is a strong effect: for D0 = 200 μm, a colour ratio of ≈6 dB is predicted. Once D0 is known, the measured backscatter at 905 nm can be used to calculate the liquid water content (LWC) and other moments of the drizzle drop distribution.

    The method is applied to observations of drizzle falling from stratocumulus and stratus clouds. High resolution (32 s, 36 m) profiles of D0, LWC and precipitation rate R are derived. The main sources of error in the technique are the need to assume a value for the dispersion parameter μ in the drop size spectrum (leading to at most a 35% error in R) and the influence of aerosol returns on the retrieval (≈10% error in R for the cases considered here). Radar reflectivities are also computed from the lidar data and compared to independent measurements from a colocated cloud radar, offering independent validation of the derived drop size distributions.
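
    The retrieval's first step — inverting the measured colour ratio to obtain D0 — relies on a scattering calculation not reproduced in the abstract. A hedged sketch, assuming a precomputed monotonic lookup table (only the ≈6 dB at D0 = 200 μm point comes from the abstract; the other table entries are invented) inverted by linear interpolation:

```python
import bisect

# hypothetical lookup: median volume diameter D0 [um] -> colour ratio [dB]
# (monotonic; only the 200 um ~ 6 dB anchor is from the abstract,
# the remaining entries are invented for illustration)
D0_TAB = [50, 100, 150, 200, 300]
CR_TAB = [0.5, 2.0, 4.0, 6.0, 9.0]

def d0_from_colour_ratio(cr_db):
    """Invert the (assumed) D0 -> colour-ratio table by linear
    interpolation; clamps outside the tabulated range."""
    if cr_db <= CR_TAB[0]:
        return D0_TAB[0]
    if cr_db >= CR_TAB[-1]:
        return D0_TAB[-1]
    i = bisect.bisect_right(CR_TAB, cr_db)
    frac = (cr_db - CR_TAB[i - 1]) / (CR_TAB[i] - CR_TAB[i - 1])
    return D0_TAB[i - 1] + frac * (D0_TAB[i] - D0_TAB[i - 1])

print(d0_from_colour_ratio(6.0))  # 200.0 by construction of the table
print(d0_from_colour_ratio(5.0))
```

    With D0 in hand, the 905 nm backscatter fixes the remaining moments of the assumed drop size distribution (LWC, R).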

  19. Large Scale Water Vapor Sources Relative to the October 2000 Piedmont Flood

    Science.gov (United States)

    Turato, Barbara; Reale, Oreste; Siccardi, Franco

    2003-01-01

    Very intense mesoscale or synoptic-scale rainfall events can occasionally be observed in the Mediterranean region without any deep cyclone developing over the areas affected by precipitation. In these perplexing cases the synoptic situation can superficially look similar to cases in which very little precipitation occurs. These situations could possibly baffle the operational weather forecasters. In this article, the major precipitation event that affected Piedmont (Italy) between 13 and 16 October 2000 is investigated. This is one of the cases in which no intense cyclone was observed within the Mediterranean region at any time, only a moderate system was present, and yet exceptional rainfall and flooding occurred. The emphasis of this study is on the moisture origin and transport. Moisture and energy balances are computed on different space- and time-scales, revealing that precipitation exceeds evaporation over an area inclusive of Piedmont and the northwestern Mediterranean region, on a time-scale encompassing the event and about two weeks preceding it. This is suggestive of an important moisture contribution originating from outside the region. A synoptic and dynamic analysis is then performed to outline the potential mechanisms that could have contributed to the large-scale moisture transport. The central part of the work uses a quasi-isentropic water-vapor back trajectory technique. The moisture sources obtained by this technique are compared with the results of the balances and with the synoptic situation, to unveil possible dynamic mechanisms and physical processes involved. It is found that moisture sources on a variety of atmospheric scales contribute to this event. First, an important contribution is caused by the extratropical remnants of former tropical storm Leslie. The large-scale environment related to this system allows a significant amount of moisture to be carried towards Europe. This happens on a time-scale of about 5-15 days preceding the

  20. Large-scale Lurgi plant would be uneconomic: study group

    Energy Technology Data Exchange (ETDEWEB)

    1964-03-21

    The Gas Council and National Coal Board agreed that the building of a large-scale Lurgi plant on the basis of the study is not at present acceptable on economic grounds. The committee considered that new processes based on naphtha offered more economic sources of base- and peak-load production. Tables listing data provided in contractors' design studies and a summary of contractors' process designs are included.

  1. Origin of large-scale cell structure in the universe

    International Nuclear Information System (INIS)

    Zel'dovich, Y.B.

    1982-01-01

    A qualitative explanation is offered for the characteristic global structure of the universe, wherein ''black'' regions devoid of galaxies are surrounded on all sides by closed, comparatively thin, ''bright'' layers populated by galaxies. The interpretation rests on some very general arguments regarding the growth of large-scale perturbations in a cold gas

  2. Large-Scale Systems Control Design via LMI Optimization

    Czech Academy of Sciences Publication Activity Database

    Rehák, Branislav

    2015-01-01

    Roč. 44, č. 3 (2015), s. 247-253 ISSN 1392-124X Institutional support: RVO:67985556 Keywords : Combinatorial linear matrix inequalities * large-scale system * decentralized control Subject RIV: BC - Control Systems Theory Impact factor: 0.633, year: 2015

  3. Identification of low order models for large scale processes

    NARCIS (Netherlands)

    Wattamwar, S.K.

    2010-01-01

    Many industrial chemical processes are complex, multi-phase and large scale in nature. These processes are characterized by various nonlinear physiochemical effects and fluid flows. Such processes often show coexistence of fast and slow dynamics during their time evolutions. The increasing demand

  4. Worldwide large-scale fluctuations of sardine and anchovy ...

    African Journals Online (AJOL)

    Worldwide large-scale fluctuations of sardine and anchovy populations. African Journal of Marine Science. http://dx.doi.org/10.2989/AJMS.2008.30.1.13.463.

  5. Worldwide large-scale fluctuations of sardine and anchovy ...

    African Journals Online (AJOL)

    Worldwide large-scale fluctuations of sardine and anchovy populations. African Journal of Marine Science. http://dx.doi.org/10.2989/AJMS.2008.30.1.13.463.

  6. Large-scale coastal impact induced by a catastrophic storm

    DEFF Research Database (Denmark)

    Fruergaard, Mikkel; Andersen, Thorbjørn Joest; Johannessen, Peter N

    breaching. Our results demonstrate that violent, millennial-scale storms can trigger significant large-scale and long-term changes on barrier coasts, and that coastal changes assumed to take place over centuries or even millennia may occur in association with a single extreme storm event....

  7. Success Factors of Large Scale ERP Implementation in Thailand

    OpenAIRE

    Rotchanakitumnuai, Siriluck

    2010-01-01

    The objective of the study is to examine the determinants of success in large-scale ERP implementation. The results indicate that large-scale ERP implementation success consists of eight factors: project management competence, knowledge sharing, ERP system quality, understanding, user involvement, business process re-engineering, top management support, and organization readiness.

  8. Planck intermediate results XLII. Large-scale Galactic magnetic fields

    DEFF Research Database (Denmark)

    Adam, R.; Ade, P. A. R.; Alves, M. I. R.

    2016-01-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured ...

  9. Large Scale Simulations of the Euler Equations on GPU Clusters

    KAUST Repository

    Liebmann, Manfred; Douglas, Craig C.; Haase, Gundolf; Horvá th, Zoltá n

    2010-01-01

    The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one

  10. Breakdown of large-scale circulation in turbulent rotating convection

    NARCIS (Netherlands)

    Kunnen, R.P.J.; Clercx, H.J.H.; Geurts, Bernardus J.

    2008-01-01

Turbulent rotating convection in a cylinder is investigated both numerically and experimentally at Rayleigh number Ra = $10^9$ and Prandtl number $\sigma$ = 6.4. In this Letter we discuss two topics: the breakdown under rotation of the domain-filling large-scale circulation (LSC) typical for

  11. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  12. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-01-01

    structure in which each pixel contains a list of pathlines segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination

  13. Temporal Variation of Large Scale Flows in the Solar Interior ...

    Indian Academy of Sciences (India)

    tribpo

Figure 2. Zonal and meridional components of the time-dependent residual velocity at a few selected depths, as marked above each panel, are plotted as contours of constant velocity in the longitude-latitude plane. The left panels show the zonal component, ...

  14. Facile Large-Scale Synthesis of 5- and 6-Carboxyfluoresceins

    DEFF Research Database (Denmark)

    Hammershøj, Peter; Ek, Pramod Kumar; Harris, Pernille

    2015-01-01

    A series of fluorescein dyes have been prepared from a common precursor through a very simple synthetic procedure, giving access to important precursors for fluorescent probes. The method has proven an efficient access to regioisomerically pure 5- and 6-carboxyfluoresceins on a large scale, in good...

  15. The Large-Scale Structure of Scientific Method

    Science.gov (United States)

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  16. Newton Methods for Large Scale Problems in Machine Learning

    Science.gov (United States)

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  17. Large-Scale Machine Learning for Classification and Search

    Science.gov (United States)

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  18. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  19. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting to summarize the collaboration activities for FY2003 on the Large Scale Computer Simulation Research was held January 15-16, 2004 at Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  20. Chirping for large-scale maritime archaeological survey

    DEFF Research Database (Denmark)

    Grøn, Ole; Boldreel, Lars Ole

    2014-01-01

    Archaeological wrecks exposed on the sea floor are mapped using side-scan and multibeam techniques, whereas the detection of submerged archaeological sites, such as Stone Age settlements, and wrecks, partially or wholly embedded in sea-floor sediments, requires the application of high-resolution ...... the present state of this technology, it appears well suited to large-scale maritime archaeological mapping....

  1. Large Scale Survey Data in Career Development Research

    Science.gov (United States)

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  2. Large Scale Anomalies of the Cosmic Microwave Background with Planck

    DEFF Research Database (Denmark)

    Frejsel, Anne Mette

    This thesis focuses on the large scale anomalies of the Cosmic Microwave Background (CMB) and their possible origins. The investigations consist of two main parts. The first part is on statistical tests of the CMB, and the consistency of both maps and power spectrum. We find that the Planck data...

  3. Fractals and the Large-Scale Structure in the Universe

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 7; Issue 4. Fractals and the Large-Scale Structure in the Universe - Is the Cosmological Principle Valid? A K Mittal T R Seshadri. General Article Volume 7 Issue 4 April 2002 pp 39-47 ...

  4. LARGE-SCALE COMMERCIAL INVESTMENTS IN LAND: SEEKING ...

    African Journals Online (AJOL)

extent of large-scale investment in land or to assess its impact on the people in recipient countries. ... favorable lease terms, apparently based on a belief that this is necessary to ... Harm to the rights of local occupiers of land can result from a dearth ... applies to a self-identified group based on the group's traditions.

  5. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    Science.gov (United States)

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  6. Reconsidering Replication: New Perspectives on Large-Scale School Improvement

    Science.gov (United States)

    Peurach, Donald J.; Glazer, Joshua L.

    2012-01-01

    The purpose of this analysis is to reconsider organizational replication as a strategy for large-scale school improvement: a strategy that features a "hub" organization collaborating with "outlet" schools to enact school-wide designs for improvement. To do so, we synthesize a leading line of research on commercial replication to construct a…

  7. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed; Elsawy, Hesham; Gharbieh, Mohammad; Alouini, Mohamed-Slim; Adinoyi, Abdulkareem; Alshaalan, Furaih

    2017-01-01

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end

  8. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

Full Text Available. Background: Large-scale molecular evolutionary analyses of protein-coding sequences require a number of preparatory, inter-related steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can represent a significant challenge, particularly when working with entire proteomes (all protein-coding sequences in a genome) from a large number of species. Methods: We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results: We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion: Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  9. Technologies and challenges in large-scale phosphoproteomics

    DEFF Research Database (Denmark)

    Engholm-Keller, Kasper; Larsen, Martin Røssel

    2013-01-01

    become the main technique for discovery and characterization of phosphoproteins in a nonhypothesis driven fashion. In this review, we describe methods for state-of-the-art MS-based analysis of protein phosphorylation as well as the strategies employed in large-scale phosphoproteomic experiments...... with focus on the various challenges and limitations this field currently faces....

  10. Solving Large Scale Crew Scheduling Problems in Practice

    NARCIS (Netherlands)

    E.J.W. Abbink (Erwin); L. Albino; T.A.B. Dollevoet (Twan); D. Huisman (Dennis); J. Roussado; R.L. Saldanha

    2010-01-01

    textabstractThis paper deals with large-scale crew scheduling problems arising at the Dutch railway operator, Netherlands Railways (NS). NS operates about 30,000 trains a week. All these trains need a driver and a certain number of guards. Some labor rules restrict the duties of a certain crew base

  11. The large scale microwave background anisotropy in decaying particle cosmology

    International Nuclear Information System (INIS)

    Panek, M.

    1987-06-01

We investigate the large-scale anisotropy of the microwave background radiation in cosmological models with decaying particles. The observed value of the quadrupole moment combined with other constraints gives an upper limit on the redshift of the decay z_d < 3-5. 12 refs., 2 figs

  12. Dual Decomposition for Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Vandenberghe, Lieven

    2013-01-01

Dual decomposition is applied to power balancing of flexible thermal storage units. The centralized large-scale problem is decomposed into smaller subproblems and solved locally by each unit in the Smart Grid. Convergence is achieved by coordinating the units' consumption through a negotiation...
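The price-coordination idea described in this abstract can be illustrated with a generic dual-decomposition sketch: a price (dual variable) per time step is negotiated until the units' aggregate consumption matches the balancing target. The quadratic local cost, identical units, and all names and parameters below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def dual_decomposition_balance(targets, n_units, capacity, alpha=0.05, iters=500):
    """Balance aggregate consumption of identical flexible units against a
    per-time-step target via dual (price) coordination -- a generic sketch."""
    targets = np.asarray(targets, dtype=float)
    lam = np.zeros_like(targets)      # prices (dual variables), one per time step
    total = np.zeros_like(targets)
    for _ in range(iters):
        # Local subproblem per unit: min_u u^2 - lam*u  s.t. 0 <= u <= capacity,
        # whose closed-form solution is a clipped linear response.
        u = np.clip(lam / 2.0, 0.0, capacity)
        total = n_units * u           # aggregate over identical units
        # Master (price) update: subgradient step on the balance residual.
        lam = lam + alpha * (targets - total)
    return total, lam
```

With a quadratic local cost the price iteration is a contraction, so the aggregate converges to the target whenever it is feasible under the capacity limits.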

  13. Evaluation of Large-scale Public Sector Reforms

    DEFF Research Database (Denmark)

    Breidahl, Karen Nielsen; Gjelstrup, Gunnar; Hansen, Hanne Foss

    2017-01-01

    and more delimited policy areas take place. In our analysis we apply four governance perspectives (rational-instrumental, rational-interest based, institutional-cultural and a chaos perspective) in a comparative analysis of the evaluations of two large-scale public sector reforms in Denmark and Norway. We...

  14. Assessment of climate change impacts on rainfall using large scale ...

    Indian Academy of Sciences (India)

    Many of the applied techniques in water resources management can be directly or indirectly influenced by ... is based on large scale climate signals data around the world. In order ... predictand relationships are often very complex. .... constraints to solve the optimization problem. ..... social, and environmental sustainability.

  15. Factors Influencing Uptake of a Large Scale Curriculum Innovation.

    Science.gov (United States)

    Adey, Philip S.

    Educational research has all too often failed to be implemented on a large-scale basis. This paper describes the multiplier effect of a professional development program for teachers and for trainers in the United Kingdom, and how that program was developed, monitored, and evaluated. Cognitive Acceleration through Science Education (CASE) is a…

  16. ability in Large Scale Land Acquisitions in Kenya

    International Development Research Centre (IDRC) Digital Library (Canada)

    Corey Piccioni

Kenya's national planning strategy, Vision 2030. Agriculture, natural resource exploitation, and infrastructure ... acquisitions due to high levels of poverty and unclear or insecure land tenure rights in Kenya. Inadequate social ... lease to a private company over the expansive Yala Swamp to undertake large-scale irrigation farming.

  17. New Visions for Large Scale Networks: Research and Applications

    Data.gov (United States)

Networking and Information Technology Research and Development, Executive Office of the President — This paper documents the findings of the March 12-14, 2001 Workshop on New Visions for Large-Scale Networks: Research and Applications. The workshop's objectives were...

  18. Large-scale silviculture experiments of western Oregon and Washington.

    Science.gov (United States)

    Nathan J. Poage; Paul D. Anderson

    2007-01-01

    We review 12 large-scale silviculture experiments (LSSEs) in western Washington and Oregon with which the Pacific Northwest Research Station of the USDA Forest Service is substantially involved. We compiled and arrayed information about the LSSEs as a series of matrices in a relational database, which is included on the compact disc published with this report and...

  19. Participatory Design and the Challenges of Large-Scale Systems

    DEFF Research Database (Denmark)

    Simonsen, Jesper; Hertzum, Morten

    2008-01-01

    With its 10th biannual anniversary conference, Participatory Design (PD) is leaving its teens and must now be considered ready to join the adult world. In this article we encourage the PD community to think big: PD should engage in large-scale information-systems development and opt for a PD...

  20. Large-scale assessment of soil erosion in Africa: satellites help to jointly account for dynamic rainfall and vegetation cover

    Science.gov (United States)

    Vrieling, Anton; Hoedjes, Joost C. B.; van der Velde, Marijn

    2015-04-01

    Efforts to map and monitor soil erosion need to account for the erratic nature of the soil erosion process. Soil erosion by water occurs on sloped terrain when erosive rainfall and consequent surface runoff impact soils that are not well-protected by vegetation or other soil protective measures. Both rainfall erosivity and vegetation cover are highly variable through space and time. Due to data paucity and the relative ease of spatially overlaying geographical data layers into existing models like USLE (Universal Soil Loss Equation), many studies and mapping efforts merely use average annual values for erosivity and vegetation cover as input. We first show that rainfall erosivity can be estimated from satellite precipitation data. We obtained average annual erosivity estimates from 15 yr of 3-hourly TRMM Multi-satellite Precipitation Analysis (TMPA) data (1998-2012) using intensity-erosivity relationships. Our estimates showed a positive correlation (r = 0.84) with long-term annual erosivity values of 37 stations obtained from literature. Using these TMPA erosivity retrievals, we demonstrate the large interannual variability, with maximum annual erosivity often exceeding two to three times the mean value, especially in semi-arid areas. We then calculate erosivity at a 10-daily time-step and combine this with vegetation cover development for selected locations in Africa using NDVI - normalized difference vegetation index - time series from SPOT VEGETATION. Although we do not integrate the data at this point, the joint analysis of both variables stresses the need for joint accounting for erosivity and vegetation cover for large-scale erosion assessment and monitoring.
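As a rough illustration of the intensity-erosivity relationships of the kind used above, the sketch below computes an EI30-style event erosivity using the Brown and Foster (1987) unit-energy relation from RUSLE. Substituting the peak 3-hourly intensity for the true 30-minute maximum intensity is an assumption of this sketch (forced by coarse satellite sampling), not the authors' procedure.

```python
import numpy as np

def event_erosivity(intensities_mm_h, dt_h=3.0):
    """EI30-style event erosivity from a series of rainfall intensities (mm/h),
    using the Brown & Foster (1987) unit-energy relation e = 0.29*(1 - 0.72*exp(-0.05*i)).
    The peak 3-h intensity stands in for the 30-min maximum -- a crude proxy."""
    i = np.asarray(intensities_mm_h, dtype=float)
    depth = i * dt_h                              # rainfall depth per step (mm)
    e = 0.29 * (1.0 - 0.72 * np.exp(-0.05 * i))   # unit energy (MJ/ha/mm)
    E = np.sum(e * depth)                         # event kinetic energy (MJ/ha)
    i_peak = i.max() if i.size else 0.0           # proxy for I30 (mm/h)
    return E * i_peak                             # erosivity (MJ mm / (ha h))
```

Summing such event values over a year and averaging over many years would yield the annual erosivity the abstract compares against station-based values.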

  1. Ranking GCM Estimates of Twentieth Century Precipitation Seasonality in the Western U.S. and its Influence on Floristic Provinces.

    Science.gov (United States)

    Cole, K. L.; Eischeid, J. K.; Garfin, G. M.; Ironside, K.; Cobb, N. S.

    2008-12-01

Floristic provinces of the western United States (west of 100°W) can be segregated into three regions defined by significant seasonal precipitation during the months of: 1) November-March (Mediterranean); 2) July-September (Monsoonal); or, 3) May-June (Rocky Mountain). This third region is best defined by the absence of the late spring-early summer drought that affects regions 1 and 2. Each of these precipitation regimes is characterized by distinct vegetation types and fire seasonality adapted to that particular cycle of seasonal moisture availability and deficit. Further, areas where these regions blend from one to another can support even more complex seasonal patterns and resulting distinctive vegetation types. As a result, modeling the effects of climates on these ecosystems requires confidence that GCMs can at least approximate these sub-continental seasonal precipitation patterns. We evaluated the late Twentieth Century (1950-1999 AD) estimates of annual precipitation seasonality produced by 22 GCMs contained within the IPCC Fourth Assessment (AR4). These modeled estimates were compared to values from the PRISM dataset, extrapolated from station data, over the same historical period for the 3 seasonal periods defined above. The correlations between GCM estimates and PRISM values were ranked using 4 measures: 1) A map pattern relationship based on the correlation coefficient, 2) A map pattern relationship based on the congruence coefficient, 3) The ratio of simulated/observed area-averaged precipitation based on the seasonal precipitation amounts, and, 4) The ratio of simulated/observed area-averaged precipitation based on the seasonal precipitation percentages of the annual total. For each of the four metrics, the rank order of models was very similar. The ranked order of the performance of the different models quantified aspects of the model performance visible in the mapped results. While some models represented the seasonal patterns very well, others ...
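Three of the four ranking metrics named in this abstract (the pattern correlation, the congruence coefficient, and the simulated/observed area-average ratio) can be sketched as below; grid-cell area weighting and masking, which a real GCM-vs-PRISM evaluation would need, are omitted for brevity.

```python
import numpy as np

def pattern_metrics(sim, obs):
    """Generic map-comparison metrics: centered Pearson pattern correlation,
    congruence coefficient (uncentered correlation), and the ratio of the
    simulated to observed area-averaged field (unweighted grid cells)."""
    s = np.ravel(sim).astype(float)
    o = np.ravel(obs).astype(float)
    corr = np.corrcoef(s, o)[0, 1]                    # centered pattern correlation
    congruence = s @ o / np.sqrt((s @ s) * (o @ o))   # uncentered (congruence)
    ratio = s.mean() / o.mean()                       # simulated/observed mean
    return corr, congruence, ratio
```

Unlike the centered correlation, the congruence coefficient penalizes a uniform bias, which is why the two map-pattern measures can rank models differently.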

  2. A case study on large-scale dynamical influence on bright band using cloud radar during the Indian summer monsoon

    Science.gov (United States)

    Jha, Ambuj K.; Kalapureddy, M. C. R.; Devisetty, Hari Krishna; Deshpande, Sachin M.; Pandithurai, G.

    2018-02-01

The present study is a first-of-its-kind attempt at exploring the physical features (e.g., height, width, intensity, duration) of the tropical Indian bright band using a Ka-band cloud radar under the influence of large-scale cyclonic circulation, and attempts to explain, both synoptically and locally, the abrupt changes in bright band features, viz., a rise in the bright band height by 430 m and a deepening of the bright band by about 300 m observed at around 14:00 UTC on Sep 14, 2016. The study extends the utility of cloud radar to understand how the bright band features are associated with light precipitation, ranging from 0 to 1.5 mm/h. Our analysis of the precipitation event of Sep 14-15, 2016 shows that a bright band above (below) 3.7 km with thickness less (more) than 300 m can potentially lead to light drizzle of 0-0.25 mm/h (drizzle/light rain) at the surface. It is also seen that the cloud radar may be more suitable for bright band study within light drizzle limits than under higher rain conditions. Further, the study illustrates that the bright band features can be determined using the polarimetric capability of the cloud radar. It is shown that an LDR value of −22 dB can be associated with the top height of the bright band in the Ka-band observations, which is useful in the extraction of the bright band top height and its width. This study is useful for understanding the bright band phenomenon and could be potentially useful in establishing the bright band-surface rain relationship through the perspective of a cloud radar, which would help enhance cloud radar-based quantitative estimates of precipitation.
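A minimal sketch of extracting bright-band top height and width from a single LDR profile with the −22 dB threshold mentioned above; the function name, gate handling, and single-profile simplification are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def brightband_from_ldr(heights_m, ldr_dB, threshold=-22.0):
    """Locate the bright-band top as the highest range gate whose LDR reaches
    the threshold, and its width as the vertical extent of contiguous-threshold
    gates -- a simplified single-profile sketch (heights sorted ascending)."""
    h = np.asarray(heights_m, dtype=float)
    idx = np.where(np.asarray(ldr_dB, dtype=float) >= threshold)[0]
    if idx.size == 0:
        return None, 0.0          # no melting-layer signature in this profile
    top = h[idx[-1]]              # bright-band top height (m)
    width = h[idx[-1]] - h[idx[0]]
    return top, width
```

A robust implementation would additionally smooth the profile in time and reject gates contaminated by strong rain attenuation, which the abstract notes limits the Ka-band approach.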

  3. Large scale laboratory diffusion experiments in clay rocks

    International Nuclear Information System (INIS)

    Garcia-Gutierrez, M.; Missana, T.; Mingarro, M.; Martin, P.L.; Cormenzana, J.L.

    2005-01-01

Full text of publication follows: Clay formations are potential host rocks for high-level radioactive waste repositories. In clay materials radionuclide diffusion is the main transport mechanism. Thus, understanding the diffusion processes and determining diffusion parameters under conditions as similar as possible to the real ones are critical for the performance assessment of a deep geological repository. Diffusion coefficients are mainly measured in the laboratory using small samples, after preparation to fit into the diffusion cell. In addition, a few field tests are usually performed to confirm laboratory results and analyse scale effects. In field or 'in situ' tests the experimental set-up usually includes the injection of a tracer diluted in reconstituted formation water into a packed-off section of a borehole. Both experimental systems may produce artefacts in the determination of diffusion coefficients. In the laboratory, sample preparation can generate structural changes, mainly if the consolidated clay has a layered fabric, and in field tests the introduction of water could modify the properties of the saturated clay in the first few centimeters, just where radionuclide diffusion is expected to take place. In this work, a large-scale laboratory diffusion experiment is proposed, using a large cylindrical sample of consolidated clay that can overcome the above-mentioned problems. The tracers used were mixed with clay obtained by drilling a central hole, re-compacted into the hole at approximately the same density as the consolidated block, and finally sealed. Neither additional treatment of the sample nor external monitoring is needed. After the experimental time needed for diffusion to take place (estimated by scoping calculations), the block was sampled to obtain a 3D distribution of the tracer concentration and the results were modelled. An additional advantage of the proposed configuration is that it could be used in 'in situ

  4. Disdrometer-based C-Band Radar Quantitative Precipitation Estimation (QPE) in a highly complex terrain region in tropical Colombia.

    Science.gov (United States)

    Sepúlveda, J.; Hoyos Ortiz, C. D.

    2017-12-01

An adequate quantification of precipitation over land is critical for many societal applications including agriculture, hydroelectricity generation, water supply, and risk management associated with extreme events. Rain gauges, the traditional method for precipitation estimation, are excellent for estimating the volume of liquid water during a particular precipitation event, but they do not fully capture the high spatial variability of the phenomenon, which is a requirement for almost all practical applications. On the other hand, the weather radar, an active remote sensing sensor, provides a proxy for rainfall with fine spatial resolution and adequate temporal sampling; however, it does not measure surface precipitation. In order to fully exploit the capabilities of the weather radar, it is necessary to develop quantitative precipitation estimation (QPE) techniques combining radar information with in-situ measurements. Different QPE methodologies are explored and adapted to local observations in a highly complex terrain region in tropical Colombia using a C-band radar and a relatively dense network of rain gauges and disdrometers. One important result is that the expressions reported in the literature for extratropical locations are not representative of the conditions found in the tropical region studied. In addition to reproducing state-of-the-art techniques, a new multi-stage methodology based on radar-derived variables and disdrometer data is proposed in order to achieve the best QPE possible. The main motivation for this new methodology is that most traditional QPE methods do not directly take into account the different uncertainty sources involved in the process. The main advantage of the multi-stage model compared to traditional models is that it allows assessing and quantifying the uncertainty in the surface rain rate estimation. The sub-hourly rainfall estimations using the multi-stage methodology are realistic
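As a baseline for disdrometer-aided radar QPE (not the multi-stage uncertainty model proposed in the abstract), one can fit the classic power law Z = a·R^b to paired disdrometer observations in log space and invert it for radar rain rates; the function names below are illustrative.

```python
import numpy as np

def fit_zr(Z_dBZ, R_mm_h):
    """Fit Z = a * R^b from paired disdrometer reflectivity (dBZ) and
    rain rate (mm/h) by linear regression in log-log space."""
    Z_lin = 10.0 ** (np.asarray(Z_dBZ, dtype=float) / 10.0)   # dBZ -> mm^6 m^-3
    b, log_a = np.polyfit(np.log(np.asarray(R_mm_h, dtype=float)), np.log(Z_lin), 1)
    return np.exp(log_a), b

def radar_rain_rate(Z_dBZ, a, b):
    """Invert the fitted Z-R relation to estimate surface rain rate (mm/h)."""
    Z_lin = 10.0 ** (np.asarray(Z_dBZ, dtype=float) / 10.0)
    return (Z_lin / a) ** (1.0 / b)
```

The abstract's finding that extratropical relationships transfer poorly corresponds to the fitted (a, b) differing substantially from textbook values such as Marshall-Palmer's a = 200, b = 1.6.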

  5. Understanding dynamics of large-scale atmospheric vortices with moist-convective shallow water model

    International Nuclear Information System (INIS)

    Rostami, M.; Zeitlin, V.

    2016-01-01

Atmospheric jets and vortices which, together with inertia-gravity waves, constitute the principal dynamical entities of large-scale atmospheric motions, are well described in the framework of one- or multi-layer rotating shallow water models, which are obtained by vertically averaging the full “primitive” equations. There is a simple and physically consistent way to include moist convection in these models by adding a relaxational parameterization of precipitation and coupling precipitation with convective fluxes with the help of moist enthalpy conservation. We recall the construction of the moist-convective rotating shallow water (mcRSW) model and give an example of application to upper-layer atmospheric vortices. (paper)

  6. Large-scale fluctuations in the diffusive decomposition of solid solutions

    International Nuclear Information System (INIS)

    Karpov, V.G.; Grimsditch, M.

    1995-01-01

The concept of an instability in the classic Ostwald ripening theory with respect to compositional fluctuations is suggested. We show that small statistical fluctuations in the precipitate phase lead to gigantic Coulomb-like fluctuations in the solute concentration which in turn affect the ripening. As a result large-scale fluctuations in both the precipitate and solute concentrations appear. These fluctuations are characterized by amplitudes of the order of the average values of the corresponding quantities and by a space scale L ∼ (na)^(-1/2) which is considerably greater than both the average nuclear radius and internuclear distance. The Lifshitz-Slyozov theory of ripening is shown to remain locally applicable, over length scales much less than L. The implications of these findings for elastic light scattering in solid solutions that have undergone Ostwald ripening are considered

  7. Large-scale fluctuations in the diffusive decomposition of solid solutions

    Science.gov (United States)

    Karpov, V. G.; Grimsditch, M.

    1995-04-01

The concept of an instability in the classic Ostwald ripening theory with respect to compositional fluctuations is suggested. We show that small statistical fluctuations in the precipitate phase lead to gigantic Coulomb-like fluctuations in the solute concentration which in turn affect the ripening. As a result large-scale fluctuations in both the precipitate and solute concentrations appear. These fluctuations are characterized by amplitudes of the order of the average values of the corresponding quantities and by a space scale L ~ (na)^(-1/2) which is considerably greater than both the average nuclear radius and internuclear distance. The Lifshitz-Slyozov theory of ripening is shown to remain locally applicable, over length scales much less than L. The implications of these findings for elastic light scattering in solid solutions that have undergone Ostwald ripening are considered.
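The quoted space scale L ~ (na)^(-1/2) can be checked numerically; the densities and radii below are purely illustrative values, not taken from the paper.

```python
def fluctuation_scale(n, a):
    """Characteristic large-scale fluctuation length L ~ (n*a)**(-1/2),
    with n the precipitate number density (m^-3) and a the mean nucleus
    radius (m) -- illustrative SI units assumed."""
    return (n * a) ** -0.5

# For n = 1e18 m^-3 and a = 1e-9 m, L is about 3.2e-5 m: far larger than
# both the nucleus radius and the internuclear spacing n**(-1/3) ~ 1e-6 m,
# consistent with the abstract's statement.
```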

  8. Large-scale perturbations from the waterfall field in hybrid inflation

    International Nuclear Information System (INIS)

    Fonseca, José; Wands, David; Sasaki, Misao

    2010-01-01

We estimate large-scale curvature perturbations from isocurvature fluctuations in the waterfall field during hybrid inflation, in addition to the usual inflaton field perturbations. The tachyonic instability at the end of inflation leads to an explosive growth of super-Hubble scale perturbations, but they retain the steep blue spectrum characteristic of vacuum fluctuations in a massive field during inflation. The power spectrum thus peaks around the Hubble-horizon scale at the end of inflation. We extend the usual δN formalism to include the essential role of these small fluctuations when estimating the large-scale curvature perturbation. The resulting curvature perturbation due to fluctuations in the waterfall field is second-order and the spectrum is expected to be of order 10^(-54) on cosmological scales

  9. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.

    2015-01-01

This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale ... of the transmission system, especially regarding the cross-border power flows. The tuning of these regional models is done using historical meteorological data acquired on a per-country basis and using publicly available data of installed capacity.

  10. Precipitation evidences on X-Band Synthetic Aperture Radar imagery: an approach for quantitative detection and estimation

    Science.gov (United States)

    Mori, Saverio; Marzano, Frank S.; Montopoli, Mario; Pulvirenti, Luca; Pierdicca, Nazzareno

    2017-04-01

al. 2014 and Mori et al. 2012); ancillary data, such as local incidence angle and land cover, are used. This stage is necessary to tune the precipitation-map stage and to avoid severe misinterpretations in the precipitation-map routines. The second stage consists of estimating the local cloud attenuation. Finally, the precipitation map is estimated using the retrieval algorithm developed by Marzano et al. (2011), applied only to pixels where rain is known to be present. Within the FP7 project EartH2Observe we have applied this methodology to 14 study cases, acquired within the TSX and CSK missions over Italy and the United States. This choice allows analysing both hurricane-like intense events and continental mid-latitude precipitation, with the possibility to verify and validate the proposed methodology through the available weather radar networks. Moreover, it allows analysing to some extent the contribution of orography and the quality of ancillary data (i.e., land cover). In this work we will discuss the results obtained until now in terms of improved rain cell localization and precipitation quantification.

  11. Systematical estimation of GPM-based global satellite mapping of precipitation products over China

    Science.gov (United States)

    Zhao, Haigen; Yang, Bogang; Yang, Shengtian; Huang, Yingchun; Dong, Guotao; Bai, Juan; Wang, Zhiwei

    2018-03-01

    As the Global Precipitation Measurement (GPM) Core Observatory satellite continues its mission, new version 6 products for Global Satellite Mapping of Precipitation (GSMaP) have been released. However, few studies have systematically evaluated the GSMaP products over mainland China. This study quantitatively evaluated three GPM-based GSMaP version 6 precipitation products for China and eight subregions with reference to the Chinese daily Precipitation Analysis Product (CPAP). The GSMaP products included near-real-time (GSMaP_NRT), microwave-infrared reanalyzed (GSMaP_MVK), and gauge-adjusted (GSMaP_Gau) data. Additionally, the gauge-adjusted Integrated Multi-Satellite Retrievals for Global Precipitation Measurement Mission (IMERG_Gau) was also assessed and compared with GSMaP_Gau. The analyses of the selected daily products were carried out at a spatial resolution of 1/4° for the period March 2014 to December 2015, in consideration of the resolution of CPAP and the consistency of the coverage periods of the satellite products. The results indicated that GSMaP_MVK and GSMaP_NRT performed comparably, underdetected light rainfall events, and showed degraded Pearson linear correlation coefficient (CC), fractional standard error (FSE), and root-mean-square error (RMSE) metrics during the summer. Compared with GSMaP_NRT and GSMaP_MVK, GSMaP_Gau possessed significantly improved metrics over mainland China and the eight subregions and performed better in terms of CC, RMSE, and FSE, but underestimated precipitation to a greater degree than IMERG_Gau. As a quantitative assessment of the GPM-era GSMaP products, these validation results will supply helpful references for both end users and algorithm developers. However, the study findings need to be confirmed over a longer future study period when the longer-period IMERG retrospectively-processed data are available.
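    The CC, RMSE and FSE scores used in validations of this kind can be computed per grid cell from paired daily series. A minimal sketch follows; normalizing the RMSE by mean observed rainfall is one common FSE convention, and the sample values are invented.

```python
import numpy as np

def validation_metrics(sat, ref):
    """Pearson CC, RMSE, and fractional standard error (FSE) of
    satellite estimates against a gauge-based reference series."""
    sat, ref = np.asarray(sat, float), np.asarray(ref, float)
    cc = np.corrcoef(sat, ref)[0, 1]
    rmse = np.sqrt(np.mean((sat - ref) ** 2))
    fse = rmse / ref.mean()   # RMSE normalized by mean observed rainfall
    return cc, rmse, fse

gauge = [0.0, 2.1, 5.4, 0.3, 12.0, 1.1]   # CPAP-like daily reference (mm)
gsmap = [0.2, 1.8, 6.1, 0.0, 10.5, 1.5]   # satellite estimate (mm)
cc, rmse, fse = validation_metrics(gsmap, gauge)
print(f"CC={cc:.3f}  RMSE={rmse:.3f} mm  FSE={fse:.3f}")
```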

  12. Does GPM-based multi-satellite precipitation enhance rainfall estimates over Pakistan and Bolivia arid regions?

    Science.gov (United States)

    Hussain, Y.; Satgé, F.; Bonnet, M. P.; Pillco, R.; Molina, J.; Timouk, F.; Roig, H.; Martinez-Carvajal, H., Sr.; Gulraiz, A.

    2016-12-01

    Arid regions are sensitive to rainfall variations, which are expressed in the form of flooding and droughts. Unfortunately, those regions are poorly monitored and high quality rainfall estimates are still needed. The Global Precipitation Measurement (GPM) mission released two new satellite rainfall products, named Integrated Multi-satellite Retrievals for GPM (IMERG) and Global Satellite Mapping of Precipitation version 6 (GSMaP-v6), bringing the possibility of accurate rainfall monitoring over these countries. This study assessed both products at the monthly scale over Pakistan, considering the dry and wet seasons over the 4 main climatic zones from 2014 to 2016. With similar climatic conditions, the Altiplano region of Bolivia is considered in order to quantify the influence of big lakes (Titicaca and Poopó) on rainfall estimates. For comparison, the widely used TRMM Multi-satellite Precipitation Analysis 3B43 (TMPA-3B43) version 7 is also included in the analysis to observe the potential enhancement in rainfall estimates brought by the GPM products. Rainfall estimates derived from 110 rain gauges are used as reference to compare IMERG, GSMaP-v6 and TMPA-3B43 at the 0.1° and 0.25° spatial resolutions. Over both regions, IMERG and GSMaP-v6 capture the spatial pattern of precipitation as well as TMPA-3B43 does. All products tend to overestimate rainfall over very arid regions, a feature even more marked during the dry season. However, during this season both reference and estimated rainfall remain very low and do not impact seasonal water budget computation. Overall, IMERG slightly outperforms TMPA-3B43, while GSMaP-v6 provides the least accurate rainfall estimate. The TMPA-3B43 rainfall underestimation previously found over Lake Titicaca is still observed in the IMERG estimates. However, GSMaP-v6 considerably decreases the underestimation, providing the most accurate rainfall estimate over the lake. MOD11C3 Land Surface Temperature (LST) and ASTER Global Emissivity Dataset reveal strong

  13. Large-scale hydraulic structure of a seismogenic fault at 10 km depth (Gole Larghe Fault Zone, Italian Southern Alps)

    Science.gov (United States)

    Bistacchi, Andrea; Di Toro, Giulio; Smith, Steve; Mittempergher, Silvia; Garofalo, Paolo

    2014-05-01

    The definition of hydraulic properties of fault zones is a major issue in structural geology, seismology, and in several applications (hydrocarbons, hydrogeology, CO2 sequestration, etc.). The permeability of fault rocks can be measured in laboratory experiments, but its upscaling to large-scale structures is not straightforward. For instance, typical permeability of fine-grained fault rock samples is in the 10⁻¹⁸-10⁻²⁰ m² range, but, according to seismological estimates, the large-scale permeability of active fault zones can be as high as 10⁻¹⁰ m². Solving this issue is difficult because in-situ measurements of large-scale permeability have been carried out only at relatively shallow depths - mainly in oil wells and exceptionally in active tectonic settings (e.g. SAFOD at 3 km), whilst deeper experiments have been performed only in the stable continental crust (e.g. KTB at 9 km). In this study, we apply discrete fracture-network (DFN) modelling techniques developed for shallow aquifers (mainly in nuclear waste storage projects like Yucca Mountain) and in the oil industry in order to model the hydraulic structure of the Gole Larghe Fault Zone (GLFZ, Italian Southern Alps). This fault, now exposed in world-class glacier-polished outcrops, has been exhumed from ca. 8 km depth, where it was characterized by well-documented seismic activity, but also by hydrous fluid flow evidenced by alteration halos and precipitation of hydrothermal minerals in veins and along cataclasites. The GLFZ does not show a classical seal structure, which in other fault zones corresponds to a core zone characterized by fine-grained fault rocks. However, permeability is heterogeneous and the permeability tensor is strongly anisotropic due to preferential fracture orientation. We will show with numerical experiments that this hydraulic structure results in channelized fluid flow (consistent with the observed hydrothermal alteration pattern). This results in a counterintuitive situation
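    The eight-order-of-magnitude gap between sample and fault-zone permeability can be illustrated with the parallel-plate ("cubic law") idealization that is a standard building block of DFN models: a set of smooth parallel fractures of aperture b and spacing s contributes an equivalent permeability k = b³/(12 s). The numbers below are illustrative, not site values for the GLFZ.

```python
def equivalent_permeability(aperture_m, spacing_m):
    """Equivalent porous-medium permeability (m^2) of a parallel set of
    smooth fractures, via the cubic law: k = b**3 / (12 * s)."""
    return aperture_m ** 3 / (12.0 * spacing_m)

# Fractures of 0.1 mm aperture spaced every metre already dwarf a
# 1e-18 m^2 fine-grained fault-rock matrix.
k = equivalent_permeability(1e-4, 1.0)
print(f"{k:.2e} m^2")  # ~8.3e-14 m^2
```

    Because k scales with the cube of aperture, a few well-connected open fractures dominate the large-scale hydraulic structure, which is why flow becomes channelized along preferentially oriented fracture sets.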

  14. Integrating weather and geotechnical monitoring data for assessing the stability of large scale surface mining operations

    Science.gov (United States)

    Steiakakis, Chrysanthos; Agioutantis, Zacharias; Apostolou, Evangelia; Papavgeri, Georgia; Tripolitsiotis, Achilles

    2016-01-01

    The geotechnical challenges for safe slope design in large scale surface mining operations are enormous. Sometimes one degree of slope inclination can significantly reduce the overburden-to-ore ratio and therefore dramatically improve the economics of the operation, while large scale slope failures may have a significant impact on human lives. Furthermore, adverse weather conditions, such as high precipitation rates, may unfavorably affect the already delicate balance between operations and safety. Geotechnical, weather and production parameters should be systematically monitored and evaluated in order to safely operate such pits. Appropriate data management, processing and storage are critical to ensure timely and informed decisions. This paper presents an integrated data management system which was developed over a number of years, as well as the advantages it brings, through a specific application. The presented case study illustrates how the high production slopes of a mine exceeding depths of 100-120 m were successfully mined with an average displacement rate of 10-20 mm/day, approaching a slow to moderate landslide velocity. Monitoring data of the past four years are included in the database and can be analyzed to produce valuable results. Time-series correlations of movements, precipitation records, etc. are evaluated and presented in this case study. The results can be used to successfully manage mine operations and ensure the safety of the mine and the workforce.
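    Time-series correlations of the kind described, between daily precipitation and slope displacement rate, are often screened across lags to find the delay of the slope response. A minimal sketch follows; the data are synthetic, with a 3-day lag built in for illustration.

```python
import numpy as np

def lagged_correlation(precip, displacement, max_lag_days=10):
    """Pearson correlation between daily precipitation and displacement
    rate at each lag; returns the lag with the strongest correlation."""
    results = []
    for lag in range(max_lag_days + 1):
        p = precip[:len(precip) - lag] if lag else precip
        d = displacement[lag:]
        results.append((lag, np.corrcoef(p, d)[0, 1]))
    return max(results, key=lambda t: t[1])

rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 5.0, 365)                        # synthetic daily rainfall (mm)
disp = np.roll(precip, 3) * 0.8 + rng.normal(0, 1, 365)  # slope responds ~3 days later
lag, r = lagged_correlation(precip, disp)
print(f"strongest response at lag {lag} days (r = {r:.2f})")
```

    In an operational database the same scan, run over rolling windows, can flag periods when rainfall starts driving movement more strongly than usual.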

  15. Transfrontier consequences to the population of Greece of large scale nuclear accidents: a preliminary assessment

    International Nuclear Information System (INIS)

    Kollas, J.G.; Catsaros, Nicolas.

    1985-06-01

    In this report the consequences to the population of Greece from hypothetical large scale nuclear accidents at the Kozlodui (Bulgaria) nuclear power station are estimated under some simplifying assumptions. Three different hypothetical accident scenarios - the most serious for pressurized water reactors - are examined. The analysis is performed by the current Greek version of code CRAC2 and includes health and economic consequences to the population of Greece. (author)

  16. Development of Deep Learning Based Data Fusion Approach for Accurate Rainfall Estimation Using Ground Radar and Satellite Precipitation Products

    Science.gov (United States)

    Chen, H.; Chandra, C. V.; Tan, H.; Cifelli, R.; Xie, P.

    2016-12-01

    Rainfall estimation based on spaceborne satellite measurements has been an important topic in satellite meteorology for decades. A number of precipitation products at multiple time and space scales have been developed based upon satellite observations. For example, the NOAA Climate Prediction Center has developed a morphing technique (i.e., CMORPH) to produce global precipitation products by combining existing space-based rainfall estimates. The CMORPH products are essentially derived from geostationary satellite IR brightness temperature information and retrievals from passive microwave measurements (Joyce et al. 2004). Although space-based precipitation products provide an excellent tool for regional and global hydrologic and climate studies, as well as improved situational awareness for operational forecasts, their accuracy is limited due to sampling limitations, particularly for extreme events such as very light and/or heavy rain. On the other hand, ground-based radar is a more mature science for quantitative precipitation estimation (QPE), especially after the implementation of the dual-polarization technique, further enhanced by urban-scale radar networks. Therefore, ground radars are often critical for providing local-scale rainfall estimation, a "heads-up" for operational forecasters to issue watches and warnings, and validation of various space measurements and products. The CASA DFW QPE system, which is based on dual-polarization X-band CASA radars and a local S-band WSR-88DP radar, has demonstrated excellent performance during several years of operation in a variety of precipitation regimes. The real-time CASA DFW QPE products are used extensively for localized hydrometeorological applications such as urban flash flood forecasting. In this paper, a neural network based data fusion mechanism is introduced to improve the satellite-based CMORPH precipitation product by taking into account the ground radar measurements. A deep learning system is
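    The abstract gives no details of the fusion network, so the sketch below only illustrates the general idea: a small neural network learns to correct a biased satellite-like estimate using collocated radar QPE as the training target. All data, network sizes and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic scene: true rain, a biased/noisy CMORPH-like satellite
# estimate, and near-truth radar QPE used as the training target.
truth = rng.gamma(2.0, 3.0, size=(2000, 1))
satellite = 0.7 * truth + rng.normal(0, 1.0, truth.shape)   # biased low
radar = truth + rng.normal(0, 0.3, truth.shape)             # training target

# Standardize input and target for stable training.
xm, xs = satellite.mean(), satellite.std()
ym, ysd = radar.mean(), radar.std()
X, Y = (satellite - xm) / xs, (radar - ym) / ysd

# One-hidden-layer network trained by full-batch gradient descent.
W1 = rng.normal(0, 0.1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - Y                      # MSE gradient source
    dh = (err @ W2.T) * (1 - h ** 2)           # backprop through tanh
    W2 -= lr * h.T @ err / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * X.T @ dh / len(X); b1 -= lr * dh.mean(0)

fused = (np.tanh(X @ W1 + b1) @ W2 + b2) * ysd + ym
raw_rmse = float(np.sqrt(np.mean((satellite - truth) ** 2)))
fused_rmse = float(np.sqrt(np.mean((fused - truth) ** 2)))
print(f"satellite-only RMSE {raw_rmse:.2f}, fused RMSE {fused_rmse:.2f}")
```

    Even this toy network removes most of the satellite bias; the operational system would additionally feed in predictors such as IR brightness temperature and terrain, per the CMORPH inputs described above.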

  17. Large-scale innovation and change in UK higher education

    Directory of Open Access Journals (Sweden)

    Stephen Brown

    2013-09-01

    Full Text Available This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ technology to deliver such changes. Key lessons that emerged from these experiences are reviewed covering themes of pervasiveness, unofficial systems, project creep, opposition, pressure to deliver, personnel changes and technology issues. The paper argues that collaborative approaches to project management offer greater prospects of effective large-scale change in universities than either management-driven top-down or more champion-led bottom-up methods. It also argues that while some diminution of control over project outcomes is inherent in this approach, this is outweighed by potential benefits of lasting and widespread adoption of agreed changes.

  18. Measuring Cosmic Expansion and Large Scale Structure with Destiny

    Science.gov (United States)

    Benford, Dominic J.; Lauer, Tod R.

    2007-01-01

    Destiny is a simple, direct, low-cost mission to determine the properties of dark energy by obtaining a cosmologically deep supernova (SN) Type Ia Hubble diagram and by measuring the large-scale mass power spectrum over time. Its science instrument is a 1.65 m space telescope featuring a near-infrared survey camera/spectrometer with a large field of view. During its first two years, Destiny will detect, observe, and characterize 23000 SN Ia events over the redshift interval 0.4 < z < 1.7. Destiny will be used in its third year as a high-resolution, wide-field imager to conduct a weak lensing survey covering >1000 square degrees to measure the large-scale mass power spectrum. The combination of surveys is much more powerful than either technique on its own, and will have over an order of magnitude greater sensitivity than will be provided by ongoing ground-based projects.

  19. Volume measurement study for large scale input accountancy tank

    International Nuclear Information System (INIS)

    Uchikoshi, Seiji; Watanabe, Yuichi; Tsujino, Takeshi

    1999-01-01

    The Large Scale Tank Calibration (LASTAC) facility, including an experimental tank with the same volume and structure as the input accountancy tank of the Rokkasho Reprocessing Plant (RRP), was constructed at the Nuclear Material Control Center of Japan. Demonstration experiments have been carried out to evaluate the precision of solution volume measurement and to establish a procedure for highly accurate pressure measurement in a large scale tank with a dip-tube bubbler probe system, to be applied to the input accountancy tank of RRP. Solution volume in a tank is determined by substituting the measured solution level into a calibration function obtained in advance, which expresses the relation between the solution level and the corresponding volume in the tank. Precise solution volume measurement therefore requires a carefully determined, precise calibration function. The LASTAC calibration experiments using pure water showed good reproducibility. (J.P.N.)
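    The level-to-volume conversion described can be sketched as a polynomial calibration function fitted once to metered fill data and then evaluated for each measured level. The level/volume pairs below are hypothetical, not LASTAC values.

```python
import numpy as np

# Illustrative calibration run: bubbler-probe levels (mm) and the known
# volumes (L) metered into the tank at each step (hypothetical values).
levels = np.array([100.0, 500.0, 1000.0, 1500.0, 2000.0])
volumes = np.array([403.0, 2075.0, 4300.0, 6675.0, 9200.0])

# Fit the calibration function volume = f(level) once, offline.
coeffs = np.polyfit(levels, volumes, deg=2)

def solution_volume(level_mm):
    """Substitute a measured solution level into the calibration function."""
    return float(np.polyval(coeffs, level_mm))

print(solution_volume(1000.0))  # reproduces the 4300 L calibration point
```

    In practice the degree and form of the fit, and the uncertainty attached to each calibration point, are what the demonstration experiments are designed to pin down.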

  20. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.