WorldWideScience

Sample records for quantitative precipitation estimates

  1. River Forecasting Center Quantitative Precipitation Estimate Archive

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Radar-indicated, rain-gage-verified and corrected hourly precipitation estimates on a corrected ~4km HRAP grid. This archive contains hourly estimates of precipitation...

  2. Radar-Derived Quantitative Precipitation Estimation Based on Precipitation Classification

    Directory of Open Access Journals (Sweden)

    Lili Yang

    2016-01-01

    Full Text Available A method for improving radar-derived quantitative precipitation estimation is proposed. Tropical vertical profiles of reflectivity (VPRs) are first determined from multiple VPRs. Upon identifying a tropical VPR, the event can be further classified as either tropical-stratiform or tropical-convective rainfall by a fuzzy logic (FL) algorithm. Based on the precipitation-type fields, the reflectivity values are converted into rainfall rates using a Z-R relationship. In order to evaluate the performance of this rainfall classification scheme, three experiments were conducted using three months of data and two study cases. In Experiment I, the Weather Surveillance Radar-1988 Doppler (WSR-88D) default Z-R relationship was applied. In Experiment II, the precipitation regime was separated into convective and stratiform rainfall using the FL algorithm, and the corresponding Z-R relationships were used. In Experiment III, the precipitation regime was separated into convective, stratiform, and tropical rainfall, and the corresponding Z-R relationships were applied. The results show that the rainfall rates obtained from all three experiments match closely with the gauge observations, although Experiment II could not fully resolve the underestimation seen in Experiment I. Experiment III significantly reduced this underestimation and generated the most accurate radar estimates of rain rate among the three experiments.
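The regime-dependent Z-R conversion described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code; the coefficients are common published values (the WSR-88D default Z = 300R^1.4, the Marshall-Palmer stratiform relationship Z = 200R^1.6, and a widely used tropical relationship Z = 250R^1.2), and a real system would apply the conversion per radar bin after quality control:

```python
import math

# Illustrative Z-R coefficients (Z = a * R**b), keyed by precipitation regime.
ZR_COEFFS = {
    "convective": (300.0, 1.4),  # WSR-88D default
    "stratiform": (200.0, 1.6),  # Marshall-Palmer
    "tropical": (250.0, 1.2),    # common tropical relationship
}

def rain_rate(dbz, regime):
    """Convert reflectivity (dBZ) to rain rate (mm/h) for a given regime."""
    a, b = ZR_COEFFS[regime]
    z = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity (mm^6 / m^3)
    return (z / a) ** (1.0 / b)   # invert Z = a * R**b
```

Note that at the same reflectivity the tropical relationship yields a higher rain rate than the convective default, which is exactly how Experiment III mitigates the underestimation of tropical rainfall.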

  3. Real Time River Forecasting Center Quantitative Precipitation Estimate

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Radar-indicated, rain-gage-verified and corrected hourly precipitation estimates on a corrected ~4km HRAP grid. This archive contains hourly estimates of precipitation...

  4. Assimilation of radar quantitative precipitation estimations in the Canadian Precipitation Analysis (CaPA)

    Science.gov (United States)

    Fortin, Vincent; Roy, Guy; Donaldson, Norman; Mahidjiba, Ahmed

    2015-12-01

    The Canadian Precipitation Analysis (CaPA) is a data analysis system used operationally at the Canadian Meteorological Center (CMC) since April 2011 to produce gridded 6-h and 24-h precipitation accumulations in near real-time on a regular grid covering all of North America. The current resolution of the product is 10-km. Due to the low density of the observational network in most of Canada, the system relies on a background field provided by the Regional Deterministic Prediction System (RDPS) of Environment Canada, which is a short-term weather forecasting system for North America. For this reason, the North American configuration of CaPA is known as the Regional Deterministic Precipitation Analysis (RDPA). Early in the development of the CaPA system, weather radar reflectivity was identified as a very promising additional data source for the precipitation analysis, but necessary quality control procedures and bias-correction algorithms were lacking for the radar data. After three years of development and testing, a new version of CaPA-RDPA system was implemented in November 2014 at CMC. This version is able to assimilate radar quantitative precipitation estimates (QPEs) from all 31 operational Canadian weather radars. The radar QPE is used as an observation source and not as a background field, and is subject to a strict quality control procedure, like any other observation source. The November 2014 upgrade to CaPA-RDPA was implemented at the same time as an upgrade to the RDPS system, which brought minor changes to the skill and bias of CaPA-RDPA. This paper uses the frequency bias indicator (FBI), the equitable threat score (ETS) and the departure from the partial mean (DPM) in order to assess the improvements to CaPA-RDPA brought by the assimilation of radar QPE. 
Verification focuses on the 6-h accumulations, and is done against a network of 65 synoptic stations (approximately two stations per radar) that were withheld from the station data assimilated by CaPA.
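For reference, two of the categorical scores named in this abstract, the frequency bias indicator (FBI) and the equitable threat score (ETS), are computed from a 2x2 contingency table of forecast versus observed precipitation exceedances. A minimal sketch (standard textbook definitions, not the CaPA verification code):

```python
def fbi(hits, false_alarms, misses):
    """Frequency bias indicator: forecast event count / observed event count.
    1.0 means unbiased; >1 over-forecasting, <1 under-forecasting."""
    return (hits + false_alarms) / (hits + misses)

def ets(hits, false_alarms, misses, correct_negatives):
    """Equitable threat score (Gilbert skill score): threat score adjusted
    for hits expected by chance. 1 is perfect, 0 is no skill over random."""
    n = hits + false_alarms + misses + correct_negatives
    hits_random = (hits + false_alarms) * (hits + misses) / n
    return (hits - hits_random) / (hits + false_alarms + misses - hits_random)
```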

  5. Improving high-resolution quantitative precipitation estimation via fusion of multiple radar-based precipitation products

    Science.gov (United States)

    Rafieeinasab, Arezoo; Norouzi, Amir; Seo, Dong-Jun; Nelson, Brian

    2015-12-01

    For monitoring and prediction of water-related hazards in urban areas, such as flash flooding, high-resolution hydrologic and hydraulic modeling is necessary. Because of the large sensitivity and scale dependence of rainfall-runoff models to errors in quantitative precipitation estimates (QPE), it is very important that the accuracy of QPE in high-resolution hydrologic modeling be improved to the greatest extent possible. With the availability of multiple radar-based precipitation products in many areas, one may now consider fusing them to produce more accurate high-resolution QPE for a wide spectrum of applications. In this work, we formulate and comparatively evaluate four relatively simple procedures for such fusion based on Fisher estimation and its conditional bias-penalized variant: Direct Estimation (DE), Bias Correction (BC), Reduced-Dimension Bias Correction (RBC) and Simple Estimation (SE). They are applied to fuse the Multisensor Precipitation Estimator (MPE) and radar-only Next Generation QPE (Q2) products at the 15-min 1-km resolution (Experiment 1), and the MPE and Collaborative Adaptive Sensing of the Atmosphere (CASA) QPE products at the 15-min 500-m resolution (Experiment 2). The resulting fused estimates are evaluated using the 15-min rain gauge observations from the City of Grand Prairie in the Dallas-Fort Worth Metroplex (DFW) in north Texas. The main criteria used for evaluation are that the fused QPE improves over the ingredient QPEs at their native spatial resolutions, and that, at the higher resolution, the fused QPE improves not only over the ingredient higher-resolution QPE but also over the ingredient lower-resolution QPE trivially disaggregated using the ingredient high-resolution QPE. All four procedures assume that the ingredient QPEs are unbiased, which is not likely to hold true in reality even if real-time bias correction is in operation. To test robustness under more realistic conditions, the fusion procedures were evaluated with and ...
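For two unbiased ingredient QPEs with known error variances, Fisher estimation of the common underlying rain rate reduces in its simplest form to inverse-variance weighting. The sketch below illustrates that idea only; it is not the paper's DE/BC/RBC/SE implementation, and in practice the error variances would themselves have to be estimated:

```python
def fuse(qpe1, var1, qpe2, var2):
    """Inverse-variance (Fisher) fusion of two unbiased estimates of the
    same quantity. The fused value always lies between the two inputs,
    closer to the one with the smaller error variance."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    return (w1 * qpe1 + w2 * qpe2) / (w1 + w2)
```

The fused error variance is 1/(w1 + w2), which is smaller than either input variance; this is the formal sense in which fusion "improves over the ingredient QPEs".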

  6. nowCOAST's Map Service for NOAA Quantitative Precipitation Estimates (Time Enabled)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Map Information: This nowCOAST time-enabled map service provides maps depicting the NWS Multi-Radar Multi-Sensor (MRMS) quantitative precipitation estimate mosaics...

  7. An Ensemble Generator for Quantitative Precipitation Estimation Based on Censored Shifted Gamma Distributions

    Science.gov (United States)

    Wright, D.; Kirschbaum, D.; Yatheendradas, S.

    2016-12-01

    The considerable uncertainties associated with quantitative precipitation estimates (QPE), whether from satellite platforms, ground-based weather radar, or numerical weather models, suggest that such QPE should be expressed as distributions or ensembles of possible values, rather than as single values. In this research, we borrow a framework from the weather forecast verification community to "correct" satellite precipitation and generate ensemble QPE. This approach is based on the censored shifted gamma distribution (CSGD). The probability of precipitation, the central tendency (i.e., the mean), and the uncertainty can be captured by the three parameters of the CSGD. The CSGD can then be applied to simulate rainfall ensembles using a flexible nonlinear regression framework, whereby the CSGD parameters are conditioned on one or more reference rainfall datasets and on other time-varying covariates such as modeled or measured estimates of precipitable water and relative humidity. We present the framework and initial results by generating precipitation ensembles based on the Tropical Rainfall Measuring Mission Multi-satellite Precipitation Analysis (TMPA) dataset, using both the NLDAS and PERSIANN-CDR precipitation datasets as references. We also incorporate a number of covariates from the MERRA-2 reanalysis, including model-estimated precipitation, precipitable water, relative humidity, and lifting condensation level. We explore the prospects for applying the framework and other ensemble error models globally, including in regions where high-quality "ground truth" rainfall estimates are lacking. We compare the ensemble outputs against those of an independent rain gage-based ensemble rainfall dataset. "Pooling" of regional rainfall observations is explored as one option for improving ensemble estimates of rainfall extremes. The approach has potential applications in near-real-time, retrospective, and scenario modeling of rainfall-driven hazards such as floods and landslides.
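A CSGD ensemble member is drawn by sampling the underlying gamma distribution, applying the shift, and censoring below at zero, so that dry outcomes come out as exactly 0. A minimal stdlib sketch with illustrative parameter values (a real application would condition the three CSGD parameters on reference rainfall and covariates via the regression framework described above):

```python
import random

def sample_csgd(mean, std, shift, n, seed=0):
    """Draw n samples from a censored shifted gamma distribution.

    The underlying gamma has the given mean and std (moment matching:
    shape = (mean/std)^2, scale = std^2/mean); a negative shift places
    probability mass below zero, and censoring maps it to exactly 0,
    which models the probability of no precipitation."""
    rng = random.Random(seed)
    alpha = (mean / std) ** 2   # gamma shape
    beta = std ** 2 / mean      # gamma scale
    return [max(0.0, rng.gammavariate(alpha, beta) + shift) for _ in range(n)]
```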

  8. An Integrated Method of Multiradar Quantitative Precipitation Estimation Based on Cloud Classification and Dynamic Error Analysis

    Directory of Open Access Journals (Sweden)

    Yong Huang

    2017-01-01

    Full Text Available Relationships between the radar reflectivity factor and rainfall differ among precipitation cloud systems. In this study, cloud systems are first classified into five categories using radar and satellite data in order to improve the radar quantitative precipitation estimation (QPE) algorithm. Secondly, the errors of multiradar QPE algorithms are assumed to differ between convective and stratiform clouds. QPE data are then derived with the methods of Z-R, Kalman filter (KF), optimum interpolation (OI), Kalman filter plus optimum interpolation (KFOI), and average calibration (AC), based on error analysis over the Huaihe River Basin. For a flood case in early July 2007, the KFOI is applied to obtain the QPE product. Applications show that the KFOI can improve the precision of precipitation estimates for multiple precipitation types.

  9. The new approach of polarimetric attenuation correction for improving radar quantitative precipitation estimation (QPE)

    Science.gov (United States)

    Gu, Ji-Young; Suk, Mi-Kyung; Nam, Kyung-Yeub; Ko, Jeong-Seok; Ryzhkov, Alexander

    2016-04-01

    To obtain high-quality radar quantitative precipitation estimation data, reliable radar calibration and efficient attenuation correction are very important. Because microwave radiation at shorter wavelengths experiences strong attenuation in precipitation, accounting for this attenuation is essential for shorter-wavelength radars. In this study, the performance of different attenuation/differential attenuation correction schemes at C band is tested for two strong rain events that occurred in central Oklahoma. A new attenuation correction scheme (a combination of the self-consistency and hot-spot methodologies) that separates the relative contributions of strong convective cells and the rest of the storm to the path-integrated total and differential attenuation is also among the algorithms explored. Quantitative use of weather radar measurements, such as rainfall estimation, relies on reliable attenuation correction. We examined the impact of attenuation correction on rainfall estimates in heavy rain events by cross-checking against S-band radar measurements, which are much less affected by attenuation, and compared the storm rain totals obtained from the corrected Z and KDP with rain gages in these cases. This new approach can be utilized efficiently at shorter-wavelength radars. It is therefore very useful to the Weather Radar Center of the Korea Meteorological Administration, which is preparing an X-band research dual-polarization radar network.

  10. Parameter estimation using the genetic algorithm and its impact on quantitative precipitation forecast

    Directory of Open Access Journals (Sweden)

    Y. H. Lee

    2006-12-01

    Full Text Available In this study, optimal parameter estimation is performed for both physical and computational parameters in a mesoscale meteorological model, and the impacts on quantitative precipitation forecasting (QPF) are assessed for a heavy rainfall case that occurred over the Korean Peninsula in June 2005. Experiments are carried out using the PSU/NCAR MM5 model and the genetic algorithm (GA) for two parameters: the reduction rate of the convective available potential energy in the Kain-Fritsch (KF) cumulus parameterization scheme, and the Asselin filter parameter for numerical stability. The fitness function is defined based on a QPF skill score. It turns out that each optimized parameter significantly improves the QPF skill, and the improvement is maximized when the two optimized parameters are used simultaneously. Our results indicate that optimizing computational as well as physical parameters, and applying them appropriately, is essential for improving model performance.

  11. New service interface for River Forecasting Center derived quantitative precipitation estimates

    Science.gov (United States)

    Blodgett, David L.

    2013-01-01

    For more than a decade, the National Weather Service (NWS) River Forecast Centers (RFCs) have been estimating spatially distributed rainfall by applying quality-control procedures to radar-indicated rainfall estimates in the eastern United States and other best practices in the western United States to produce a national Quantitative Precipitation Estimate (QPE) (National Weather Service, 2013). The availability of archives of QPE information for analytical purposes has been limited to manual requests for access to raw binary file formats that are difficult to work with for scientists who are not in the climatic sciences. The NWS provided the QPE archives to the U.S. Geological Survey (USGS), and the contents of the real-time feed from the RFCs are being saved by the USGS for incorporation into the archives. The USGS has applied time-series aggregation and added latitude-longitude coordinate variables to publish the RFC QPE data. Web services provide users with direct (index-based) data access, rendered visualizations of the data, and resampled raster representations of the source data in common geographic information formats.

  12. Predicting urban stormwater runoff with quantitative precipitation estimates from commercial microwave links

    Science.gov (United States)

    Pastorek, Jaroslav; Fencl, Martin; Stránský, David; Rieckermann, Jörg; Bareš, Vojtěch

    2017-04-01

    Reliable and representative rainfall data are crucial for urban runoff modelling. However, traditional precipitation measurement devices often fail to provide sufficient information about the spatial variability of rainfall, especially when heavy storm events (which determine the design of urban stormwater systems) are considered. Commercial microwave links (CMLs), typically very dense in urban areas, allow for indirect precipitation detection with the desired spatial and temporal resolution. Fencl et al. (2016) recognised the high bias in quantitative precipitation estimates (QPEs) from CMLs, which significantly limits their usability, and, in order to reduce this bias, suggested a novel method for adjusting the QPEs to existing rain gauge networks. Studies evaluating the potential of CMLs for rainfall detection have so far focused primarily on direct comparison of the QPEs from CMLs to ground observations. In contrast, this investigation evaluates the suitability of these innovative rainfall data for stormwater runoff modelling in a case study of a small urban catchment in Prague-Letňany, Czech Republic, which is ungauged in the long-term perspective (Fencl et al., 2016). We compare the runoff measured at the outlet of the catchment with the outputs of a rainfall-runoff model operated using (i) CML data adjusted by distant rain gauges, (ii) rainfall data from the distant gauges alone and (iii) data from a single temporary rain gauge located directly in the catchment, as is common practice in drainage engineering. Uncertainties of the simulated runoff are analysed using the Bayesian method for uncertainty evaluation incorporating a statistical bias description, as formulated by Del Giudice et al. (2013). Our results show that adjusted CML data are able to yield reliable runoff modelling results, primarily for rainfall events of convective character. Performance statistics, most significantly the timing of maximal discharge, reach better (less uncertain) values with the adjusted CML data ...
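CML rainfall retrieval rests on the power law k = aR^b between specific attenuation k (dB/km) and rain rate R (mm/h). A minimal sketch of the inversion follows; the coefficients are frequency- and polarization-dependent placeholders (not values from this study), and real processing must first remove baseline attenuation and wet-antenna effects from the raw signal loss:

```python
def cml_rain_rate(attenuation_db, path_km, a=0.12, b=1.1):
    """Path-averaged rain rate (mm/h) from rain-induced attenuation on a
    commercial microwave link, by inverting k = a * R**b.

    attenuation_db: rain-induced path attenuation (dB), after baseline
        and wet-antenna correction.
    path_km: link length (km).
    a, b: illustrative power-law coefficients for the tens-of-GHz range.
    """
    k = attenuation_db / path_km   # specific attenuation, dB/km
    return (k / a) ** (1.0 / b)
```

Because b is close to 1 at typical CML frequencies, the retrieval is nearly linear in attenuation, which is one reason CMLs work well for path-averaged rainfall.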

  13. The quantitative precipitation estimation system for Dallas-Fort Worth (DFW) urban remote sensing network

    Science.gov (United States)

    Chen, Haonan; Chandrasekar, V.

    2015-12-01

    The Dallas-Fort Worth (DFW) urban radar network consists of a combination of high-resolution X band radars and a standard National Weather Service (NWS) Next-Generation Radar (NEXRAD) system operating at S band frequency. High spatiotemporal-resolution quantitative precipitation estimation (QPE) is one of the important applications of such a network. This paper presents a real-time QPE system developed by the Collaborative Adaptive Sensing of the Atmosphere (CASA) Engineering Research Center for the DFW urban region using both the high-resolution X band radar network and the NWS S band radar observations. The specific dual-polarization radar rainfall algorithms at the different frequencies (i.e., S- and X-band) and the fusion methodology combining observations at different temporal resolutions are described. Radar and rain gauge observations from four rainfall events in 2013 that are characterized by different meteorological phenomena are used to compare the rainfall estimation products of the CASA DFW QPE system to conventional radar products from the national radar network provided by the NWS. This high-resolution QPE system is used for urban flash flood mitigation when coupled with hydrological models.

  14. NEXRAD quantitative precipitation estimates, data acquisition, and processing for the DuPage County, Illinois, streamflow-simulation modeling system

    Science.gov (United States)

    Ortel, Terry W.; Spies, Ryan R.

    2015-11-19

    Next-Generation Radar (NEXRAD) has become an integral component in the estimation of precipitation (Kitzmiller and others, 2013). The high spatial and temporal resolution of NEXRAD has revolutionized the ability to estimate precipitation across vast regions, which is especially beneficial in areas without a dense rain-gage network. With the improved precipitation estimates, hydrologic models can produce reliable streamflow forecasts for areas across the United States. NEXRAD data from the National Weather Service (NWS) has been an invaluable tool used by the U.S. Geological Survey (USGS) for numerous projects and studies; NEXRAD data processing techniques similar to those discussed in this Fact Sheet have been developed within the USGS, including the NWS Quantitative Precipitation Estimates archive developed by Blodgett (2013).

  15. Evaluation of radar-gauge merging methods for quantitative precipitation estimates

    Directory of Open Access Journals (Sweden)

    E. Goudenhoofdt

    2008-10-01

    Full Text Available Accurate quantitative precipitation estimates are of crucial importance for hydrological studies and applications. When spatial precipitation fields are required, rain gauge measurements are often combined with weather radar observations. In this paper, we evaluate several radar-gauge merging methods with various degrees of complexity: from mean field bias correction to geostatistical merging techniques. The study area is the Walloon region of Belgium, which is mostly located in the Meuse catchment. Observations from a C-band Doppler radar and a dense rain gauge network are used to retrieve daily rainfall accumulations over this area. The relative performance of the different merging methods is assessed through a comparison against daily measurements from an independent gauge network. A 3-year verification is performed using several statistical quality parameters. It appears that the geostatistical merging methods perform best, with the mean absolute error decreasing by 40% with respect to the original data. A mean field bias correction still achieves a reduction of 25%. A seasonal analysis shows that the benefit of using radar observations is particularly significant during summer. The effect of the network density on the performance of the methods is also investigated. For this purpose, a simple approach to remove gauges from a network is proposed. The analysis reveals that the sensitivity is relatively high for the geostatistical methods but rather small for the simple methods. The geostatistical methods give the best results for all network densities except for a very low density of 1 gauge per 500 km2, where a range-dependent adjustment complemented with a static local bias correction performs best.
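The simplest of the merging methods evaluated above, mean field bias correction, scales the whole radar field by the ratio of total gauge accumulation to total radar accumulation at the gauge locations. A minimal sketch (the study's geostatistical variants additionally model spatial error structure):

```python
def mean_field_bias(gauges, radar_at_gauges):
    """Mean field bias factor: total gauge accumulation divided by total
    radar accumulation at the collocated gauge pixels."""
    return sum(gauges) / sum(radar_at_gauges)

def apply_mfb(radar_field, bias):
    """Multiply every radar pixel by the single mean field bias factor."""
    return [bias * v for v in radar_field]
```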

  16. Quantitative precipitation estimation in complex orography using quasi-vertical profiles of dual polarization radar variables

    Science.gov (United States)

    Montopoli, Mario; Roberto, Nicoletta; Adirosi, Elisa; Gorgucci, Eugenio; Baldini, Luca

    2017-04-01

    Weather radars are nowadays a unique tool for quantitatively estimating rain precipitation near the surface. This is an important task for plenty of applications, for example, feeding hydrological models, mitigating the impact of severe storms at the ground by using radar information in modern warning tools, and aiding validation studies of satellite-based rain products. With respect to the latter application, several ground validation studies of Global Precipitation Measurement (GPM) mission products have recently highlighted the importance of accurate QPE from ground-based weather radars. To date, many works have analyzed the performance of various QPE algorithms using actual and synthetic experiments, possibly trained by measurements of particle size distributions and electromagnetic models. Most of these studies support the use of dual polarization variables not only to ensure a good level of radar data quality but also as a direct input in the rain estimation equations. Among others, one of the most important limiting factors in radar QPE accuracy is the vertical variability of the particle size distribution, which affects, at different levels, all the acquired radar variables as well as rain rates. This is particularly impactful in mountainous areas, where the altitude of the radar sampling is likely several hundreds of meters above the surface. In this work, we analyze the impact of vertical profile variations of rain precipitation on several dual polarization radar QPE algorithms when they are tested in a complex-orography scenario. So far, in weather radar studies, more emphasis has been given to extrapolation strategies that make use of the signature of the vertical profiles in terms of radar co-polar reflectivity. This may limit the use of radar vertical profiles when dual polarization QPE algorithms are considered, because in that case all the radar variables used in the rain estimation process should be consistently extrapolated to the surface.

  17. The impacts of climatological adjustment of quantitative precipitation estimates on the accuracy of flash flood detection

    Science.gov (United States)

    Zhang, Yu; Reed, Sean; Gourley, Jonathan J.; Cosgrove, Brian; Kitzmiller, David; Seo, Dong-Jun; Cifelli, Robert

    2016-10-01

    The multisensor Quantitative Precipitation Estimates (MQPEs) created by the US National Weather Service (NWS) are subject to a non-stationary bias. This paper quantifies the impacts of climatological adjustment of MQPEs alone, as well as the compound impacts of adjustment and model calibration, on the accuracy of simulated flood peak magnitude and on the detection of flood events. Our investigation is based on 19 watersheds in the mid-Atlantic region of the US, which are grouped into small (<500 km2) and large (>500 km2) watersheds. NWS archival MQPEs over 1997-2013 for this region are adjusted to match concurrent gauge-based monthly precipitation accumulations. Raw and adjusted MQPEs then serve as inputs to the NWS distributed hydrologic model-threshold frequency framework (DHM-TF). Two experiments via DHM-TF are performed. The first examines the impacts of adjustment alone through uncalibrated model simulations, whereas the second focuses on the compound effects of adjustment and calibration on the detection of flood events. Uncalibrated model simulations show broad underestimation of flood peaks for small watersheds and overestimation of those for large watersheds. Prior to calibration, adjustment alone tends to reduce the magnitude of simulated flood peaks for small and large basins alike, with 95% of all watersheds experiencing a decline over 2004-2013. A consequence is that a majority of small watersheds experience no improvement or a deterioration in bias (0% of basins experiencing improvement). By contrast, most (73%) of the larger ones exhibit improved bias. Outcomes of the detection experiment show that the role of adjustment is not diminished by calibration for small watersheds, with only 25% of these exhibiting reduced bias after adjustment with calibrated parameters. Furthermore, it is shown that calibration is relatively effective in reducing false alarms (e.g., the false alarm rate drops from 0.28 to 0.19 after calibration for small watersheds); but its ...
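The climatological adjustment described above rescales the MQPE time series so that its monthly accumulation matches the concurrent gauge-based monthly total. A minimal per-pixel sketch of that idea (a simplification; an operational procedure would work on gridded fields and would also have to handle dry months and spatial interpolation of the adjustment factors):

```python
def adjust_to_monthly(hourly_qpe, gauge_monthly_total):
    """Rescale hourly QPE values so their monthly sum matches the
    gauge-based monthly accumulation. The temporal pattern of the radar
    estimate is preserved; only the total is corrected."""
    radar_total = sum(hourly_qpe)
    if radar_total == 0.0:
        return list(hourly_qpe)  # nothing to rescale in a radar-dry month
    factor = gauge_monthly_total / radar_total
    return [factor * v for v in hourly_qpe]
```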

  18. Skill Assessment of a Hybrid Technique to Estimate Quantitative Precipitation Forecast for Galicia (NW Spain)

    Science.gov (United States)

    Lage, A.; Taboada, J. J.

    Precipitation is the most obvious of the weather elements in its effects on normal life. Numerical weather prediction (NWP) is generally used to produce quantitative precipitation forecasts (QPF) beyond the 1-3 h time frame. These models often fail to predict small-scale variations of rain because of spin-up problems and their coarse spatial and temporal resolution (Antolik, 2000). Moreover, there are some uncertainties about the behaviour of NWP models in extreme situations (de Bruijn and Brandsma, 2000). Hybrid techniques, combining the benefits of NWP and statistical approaches in a flexible way, are very useful for achieving a good QPF. In this work, a new QPF technique for Galicia (NW Spain) is presented. This region has a percentage of rainy days per year greater than 50%, with quantities that may cause floods, with human and economic damages. The technique is composed of an NWP model (ARPS) and a statistical downscaling process based on an automated classification scheme of atmospheric circulation patterns for the Iberian Peninsula (J. Ribalaygua and R. Boren, 1995). Results show that QPF for Galicia is improved using this hybrid technique. [1] Antolik, M.S., 2000. "An Overview of the National Weather Service's centralized statistical quantitative precipitation forecasts". Journal of Hydrology, 239, pp. 306-337. [2] de Bruijn, E.I.F. and T. Brandsma. "Rainfall prediction for a flooding event in Ireland caused by the remnants of Hurricane Charley". Journal of Hydrology, 239, pp. 148-161. [3] Ribalaygua, J. and Boren, R. "Classification of spatial patterns of daily precipitation over peninsular Spain" [in Spanish]. Informes N 3 y 4 del Servicio de Análisis e Investigación del Clima. Instituto Nacional de Meteorología. Madrid. 53 pp.

  19. Quantitative precipitation estimation based on high-resolution numerical weather prediction and data assimilation with WRF – a performance test

    Directory of Open Access Journals (Sweden)

    Hans-Stefan Bauer

    2015-04-01

    Full Text Available Quantitative precipitation estimation and forecasting (QPE and QPF) are among the most challenging tasks in atmospheric sciences. In this work, QPE based on numerical modelling and data assimilation is investigated. Key components are the Weather Research and Forecasting (WRF) model in combination with its 3D variational assimilation scheme, applied on the convection-permitting scale with sophisticated model physics over central Europe. The system is operated in a 1-hour rapid update cycle and processes a large set of in situ observations, data from French radar systems, the European GPS network and satellite sensors. Additionally, a free forecast driven by the ECMWF operational analysis is included as a reference run representing current operational precipitation forecasting. The verification is done both qualitatively and quantitatively by comparisons of reflectivity, accumulated precipitation fields and derived verification scores for a complex synoptic situation that developed on 26 and 27 September 2012. The investigation shows that even the downscaling from ECMWF represents the synoptic situation reasonably well. However, significant improvements are seen in the results of the WRF QPE setup, especially when the French radar data are assimilated. The frontal structure is more defined and the timing of the frontal movement is improved compared with observations. Even mesoscale band-like precipitation structures on the rear side of the cold front are reproduced, as seen by radar. The improvement in performance is also confirmed by a quantitative comparison of the 24-hourly accumulated precipitation over Germany. The mean correlation of the model simulations with observations improved from 0.2 in the downscaling experiment and 0.29 in the assimilation experiment without radar data to 0.56 in the WRF QPE experiment including the assimilation of French radar data.

  20. Towards Quantitative Ocean Precipitation Validation

    Science.gov (United States)

    Klepp, C.; Bakan, S.; Andersson, A.

    2009-04-01

    A thorough knowledge of global ocean precipitation is an indispensable prerequisite for the understanding and successful modelling of the global climate system, as precipitation is an important component of the water cycle. However, reliable detection of quantitative precipitation over the global oceans, especially at high latitudes during the cold season, remains a challenging task for remote sensing and model-based estimates. Quantitative ship validation data from reliable instruments for measuring rain and snowfall hardly exist but are in high demand for ground validation of such products. The satellite-based HOAPS (Hamburg Ocean Atmosphere Parameters and Fluxes from Satellite Data) climatology contains fields of precipitation, evaporation and the resulting freshwater flux, along with 12 additional atmospheric parameters, over the global ice-free ocean between 1987 and 2005. Except for the NOAA Pathfinder SST, all basic state variables are calculated from SSM/I passive microwave radiometer measurements. HOAPS contains three main data subsets that originate from one common pixel-level data source. Gridded 0.5-degree monthly, pentad and twice-daily data products are freely available from www.hoaps.org. The HOAPS precipitation retrieval has been investigated in some depth, especially for North Atlantic mid-latitude mixed-phase precipitation. This analysis revealed that the HOAPS retrieval represents cyclonic and intense mesoscale precipitation well, in agreement with ship observations and CloudSat data, while GPCP, ECMWF forecast, ERA-40 and regional model data substantially miss mesoscale precipitation. As the differences between the investigated data sets are already large under mixed-phase precipitation conditions, further work is being carried out on snowfall validation during the cold season at high latitudes. A Norwegian Sea field campaign in winter 2005 was carried out using an optical disdrometer capable of measuring quantitative amounts of snowfall over the ocean ...

  1. Merging Radar Quantitative Precipitation Estimates (QPEs) from the High-resolution NEXRAD Reanalysis over CONUS with Rain-gauge Observations

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Nickl, E.; Seo, D. J.; Kim, B.; Zhang, J.; Qi, Y.

    2015-12-01

    The processing of radar-only precipitation via the reanalysis of the National Mosaic and Multi-Sensor QPE (NMQ/Q2), based on the WSR-88D Next-Generation Radar (NEXRAD) network over the Continental United States (CONUS), is complete for the period 2002 to 2011. While this constitutes a unique opportunity to study precipitation processes at higher resolution than conventionally possible (1-km, 5-min), the long-term radar-only product needs to be merged with in-situ information in order to be suitable for hydrological, meteorological and climatological applications. The radar-gauge merging is performed using rain gauge information at daily (Global Historical Climatology Network-Daily: GHCN-D), hourly (Hydrometeorological Automated Data System: HADS), and 5-min (Automated Surface Observing Systems: ASOS; Climate Reference Network: CRN) resolution. The challenges of incorporating networks of differing resolution and quality to generate long-term, large-scale gridded estimates of precipitation are enormous. With that in mind, we are implementing techniques for merging the rain gauge datasets with the radar-only estimates, such as Inverse Distance Weighting (IDW), Simple Kriging (SK), Ordinary Kriging (OK), and Conditional Bias-Penalized Kriging (CBPK). An evaluation of the different radar-gauge merging techniques is presented, and we provide an estimate of uncertainty for the gridded estimates. In addition, comparisons with a suite of lower-resolution QPEs derived from ground-based radar measurements (Stage IV) are provided in order to give a detailed picture of the improvements and remaining challenges.
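    Of the merging techniques listed above, inverse distance weighting is the simplest to sketch. The following is a minimal illustration, not the authors' implementation (`idw_merge` and its arguments are hypothetical names): gauge-minus-radar residuals are computed at the gauge pixels and spread back over the grid with inverse-distance weights.

```python
import numpy as np

def idw_merge(radar, gauge_xy, gauge_vals, grid_xy, power=2.0):
    """Merge a radar QPE field with gauges via inverse-distance weighting.

    Residuals (gauge minus radar at the gauge pixels) are interpolated to
    every grid point and added back to the radar field.
    """
    # Distance from every grid point to every gauge, shape (n_grid, n_gauge)
    d = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=2)

    # Radar value at each gauge location (nearest grid point, for simplicity)
    nearest = d.argmin(axis=0)
    residual = gauge_vals - radar[nearest]

    # IDW interpolation of the residual field onto the grid
    w = 1.0 / np.maximum(d, 1e-6) ** power   # avoid division by zero
    w /= w.sum(axis=1, keepdims=True)
    return radar + w @ residual
```

    At a gauge pixel the merged field reproduces the gauge value; far from all gauges it relaxes toward the radar estimate.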

  2. Quantitative precipitation estimates for the northeastern Qinghai-Tibetan Plateau over the last 18,000 years

    Science.gov (United States)

    Li, Jianyong; Dodson, John; Yan, Hong; Cheng, Bo; Zhang, Xiaojian; Xu, Qinghai; Ni, Jian; Lu, Fengyan

    2017-05-01

    Quantitative information on the long-term variability of precipitation and vegetation during the Late Glacial and the Holocene on the Qinghai-Tibetan Plateau (QTP) is scarce. Herein, we provide new numerical reconstructions of annual mean precipitation (PANN) and vegetation history over the last 18,000 years using high-resolution pollen data from Lakes Dalianhai and Qinghai on the northeastern QTP. Five calibration techniques, including weighted averaging, weighted averaging-partial least squares regression, the modern analogue technique, locally weighted weighted-averaging regression, and maximum likelihood, were employed to construct robust inference models and to produce reliable PANN estimates for the QTP. The biomization method was applied to reconstruct the vegetation dynamics. The study area was dominated by steppe and characterized by a highly variable, relatively dry climate at 18,000-11,000 cal years B.P. PANN increased from the early Holocene, reached a maximum at 8000-3000 cal years B.P., with coniferous-temperate mixed forest as the dominant biome, and thereafter declined toward the present. The PANN reconstructions are broadly consistent with other proxy-based paleoclimatic records from the northeastern QTP and the northern region of monsoonal China. The mechanisms behind the precipitation changes may tentatively be attributed to internal feedback processes of competing higher-latitude (e.g., North Atlantic) and lower-latitude (e.g., subtropical monsoon) climatic regimes, which are primarily modulated by solar energy output as the external driving force. These findings may provide important insights for understanding future Asian precipitation dynamics under projected global warming.

  3. Evaluation of two "integrated" polarimetric Quantitative Precipitation Estimation (QPE) algorithms at C-band

    Science.gov (United States)

    Tabary, Pierre; Boumahmoud, Abdel-Amin; Andrieu, Hervé; Thompson, Robert J.; Illingworth, Anthony J.; Le Bouar, Erwan; Testud, Jacques

    2011-08-01

    Two so-called "integrated" polarimetric rain-rate estimation techniques, ZPHI (Testud et al., 2000) and ZZDR (Illingworth and Thompson, 2005), are evaluated using 12 episodes from the year 2005 observed by the French C-band operational Trappes radar, located near Paris. The term "integrated" means that the concentration parameter of the drop size distribution is assumed to be constant over some area and the algorithms retrieve it using the polarimetric variables in that area. The evaluation is carried out under ideal conditions (no partial beam blocking, no ground-clutter contamination, no bright-band contamination, a posteriori calibration of the radar variables ZH and ZDR) using hourly rain gauges located at distances of less than 60 km from the radar. Also included in the comparison, for the sake of benchmarking, is a conventional Z = 282R^1.66 estimator, with and without attenuation correction and with and without adjustment by rain gauges as currently done operationally at Météo France. Under those ideal conditions, the two polarimetric algorithms, which rely solely on radar data, appear to perform as well as if not better than the conventional algorithms, depending on the measurement conditions (attenuation, rain rates, …), even when the latter take rain gauges into account through the adjustment scheme. ZZDR with attenuation correction is the best estimator for hourly rain gauge accumulations lower than 5 mm h^-1 and ZPHI is the best one above that threshold. A perturbation analysis was conducted to assess the sensitivity of the various estimators to biases in ZH and ZDR, taking into account the typical accuracy and stability that can reasonably be achieved with modern operational radars (1 dB on ZH and 0.2 dB on ZDR). A +1 dB positive bias in ZH (radar too hot) results in a +14% overestimation of the rain rate with the conventional estimator used in this study (Z = 282R^1.66), a -19% underestimation with ZPHI and a +23
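    The conventional estimator can be sketched by inverting Z = 282R^1.66, and the quoted +14% sensitivity can be checked directly: a +1 dB bias multiplies linear Z by 10^0.1, hence R by 10^(0.1/1.66) ≈ 1.15. A minimal sketch (function names are illustrative, not from the paper):

```python
def rain_rate(dbz, a=282.0, b=1.66):
    """Invert Z = a * R**b to get rain rate R (mm/h) from reflectivity in dBZ."""
    z_linear = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)

# Sensitivity to a +1 dB calibration bias in ZH: R scales by 10**(0.1/b),
# i.e. roughly a +15% (reported as +14%) overestimation for b = 1.66.
bias_factor = 10.0 ** (0.1 / 1.66)
```

    For example, 40 dBZ converts to about 8.6 mm/h with this Z-R pair.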

  4. Mesoscale and Local Scale Evaluations of Quantitative Precipitation Estimates by Weather Radar Products during a Heavy Rainfall Event

    Directory of Open Access Journals (Sweden)

    Basile Pauthier

    2016-01-01

    Full Text Available A 24-hour heavy rainfall event occurred in northeastern France from November 3 to 4, 2014. The accuracy of the quantitative precipitation estimates (QPE) from the PANTHERE and ANTILOPE radar-based gridded products during this particular event is examined at both mesoscale and local scale, in comparison with two reference rain-gauge networks. Mesoscale accuracy was assessed for the total rainfall accumulated during the 24-hour event, using the Météo France operational rain-gauge network. Local-scale accuracy was assessed for both total event rainfall and hourly rainfall accumulations, using the recently developed HydraVitis high-resolution rain-gauge network. The evaluation shows that (1) PANTHERE radar-based QPE underestimates rainfall fields at mesoscale and local scale; (2) both PANTHERE and ANTILOPE successfully reproduce the spatial variability of rainfall at local scale; (3) PANTHERE underestimation can be significantly reduced at local scale by merging these data with interpolated rain-gauge data (i.e., ANTILOPE). This study provides a preliminary evaluation of radar-based QPE at local scale, suggesting that merged products are invaluable for applications at very high resolution. The results underline the importance of using high-density rain-gauge networks to obtain information at high spatial and temporal resolution, for a better understanding of local rainfall variation and to calibrate remotely sensed rainfall products.

  5. Improvement of Radar Quantitative Precipitation Estimation Based on Real-Time Adjustments to Z-R Relationships and Inverse Distance Weighting Correction Schemes

    Institute of Scientific and Technical Information of China (English)

    WANG Gaili; LIU Liping; DING Yuanyuan

    2012-01-01

    The errors in radar quantitative precipitation estimation consist not only of systematic biases caused by random noise but also of spatially nonuniform biases in radar rainfall at individual rain-gauge stations. In this study, a real-time adjustment to the radar reflectivity-rainfall rate (Z-R) relationship scheme and a gauge-corrected, radar-based estimation scheme with inverse distance weighting interpolation were developed. Based on the characteristics of the two schemes, a two-step correction technique for radar quantitative precipitation estimation is proposed. To minimize the errors between radar quantitative precipitation estimates and rain gauge observations, the real-time adjustment to the Z-R relationship scheme is used to remove systematic bias in the time domain. The gauge-corrected, radar-based estimation scheme is then used to eliminate nonuniform errors in space. Using radar data and rain gauge observations near the Huaihe River, the two-step correction technique was evaluated on two heavy-precipitation events. The results show that the proposed scheme not only improved the underestimation of rainfall but also reduced the root-mean-square error and the mean relative error of radar-rain gauge pairs.
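    The two-step idea can be sketched compactly: a domain-wide multiplicative bias factor stands in for the real-time Z-R adjustment (temporal step), and inverse-distance weighting of the remaining gauge residuals stands in for the spatial step. This is a simplified 1-D illustration with hypothetical names, not the authors' code:

```python
def two_step_correction(radar, gauges, positions, gauge_pos, power=2.0):
    """Two-step radar QPE correction sketch: (1) remove the domain-wide
    systematic bias, (2) remove residual local bias by inverse-distance
    weighting of the gauge-radar differences that remain after step 1."""
    # Radar value at each gauge (nearest pixel along a 1-D transect)
    def nearest(p):
        return min(range(len(positions)), key=lambda i: abs(positions[i] - p))
    at_g = [radar[nearest(p)] for p in gauge_pos]

    # Step 1: multiplicative mean-field bias (systematic, time-domain)
    b = sum(gauges) / sum(at_g)
    adjusted = [b * r for r in radar]

    # Step 2: IDW interpolation of the residuals left after step 1 (spatial)
    res = [g - b * r for g, r in zip(gauges, at_g)]
    out = []
    for x, r in zip(positions, adjusted):
        w = [1.0 / max(abs(x - p), 1e-6) ** power for p in gauge_pos]
        out.append(r + sum(wi * ri for wi, ri in zip(w, res)) / sum(w))
    return out
```

    Step 1 fixes the total amount; step 2 redistributes the remaining error in space, mirroring the time-then-space structure described above.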

  6. An operational weather radar-based Quantitative Precipitation Estimation and its application in catchment water resources modeling

    DEFF Research Database (Denmark)

    He, Xin; Vejen, Flemming; Stisen, Simon

    2011-01-01

    of precipitation compared with rain-gauge-based methods, thus providing the basis for better water resources assessments. The radar QPE algorithm called ARNE is a distance-dependent areal estimation method that merges radar data with ground surface observations. The method was applied to the Skjern River catchment...... reliable simulations of stream flow and water balance. The potential of using radar-based precipitation was found to be especially high at a smaller scale, where the impact of spatial resolution was evident from the stream discharge results. Also, groundwater recharge was shown to be sensitive...

  7. Precipitation evidences on X-Band Synthetic Aperture Radar imagery: an approach for quantitative detection and estimation

    Science.gov (United States)

    Mori, Saverio; Marzano, Frank S.; Montopoli, Mario; Pulvirenti, Luca; Pierdicca, Nazzareno

    2017-04-01

    al. 2014 and Mori et al. 2012); ancillary data, such as the local incidence angle and land cover, are used. This stage is necessary to tune the precipitation-mapping stage and to avoid severe misinterpretations in the precipitation-mapping routines. The second stage consists of estimating the local cloud attenuation. Finally, the precipitation map is estimated using the retrieval algorithm developed by Marzano et al. (2011), applied only to pixels where rain is known to be present. Within the FP7 project EartH2Observe we have applied this methodology to 14 study cases, acquired within the TSX and CSK missions over Italy and the United States. This choice allows analysing both hurricane-like intense events and continental mid-latitude precipitation, with the possibility of verifying and validating the proposed methodology against the available weather radar networks. Moreover, it allows analysing, to some extent, the contribution of orography and the quality of ancillary data (i.e., land cover). In this work we discuss the results obtained so far in terms of improved rain-cell localization and precipitation quantification.

  8. Improving Quantitative Precipitation Estimation via Data Fusion of High-Resolution Ground-based Radar Network and CMORPH Satellite-based Product

    Science.gov (United States)

    Cifelli, R.; Chen, H.; Chandrasekar, V.; Xie, P.

    2015-12-01

    A large number of precipitation products at multiple scales have been developed based upon satellite, radar, and/or rain gauge observations. However, how to produce optimal rainfall estimates for a given region is still challenging due to the spatial and temporal sampling differences of the various sensors. In this study, we develop a data fusion mechanism to improve regional quantitative precipitation estimation (QPE) by utilizing the satellite-based CMORPH product, ground radar measurements, as well as numerical model simulations. The CMORPH global precipitation product is essentially derived from retrievals of passive microwave measurements and infrared observations onboard satellites (Joyce et al. 2004). Its fine spatial-temporal resolution of 0.05° lat/lon and 30 min is appropriate for regional hydrologic and climate studies. However, it is inadequate for localized hydrometeorological applications such as urban flash flood forecasting. Via fusion of the regional CMORPH product and local precipitation sensors, high-resolution QPE performance can be improved. The area of interest is the Dallas-Fort Worth (DFW) Metroplex, which is the largest land-locked metropolitan area in the U.S. In addition to an NWS dual-polarization S-band WSR-88DP radar (i.e., the KFWS radar), DFW hosts the high-resolution dual-polarization X-band radar network developed by the Center for Collaborative Adaptive Sensing of the Atmosphere (CASA). This talk will present a general framework for precipitation data fusion based on satellite and ground observations. The detailed prototype architecture for using regional rainfall instruments to improve the regional CMORPH precipitation product via multi-scale fusion techniques will also be discussed. In particular, the temporal and spatial fusion algorithms developed for the DFW Metroplex, which utilize the CMORPH product, S-band WSR-88DP, and X-band CASA radar measurements, will be described. In order to investigate the uncertainties associated with each
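    One common building block for such fusion is an inverse-error-variance weighted average of collocated estimates, which is optimal when the errors are independent and unbiased. A minimal sketch (illustrative only; the actual DFW algorithms are more elaborate):

```python
def variance_weighted_fusion(estimates, variances):
    """Fuse collocated precipitation estimates (e.g. satellite, S-band and
    X-band radar) with weights inversely proportional to error variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total
```

    Sensors believed to be more accurate at a given pixel (smaller error variance) therefore dominate the fused value there.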

  9. Long-Term Quantitative Precipitation Estimates (QPE) at High Spatial and Temporal Resolution over CONUS: Bias-Adjustment of the Radar-Only National Mosaic and Multi-sensor QPE (NMQ/Q2) Precipitation Reanalysis (2001-2012)

    Science.gov (United States)

    Prat, Olivier; Nelson, Brian; Stevens, Scott; Seo, Dong-Jun; Kim, Beomgeun

    2015-04-01

    The processing of radar-only precipitation via the reanalysis of the National Mosaic and Multi-Sensor QPE (NMQ/Q2), based on the WSR-88D Next-Generation Radar (NEXRAD) network over the Continental United States (CONUS), is complete for the period 2001 to 2012. This important milestone constitutes a unique opportunity to study precipitation processes at 1-km spatial and 5-min temporal resolution. However, in order to be suitable for hydrological, meteorological and climatological applications, the radar-only product needs to be bias-adjusted and merged with in-situ rain gauge information. Several in-situ datasets are available to assess the biases of the radar-only product and to adjust for those biases to provide a multi-sensor QPE. The rain gauge networks used, the Global Historical Climatology Network-Daily (GHCN-D), the Hydrometeorological Automated Data System (HADS), the Automated Surface Observing Systems (ASOS), and the Climate Reference Network (CRN), have different spatial densities and temporal resolutions. The challenges of incorporating non-homogeneous networks over a vast area and for a long-term record are enormous. Among the challenges we face are the difficulties of incorporating surface measurements of differing resolution and quality to adjust gridded estimates of precipitation. Another challenge is the choice of adjustment technique. The objective of this work is threefold. First, we investigate how the different in-situ networks impact the precipitation estimates as a function of spatial density, sensor type, and temporal resolution. Second, we assess the conditional and unconditional biases of the radar-only QPE at various time scales (daily, hourly, 5-min) using in-situ precipitation observations. Finally, after assessing the bias and applying reduction or elimination techniques, we use a unique in-situ dataset merging the different RG networks (CRN, ASOS, HADS, GHCN-D) to
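    As a concrete illustration of the bias-assessment step, an unconditional bias can be computed over all radar-gauge pairs and a conditional bias over rainy pairs only. The function below is a hypothetical sketch, not the NMQ/Q2 processing code:

```python
def multiplicative_biases(radar, gauge, thresh=0.1):
    """Unconditional and conditional multiplicative biases of a radar QPE
    against collocated gauges (ratio of totals; the conditional bias is
    restricted to pairs where the gauge reports at least `thresh` mm)."""
    uncond = sum(radar) / sum(gauge)
    rainy = [(r, g) for r, g in zip(radar, gauge) if g >= thresh]
    cond = sum(r for r, _ in rainy) / sum(g for _, g in rainy)
    return uncond, cond
```

    The two ratios can differ substantially when the radar reports rain that gauges miss (or vice versa), which is exactly why both are assessed.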

  10. Estimating Tropical Cyclone Precipitation from Station Observations

    Institute of Scientific and Technical Information of China (English)

    REN Fumin; WANG Yongmei; WANG Xiaoling; LI Weijing

    2007-01-01

    In this paper, an objective technique for estimating tropical cyclone (TC) precipitation from station observations is proposed. Based on a comparison between the Original Objective Method (OOM) and the Expert Subjective Method (ESM), the Objective Synoptic Analysis Technique (OSAT) for partitioning TC precipitation was developed by analyzing the western North Pacific (WNP) TC historical track and daily precipitation datasets. As an objective counterpart of the ESM, OSAT overcomes the main problems in OOM by changing two fixed parameters in OOM, the thresholds for the distance of the absolute TC precipitation (D0) and the TC size (D1), into variable parameters. Case verification for OSAT was also carried out using CMORPH (Climate Prediction Center MORPHing technique) daily precipitation measurements, NOAA's combined satellite precipitation measurement system. This indicates that OSAT is capable of distinguishing simultaneous TC precipitation rain belts from those associated with different TCs or with mid-latitude weather systems.
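    The distance-threshold idea behind parameters such as D0 can be illustrated with a toy partition: station rainfall within a given great-circle distance of the TC centre is attributed to the TC. This is a drastic simplification of OSAT, which also uses rain-belt structure and a variable TC size D1; the names and the 500 km default are illustrative only:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres (haversine formula)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def partition_tc_rain(stations, tc_center, d0=500.0):
    """Toy fixed-threshold partition: station rain within d0 km of the TC
    centre is attributed to the TC, the rest to other systems."""
    tc, other = {}, {}
    for name, (lat, lon, rain) in stations.items():
        d = haversine_km(lat, lon, *tc_center)
        (tc if d <= d0 else other)[name] = rain
    return tc, other
```

    OSAT's key improvement is precisely that such thresholds are not fixed but vary with the analysed rain-belt and TC size.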

  11. Ensemble postprocessing for probabilistic quantitative precipitation forecasts

    Science.gov (United States)

    Bentzien, S.; Friederichs, P.

    2012-12-01

    Precipitation is one of the most difficult weather variables to predict in hydrometeorological applications. In order to assess the uncertainty inherent in deterministic numerical weather prediction (NWP), meteorological services around the globe have developed ensemble prediction systems (EPS) based on high-resolution NWP systems. With non-hydrostatic model dynamics and without parameterization of deep moist convection, high-resolution NWP models are able to describe convective processes in more detail and provide more realistic mesoscale structures. However, precipitation forecasts are still affected by displacement errors, systematic biases and fast error growth on small scales. Probabilistic guidance can be achieved from an ensemble setup which accounts for model error and for uncertainty in initial and boundary conditions. The German Meteorological Service (Deutscher Wetterdienst, DWD) provides such an ensemble system based on the German-focused limited-area model COSMO-DE. With a horizontal grid spacing of 2.8 km, COSMO-DE is the convection-permitting high-resolution part of the operational model chain at DWD. The COSMO-DE-EPS consists of 20 realizations of COSMO-DE, driven by initial and boundary conditions derived from 4 global models and 5 perturbations of model physics. Ensemble systems like COSMO-DE-EPS are often limited in ensemble size due to the immense computational costs. As a consequence, they can be biased and exhibit insufficient ensemble spread, and probabilistic forecasts may not be well calibrated. In this study, probabilistic quantitative precipitation forecasts are derived from COSMO-DE-EPS and evaluated at more than 1000 rain gauges located all over Germany. COSMO-DE-EPS is a frequently updated ensemble system, initialized 8 times a day. We use the time-lagged approach to inexpensively increase ensemble spread, which yields more reliable forecasts, especially for extreme precipitation events. Moreover, we will show that statistical

  12. Regional Bias of Satellite Precipitation Estimates

    Science.gov (United States)

    Modrick, T. M.; Georgakakos, K. P.; Spencer, C. R.

    2012-12-01

    Satellite-based estimates of precipitation have improved the spatial availability of precipitation data, particularly for regions with limited gauge networks due to limited accessibility or infrastructure. Understanding the quality and reliability of satellite precipitation estimates is important, especially when the estimates are utilized for real-time hydrologic forecasting and for fast-responding phenomena. In partnership with the World Meteorological Organization (WMO), the U.S. Agency for International Development (USAID) and the National Oceanic and Atmospheric Administration (NOAA), the Hydrologic Research Center has begun implementing real-time flash flood warning systems for diverse regions around the world. As part of this effort, the bias characteristics of satellite precipitation have been examined in these various regions, which include portions of Southeastern Asia, Southeastern Europe, the Middle East, Central America, and the southern half of the African continent. The work has focused on the Global Hydro-Estimator (GHE) precipitation product from NOAA/NESDIS; these real-time systems utilize the GHE given its low latency. This presentation focuses on the characterization of precipitation bias as compared to in-situ gauge records, and on the regional variations or similarities. Additional analysis is currently underway considering regional bias for other satellite precipitation products (e.g., CMORPH) for comparison with the GHE results.

  13. Assessment of Quantitative Precipitation Forecasts from Operational NWP Models (Invited)

    Science.gov (United States)

    Sapiano, M. R.

    2010-12-01

    Previous work has shown that satellite and numerical model estimates of precipitation have complementary strengths, with satellites having greater skill at detecting convective precipitation events and model estimates having greater skill at detecting stratiform precipitation. This is due in part to the challenges of retrieving stratiform precipitation from satellites and the difficulty of resolving sub-grid-scale processes in models. These complementary strengths can be exploited to obtain new merged satellite/model datasets, and several such datasets have been constructed using reanalysis data. Whilst reanalysis data are stable in a climate sense, they also have relatively coarse resolution compared to the satellite estimates (many of which are now commonly available at quarter-degree resolution) and they necessarily use fixed forecast systems that are not state-of-the-art. An alternative to reanalysis data is to use operational Numerical Weather Prediction (NWP) model estimates, which routinely produce precipitation at higher resolution and using the most modern techniques. Such estimates have not been combined with satellite precipitation, and their relative skill has not been sufficiently assessed beyond model validation. The aim of this work is to assess the information content of the models relative to satellite estimates, with the goal of improving techniques for merging these data types. To that end, several operational NWP precipitation forecasts have been compared to satellite and in situ data and their relative skill in forecasting precipitation has been assessed. In particular, the relationship between precipitation forecast skill and other model variables is explored, to see whether those variables can be used to estimate the skill of the model at a particular time. Such relationships would provide a basis for determining the weights and errors of any merged products.

  14. Bayesian Processor of Output for Probabilistic Quantitative Precipitation Forecasting

    Science.gov (United States)

    Krzysztofowicz, R.; Maranzano, C. J.

    2006-05-01

    The Bayesian Processor of Output (BPO) is a new, theoretically based technique for probabilistic forecasting of weather variates. It processes output from a numerical weather prediction (NWP) model and optimally fuses it with climatic data in order to quantify uncertainty about a predictand. The BPO is being tested by producing Probabilistic Quantitative Precipitation Forecasts (PQPFs) for a set of climatically diverse stations in the contiguous U.S. For each station, the PQPFs are produced for the same 6-h, 12-h, and 24-h periods up to 84 h ahead for which operational forecasts are produced by the AVN-MOS (the Model Output Statistics technique applied to output fields from the Global Spectral Model run under the code name AVN). The inputs to the BPO are estimated as follows. The prior distribution is estimated from a (relatively long) climatic sample of the predictand; this sample is retrieved from the archives of the National Climatic Data Center. The family of likelihood functions is estimated from a (relatively short) joint sample of the predictor vector and the predictand; this sample is retrieved from the same archive that the Meteorological Development Laboratory of the National Weather Service utilized to develop the AVN-MOS system. This talk gives a tutorial introduction to the principles and procedures behind the BPO, and highlights some results from the testing: a numerical example of the estimation of the BPO, and a comparative verification of the BPO forecasts against the MOS forecasts. It concludes with a list of demonstrated attributes of the BPO (vis-à-vis the MOS): more parsimonious definitions of predictors, more efficient extraction of predictive information, better representation of the distribution function of the predictand, and equal or better performance (in terms of calibration and informativeness).
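    The fusion of a climatic prior with model output can be illustrated, in drastically simplified form, by a conjugate normal-normal Bayes update. The real BPO handles non-Gaussian precipitation variates via distributional transforms; the snippet below is only a toy analogue with illustrative names:

```python
def gaussian_update(prior_mean, prior_var, model_value, model_err_var):
    """Conjugate normal-normal Bayes update: fuse a climatic prior with a
    model-based predictor (a toy stand-in for the BPO's optimal fusion).
    Returns the posterior mean and variance for the predictand."""
    w = prior_var / (prior_var + model_err_var)   # weight on the model signal
    post_mean = prior_mean + w * (model_value - prior_mean)
    post_var = prior_var * model_err_var / (prior_var + model_err_var)
    return post_mean, post_var
```

    Note the posterior variance is always smaller than either input variance: the model adds information to climatology, never removes it, which is the essence of the "optimal fusion" claim above.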

  15. How Well Can We Estimate Error Variance of Satellite Precipitation Data Around the World?

    Science.gov (United States)

    Gebregiorgis, A. S.; Hossain, F.

    2014-12-01

    The traditional approach of measuring precipitation by placing a probe on the ground will likely never be adequate or affordable in most parts of the world. Fortunately, satellites today provide a continuous global bird's-eye view at any given location. However, the usefulness of such precipitation products for hydrological applications depends on their error characteristics. Thus, providing error information associated with existing satellite precipitation estimates is crucial to advancing applications in hydrologic modeling. In this study, we present a method of estimating satellite precipitation error variance using a regression model for three satellite precipitation products (3B42RT, CMORPH, and PERSIANN-CCS), based on easily available geophysical features and the satellite precipitation rate. The goal of this work is to explore how well the method works around the world in diverse geophysical settings. Topography, climate, and season are considered the governing factors used to stratify the satellite precipitation uncertainty and to fit a nonlinear regression equation as a function of satellite precipitation rate. The error variance models were tested over the USA, Asia, the Middle East, and the Mediterranean region. A rain-gauge-based precipitation product was used to validate the error variances of the satellite precipitation products. Our study attests that transferring model estimators (which help to estimate the error variance) from one region to another is practically possible by leveraging similarities in geophysical features. Therefore, a quantitative picture of satellite precipitation error over ungauged regions can be discerned even in the absence of ground truth data.
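    The core regression step can be sketched as a power-law fit of error variance against satellite rain rate, estimated by least squares in log-log space. This omits the topography, climate, and season stratification described above; the names are illustrative:

```python
import math

def fit_power_law(rates, variances):
    """Fit var = a * R**b by ordinary least squares in log-log space, a
    sketch of modelling satellite QPE error variance as a nonlinear
    function of rain rate (the paper adds geophysical predictors)."""
    xs = [math.log(r) for r in rates]
    ys = [math.log(v) for v in variances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope b and intercept log(a) of the log-log regression line
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b
```

    Once `a` and `b` are fitted in a gauged region with similar geophysical features, the error variance of an ungauged pixel follows from its rain rate alone, which is the transferability argument made above.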

  16. Precipitation sensitivity to warming estimated from long island records

    Science.gov (United States)

    Polson, D.; Hegerl, G. C.; Solomon, S.

    2016-07-01

    Some of the most damaging impacts of climate change are a consequence of changes to the global water cycle. Atmospheric warming causes the water cycle to intensify, increasing both atmospheric water vapor concentrations and global precipitation and enhancing existing patterns of precipitation minus evaporation (P - E). This relationship between temperature and precipitation makes understanding how precipitation has changed with global temperature in the past crucial for projecting changes under future warming. In situ observations cannot readily constrain global precipitation sensitivity to temperature (dP/dT), as land precipitation changes are affected by water limitation. Satellite observations of precipitation over ocean are only available after 1979, but studies based on them suggest a precipitation sensitivity over wet tropical (30°N-30°S) oceans that exceeds the Clausius-Clapeyron value. Here, we determine precipitation sensitivity for the first time using longer (1930-2005), island-based in situ observations to estimate dP/dT. The records show a robust pattern of increasing precipitation in the tropics and decreasing precipitation in the subtropics, as predicted from physical arguments, and heavy precipitation shows a stronger sensitivity than mean precipitation over many islands. The pattern and magnitude of the island-based dP/dT agree with climate models when the models are masked to island locations, supporting model predictions of future changes.

  17. Evaluation of extreme precipitation estimates from TRMM in Angola

    Science.gov (United States)

    Pombo, Sandra; de Oliveira, Rodrigo Proença

    2015-04-01

    In situ ground observation of precipitation is difficult in vast and sparsely populated areas with poor road networks. This paper examines the use of remote sensors installed on satellites and evaluates the accuracy of TRMM 3B42 annual maximum daily precipitation estimates in Angola, in southwestern Africa, a region where ground monitoring networks are generally sparse. TRMM 3B42 estimates of annual maximum daily precipitation are compared to ground observation data from 159 locations. As a direct comparison between the two datasets for a common specific period and set of sites is not possible, a statistical approach was adopted to test the hypothesis that the TRMM 3B42 estimates and the ground monitoring records exhibit similar statistical characteristics. The study shows that the annual maximum daily precipitation estimates obtained from TRMM 3B42 slightly underestimate the quantiles obtained from the in situ observations. The use of remote sensing products to estimate extreme precipitation values for engineering design purposes is, however, promising. A maximum daily precipitation map for a return period of 20 years was computed; in the future, as the length of the remote sensing data series increases, it may become possible to estimate annual maximum daily precipitation exclusively from these datasets for longer return periods. The paper also presents maps of the PdT/PDT ratios, where PdT is the annual maximum precipitation for a duration d and a return period of T years, and PDT is the annual maximum daily precipitation for a return period of T years. In conjunction with these maps it is possible to estimate the maximum precipitation for durations between 3 h and 5 days.

  18. Study of method for synthetic precipitation data for ungauged sites using quantitative precipitation model

    Science.gov (United States)

    Bae, Hyo-Jun; Oh, Jai-Ho

    2017-08-01

    A method was developed to estimate a synthetic precipitation record for ungauged sites using irregular coarse observations. The proposed synthetic precipitation data were produced at ultrahigh hourly resolution on a regular 1 × 1 km grid. The proposed method was used to analyze selected real-time observational data collected in South Korea from 2010 to the end of 2014. The observed precipitation data were measured using the Automatic Weather System and the Automated Synoptic Observing System. The principal objective of the proposed method was to estimate the additional effects of orography on precipitation introduced by the ultrahigh-resolution (1 × 1 km) topography provided by a digital elevation model. The Global Forecast System analysis of the National Centers for Environmental Prediction was used for the upper-atmospheric conditions necessary for estimating the orographic effects. Precipitation data from 48 of the more than 600 observation sites used in the study, which matched the grid points of the synthetic data, were not included in the synthetic data estimation. Instead, these data were used to evaluate the proposed method by direct comparison with the real observations at these sites. A bias score was computed by comparing the synthetic precipitation data with the observations. In this comparison, the numbers of Hit, False, Miss, and Correct results for 2010-2014 were 74738, 25778, 7544, and 367981, respectively. For the Hit cases, the bias score was 1.22 and the correlation coefficient was 0.74. The means of the differences between the synthetic data and the observations were 0.3, -3.9, -14.4, and -34.9 mm h-1 and the root mean square errors (RMSEs) were 2.7, 8.3, 19.3, and 39.6 mm h-1 for the categories of 0.5-10.0, 10.0-30.0, 30.0-50.0, and 50.0-100.0 mm h-1, respectively. In addition, in each range, the 60% spread of differences between the synthetic precipitation data and the observation data was -1.5 to +1.5, -5.0 to +5.0, -17.0 to +17.0, and -33.0 to +33
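    The reported bias score can be reproduced from the contingency counts quoted above, using the standard frequency-bias definition (forecast-yes events divided by observed-yes events):

```python
def bias_score(hits, false_alarms, misses):
    """Frequency bias of a yes/no precipitation comparison:
    (H + F) / (H + M), i.e. synthetic-rain count over observed-rain count."""
    return (hits + false_alarms) / (hits + misses)

# The counts quoted in the abstract reproduce the reported score of 1.22:
b = bias_score(74738, 25778, 7544)
```

    A value above 1 means the synthetic record produces rain more often than the gauges observe it.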

  19. Simple and approximate estimations of future precipitation return values

    Science.gov (United States)

    Benestad, Rasmus E.; Parding, Kajsa M.; Mezghani, Abdelkader; Dyrrdal, Anita V.

    2017-07-01

    We present estimates of future 20-year return values for 24-h precipitation based on multi-model ensembles of temperature projections and a crude method for quantifying how warmer conditions may influence precipitation intensity. Our results suggest an increase of as much as 40-50% projected for 2100 for a number of locations in Europe, assuming the high Representative Concentration Pathway (RCP) 8.5 emission scenario. The new strategy was based on combining physical understanding with the limited information available, and it utilised the covariance between the mean seasonal variations in precipitation intensity and the North Atlantic saturation vapour pressure. Rather than estimating the expected values and interannual variability, we tried to estimate an upper bound for the response of precipitation intensity, based on the assumption that the seasonal variations in precipitation intensity are caused by the seasonal variations in temperature. Return values were subsequently derived from the estimated precipitation intensity through a simple and approximate scheme that combined the 1-year 24-h precipitation return values and downscaled annual wet-day mean precipitation for a 20-year event. The latter was based on the 95th percentile of a multi-model ensemble spread of downscaled climate model results. We found geographical variations in the shape of the seasonal cycle of the wet-day mean precipitation, which suggest that different rain-producing mechanisms dominate in different regions. These differences indicate that the simple method used here to estimate the response of precipitation intensity to temperature was more appropriate for convective precipitation than for orographic rainfall.
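    As a rough illustration of this kind of temperature scaling, here is a hedged sketch using the Magnus approximation to saturation vapour pressure; the helper names and example numbers are ours, not the authors' actual regression:

    ```python
    import math

    def saturation_vapour_pressure(t_celsius):
        """Magnus approximation to saturation vapour pressure (hPa)."""
        return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

    def scaled_intensity(intensity_now, t_now, t_future):
        """Upper-bound estimate: scale precipitation intensity by the ratio
        of saturation vapour pressures (hypothetical helper, illustrating
        the scaling idea rather than the paper's scheme)."""
        return (intensity_now * saturation_vapour_pressure(t_future)
                / saturation_vapour_pressure(t_now))

    # Example: 3 K of warming scales a 10 mm/day intensity up by roughly 20%.
    print(scaled_intensity(10.0, 10.0, 13.0))
    ```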

  20. A New Method for Near Real Time Precipitation Estimates Using a Derived Statistical Relationship between Precipitable Water Vapor and Precipitation

    Science.gov (United States)

    Roman, J.

    2015-12-01

    The IPCC 5th Assessment found that the predicted warming of 1°C would increase the risk of extreme events such as heat waves, droughts, and floods. Weather extremes like floods have shown society's vulnerability and susceptibility to extreme weather events, through impacts such as disruption of food production and water supply, health effects, and damage to infrastructure. This paper examines a new way of producing near-real-time forecasts of precipitation. A 10-year statistical climatological relationship was derived between precipitable water vapor (PWV) and precipitation by using the NASA Atmospheric Infrared Sounder (AIRS) daily gridded PWV product and the NASA Tropical Rainfall Measuring Mission daily gridded precipitation total. Forecasting precipitation estimates in real time is vital for flood monitoring and disaster management. Near-real-time PWV observations from AIRS on Aqua are available through the Goddard Earth Sciences Data and Information Services Center. In addition, PWV observations are available through direct broadcast from the NASA Suomi-NPP ATMS/CrIS instrument, the operational follow-on to AIRS. The derived climatological relationship can be applied to create precipitation estimates in near real time by utilizing the direct-broadcast capabilities currently available in the CONUS region. The application of this relationship will be characterized through case studies using near-real-time NASA AIRS Science Team v6 PWV products and ground-based SuomiNet GPS to estimate the current precipitation potential, i.e., the maximum amount of precipitation that can occur given the available moisture. Furthermore, the potential contribution of using the direct broadcast of the NUCAPS ATMS/CrIS PWV products will be demonstrated. The analysis will highlight the advantages of applying this relationship in near real time for flash flood monitoring and risk management. Relevance to the NWS River Forecast Centers will be discussed.
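    A hypothetical sketch of deriving a linear PWV-precipitation relationship by least squares; the values below are synthetic stand-ins for the AIRS PWV and TRMM precipitation grids used in the study:

    ```python
    # Synthetic PWV (mm) and matched daily precipitation (mm/day) samples.
    pwv = [10, 15, 20, 25, 30, 35, 40, 45]
    precip = [0.1, 0.4, 1.1, 2.0, 3.2, 4.9, 6.8, 9.0]

    def fit_line(xs, ys):
        """Ordinary least-squares fit, returning (slope, intercept)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    slope, intercept = fit_line(pwv, precip)

    def precipitation_potential(pwv_obs):
        """Precipitation potential implied by an observed PWV value."""
        return slope * pwv_obs + intercept

    print(round(precipitation_potential(38), 2))
    ```

    In the real application the fit would be redone per grid cell from a decade of collocated observations rather than from a single synthetic sample.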

  1. A strategy for merging objective estimates of global daily precipitation from gauge observations, satellite estimates, and numerical predictions

    Science.gov (United States)

    Nie, Suping; Wu, Tongwen; Luo, Yong; Deng, Xueliang; Shi, Xueli; Wang, Zaizhi; Liu, Xiangwen; Huang, Jianbin

    2016-07-01

    This paper describes a strategy for merging daily precipitation information from gauge observations, satellite estimates (SEs), and numerical predictions at the global scale. The strategy is designed to remove systematic bias and random error from each individual daily precipitation source to produce a better gridded global daily precipitation product through three steps. First, a cumulative distribution function matching procedure is performed to remove systematic bias over gauge-located land areas. Second, the overall biases in SEs and model predictions (MPs) over ocean areas are corrected using a rescaling strategy based on monthly precipitation. Third, an optimal interpolation (OI)-based merging scheme (referred to as the HL-OI scheme) is used to combine unbiased gauge observations, SEs, and MPs to reduce the random error of each source and to produce a gauge-satellite-model merged daily precipitation analysis, called BMEP-d (Beijing Climate Center Merged Estimation of Precipitation with daily resolution), with complete global coverage. The BMEP-d data from a four-year period (2011-14) demonstrate the ability of the merging strategy to provide global daily precipitation of substantially improved quality. Benefiting from the quantitative error estimates of the HL-OI scheme, higher-quality source data receive greater weights during the merging process. The BMEP-d data exhibit higher consistency with the satellite and gauge source data at middle and low latitudes, and with the model source data at high latitudes. Overall, independent validations against GPCP-1DD (GPCP one-degree daily) show that the consistency between BMEP-d and GPCP-1DD is higher than that of each source dataset in terms of spatial pattern, temporal variability, probability distribution, and statistical precipitation events.
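    The first step, CDF matching, can be sketched as an empirical quantile mapping: each satellite value is replaced by the gauge value at the same rank. This is a minimal illustration with made-up, equally sized samples, not the BMEP-d implementation:

    ```python
    from bisect import bisect_right

    def cdf_match(value, satellite_sorted, gauge_sorted):
        """Map a satellite value onto the gauge distribution by rank
        (empirical quantile matching over equally sized sorted samples)."""
        rank = bisect_right(satellite_sorted, value) - 1
        rank = max(0, min(rank, len(gauge_sorted) - 1))
        return gauge_sorted[rank]

    satellite = sorted([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])  # biased high
    gauge = sorted([0.0, 0.3, 0.6, 1.2, 2.5, 5.0])

    print(cdf_match(4.0, satellite, gauge))  # → 2.5
    ```

    A satellite estimate of 4.0 sits at the fifth rank of its sample, so it is mapped to the fifth-ranked gauge value, removing the systematic wet bias.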

  2. Yesterday's Japan: A system of flood risk estimation over Japan with remote-sensing precipitation data

    Science.gov (United States)

    Kanae, S.; Seto, S.; Yoshimura, K.; Oki, T.

    2008-12-01

    A new river discharge prediction and hindcast system covering all of Japan has been developed in order to issue flood risk alerts. It utilizes the Japan Meteorological Agency's mesoscale model outputs and remote-sensing precipitation data. A statistical approach that compensates for the bias and uncertainty of models is proposed for interpreting the simulated river discharge as a flood risk. A 29-year simulation was implemented to estimate the parameters of the Gumbel distribution for the probability of extreme discharge, and the estimated discharge probability index (DPI) showed good agreement with that estimated from observations. Notably, high DPI in the simulation corresponded to actual flood damage records, indicating that real-time simulation of the DPI could provide reasonable flood warnings. A method to overcome the lack of sufficiently long simulation data, using a pre-existing long-term simulation to estimate the statistical parameters, is also proposed. A preliminary flood risk prediction using operational weather forecast data for 2003 and 2004 gave results similar to those of the 29-year simulation for the Typhoon T0423 event in October 2004, demonstrating the transferability of the technique to real-time prediction. In addition, the usefulness of satellite precipitation data for flood estimation was evaluated via hindcasts conducted with several satellite precipitation datasets. The GSMaP product can detect heavy precipitation events, but floods were not well simulated in many cases because of GSMaP's underestimation. The GSMaP product adjusted using monthly, 1-degree rain gauge information can detect flood events as well as hourly rain gauge observations can. A further quantitative issue is also discussed: when remote-sensing-based precipitation data are used as input for hindcasts, the precipitation amount is underestimated. Efforts toward improvement will be shown.
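    The Gumbel step behind the discharge probability index can be sketched as follows; the annual maxima are synthetic, and a method-of-moments fit is used for brevity (the record does not specify the estimator used):

    ```python
    import math
    from statistics import mean, stdev

    # Synthetic annual maximum discharges (m^3/s), standing in for the
    # 29-year simulation described above.
    annual_maxima = [820, 950, 700, 1200, 880, 1010, 760, 1400, 930, 1100]

    # Method-of-moments Gumbel fit.
    m, s = mean(annual_maxima), stdev(annual_maxima)
    beta = s * math.sqrt(6) / math.pi      # scale parameter
    mu = m - 0.5772 * beta                 # location (Euler-Mascheroni const.)

    def gumbel_cdf(x):
        """Non-exceedance probability of discharge x."""
        return math.exp(-math.exp(-(x - mu) / beta))

    def return_period(x):
        """Return period (years) of discharge x under the fitted Gumbel."""
        return 1.0 / (1.0 - gumbel_cdf(x))

    print(round(return_period(1400), 1))
    ```

    A simulated discharge can then be expressed as its non-exceedance probability (or return period), which is the essence of interpreting discharge as a flood risk index.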

  3. Application of quantitative precipitation forecasting and precipitation ensemble prediction for hydrological forecasting

    OpenAIRE

    Tao, P.; Tie-Yuan, S.; Zhi-Yuan, Y.; Jun-Chao, W.

    2015-01-01

    The precipitation in the forecast period influences flood forecasting precision, due to the uncertainty of the input to the hydrological model. Taking the ZhangHe basin as an example, the research adopts the precipitation forecast and ensemble precipitation forecast products of the AREM model, uses the Xinanjiang hydrological model, and tests the flood forecasts. The results show that the flood forecast result can be clearly improved when considering precipitation during the forecast period....

  4. Application of quantitative precipitation forecasting and precipitation ensemble prediction for hydrological forecasting

    Directory of Open Access Journals (Sweden)

    P. Tao

    2015-05-01

    Full Text Available The precipitation in the forecast period influences flood forecasting precision, due to the uncertainty of the input to the hydrological model. Taking the ZhangHe basin as an example, the research adopts the precipitation forecast and ensemble precipitation forecast products of the AREM model, uses the Xinanjiang hydrological model, and tests the flood forecasts. The results show that the flood forecast result can be clearly improved when considering precipitation during the forecast period. Hydrological forecasts based on ensemble precipitation prediction provide better hydrological forecast information, better satisfying the need for risk information in flood prevention and disaster reduction, and have broad development opportunities.

  5. Probabilistic Quantitative Precipitation Forecasting Using Ensemble Model Output Statistics

    CERN Document Server

    Scheuerer, Michael

    2013-01-01

    Statistical post-processing of dynamical forecast ensembles is an essential component of weather forecasting. In this article, we present a post-processing method that generates full predictive probability distributions for precipitation accumulations based on ensemble model output statistics (EMOS). We model precipitation amounts by a generalized extreme value distribution that is left-censored at zero. This distribution permits modelling precipitation on the original scale without prior transformation of the data. A closed form expression for its continuous rank probability score can be derived and permits computationally efficient model fitting. We discuss an extension of our approach that incorporates further statistics characterizing the spatial variability of precipitation amounts in the vicinity of the location of interest. The proposed EMOS method is applied to daily 18-h forecasts of 6-h accumulated precipitation over Germany in 2011 using the COSMO-DE ensemble prediction system operated by the Germa...
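    The left-censoring idea can be illustrated directly: any probability mass the GEV places at or below zero is read as the probability of no precipitation. A hedged sketch with illustrative, unfitted parameters (not EMOS coefficients from the paper):

    ```python
    import math

    def gev_cdf(x, mu, sigma, xi):
        """CDF of the generalized extreme value distribution (xi != 0 branch)."""
        t = 1.0 + xi * (x - mu) / sigma
        if t <= 0:
            return 0.0 if xi > 0 else 1.0
        return math.exp(-t ** (-1.0 / xi))

    def prob_dry(mu, sigma, xi):
        """Probability of zero precipitation under left-censoring at zero."""
        return gev_cdf(0.0, mu, sigma, xi)

    def prob_exceed(threshold, mu, sigma, xi):
        """Probability that accumulated precipitation exceeds a threshold."""
        return 1.0 - gev_cdf(threshold, mu, sigma, xi)

    # Illustrative parameters: location 0.5 mm, scale 1.2 mm, shape 0.2.
    print(round(prob_dry(0.5, 1.2, 0.2), 3),
          round(prob_exceed(5.0, 0.5, 1.2, 0.2), 3))  # → 0.213 0.059
    ```

    Censoring at zero is what lets the GEV model precipitation on the original scale: the distribution is continuous for positive amounts while still assigning a finite probability to dry outcomes.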

  6. Application of Multi-Scale Tracking Radar Echoes Scheme in Quantitative Precipitation Nowcasting

    Institute of Scientific and Technical Information of China (English)

    WANG Gaili; WONG Waikin; LIU Liping; WANG Hongyan

    2013-01-01

    A new radar echo tracking algorithm known as multi-scale tracking radar echoes by cross-correlation (MTREC) was developed in this study to analyze movements of radar echoes at different spatial scales. Movement of radar echoes, particularly associated with convective storms, exhibits different characteristics at various spatial scales as a result of complex interactions among the meteorological systems leading to the formation of convective storms. For the null echo region, the usual correlation technique produces zero or very small motion vectors. To mitigate these constraints, MTREC uses the tracking radar echoes by correlation (TREC) technique with a large "box" to determine the systematic movement driven by the steering wind, and applies the TREC technique with a small "box" to estimate small-scale internal motion vectors. Eventually, the MTREC vectors are obtained by synthesizing the systematic motion and the small-scale internal motion. Performance of the MTREC technique was compared with the TREC technique using case studies: the Khanun typhoon on 11 September 2005, observed by Wenzhou radar, and a squall-line system on 23 June 2011, detected by Beijing radar. The results demonstrate that more spatially smoothed and continuous vector fields can be generated by the MTREC technique, which leads to improvements in tracking the entire radar reflectivity pattern. The new multi-scale tracking scheme was applied to study its impact on the performance of quantitative precipitation nowcasting. The location and intensity of heavy precipitation at a 1-h lead time were more consistent with quantitative precipitation estimates using radar and rain gauges.
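    The core TREC operation, picking the displacement of a "box" that maximizes cross-correlation between two successive reflectivity fields, can be sketched in one dimension; the fields and helper names are illustrative only:

    ```python
    def correlation(a, b):
        """Pearson correlation of two equal-length sequences."""
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb) if va and vb else 0.0

    def best_shift(field_t0, field_t1, box, max_shift):
        """Displacement (grid cells) maximizing correlation of the box."""
        template = field_t0[box[0]:box[1]]
        scores = {}
        for s in range(-max_shift, max_shift + 1):
            lo, hi = box[0] + s, box[1] + s
            if 0 <= lo and hi <= len(field_t1):
                scores[s] = correlation(template, field_t1[lo:hi])
        return max(scores, key=scores.get)

    echo_t0 = [0, 0, 5, 20, 35, 20, 5, 0, 0, 0, 0, 0]
    echo_t1 = [0, 0, 0, 0, 5, 20, 35, 20, 5, 0, 0, 0]  # echo moved 2 cells

    print(best_shift(echo_t0, echo_t1, box=(2, 7), max_shift=3))  # → 2
    ```

    MTREC's multi-scale idea amounts to running this search with a large box to capture the steering-wind motion and again with small boxes for the internal motion, then combining the two vector fields.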

  7. Estimation of the characteristic energy of electron precipitation

    Directory of Open Access Journals (Sweden)

    C. F. del Pozo

    Full Text Available Data from simultaneous observations (on 13 February 1996, 9 November 1998, and 12 February 1999) with the IRIS, DASI and EISCAT systems are employed in the study of the energy distribution of the electron precipitation during substorm activity. The estimation of the characteristic energy of the electron precipitation over the common field of view of IRIS and DASI is discussed. In particular, we look closely at the physical basis of the correspondence between the characteristic energy, the flux-averaged energy, as defined below, and the logarithm of the ratio of the green-light intensity to the square of absorption. This study expands and corrects results presented in the paper by Kosch et al. (2001). It is noticed, moreover, that acceleration associated with diffusion processes in the magnetosphere long before precipitation may be controlling the shape of the energy spectrum. We propose and test a "mixed" distribution for the energy-flux spectrum, exponential at the lower energies and Maxwellian or modified power-law at the higher energies, with a threshold energy separating these two regimes. The energy-flux spectrum at Tromsø, in the 1–320 keV range, is derived from EISCAT electron density profiles in the 70–140 km altitude range and is applied in the "calibration" of the optical intensity and absorption distributions, in order to extrapolate the flux and characteristic energy maps.

    Key words. Ionosphere (auroral ionosphere; particle precipitation; particle acceleration)

  8. Quantitative evaluation of precipitation gauge bird deterrent mechanisms

    Science.gov (United States)

    Cobos, Doug; Taysom, Nathan

    2017-04-01

    "Contamination" from avian activity is a constant contributor to inaccurate measurements and often complete failure in precipitation gauges. Various bird deterrent schemes have been deployed both by rain gauge manufacturers and by individual research groups, but few data exist regarding the effectiveness of the deterrent mechanisms outside of the anecdotal claims that "none of them work well." We have recently re-purposed a commercial trail camera, commonly used to record images of game animals, to quantify the effectiveness of bird deterrent devices on our ATMOS 31 precipitation gauge. Our initial attempts using a 3-D printed ring and spike design actually made the precipitation gauge more attractive to birds, but subsequent iterations have substantially improved bird deterrent effectiveness.

  9. Combination of radar and daily precipitation data to estimate meaningful sub-daily point precipitation extremes

    Science.gov (United States)

    Bárdossy, András; Pegram, Geoffrey

    2017-01-01

    The use of radar measurements for the space-time estimation of precipitation has for many decades been a central topic in hydrometeorology. In this paper we are interested specifically in daily and sub-daily extreme values of precipitation at gauged or ungauged locations, which are important for design. The purpose of the paper is to develop a methodology for combining daily precipitation observations and radar measurements to estimate sub-daily extremes at point locations. Radar data corrected using precipitation-reflectivity relationships lead to biased estimations of extremes. Different possibilities for correcting systematic errors using the daily observations are investigated. Observed daily gauge amounts are interpolated to unsampled points and subsequently disaggregated using the sub-daily values obtained by the radar. Different corrections based on the spatial variability and the sub-daily entropy of scaled rainfall distributions are used to provide unbiased corrections of short-duration extremes. Additionally, a statistical procedure not based on a day-by-day matching correction is tested. In this last procedure, as we are only interested in rare extremes, low to medium values of rainfall depth were neglected, leaving a small number L of ranked daily maxima in each set per year, whose sum typically comprises about 50% of each annual rainfall total. The sum of these L daily maxima is first interpolated using a kriging procedure. Subsequently this sum is disaggregated to daily values using a nearest-neighbour procedure. The daily sums are then disaggregated using the relative values of the L biggest radar-based days. Of course, the timings of radar and gauge maxima can differ, so the method presented here uses radar for disaggregating daily gauge totals down to 15-min intervals in order to extract the maxima of sub-hourly through to daily rainfall.
The methodologies were tested in South Africa, where an S-band radar operated relatively continuously at
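    The disaggregation step, splitting an interpolated daily gauge total according to radar-based sub-daily fractions, can be sketched as follows (illustrative values, not the South African data):

    ```python
    def disaggregate(daily_gauge_total, radar_subdaily):
        """Split a daily gauge total (mm) in proportion to radar-based
        sub-daily accumulations at the same point."""
        radar_sum = sum(radar_subdaily)
        if radar_sum == 0:                  # radar saw no rain: spread evenly
            n = len(radar_subdaily)
            return [daily_gauge_total / n] * n
        return [daily_gauge_total * r / radar_sum for r in radar_subdaily]

    # 24 mm daily gauge total split over four 6-h radar accumulations:
    print(disaggregate(24.0, [1.0, 5.0, 2.0, 0.0]))  # → [3.0, 15.0, 6.0, 0.0]
    ```

    The gauge fixes the (unbiased) daily amount while the radar supplies only the sub-daily timing, which is the division of labour the abstract describes.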

  10. GPS Estimates of Integrated Precipitable Water Aid Weather Forecasters

    Science.gov (United States)

    Moore, Angelyn W.; Gutman, Seth I.; Holub, Kirk; Bock, Yehuda; Danielson, David; Laber, Jayme; Small, Ivory

    2013-01-01

    Global Positioning System (GPS) meteorology provides enhanced-density, low-latency (30-min resolution) integrated precipitable water (IPW) estimates to NOAA NWS (National Oceanic and Atmospheric Administration National Weather Service) Weather Forecast Offices (WFOs) to provide improved model and satellite data verification capability and more accurate forecasts of extreme weather such as flooding. An early activity of this project was to increase the number of stations contributing to the NOAA Earth System Research Laboratory (ESRL) GPS meteorology observing network in Southern California by about 27 stations. Following this, the Los Angeles/Oxnard and San Diego WFOs began using the enhanced GPS-based IPW measurements provided by ESRL in the 2012 and 2013 monsoon seasons. Forecasters found GPS IPW to be an effective tool for evaluating model performance and for monitoring monsoon development between weather model runs for improved flood forecasting. GPS stations are multi-purpose, and routine processing for position solutions also yields estimates of tropospheric zenith delays, which can be converted into mm-accuracy PWV (precipitable water vapor) using in situ pressure and temperature measurements; this is the basis of GPS meteorology. NOAA ESRL has implemented this concept with a nationwide distribution of more than 300 "GPSMet" stations providing IPW estimates at sub-hourly resolution, currently used in operational weather models in the U.S.
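    The zenith-delay-to-PWV conversion mentioned above is commonly written as PWV = Π × ZWD, with the dimensionless factor Π depending on the weighted mean atmospheric temperature. A hedged sketch with approximate Bevis-style refractivity constants (values rounded, not ESRL's operational coefficients):

    ```python
    # Approximate constants for the zenith wet delay to PWV conversion.
    RHO_W = 1000.0      # density of liquid water, kg/m^3
    R_V = 461.5         # specific gas constant of water vapour, J/(kg K)
    K2_PRIME = 0.221    # refractivity constant, K/Pa
    K3 = 3739.0         # refractivity constant, K^2/Pa

    def pwv_from_zwd(zwd_mm, tm_kelvin):
        """Convert zenith wet delay (mm) to precipitable water vapour (mm)."""
        pi_factor = 1.0e6 / (RHO_W * R_V * (K3 / tm_kelvin + K2_PRIME))
        return pi_factor * zwd_mm

    # A 150 mm wet delay with Tm = 270 K corresponds to ~23 mm of PWV.
    print(round(pwv_from_zwd(150.0, 270.0), 1))  # → 23.1
    ```

    In practice the zenith wet delay is obtained by subtracting a pressure-derived hydrostatic delay from the total zenith delay, and Tm is estimated from surface temperature, which is why the in situ measurements are needed.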

  11. Antecedent precipitation index determined from CST estimates of rainfall

    Science.gov (United States)

    Martin, David W.

    1992-01-01

    This paper deals with an experimental calculation of a satellite-based antecedent precipitation index (API). The index is derived from daily rain images produced from infrared images using an improved version of GSFC's Convective/Stratiform Technique (CST). API is a measure of soil moisture, based on the notion that the amount of moisture in the soil at a given time is related to precipitation at earlier times. Four different CST programs, as well as the Geostationary Operational Environmental Satellite (GOES) Precipitation Index developed by Arkin in 1979, are compared against experimental results for the Mississippi Valley during the month of July. Rain images are shown for the best CST code and the ARK program, and comparisons are made as to the accuracy and detail of the results for the two codes. This project demonstrates the feasibility of running the CST on a synoptic scale. The Mississippi Valley case is well suited to testing the feasibility of monitoring soil moisture by means of CST. Preliminary comparisons of CST and ARK indicate significant differences in estimates of rain amount and distribution.
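    An antecedent precipitation index is typically computed as a recursive decay of past rainfall: yesterday's index is reduced by a recession factor and today's rainfall is added. A minimal sketch; the recession factor k = 0.9 is a common textbook choice, not a value from the paper:

    ```python
    def antecedent_precipitation_index(daily_rain, k=0.9, api0=0.0):
        """Recursive API: api_t = k * api_(t-1) + P_t (all in mm)."""
        api = api0
        series = []
        for p in daily_rain:
            api = k * api + p
            series.append(api)
        return series

    # A dry spell after two wet days: the index decays geometrically.
    rain = [20.0, 10.0, 0.0, 0.0, 0.0]
    print([round(a, 2) for a in antecedent_precipitation_index(rain)])
    ```

    Feeding the daily CST rain images through this recursion, cell by cell, yields the satellite-based soil-moisture proxy the paper describes.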

  12. Interannual Variability of Tropical Precipitation: How Well Do Climate Models Agree With Current Satellite Estimates?

    Science.gov (United States)

    Robertson, Franklin R.; Marshall, Susan; Roads, John; Oglesby, Robert J.; Fitzjarrald, Dan; Goodman, H. Michael (Technical Monitor)

    2001-01-01

    Since the beginning of the World Climate Research Program's Global Precipitation Climatology Project (GPCP), satellite remote sensing of precipitation has made dramatic improvements, particularly for tropical regions. Data from microwave and infrared sensors now form the most critical input to precipitation datasets and can be calibrated with surface gauges so that the strengths of each data source are maximized in some statistically optimal sense. The recent availability of TRMM (Tropical Rainfall Measuring Mission) data has further aided in narrowing uncertainties in rainfall over the tropics and subtropics. Although climate modeling efforts have long relied on space-based precipitation estimates for validation, we are now in a position to make more quantitative assessments of model performance, particularly in tropical regions. An integration of the CCM3 using observed SSTs as a lower boundary condition is used to examine how well this model responds to ENSO forcing in terms of anomalous precipitation. An integration of the NCEP spectral model used for the Reanalysis-H effort is also examined. This integration is run with specified SSTs but with no data assimilation. Our analysis focuses on two aspects of interannual variability. First are the spatial anomalies that are indicative of dislocations in the Hadley and Walker circulations. Second, we consider the ability of models to replicate the observed increases in oceanic precipitation noted in satellite observations for large ENSO events. Finally, we consider a slab ocean version of the CCM3 model with prescribed ocean heat transports that mimic upwelling anomalies but still allow the surface energy balance to be predicted. This less restrictive experiment is used to understand why model experiments with specified SSTs seem to have noticeably less interannual variability in precipitation than do the satellite observations.

  13. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    Science.gov (United States)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, lack of consistent methods, and, generally, their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement, via a generic audit and one in-depth site-specific review, as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially

  14. Quantitative analysis of Bordeaux red wine precipitates by solid-state NMR: Role of tartrates and polyphenols.

    Science.gov (United States)

    Prakash, Shipra; Iturmendi, Nerea; Grelard, Axelle; Moine, Virginie; Dufourc, Erick

    2016-05-15

    Stability of wines is of great importance in oenology. Quantitative estimation of the dark red precipitates formed in Merlot and Cabernet Sauvignon wines from the Bordeaux region, for vintages 2012 and 2013, was performed during the oak barrel ageing process. Precipitates were obtained by placing wine at -4°C or 4°C for 2-6 days and were monitored by periodic sampling over a one-year period. Spectroscopic identification of the main families of components present in the precipitate powder was performed with (13)C solid-state CPMAS NMR and 1D and 2D solution NMR of partially water re-solubilized precipitates. The study revealed that the amount of precipitate obtained depends on vintage, temperature, and grape variety. Major components identified include potassium bitartrate, polyphenols, polysaccharides, organic acids, and free amino acids. No evidence was found for the presence of proteins. The influence of the main compounds found in the precipitates is discussed in relation to wine stability.

  15. Application of evolutionary computation on ensemble forecast of quantitative precipitation

    Science.gov (United States)

    Dufek, Amanda S.; Augusto, Douglas A.; Dias, Pedro L. S.; Barbosa, Helio J. C.

    2017-09-01

    An evolutionary computation algorithm known as genetic programming (GP) has been explored as an alternative tool for improving the ensemble forecast of 24-h accumulated precipitation. Three GP versions and six ensemble languages were applied to several real-world datasets over southern, southeastern, and central Brazil during the rainy period from October to February of 2008-2013. According to the results, the GP algorithms performed better than two traditional statistical techniques, with errors 27-57% lower than the simple ensemble mean and the MASTER super model ensemble system. In addition, the results revealed that the GP algorithms outperformed the best individual forecasts, reaching an improvement of 34-42%. On the other hand, the GP algorithms had a similar performance with respect to each other and to Bayesian model averaging, but the former are far more versatile techniques. Although the results for the six ensemble languages are almost indistinguishable, our most complex linear language turned out to be the best overall proposal. Moreover, some meteorological attributes, including the weather patterns over Brazil, seem to play an important role in the prediction of daily rainfall amount.

  16. Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling

    OpenAIRE

    Hyojin Lee; Kwangmin Kang

    2015-01-01

    Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely, Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data...
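    Kernel-based interpolation of a missing daily value can be sketched with the Epanechnikov kernel, one of the five kernels listed; the distances, values, and bandwidth below are illustrative:

    ```python
    def epanechnikov(u):
        """Epanechnikov kernel on scaled distance u (zero outside |u| < 1)."""
        return 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0

    def kernel_estimate(distances_km, values_mm, bandwidth_km):
        """Kernel-weighted average of neighbouring station values."""
        weights = [epanechnikov(d / bandwidth_km) for d in distances_km]
        total = sum(weights)
        if total == 0:
            raise ValueError("no station inside the kernel bandwidth")
        return sum(w * v for w, v in zip(weights, values_mm)) / total

    # Three neighbours within a 50 km bandwidth of the gap station:
    print(round(kernel_estimate([10, 25, 40], [12.0, 8.0, 3.0], 50.0), 2))  # → 8.99
    ```

    Nearby stations receive larger weights, and stations beyond the bandwidth contribute nothing; swapping in the Quartic, Triweight, Tricube, or Cosine kernel only changes the weight function.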

  17. 3800 Years of Quantitative Precipitation Reconstruction from the Northwest Yucatan Peninsula

    Science.gov (United States)

    Carrillo-Bastos, Alicia; Islebe, Gerald A.; Torrescano-Valle, Nuria

    2013-01-01

    Precipitation over the last 3800 years has been reconstructed using modern pollen calibration and precipitation data. A transfer function was then performed via the linear method of partial least squares. By calculating precipitation anomalies, it is estimated that precipitation deficits were greater than surpluses, reaching 21% and <9%, respectively. The period from 50 BC to 800 AD was the driest of the record. The drought related to the abandonment of the Maya Preclassic period featured a 21% reduction in precipitation, while the drought of the Maya collapse (800 to 860 AD) featured a reduction of 18%. The Medieval Climatic Anomaly was a period of positive phases (3.8–7.6%). The Little Ice Age was a period of climatic variability, with reductions in precipitation but without deficits. PMID:24391940

  18. Uncertainty Estimation of Global Precipitation Measurement through Objective Validation Strategy

    Science.gov (United States)

    KIM, H.; Utsumi, N.; Seto, S.; Oki, T.

    2014-12-01

    Since the Tropical Rainfall Measuring Mission (TRMM) was launched in 1997 as the first satellite mission dedicated to measuring precipitation, the spatiotemporal gaps in precipitation observation have been filled significantly. On February 27th, 2014, the Dual-frequency Precipitation Radar (DPR) satellite was launched as the core observatory of Global Precipitation Measurement (GPM), an international multi-satellite mission aiming to provide a global three-hourly map of rainfall and snowfall. In addition to Ku-band, a Ka-band radar is newly equipped, and their combination is expected to provide higher precision than the precipitation measurement of TRMM/PR. In this study, the GPM level-2 orbit products are evaluated against various precipitation observations, including TRMM/PR, in-situ data, and ground radar. In a preliminary validation over intersecting orbits of DPR and TRMM, the Ku-band measurements of the two satellites show very similar spatial patterns and intensities, and the DPR is capable of capturing a broader range of precipitation intensities than TRMM. Furthermore, we suggest a validation strategy based on an 'objective classification' of background atmospheric mechanisms. The Japanese 55-year Reanalysis (JRA-55) and auxiliary datasets (e.g., tropical cyclone best tracks) are used to objectively determine the types of precipitation. The uncertainty of the abovementioned precipitation products is quantified as their relative differences and characterized for the different precipitation mechanisms. We also discuss how this uncertainty affects the synthesis of TRMM and GPM into a long-term, internally consistent satellite precipitation observation record.

  19. A quantitative phase field model for hydride precipitation in zirconium alloys: Part II. Modeling of temperature dependent hydride precipitation

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Zhihua [The Hong Kong Polytechnic University, Shenzhen Research Institute, Shenzhen (China); PolyU Base (Shenzhen) Limited, Shenzhen (China); Department of Mechanical Engineering, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong (China); Hao, Mingjun [The Hong Kong Polytechnic University, Shenzhen Research Institute, Shenzhen (China); Department of Mechanical Engineering, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong (China); Guo, Xianghua [State Key Laboratory of Explosion and Safety Science, Beijing Institute of Technology, Beijing 100081 (China); Tang, Guoyi [Advanced Materials Institute, Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055 (China); Shi, San-Qiang, E-mail: mmsqshi@polyu.edu.hk [The Hong Kong Polytechnic University, Shenzhen Research Institute, Shenzhen (China); PolyU Base (Shenzhen) Limited, Shenzhen (China); Department of Mechanical Engineering, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong (China)

    2015-04-15

    A quantitative free energy functional developed in Part I (Shi and Xiao, 2014 [1]) was applied to model temperature-dependent δ-hydride precipitation in zirconium in real time and at real length scales. First, the effect of external tensile load on the reorientation of δ-hydrides was calibrated against experimental observations, which provides a modification factor for the strain energy in the free energy formulation. Then, two types of temperature-related problems were investigated. In the first, the effect of temperature transients was studied by cooling the Zr–H system at different cooling rates from high temperature while an external tensile stress was maintained; at the end of the temperature transients, the average hydride size as a function of cooling rate was compared to experimental data. In the second, the effect of temperature gradients was studied in one- and two-dimensional temperature fields under different boundary conditions. The results show that hydride precipitation concentrated in low-temperature regions and eventually led to the formation of hydride blisters in zirconium. A brief discussion on how to implement the hysteresis of hydrogen solid solubility in hydride precipitation and dissolution within the developed phase field scheme is also presented.

  20. Deficiencies in quantitative precipitation forecasts. Sensitivity studies using the COSMO model

    Energy Technology Data Exchange (ETDEWEB)

    Dierer, Silke [Federal Office of Meteorology and Climatology, MeteoSwiss, Zurich (Switzerland); Meteotest, Bern (Switzerland); Arpagaus, Marco [Federal Office of Meteorology and Climatology, MeteoSwiss, Zurich (Switzerland); Seifert, Axel [Deutscher Wetterdienst, Offenbach (Germany); Avgoustoglou, Euripides [Hellenic National Meteorological Service, Hellinikon (Greece); Dumitrache, Rodica [National Meteorological Administration, Bucharest (Romania); Grazzini, Federico [Agenzia Regionale per la Protezione Ambientale Emilia Romagna, Bologna (Italy); Mercogliano, Paola [Italian Aerospace Research Center, Capua (Italy); Milelli, Massimo [Agenzia Regionale per la Protezione Ambientale Piemonte, Torino (Italy); Starosta, Katarzyna [Inst. of Meteorology and Water Management, Warsaw (Poland)

    2009-12-15

    The quantitative precipitation forecast (QPF) of the COSMO model, like that of other models, reveals some deficiencies. The aim of this study is to investigate which physical and numerical schemes have the strongest impact on QPF and, thus, the highest potential for improving it. Test cases are selected that are meant to reflect typical forecast errors in different countries. The 13 test cases fall into two main groups: overestimation of stratiform precipitation (6 cases) and underestimation of convective precipitation (5 cases). 22 sensitivity experiments, predominantly regarding numerical and physical schemes, are performed. The area-averaged 24 h precipitation sums are evaluated. The results show that the strongest impact on QPF is caused by changes of the initial atmospheric humidity and by using the Kain-Fritsch/Bechtold convection scheme instead of the Tiedtke scheme. Both sensitivity experiments change the area-averaged precipitation in the range of 30-35%. This clearly shows that improved simulation of atmospheric water vapour is of utmost importance for better precipitation forecasts. Significant changes are also caused by using the Runge-Kutta time integration scheme instead of the Leapfrog scheme, and by applying a modified warm rain and snow physics scheme or a modified Tiedtke convection scheme. The aforementioned changes result in differences of area-averaged precipitation of roughly 20%. Only for the Greek test cases, which all have a strong influence from the sea, is the heat and moisture exchange between surface and atmosphere of great importance; it can cause changes of up to 20%. (orig.)

  1. Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling

    Directory of Open Access Journals (Sweden)

    Hyojin Lee

    2015-01-01

    Full Text Available Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely, Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares estimation of missing precipitation data through Kth nearest neighborhood (KNN) regression to the five different kernel estimations and their performance in simulating streamflow using the Soil Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher-quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
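
    A kernel estimator of the kind described above can be sketched in a few lines: donor-gauge observations are weighted by the Epanechnikov kernel (one of the five kernels the record lists) as a function of distance. The distance metric, bandwidth, and gauge values below are hypothetical, for illustration only.

```python
import numpy as np

def epanechnikov(u):
    # Epanechnikov kernel: K(u) = 0.75 * (1 - u^2) for |u| <= 1, else 0
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def kernel_estimate(x_target, x_neighbors, y_neighbors, bandwidth):
    """Kernel-weighted (Nadaraya-Watson style) estimate of a missing value
    from neighboring observations, using a 1-D distance metric."""
    w = epanechnikov((np.asarray(x_neighbors, float) - x_target) / bandwidth)
    if w.sum() == 0:
        return float("nan")   # no donor within one bandwidth
    return float(np.dot(w, y_neighbors) / w.sum())

# Estimate a missing daily value at a station from nearby gauges:
distances = np.array([0.5, 1.0, 2.5])   # normalized distances to donor gauges
rain = np.array([10.0, 12.0, 30.0])     # donor observations (mm)
est = kernel_estimate(0.0, distances, rain, bandwidth=2.0)
# the 2.5-distant gauge falls outside the bandwidth and gets zero weight
```

    Note how the compact support of the Epanechnikov kernel automatically excludes distant donors, unlike a Gaussian weight.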

  2. Precipitation Estimation Using Combined Radar/Radiometer Measurements Within the GPM Framework

    Science.gov (United States)

    Hou, Arthur

    2012-01-01

    The Global Precipitation Measurement (GPM) Mission is an international satellite mission specifically designed to unify and advance precipitation measurements from a constellation of research and operational microwave sensors. The GPM mission centers upon the deployment of a Core Observatory in a 65° non-Sun-synchronous orbit to serve as a physics observatory and a transfer standard for intersatellite calibration of constellation radiometers. The GPM Core Observatory will carry a Ku/Ka-band Dual-frequency Precipitation Radar (DPR) and a conical-scanning multi-channel (10-183 GHz) GPM Microwave Imager (GMI). The DPR will be the first dual-frequency radar in space to provide not only measurements of 3-D precipitation structures but also quantitative information on the microphysical properties of precipitating particles needed for improving precipitation retrievals from microwave sensors. The DPR and GMI measurements will together provide a database that relates vertical hydrometeor profiles to multi-frequency microwave radiances over a variety of environmental conditions across the globe. This combined database will be used as a common transfer standard for improving the accuracy and consistency of precipitation retrievals from all constellation radiometers. For global coverage, GPM relies on existing satellite programs and new mission opportunities from a consortium of partners through bilateral agreements with either NASA or JAXA. Each constellation member may have its unique scientific or operational objectives but contributes microwave observations to GPM for the generation and dissemination of unified global precipitation data products. In addition to the DPR and GMI on the Core Observatory, the baseline GPM constellation consists of the following sensors: (1) Special Sensor Microwave Imager/Sounder (SSMIS) instruments on the U.S. Defense Meteorological Satellite Program (DMSP) satellites, (2) the Advanced Microwave Scanning Radiometer-2 (AMSR-2) on the GCOM-W1

  3. Comparing 20 years of precipitation estimates from different sources over the world ocean

    Science.gov (United States)

    Béranger, Karine; Barnier, Bernard; Gulev, Sergei; Crépon, Michel

    2006-06-01

    their common period of 14 years, from 1980 to 1993. Then CMAP is compared to GPCP over the 1988-1995 period and to HOAPS over the 1992-1998 period. Usual diagnostics, like comparison of the precipitation patterns exhibited in the annual climatological means of zonal averages and the global budget, are used to investigate differences between the various precipitation fields. In addition, precipitation rates are spatially integrated over 16 regional boxes, which are representative of the major ocean gyres or large-scale ocean circulation patterns. Seasonal and inter-annual variations are studied over these boxes in terms of time-series anomalies or correlation coefficients. The analysis attempts to characterise differences and biases according to the original source of data (i.e. in situ or satellite, etc.). Qualitative agreement can be observed in all climatologies, which reproduce the major characteristics of the precipitation patterns over the oceans. However, great disagreements occur in terms of quantitative values and regional patterns, especially in regions of high precipitation, although better agreement is generally found in the northern hemisphere. The most significant differences observed between data sets in the mean seasonal cycles and interannual variations are discussed. A major result of the paper, which was not expected a priori, is that differences between data sets are much more dependent upon the ocean region considered than upon the origin of the data sets (in situ vs satellite vs model, etc.). Our analysis did not provide enough objective elements to clearly recommend a given data set as reference or best estimate. However, composite data sets (GPCP, and especially CMAP), because they never appeared to be really “off” when compared to other data sets, may represent the best recent data sets available. CMAP would certainly be our first choice to drive an ocean GCM.

  4. A spatial approach to the modelling and estimation of areal precipitation

    Energy Technology Data Exchange (ETDEWEB)

    Skaugen, T

    1996-12-31

    In hydroelectric power technology it is important to be able to calculate the mean precipitation falling over an area. This doctoral thesis studies how the morphology of rainfall, described by spatial statistical parameters, can be used to improve interpolation and estimation procedures. It attempts to formulate a theory which includes the relations between the size of the catchment and the size of precipitation events in the modelling of areal precipitation. The problem of estimating and modelling areal precipitation can be formulated as that of estimating an inhomogeneously distributed flux of a certain spatial extent measured at points in a randomly placed domain. The information contained in the different morphologies of precipitation types is used to improve estimation procedures for areal precipitation, by interpolation (kriging) or by constructing areal reduction factors. A new approach to precipitation modelling is introduced in which the analysis of the spatial coverage of precipitation at different intensities plays a key role in the formulation of a stochastic model for extreme areal precipitation and in deriving the probability density function of areal precipitation. 127 refs., 30 figs., 13 tabs.

  5. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model to simulate the runoff from a small catchment in Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate the uncertainty of the weather radar rainfall input. The main finding of this work is that the input uncertainty propagates through the urban drainage model with significant effects on the model result. The GLUE methodology is in general a usable way to explore this uncertainty, although the exact width of the prediction bands can be questioned due to the subjective nature of the method. Moreover, the method also gives very useful information about the model and parameter behaviour.
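
    The core mechanics of GLUE can be sketched briefly: parameter sets whose informal likelihood exceeds a threshold are kept as "behavioural", and their simulations are likelihood-weighted to form prediction bands. The threshold, likelihood values, and quantiles below are illustrative assumptions, not the settings used in the record above.

```python
import numpy as np

def glue_bounds(simulations, likelihoods, threshold, quantiles=(0.05, 0.95)):
    """GLUE sketch: retain behavioural simulations (likelihood > threshold),
    normalize their likelihoods into weights, and return weighted prediction
    quantiles at each time step as an (n_times, 2) array of lower/upper bands."""
    sims = np.asarray(simulations, float)        # shape (n_param_sets, n_times)
    L = np.asarray(likelihoods, float)
    keep = L > threshold                         # behavioural subset
    sims, w = sims[keep], L[keep] / L[keep].sum()
    bounds = []
    for t in range(sims.shape[1]):
        order = np.argsort(sims[:, t])           # sort simulated values
        vals = sims[order, t]
        cdf = np.cumsum(w[order])                # weighted empirical CDF
        idx = np.minimum(np.searchsorted(cdf, quantiles), len(vals) - 1)
        bounds.append(vals[idx])
    return np.array(bounds)

# three parameter sets, one time step, equal likelihoods:
bands = glue_bounds([[1.0], [2.0], [3.0]], [1.0, 1.0, 1.0], threshold=0.0)
```

    The subjectivity the record mentions is visible here: the threshold and the choice of likelihood measure directly control the band width.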

  6. Validating quantitative precipitation forecast for the Flood Meteorological Office, Patna region during 2011-2014

    Science.gov (United States)

    Giri, R. K.; Panda, Jagabandhu; Rath, Sudhansu S.; Kumar, Ravindra

    2016-06-01

    In order to issue accurate flood warnings, appropriate quantitative forecasting of precipitation is required. In view of this, the present study validates the quantitative precipitation forecast (QPF) issued during the southwest monsoon season for six river catchments (basins) under the Flood Meteorological Office, Patna region. The forecast is analysed statistically by computing various skill scores for six different precipitation ranges during the years 2011-2014. The analysis of the QPF validation indicates that multi-model ensemble (MME) based forecasting is more reliable in the precipitation ranges of 1-10 and 11-25 mm. However, the reliability decreases for higher ranges of rainfall and also for the lowest range, i.e., below 1 mm. In order to test synoptic-analogue-method-based MME forecasting of QPF during an extreme weather event, a case study of tropical cyclone Phailin is performed. It is found that in the case of extreme events like cyclonic storms, MME forecasting is qualitatively useful for issuing flood warnings, though it may not be reliable for the QPF itself. However, QPF may be improved using satellite and radar products.
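
    Skill scores of the kind used in such validations are computed from a 2×2 contingency table of forecast/observed threshold exceedances. A minimal sketch follows; the threshold and the particular scores shown (POD, FAR, CSI) are illustrative choices, not necessarily the exact scores used in the study above.

```python
def categorical_skill(forecast, observed, threshold):
    """Probability of detection (POD), false alarm ratio (FAR) and critical
    success index (CSI) from a 2x2 contingency table of events at or above
    the given precipitation threshold."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        f_yes, o_yes = f >= threshold, o >= threshold
        if f_yes and o_yes:
            hits += 1
        elif o_yes:
            misses += 1
        elif f_yes:
            false_alarms += 1
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else float("nan")
    return pod, far, csi

# toy 4-day series (mm): one hit, one correct negative, one false alarm, one miss
pod, far, csi = categorical_skill([10.0, 0.0, 5.0, 0.0], [8.0, 0.0, 0.0, 4.0], 1.0)
```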

  7. Investigation and prediction of protein precipitation by polyethylene glycol using quantitative structure-activity relationship models.

    Science.gov (United States)

    Hämmerling, Frank; Ladd Effio, Christopher; Andris, Sebastian; Kittelmann, Jörg; Hubbuch, Jürgen

    2017-01-10

    Precipitation of proteins is considered an effective purification method and has proven its potential to replace costly chromatography processes. Besides salts and polyelectrolytes, polymers such as polyethylene glycol (PEG) are commonly used for precipitation applications under mild conditions. Process development for protein precipitation steps, however, is still based mainly on heuristic approaches and high-throughput experimentation due to a lack of understanding of the underlying mechanisms. In this work we apply quantitative structure-activity relationships (QSARs) to model two parameters, the discontinuity point m* and the β-value, which describe the complete precipitation curve of a protein under defined conditions. The generated QSAR models are sensitive to protein type, pH, and ionic strength. It was found that the discontinuity point m* mainly depends on protein molecular structure properties and electrostatic surface properties, whereas the β-value is influenced by the variance in electrostatics and hydrophobicity on the protein surface. The models for m* and the β-value exhibit a good correlation between observed and predicted data, with a coefficient of determination of R² ≥ 0.90, and hence are able to accurately predict precipitation curves for proteins. The predictive capabilities were demonstrated for a set of combinations of protein type, pH, and ionic strength not included in the generation of the models, and good agreement between predicted and experimental data was achieved.

  8. Evaluating Satellite Products for Precipitation Estimation in Mountain Regions: A Case Study for Nepal

    Directory of Open Access Journals (Sweden)

    Tarendra Lakhankar

    2013-08-01

    Full Text Available Precipitation in mountain regions is often highly variable and poorly observed, limiting abilities to manage water resource challenges. Here, we evaluate remote sensing and ground station-based gridded precipitation products over Nepal against weather station precipitation observations on a monthly timescale. We find that the Tropical Rainfall Measuring Mission (TRMM) 3B-43 precipitation product exhibits little mean bias and reasonable skill in representing precipitation over Nepal. Compared to station observations, the TRMM precipitation product showed an overall Nash-Sutcliffe efficiency of 0.49, which is similar to the skill of the gridded station-based product Asian Precipitation-Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE). The other satellite precipitation products considered (Global Satellite Mapping of Precipitation (GSMaP), the Climate Prediction Center Morphing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS)) were less skillful, as judged by Nash-Sutcliffe efficiency, and, on average, substantially underestimated precipitation compared to station observations, despite their, in some cases, higher nominal spatial resolution compared to TRMM. None of the products fully captured the dependence of mean precipitation on elevation seen in the station observations. Overall, the TRMM product is promising for use in water resources applications.
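
    The Nash-Sutcliffe efficiency used to rank the products above compares the squared error of an estimate against the variance of the observations: 1 is a perfect match, 0 means the estimate is no better than the observed mean, and negative values mean it is worse.

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency:
    NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    sim = np.asarray(sim, float)
    obs = np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [1.0, 2.0, 3.0]
perfect = nash_sutcliffe(obs, obs)            # exact match -> 1.0
mean_only = nash_sutcliffe([2.0, 2.0, 2.0], obs)  # constant mean -> 0.0
```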

  9. Precipitation estimation using L-band and C-band soil moisture retrievals

    Science.gov (United States)

    Koster, Randal D.; Brocca, Luca; Crow, Wade T.; Burgin, Mariko S.; De Lannoy, Gabrielle J. M.

    2016-09-01

    An established methodology for estimating precipitation amounts from satellite-based soil moisture retrievals is applied to L-band products from the Soil Moisture Active Passive (SMAP) and Soil Moisture and Ocean Salinity (SMOS) satellite missions and to a C-band product from the Advanced Scatterometer (ASCAT) mission. The precipitation estimates so obtained are evaluated against in situ (gauge-based) precipitation observations from across the globe. The precipitation estimation skill achieved using the L-band SMAP and SMOS data sets is higher than that obtained with the C-band product, as might be expected given that L-band is sensitive to a thicker layer of soil and thereby provides more information on the response of soil moisture to precipitation. The square of the correlation coefficient between the SMAP-based precipitation estimates and the observations (for aggregations to ˜100 km and 5 days) is on average about 0.6 in areas of high rain gauge density. Satellite missions specifically designed to monitor soil moisture thus do provide significant information on precipitation variability, information that could contribute to efforts in global precipitation estimation.

  10. Quantitative estimation of the impact of precipitation and human activities on runoff change of the Huangfuchuan River Basin

    Institute of Scientific and Technical Information of China (English)

    WANG Suiji; YAN Yunxia; YAN Ming; ZHAO Xiaokun

    2012-01-01

    The runoff of some rivers in the world, especially in arid and semi-arid areas, has decreased remarkably with global or regional climate change and enhanced human activities. The runoff decrease in the arid and semi-arid areas of northern China has brought severe problems for livelihoods and ecology. Revealing the variation characteristics and trends of runoff and their influencing factors has thus become an important scientific issue for drainage basin management. The objective of this study was to analyze the variation trends of the runoff and quantitatively assess the contributions of precipitation and human activities to the runoff change in the Huangfuchuan River Basin, based on measured data from 1960-2008. Two inflection points (turning years), 1979 and 1998, for the accumulative runoff change, and one inflection point, 1979, for the accumulative precipitation change, were identified using accumulative anomaly analysis. Linear relationships between year and accumulative runoff in 1960-1979, 1980-1997 and 1998-2008, and between year and accumulative precipitation in 1960-1979 and 1980-2008, were fitted. A new method, the slope change ratio of accumulative quantity (SCRAQ), is put forward and used in this study to calculate the contributions of different factors to the runoff change. Taking 1960-1979 as the base period, the contribution rates of precipitation and human activities to the decreased runoff were 36.43% and 63.57% in 1980-1997, and 16.81% and 83.19% in 1998-2008, respectively. The results will play an important role in drainage basin management. Moreover, the new SCRAQ method can be applied in the quantitative evaluation of runoff change and the impacts of different factors in river basins of arid and semi-arid areas.
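
    The SCRAQ idea can be expressed compactly: fit slopes to the cumulative runoff and cumulative precipitation series in the base and change periods, then take the ratio of their relative slope changes as the precipitation contribution, with the remainder attributed to human activities. The slope values below are hypothetical, not the Huangfuchuan numbers.

```python
import numpy as np

def slope(years, cumulative_series):
    # least-squares slope of a cumulative series against year
    return np.polyfit(years, cumulative_series, 1)[0]

def scraq_contributions(k_runoff_base, k_runoff_change,
                        k_precip_base, k_precip_change):
    """Slope change ratio of accumulative quantity (SCRAQ) sketch:
    contribution of precipitation (%) = (relative change of the
    cumulative-precipitation slope) / (relative change of the
    cumulative-runoff slope) * 100; human activities get the rest."""
    runoff_ratio = (k_runoff_change - k_runoff_base) / k_runoff_base
    precip_ratio = (k_precip_change - k_precip_base) / k_precip_base
    precip_pct = 100.0 * precip_ratio / runoff_ratio
    return precip_pct, 100.0 - precip_pct

# hypothetical slopes: runoff slope halves (-50%), precipitation slope drops 20%
precip_pct, human_pct = scraq_contributions(2.0, 1.0, 500.0, 400.0)
```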

  11. Daily quantitative precipitation forecasts based on the analogue method: Improvements and application to a French large river basin

    Science.gov (United States)

    Ben Daoud, Aurélien; Sauquet, Eric; Bontron, Guillaume; Obled, Charles; Lang, Michel

    2016-03-01

    This paper presents improvements to a probabilistic quantitative precipitation forecasting method based on analogues, formerly developed for small basins located in south-eastern France. The scope is extended to large basins mainly influenced by frontal systems, with a case study on the Saône river, a large basin in eastern France. For a given target situation, the method searches a historical meteorological archive for the most similar past situations. Precipitation amounts observed during these analogous situations are then collected to derive an empirical predictive distribution function, i.e. a probabilistic estimate of the precipitation amount expected for the target day. The former version of this forecasting method (Bontron, 2004) has been improved by introducing two new variables: temperature, which allows seasonal effects to be taken into account, and vertical velocity, which enables a better characterization of vertical atmospheric motion. The new algorithm is applied first in a perfect-prognosis context (target situations come from a meteorological reanalysis) and then in an operational forecasting context (target situations come from weather forecasts) over a three-year period. Results show that this approach yields useful forecasts, with a lower false alarm rate and improved performance from the present day D to day D + 2.
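
    At its core, the analogue search is a nearest-neighbour query in a predictor space. The sketch below uses plain Euclidean distance on two made-up predictor indices, a deliberate simplification of the method's weighted, multi-level analogy criteria.

```python
import numpy as np

def analogue_forecast(target, archive_predictors, archive_precip, n_analogues=3):
    """Return the precipitation amounts observed during the n_analogues
    archived situations closest to the target situation. Sorted, they form
    the empirical predictive distribution for the target day."""
    d = np.linalg.norm(np.asarray(archive_predictors, float)
                       - np.asarray(target, float), axis=1)
    idx = np.argsort(d)[:n_analogues]
    return np.sort(np.asarray(archive_precip, float)[idx])

# toy archive of four past situations with two (hypothetical) predictors each
archive = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [0.2, 0.0]])
precip = np.array([1.0, 2.0, 50.0, 3.0])   # mm observed on those days
dist = analogue_forecast([0.0, 0.0], archive, precip)
# quantiles of `dist` would then give the probabilistic forecast
```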

  12. Sensitivity of quantitative groundwater recharge estimates to volumetric and distribution uncertainty in rainfall forcing products

    Science.gov (United States)

    Werner, Micha; Westerhoff, Rogier; Moore, Catherine

    2017-04-01

    Quantitative estimates of recharge due to precipitation excess are an important input to determining sustainable abstraction of groundwater resources, as well as providing one of the boundary conditions required for numerical groundwater modelling. Simple water balance models are widely applied for calculating recharge. In these models, precipitation is partitioned between different processes and stores, including surface runoff and infiltration, storage in the unsaturated zone, evaporation, capillary processes, and recharge to groundwater. Clearly the estimation of recharge amounts will depend on the estimation of precipitation volumes, which may vary depending on the source of precipitation data used. However, the partitioning between the different processes is in many cases governed by (variable) intensity thresholds. This means that the estimates of recharge will be sensitive not only to input parameters such as soil type, texture, land use, and potential evaporation, but mainly to the precipitation volume and intensity distribution. In this paper we explore the sensitivity of recharge estimates to differences in precipitation volume and intensity distribution in the rainfall forcing over the Canterbury region in New Zealand. We compare recharge rates and volumes using a simple water balance model forced with rainfall and evaporation data from the NIWA Virtual Climate Station Network (VCSN) data (considered the reference dataset); the ERA-Interim/WATCH dataset at 0.25 and 0.5 degree resolution; the TRMM-3B42 dataset; the CHIRPS dataset; and the recently released MSWEP dataset. Recharge rates are calculated at a daily time step over the 14-year period from 2000 to 2013 for the full Canterbury region, as well as at eight selected points distributed over the region. Lysimeter data with observed estimates of recharge are available at four of these points, as well as recharge estimates from the NGRM model, an independent model

  13. Improving High-resolution Spatial Estimates of Precipitation in the Equatorial Americas

    Science.gov (United States)

    Verdin, A.; Rajagopalan, B.; Funk, C. C.

    2013-12-01

    Drought and flood management practices require accurate estimates of precipitation in space and time. However, data are sparse and of poor quality in regions with complicated terrain (such as the Equatorial Americas), often in the valleys where people farm. Consequently, extreme precipitation events are poorly represented. Satellite-derived rainfall data are an attractive alternative in such regions and are being widely used, though they too suffer from problems such as underestimation of extreme events (due to their dependency on retrieval algorithms) and the indirect relationship between satellite radiation observations and precipitation intensities. Thus, it seems appropriate to blend satellite-derived rainfall data of extensive spatial coverage with rain gauge data in order to provide a more robust estimate of precipitation. To this end, in this research we offer three techniques to blend rain gauge data and the Climate Hazards group InfraRed Precipitation (CHIRP) satellite-derived precipitation estimate for Central America and Colombia. In the first two methods, the gauge data are assigned to the closest CHIRP grid point, where the error is defined as r = Yobs - Ysat. The spatial structure of r is then modeled using physiographic information (Easting, Northing, and Elevation) by two methods: (i) a traditional cokriging approach whose variogram is calculated in Euclidean space, and (ii) a nonparametric method based on local polynomial functional estimation. The models are used to estimate r at all grid points, which is then added to the CHIRP, thus creating an improved satellite estimate. We demonstrate these methods by applying them to pentadal and monthly total precipitation fields during 2009. The models' predictive abilities and their ability to capture extremes are investigated. These blending methods significantly improve upon the satellite-derived estimates and are also competitive in their ability to capture extreme precipitation. The above methods assume
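
    The residual-blending approach described above can be sketched as follows: compute gauge-minus-satellite residuals r = Yobs - Ysat at the gauge locations, interpolate them to the grid, and add them back to the satellite field. Inverse-distance weighting stands in here for the record's cokriging and local-polynomial models, and all coordinates and values are made up.

```python
import numpy as np

def idw(p, xy_obs, values, power=2.0):
    # inverse-distance weighting of gauge residuals at point p
    d = np.linalg.norm(np.asarray(xy_obs, float) - np.asarray(p, float), axis=1)
    if np.any(d == 0):
        return float(values[np.argmin(d)])   # exactly on a gauge
    w = 1.0 / d ** power
    return float(np.dot(w, values) / w.sum())

def blend(sat_grid, grid_xy, gauge_xy, gauge_obs, sat_at_gauges):
    """Interpolate gauge-minus-satellite residuals r = Yobs - Ysat to every
    grid point and add them onto the satellite field."""
    r = np.asarray(gauge_obs, float) - np.asarray(sat_at_gauges, float)
    correction = np.array([idw(p, gauge_xy, r) for p in grid_xy])
    return np.asarray(sat_grid, float) + correction

# one gauge reading 8 mm where the satellite says 5 mm: residual +3 mm spreads
# to both grid points of a tiny two-cell satellite field
corrected = blend([10.0, 20.0],
                  np.array([[0.0, 0.0], [1.0, 0.0]]),
                  np.array([[0.5, 0.0]]),
                  [8.0], [5.0])
```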

  14. Analysis of long term trends of precipitation estimates acquired using radar network in Turkey

    Science.gov (United States)

    Tugrul Yilmaz, M.; Yucel, Ismail; Kamil Yilmaz, Koray

    2016-04-01

    Precipitation estimates, a vital input in many hydrological and agricultural studies, can be obtained from many different platforms (ground station-, radar-, model-, or satellite-based). Satellite- and model-based estimates are spatially continuous datasets; however, they lack the high-resolution information many applications require. Station-based values are actual precipitation observations, but by nature they are point data. These datasets may be interpolated, but such end-products may have large errors over remote locations whose climate or topography differs from the areas where stations are installed. Radars have the particular advantage of providing high-spatial-resolution information over land, even though the accuracy of radar-based precipitation estimates depends on the Z-R relationship, mountain blockage, target distance from the radar, spurious echoes resulting from anomalous propagation of the radar beam, bright-band contamination, and ground clutter. A viable way to obtain spatially and temporally consistent high-resolution precipitation information is to merge radar and station data, taking advantage of each retrieval platform. An optimally merged product is particularly important in Turkey, where complex topography exerts strong controls on the precipitation regime and in turn hampers observation efforts. There are currently 10 weather radars over Turkey (an additional 7 are planned), which have been obtaining precipitation information since 2007. This study aims to optimally merge radar precipitation data with station-based observations to introduce a station-radar blended precipitation product. This study was supported by TUBITAK fund # 114Y676.
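
    The Z-R relationship mentioned above converts radar reflectivity to rain rate through a power law Z = aR^b. A minimal inversion is shown below, with Marshall-Palmer coefficients as defaults; operational radars use locally tuned values (the WSR-88D default, for instance, is a=300, b=1.4).

```python
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert the Z-R power law Z = a * R**b to obtain rain rate R (mm/h)
    from radar reflectivity given in dBZ.
    Defaults a=200, b=1.6 are the Marshall-Palmer coefficients."""
    z = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity Z (mm^6 m^-3)
    return (z / a) ** (1.0 / b)

# at Z = 200 (about 23 dBZ) the Marshall-Palmer relation gives R = 1 mm/h
rate = rain_rate_from_dbz(23.0103)
```

    Because the relation is a power law, a few dBZ of calibration error translates into a large relative error in rain rate, which is one reason gauge adjustment of radar fields matters.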

  15. nowCOAST's Map Service for NOAA NWS NDFD Gridded Forecasts of 6-Hr Quantitative Precipitation Amount (inches) (Time Offsets)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Map Information: This nowCOAST time-offsets map service provides maps depicting the NWS 6-hr quantitative precipitation [amount] forecasts (QPF) from the National...

  16. Application of the LEPS technique for Quantitative Precipitation Forecasting (QPF) in Southern Italy: a preliminary study

    Directory of Open Access Journals (Sweden)

    S. Federico

    2006-01-01

    Full Text Available This paper reports preliminary results for a Limited-area model Ensemble Prediction System (LEPS), based on RAMS (the Regional Atmospheric Modelling System), for eight case studies of moderate-intense precipitation over Calabria, the southernmost tip of the Italian peninsula. LEPS aims to transfer the benefits of a probabilistic forecast from global to regional scales in countries where local orographic forcing is a key factor in forcing convection. To accomplish this task and to limit computational time in an operational implementation of LEPS, we perform a cluster analysis of ECMWF-EPS runs. Starting from the 51 members that form the ECMWF-EPS, we generate five clusters. For each cluster a representative member is selected and used to provide initial and dynamic boundary conditions to RAMS, whose integrations generate LEPS. RAMS runs have 12-km horizontal resolution. To analyze the impact of enhanced horizontal resolution on quantitative precipitation forecasts, LEPS forecasts are compared to a full Brute Force (BF) ensemble. This ensemble is based on RAMS, has 36-km horizontal resolution, and is generated by 51 members, each nested in an ECMWF-EPS member. LEPS and BF results are compared subjectively and by objective scores. Subjective analysis is based on precipitation and probability maps of the case studies, whereas objective analysis uses deterministic and probabilistic scores. Scores and maps are calculated by comparing ensemble precipitation forecasts against reports from the Calabria regional raingauge network. Results show that LEPS provided better rainfall predictions than BF for all case studies selected. This strongly suggests the importance of enhanced horizontal resolution, compared to ensemble population, for Calabria in these cases. To further explore the impact of local physiographic features on QPF (Quantitative Precipitation Forecasting), LEPS results are also compared with a 6-km horizontal resolution deterministic forecast. Due

  17. A comparison of total precipitation values estimated from measurements and a 1D cloud model

    Directory of Open Access Journals (Sweden)

    Z. Aslan

    Full Text Available The purpose of this study is to establish a relation between observed total precipitation values and estimations from a one-dimensional diagnostic cloud model. Total precipitation values estimated from maximum liquid water content, maximum vertical velocity, cloud top height, and temperature excess are also used to provide an equation for total precipitation prediction. Data for this study were collected in Istanbul during the autumns of 1987 and 1988. The statistical models are developed with a multiple regression technique and then comparatively verified with independent data for 1990. The multiple regression coefficients are in the range of 75% to 80% in the statistical models. Results of the test showed that total precipitation values estimated from the above techniques are in good agreement, with correlation coefficients between 40% and 46% based on the test data for 1990.
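
    The regression step described above (total precipitation fitted against cloud-model outputs such as maximum liquid water content and cloud-top height) can be sketched as follows. This is a minimal illustration with hypothetical predictor data, not the study's actual model:

    ```python
    import numpy as np

    def fit_precip_regression(predictors, totals):
        """Least-squares fit of total precipitation against cloud-model
        predictors (e.g. max liquid water content, max vertical velocity,
        cloud-top height, temperature excess). Returns coefficients with
        the intercept term first."""
        X = np.column_stack([np.ones(len(totals)), predictors])
        coef, *_ = np.linalg.lstsq(X, totals, rcond=None)
        return coef

    def predict_total(coef, predictor_row):
        """Apply the fitted equation to one set of predictor values."""
        return coef[0] + np.dot(coef[1:], predictor_row)
    ```

    Verification against independent data, as done for 1990 in the study, would simply compare `predict_total` outputs with observed totals.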

  18. Estimating Cs-137 fallout inventories in Iceland from precipitation data

    Energy Technology Data Exchange (ETDEWEB)

    Palsson, S.E.; Sigurgeirsson, M.A.; Gudnason, K. [Icelandic Radiation Protection Inst., Reykjavik (Iceland); Arnalds, O. [Agricultural Research Inst., Reykjavik (Iceland); Howard, B.J.; Wright, S.M. [Centre for Ecology and Hydrology Merlewood, Cumbria, (United Kingdom); Palsdottir, I. [Iceland Meteorological Office, Reykjavik (Iceland)

    2002-12-01

    Iceland was identified in the Arctic Monitoring and Assessment Programme (AMAP) as one of the Arctic areas which received the most global fallout from atmospheric nuclear weapons tests, due to relatively high precipitation rates compared with much of the Arctic and sub-Arctic. Cs-137 in the Icelandic terrestrial ecosystem originates almost entirely from the nuclear weapons tests carried out in the atmosphere until the early sixties. Fallout was greatest in the mid-1960s. Additional fallout from the accident at the Chernobyl Nuclear Power Plant was relatively small. The study gave preliminary information on the spatial variation in {sup 137}Cs deposition in Iceland, especially in areas used for agriculture. The objectives of the study were (1) to measure the spatial variation of radiocaesium inventories in soils in Iceland and (2) to compare the results with different approaches to predicting {sup 137}Cs contents in soil. This quantification is a necessary first step in an evaluation of vulnerability to radiocaesium deposition in Iceland. It is anticipated that Icelandic soils could be highly vulnerable to radiocaesium due to their volcanic nature and consequent lack of illitic minerals, as has been suggested by initial chemical studies on the properties of soils in the Nordic countries. (ln)

  19. Bayesian Estimation of Precipitation from Satellite Passive Microwave Observations Using Combined Radar-Radiometer Retrievals

    Science.gov (United States)

    Grecu, Mircea; Olson, William S.

    2006-01-01

    Precipitation estimation from satellite passive microwave radiometer observations is a problem that does not have a unique solution that is insensitive to errors in the input data. Traditionally, to make this problem well posed, a priori information derived from physical models or independent, high-quality observations is incorporated into the solution. In the present study, a database of precipitation profiles and associated brightness temperatures is constructed to serve as a priori information in a passive microwave radiometer algorithm. The precipitation profiles are derived from a Tropical Rainfall Measuring Mission (TRMM) combined radar radiometer algorithm, and the brightness temperatures are TRMM Microwave Imager (TMI) observed. Because the observed brightness temperatures are consistent with those derived from a radiative transfer model embedded in the combined algorithm, the precipitation brightness temperature database is considered to be physically consistent. The database examined here is derived from the analysis of a month-long record of TRMM data that yields more than a million profiles of precipitation and associated brightness temperatures. These profiles are clustered into a tractable number of classes based on the local sea surface temperature, a radiometer-based estimate of the echo-top height (the height beyond which the reflectivity drops below 17 dBZ), and brightness temperature principal components. For each class, the mean precipitation profile, brightness temperature principal components, and probability of occurrence are determined. The precipitation brightness temperature database supports a radiometer-only algorithm that incorporates a Bayesian estimation methodology. In the Bayesian framework, precipitation estimates are weighted averages of the mean precipitation values corresponding to the classes in the database, with the weights being determined according to the similarity between the observed brightness temperature principal
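
    The Bayesian estimation described above, a weighted average of class-mean precipitation values with weights set by the similarity between observed and class brightness-temperature principal components, can be sketched roughly as below. The Gaussian similarity kernel, the dictionary layout, and all numbers are illustrative assumptions, not the TRMM algorithm's actual form:

    ```python
    import math

    def bayesian_precip_estimate(obs_pc, classes, sigma=1.0):
        """Weighted average over database classes. Each class carries a mean
        rain rate, its brightness-temperature principal components ("pc"),
        and a prior probability of occurrence. Weights follow a Gaussian
        similarity between observed and class PCs, scaled by the prior."""
        weights, rates = [], []
        for c in classes:
            d2 = sum((o - p) ** 2 for o, p in zip(obs_pc, c["pc"]))
            w = c["prior"] * math.exp(-0.5 * d2 / sigma ** 2)
            weights.append(w)
            rates.append(c["mean_rain"])
        total = sum(weights)
        return sum(w * r for w, r in zip(weights, rates)) / total
    ```

    An observation lying close to one class's principal components is dominated by that class's mean profile; an observation midway between classes blends them.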

  20. Changing precipitation extremes in a warming climate: A basis for design flood estimation

    Science.gov (United States)

    Wasko, Conrad; Sharma, Ashish

    2016-04-01

    The potential for increasing intensity of future rainfall events has significant implications for flooding and the design of infrastructure. However, how precipitation will change in the future, how important these changes are to flooding, and how engineers should incorporate these changes into hydrologic design remain open questions. In the absence of reliable point-based estimates of how precipitation will change, many studies investigate the historical relationship between rainfall intensity and temperature as a proxy for what may happen in a warmer climate. Much of the research to date has focussed on changing precipitation intensity; however, temporal and spatial patterns of precipitation are just as important. Here we link higher temperatures to changes in temporal and spatial patterns of extreme precipitation events. We show, using observed high-quality precipitation records from Australia covering all major climatic zones, that storms are intensifying in both time and space, resulting in a greater potential for flooding, especially in urban locales around the world. Given that precipitation and antecedent conditions are changing, and the impacts on flooding are significant, methods of incorporating these changes in catchment modelling are required. Continuous simulation offers a natural flexibility to incorporate the many correlated changes in precipitation that may occur in a future climate. An argument for such a framework using existing continuous simulation alternatives is articulated in concluding this presentation.

  1. A new methodology for pixel-quantitative precipitation nowcasting using a pyramid Lucas Kanade optical flow approach

    Science.gov (United States)

    Liu, Yu; Xi, Du-Gang; Li, Zhao-Liang; Hong, Yang

    2015-10-01

    Short-term high-resolution Quantitative Precipitation Nowcasting (QPN) has important implications for navigation, flood forecasting, and other hydrological and meteorological concerns. This study proposes a new algorithm called Pixel-based QPN using the Pyramid Lucas-Kanade Optical Flow method (PPLK), which comprises three steps: employing a Pyramid Lucas-Kanade Optical Flow method (PLKOF) to estimate precipitation advection, projecting rainy clouds by considering the advection and evolution pixel by pixel, and interpolating QPN imagery based on the space-time continuum of cloud patches. The PPLK methodology was evaluated with 2338 images from the geostationary meteorological satellite Fengyun-2F (FY-2F) of China and compared with two other advection-based methods, i.e., the maximum correlation method and the Horn-Schunck Optical Flow scheme. The data sample covered all intensive observations since the launch of FY-2F, despite covering a total of only approximately 10 days. The results show that the PPLK performed better than the algorithms used for comparison, demonstrating less time expenditure, more effective cloud tracking, and improved QPN accuracy.
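
    As a rough illustration of advection-based nowcasting in general (not the pyramid Lucas-Kanade implementation used by PPLK), a brute-force cross-correlation tracker can estimate the motion of a cloud field between two images and extrapolate it forward:

    ```python
    import numpy as np

    def estimate_shift(prev, curr, max_shift=3):
        """Brute-force cross-correlation: find the integer (dy, dx) that
        best aligns the previous field with the current one.
        Periodic boundaries (np.roll) are used for simplicity."""
        best, best_score = (0, 0), -np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
                score = np.sum(shifted * curr)
                if score > best_score:
                    best_score, best = score, (dy, dx)
        return best

    def extrapolate(curr, shift, steps=1):
        """Advect the current field forward by the estimated motion vector."""
        dy, dx = shift
        return np.roll(np.roll(curr, dy * steps, axis=0), dx * steps, axis=1)
    ```

    Optical-flow methods such as PLKOF replace the single global displacement with a dense, pixel-wise motion field, which is what allows the pixel-by-pixel projection described in the abstract.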

  2. THE QUANTITATIVE COMPONENT’S DIAGNOSIS OF THE ATMOSPHERIC PRECIPITATION CONDITION IN BAIA MARE URBAN AREA

    Directory of Open Access Journals (Sweden)

    S. ZAHARIA

    2012-12-01

    Full Text Available Atmospheric precipitation, an essential meteorological element for defining the climatic potential of a region, exerts through its general and local particularities a defining influence on the evolution of the other climatic parameters, conditioning the structure of the overall geographic landscape. Its quantitative parameters set up the regional natural setting and the differentiation of water, soil, vegetation and fauna resources, at the same time influencing most aspects of human activity through the impact generated on agriculture, transportation, construction, tourism, etc. In particular, through the evolution of the related climatic parameters (production type, quantity, duration, frequency, intensity) and their spatial and temporal fluctuations, the pluviometric extremes mark the maximum energetic manifestation of the hydroclimatic hazards/risks, which induce unfavourable or even damaging conditions for human activities. Hence, surpluses of atmospheric precipitation condition the triggering or reactivation of intense erosion processes, landslides and, last but not least, floods. Just as dangerous are deficient amounts of precipitation, or their absence over longer periods, which determine the appearance of droughts and aridity phenomena; if associated with sharp anthropic pressure on the environment, these favour the expansion of desertification, with the whole train of negative effects that arises. In this context, this paper aims to perform a diagnosis of the atmospheric precipitation condition in the Baia Mare urban area, through its quantitative component, under multiannual conditions (1971-2007), underlining through the results of the analyzed climatic data and their interpretation the main characteristics that define it. The data bank from the Baia Mare station of the National Meteorological Administration network, representative for the chosen study area, was used. Baia

  3. Operational application and evaluation of the quantitative precipitation estimates algorithm based on the multi-radar mosaic%基于雷达组网拼图的定量降水估测算法业务应用及效果评估

    Institute of Scientific and Technical Information of China (English)

    勾亚彬; 刘黎平; 杨杰; 吴翀

    2014-01-01

    The real-time system of QPE (Quantitative Precipitation Estimates) together with real-time evaluation based on the multi-radar mosaic has been successfully implemented in operations at the Weather Bureau of Hangzhou in Zhejiang Province. A comprehensive evaluation is presented in this paper, and the error sources are analyzed according to the characteristics of the vertical profile of radar reflectivity (VPR) using four different precipitation events. The system unites the basic data from the six CINRAD (China New Generation of Weather Radar) radars at Hangzhou, Ningbo, Zhoushan, Jinhua and Quzhou and the gauge data within Zhejiang Province, which are quality-controlled in real time by the IDW (Inverse Distance Weights) method; in order to reduce systematic and local errors, both the dynamic Z-R relationship and the optimal interpolation method are integrated into the system to calibrate the QPE field. From the verification and analysis of the four events, some conclusions are as follows: (1) In northwestern and southwestern Zhejiang, if the reflectivity originates from the bright band (the cloud top), it results in overestimation (underestimation) of radar QPE for stratiform rainfall systems. (2) When different rainfall types coexist, the use of a relatively uniform Z-R relationship will result in serious local overestimation and underestimation of QPE during Meiyu front/typhoon events. (3) Severe convection with squall lines and the asymmetry of typhoons are important sources of error in radar QPE. And, (4) although the combination of the Z-R relationship with optimal interpolation can effectively reduce the systematic error, some large local errors remain in the QPE.
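
    Several of these records hinge on Z-R relationships of the form Z = aR^b. A minimal sketch of the reflectivity-to-rain-rate conversion follows, using the common Marshall-Palmer-type coefficients (a = 200, b = 1.6) as an illustrative default; operational systems such as the one above instead select a and b dynamically per precipitation type:

    ```python
    def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
        """Invert Z = a * R**b for the rain rate R (mm/h).
        Z is linear reflectivity (mm^6 m^-3), recovered from dBZ."""
        z = 10.0 ** (dbz / 10.0)
        return (z / a) ** (1.0 / b)
    ```

    For example, 40 dBZ maps to roughly 11.5 mm/h with these coefficients; a convective Z-R pair would assign a different rate to the same reflectivity, which is exactly why misclassifying the precipitation regime biases the QPE.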

  4. Quantitative study of single molecule location estimation techniques.

    Science.gov (United States)

    Abraham, Anish V; Ram, Sripad; Chao, Jerry; Ward, E S; Ober, Raimund J

    2009-12-21

    Estimating the location of single molecules from microscopy images is a key step in many quantitative single molecule data analysis techniques. Different algorithms have been advocated for the fitting of single molecule data, particularly the nonlinear least squares and maximum likelihood estimators. Comparisons were carried out to assess the performance of these two algorithms in different scenarios. Our results show that both estimators, on average, are able to recover the true location of the single molecule in all scenarios we examined. However, in the absence of modeling inaccuracies and low noise levels, the maximum likelihood estimator is more accurate than the nonlinear least squares estimator, as measured by the standard deviations of its estimates, and attains the best possible accuracy achievable for the sets of imaging and experimental conditions that were tested. Although neither algorithm is consistently superior to the other in the presence of modeling inaccuracies or misspecifications, the maximum likelihood algorithm emerges as a robust estimator producing results with consistent accuracy across various model mismatches and misspecifications. At high noise levels, relative to the signal from the point source, neither algorithm has a clear accuracy advantage over the other. Comparisons were also carried out for two localization accuracy measures derived previously. Software packages with user-friendly graphical interfaces developed for single molecule location estimation (EstimationTool) and limit of the localization accuracy calculations (FandPLimitTool) are also discussed.

  5. Comparison between radar estimations and rain gauge precipitations in the Moldavian Plateau (Romania)

    Science.gov (United States)

    Cheval, Sorin; Burcea, Sorin; Dumitrescu, Alexandru; Antonescu, Bogdan; Bell, Aurora; Breza, Traian

    2010-05-01

    Heavy rainfall events have produced significant damages and casualties in the Moldavian Plateau (Romania) in the last decades. Such phenomena are characterized by large spatial and temporal variations, and the forecast of their occurrence is thus very challenging. This study aims to compare the radar estimations and the rain gauge measurements, in order to improve the quantitative precipitation estimation (QPE) in the area of interest. The research uses data from the WSR-98D S-band Doppler radar located in Bârnova, and from rain gauges within weather stations run by Meteo Romania (Romanian National Meteorological Administration). We have focused on daily (24 h) accumulations registered at weather stations, and the output sustains the radar calibration, fostering hydrological modeling, including flash flood forecasting. The differences between R and G were investigated based on two objective functions - the ratio R/G (BIAS) and the Root Mean Square Factor (RMSf) - while the correlations used Pearson scores. Considerable spatial distinctions between areas with good radar accuracy for QPE and perimeters where the radar is not capable of providing robust information were emphasized during the investigations. The validation aimed to predict the rain gauge amounts at certain spots by using the radar information and the resulting adjustment parameters. It has been demonstrated that the Bârnova radar data are reliable within an approx. 150 km radius, and the comparison with rain gauge measurements can consistently improve QPE accuracy in the area. This research was completed in the framework of the EU FP6 Project HYDRATE (Hydrometeorological data resources and technologies for effective flash flood forecasting), Contract no: 037024, 2006-2009.
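
    The two objective functions mentioned can be written compactly. The formulas below follow common radar-verification usage (the paper's exact definitions may differ slightly):

    ```python
    import math

    def ratio_bias(radar, gauge):
        """Multiplicative bias: total radar estimate over total gauge amount."""
        return sum(radar) / sum(gauge)

    def rms_factor(radar, gauge):
        """Root-mean-square factor: exp of the RMS of log ratios, so a
        perfect radar-gauge match gives 1 and over- and underestimation
        are penalized symmetrically in the multiplicative sense."""
        logs = [math.log(r / g) ** 2 for r, g in zip(radar, gauge)]
        return math.exp(math.sqrt(sum(logs) / len(logs)))
    ```

    A BIAS of 1 and an RMSf of 1 indicate perfect agreement; a radar field uniformly twice the gauge amounts gives BIAS = RMSf = 2.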

  6. The application of LEPS technique for Quantitative Precipitation Forecast (QPF in Southern Italy

    Directory of Open Access Journals (Sweden)

    S. Federico

    2006-01-01

    Full Text Available This paper reports preliminary results of a Limited area model Ensemble Prediction System (LEPS), based on RAMS, for eight case studies of moderate-intense precipitation over Calabria, the southernmost tip of the Italian peninsula. LEPS aims to transfer the benefits of a probabilistic forecast from global to regional scales in countries where local orographic forcing is a key factor in forcing convection. To accomplish this task and to limit the computational time of an operational implementation of LEPS, we perform a cluster analysis of ECMWF-EPS runs. Starting from the 51 members that form the ECMWF-EPS, we generate five clusters. For each cluster a representative member is selected and used to provide initial and dynamic boundary conditions to RAMS, whose integrations generate LEPS. RAMS runs have 12 km horizontal resolution. Hereafter this ensemble will also be referred to as LEPS_12L30. To analyze the impact of enhanced horizontal resolution on quantitative precipitation forecasts, LEPS_12L30 forecasts are compared to a lower-resolution ensemble, based on RAMS, that has 50 km horizontal resolution and 51 members, nested in each ECMWF-EPS member. Hereafter this ensemble will also be referred to as LEPS_50L30. LEPS_12L30 and LEPS_50L30 results were compared subjectively for all case studies but, for brevity, results are reported for two "representative" cases only. Subjective analysis is based on ensemble-mean precipitation and probability maps; moreover, a short summary of objective scores is given. Maps and scores are evaluated against reports from the Calabria regional raingauge network. Results show better LEPS_12L30 performance compared to LEPS_50L30. This holds for all case studies selected and strongly suggests the importance of the enhanced horizontal resolution, compared to ensemble population, for Calabria, at least for the set-ups and case studies selected in this work.

  7. Development of Deep Learning Based Data Fusion Approach for Accurate Rainfall Estimation Using Ground Radar and Satellite Precipitation Products

    Science.gov (United States)

    Chen, H.; Chandra, C. V.; Tan, H.; Cifelli, R.; Xie, P.

    2016-12-01

    Rainfall estimation based on onboard satellite measurements has been an important topic in satellite meteorology for decades. A number of precipitation products at multiple time and space scales have been developed based upon satellite observations. For example, the NOAA Climate Prediction Center has developed a morphing technique (i.e., CMORPH) to produce global precipitation products by combining existing space-based rainfall estimates. The CMORPH products are essentially derived from geostationary satellite IR brightness temperature information and retrievals from passive microwave measurements (Joyce et al. 2004). Although the space-based precipitation products provide an excellent tool for regional and global hydrologic and climate studies as well as improved situational awareness for operational forecasts, their accuracy is limited by sampling limitations, particularly for extreme events such as very light and/or heavy rain. On the other hand, ground-based radar is a more mature science for quantitative precipitation estimation (QPE), especially after the implementation of the dual-polarization technique and further enhancement by urban-scale radar networks. Therefore, ground radars are often critical for providing local-scale rainfall estimation and a "heads-up" for operational forecasters to issue watches and warnings, as well as for validation of various space measurements and products. The CASA DFW QPE system, which is based on dual-polarization X-band CASA radars and a local S-band WSR-88DP radar, has demonstrated excellent performance during several years of operation in a variety of precipitation regimes. The real-time CASA DFW QPE products are used extensively for localized hydrometeorological applications such as urban flash flood forecasting. In this paper, a neural network based data fusion mechanism is introduced to improve the satellite-based CMORPH precipitation product by taking into account the ground radar measurements. A deep learning system is

  8. A Short-Range Quantitative Precipitation Forecast Algorithm Using Back-Propagation Neural Network Approach

    Institute of Scientific and Technical Information of China (English)

    FENG Yerong; David H.KITZMILLER

    2006-01-01

    A back-propagation neural network (BPNN) was used to establish relationships between the short-range (0-3-h) rainfall and predictors ranging from extrapolative forecasts of radar reflectivity, satellite-estimated cloud-top temperature, lightning strike rates, and Nested Grid Model (NGM) outputs. Quantitative precipitation forecasts (QPF) and the probabilities of categorical precipitation were obtained. Results of the BPNN algorithm were compared to the results obtained from the multiple linear regression algorithm for an independent dataset from the 1999 warm season over the continental United States. A sample forecast was made over the southeastern United States. Results showed that the BPNN categorical rainfall forecasts agreed well with Stage III observations in terms of the size and shape of the area of rainfall. The BPNN tended to over-forecast the spatial extent of heavier rainfall amounts, but the positioning of the areas with rainfall ≥25.4 mm was still generally accurate. It appeared that the BPNN and linear regression approaches produce forecasts of very similar quality, although in some respects the BPNN slightly outperformed the regression.

  9. Multimodel SuperEnsemble technique for quantitative precipitation forecasts in Piemonte region

    Directory of Open Access Journals (Sweden)

    D. Cane

    2010-02-01

    Full Text Available The Multimodel SuperEnsemble technique is a powerful post-processing method for the estimation of weather forecast parameters, reducing direct model output errors. It has been applied to real-time NWP, TRMM-SSM/I based multi-analysis, Seasonal Climate Forecasts and Hurricane Forecasts. The novelty of this approach lies in the methodology, which differs from ensemble analysis techniques used elsewhere.

    Several model outputs are put together with adequate weights to obtain a combined estimation of meteorological parameters. Weights are calculated by least-square minimization of the difference between the model and the observed field during a so-called training period. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed and mean sea level pressure, the Multimodel SuperEnsemble also gives good results when applied to precipitation, a parameter quite difficult to handle with standard post-processing methods. Here we present our methodology for Multimodel precipitation forecasts, involving a new accurate statistical method for bias correction and a wide spectrum of results over the very dense non-GTS weather station network of Piemonte.
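
    The weight computation described, least-squares minimization of the model-minus-observed difference over a training period, can be sketched as below. Array shapes and function names are hypothetical; the actual SuperEnsemble formulation works on anomalies from each model's training-period mean:

    ```python
    import numpy as np

    def train_weights(member_anomalies, obs_anomalies):
        """member_anomalies: (n_times, n_models) model anomalies during the
        training period; obs_anomalies: (n_times,) observed anomalies.
        Least-squares weights minimize the combined forecast error."""
        w, *_ = np.linalg.lstsq(member_anomalies, obs_anomalies, rcond=None)
        return w

    def superensemble_forecast(obs_mean, member_anomalies, weights):
        """Combined forecast = observed climatology + weighted model anomalies."""
        return obs_mean + member_anomalies @ weights
    ```

    Models that track the observations well in training receive large weights; systematically poor or redundant members are down-weighted, which is how the technique reduces direct model output errors.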

  10. Improved global high resolution precipitation estimation using multi-satellite multi-spectral information

    Science.gov (United States)

    Behrangi, Ali

    In response to community demands, combining microwave (MW) and infrared (IR) estimates of precipitation has been an active area of research for the past two decades. The anticipated launch of NASA's Global Precipitation Measurement (GPM) mission and the increasing number of spectral bands in recently launched geostationary platforms will provide greater opportunities for investigating new approaches to combine multi-source information towards improved global high resolution precipitation retrievals. After years of the community's efforts, the limitations of the existing techniques are: (1) drawbacks of IR-only techniques in capturing warm rainfall and screening out no-rain thin cirrus clouds; (2) grid-box-only dependency of many algorithms, with little effort to capture cloud textures at either the local or the cloud-patch scale; (3) the assumption of an indirect relationship between rain rate and cloud-top temperature that forces high-intensity precipitation onto any cold cloud; (4) neglect of the dynamics and evolution of clouds in time; and (5) inconsistent combination of MW- and IR-based precipitation estimates, due both to the combination strategies and to the above-described shortcomings. This PhD dissertation attempts to improve the combination of data from Geostationary Earth Orbit (GEO) and Low-Earth Orbit (LEO) satellites in manners that will allow consistent high resolution integration of the more accurate precipitation estimates, directly observed through LEO's PMW sensors, into the short-term cloud evolution process, which can be inferred from GEO images. A set of novel approaches is introduced to cope with the listed limitations; it consists of the following four consecutive components: (1) starting with the GEO part, by using an artificial-neural-network-based method it is demonstrated that inclusion of multi-spectral data can ameliorate existing problems associated with IR-only precipitation retrievals; (2) through development of Precipitation Estimation

  11. Bias adjustment of satellite-based precipitation estimation using gauge observations: A case study in Chile

    Science.gov (United States)

    Yang, Zhongwen; Hsu, Kuolin; Sorooshian, Soroosh; Xu, Xinyi; Braithwaite, Dan; Verbist, Koen M. J.

    2016-04-01

    Satellite-based precipitation estimates (SPEs) are promising alternative precipitation data for climatic and hydrological applications, especially for regions where ground-based observations are limited. However, existing satellite-based rainfall estimations are subject to systematic biases. This study aims to adjust the biases in the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) rainfall data over Chile, using gauge observations as reference. A novel bias adjustment framework, termed QM-GW, is proposed based on the nonparametric quantile mapping approach and a Gaussian weighting interpolation scheme. The PERSIANN-CCS precipitation estimates (daily, 0.04°×0.04°) over Chile are adjusted for the period of 2009-2014. The historical data (satellite and gauge) for 2009-2013 are used to calibrate the methodology; nonparametric cumulative distribution functions of satellite and gauge observations are estimated at every 1°×1° box region. One year (2014) of gauge data was used for validation. The results show that the biases of the PERSIANN-CCS precipitation data are effectively reduced. The spatial patterns of adjusted satellite rainfall show high consistency to the gauge observations, with reduced root-mean-square errors and mean biases. The systematic biases of the PERSIANN-CCS precipitation time series, at both monthly and daily scales, are removed. The extended validation also verifies that the proposed approach can be applied to adjust SPEs into the future, without further need for ground-based measurements. This study serves as a valuable reference for the bias adjustment of existing SPEs using gauge observations worldwide.
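
    The core of nonparametric quantile mapping (without the Gaussian-weighted spatial interpolation that completes the paper's QM-GW framework) can be sketched as follows; the training arrays stand in for the 2009-2013 satellite and gauge climatologies estimated per 1°×1° box:

    ```python
    import numpy as np

    def quantile_map(sat_values, sat_train, gauge_train):
        """Empirical quantile mapping: each satellite value is assigned its
        quantile within the satellite training sample, then mapped to the
        gauge value at the same quantile. This removes systematic
        distributional biases while preserving rank order."""
        sat_sorted = np.sort(sat_train)
        gauge_sorted = np.sort(gauge_train)
        q_sat = np.linspace(0.0, 1.0, len(sat_sorted))
        q_gauge = np.linspace(0.0, 1.0, len(gauge_sorted))
        # quantile of each value in the satellite climatology
        quantiles = np.interp(sat_values, sat_sorted, q_sat)
        # gauge value at the same quantile
        return np.interp(quantiles, q_gauge, gauge_sorted)
    ```

    Once the two empirical CDFs are fixed from the training period, new satellite estimates can be adjusted without any further gauge data, which is the property the paper's extended validation exploits.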

  12. Bayesian parameter estimation in spectral quantitative photoacoustic tomography

    Science.gov (United States)

    Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja

    2016-03-01

    Photoacoustic tomography (PAT) is an imaging technique combining the strong contrast of optical imaging with the high spatial resolution of ultrasound imaging. These strengths are achieved via the photoacoustic effect, in which the spatial absorption of a light pulse is converted into a measurable propagating ultrasound wave. The method is seen as a potential tool for small animal imaging, pre-clinical investigations, the study of blood vessels and vasculature, as well as for cancer imaging. The goal in PAT is to form an image of the absorbed optical energy density field via acoustic inverse problem approaches from the measured ultrasound data. Quantitative PAT (QPAT) proceeds from these images and forms quantitative estimates of the optical properties of the target. This optical inverse problem of QPAT is ill-posed. To alleviate the issue, spectral QPAT (SQPAT) utilizes PAT data formed at multiple optical wavelengths simultaneously with optical parameter models of tissue to form quantitative estimates of the parameters of interest. In this work, the inverse problem of SQPAT is investigated. Light propagation is modelled using the diffusion equation. Optical absorption is described with a chromophore-concentration-weighted sum of known chromophore absorption spectra. Scattering is described by Mie scattering theory with an exponential power law. In the inverse problem, the spatially varying unknown parameters of interest are the chromophore concentrations, the Mie scattering parameters (the power-law factor and exponent), and the Gruneisen parameter. The inverse problem is approached with a Bayesian method. It is numerically demonstrated that estimation of all parameters of interest is possible with the approach.

  13. Estimation of probable maximum precipitation at the Kielce Upland (Poland) using meteorological method

    Science.gov (United States)

    Suligowski, Roman

    2014-05-01

    Probable Maximum Precipitation is estimated based upon the physical mechanisms of precipitation formation at the Kielce Upland. This estimation stems from meteorological analysis of extremely high precipitation events which occurred in the area between 1961 and 2007, causing serious flooding from rivers that drain the entire Kielce Upland. The meteorological situation has been assessed drawing on the synoptic maps, baric topography charts, satellite and radar images as well as the results of meteorological observations derived from surface weather observation stations. The most significant elements of this research include the comparison between distinctive synoptic situations over Europe and the subsequent determination of the typical rainfall-generating mechanism. This allows the author to identify the source areas of air masses responsible for extremely high precipitation at the Kielce Upland. Analysis of the meteorological situations showed that the source areas for humid air masses which cause the largest rainfalls at the Kielce Upland are the area of the northern Adriatic Sea and the north-eastern coast of the Black Sea. Flood hazard at the Kielce Upland catchments was triggered by daily precipitation of over 60 mm. The highest representative dew point temperature in the source areas of warm air masses (those responsible for high precipitation at the Kielce Upland) exceeded 20 degrees Celsius, with a maximum of 24.9 degrees Celsius, while precipitable water amounted to 80 mm. The value of precipitable water is also used for computation of factors featuring the system, namely the mass transformation factor and the system effectiveness factor. The mass transformation factor is computed based on precipitable water in the feeding mass and precipitable water in the source area. The system effectiveness factor (as the indicator of the maximum inflow velocity and the maximum velocity in the zone of frontal or ascending currents, forced by orography) is computed from the quotient of precipitable water in

  14. The scavenging of air pollutants by precipitation, and its estimation with the aid of weather radar

    Science.gov (United States)

    Jylha, Kirsti Tellervo

    2000-09-01

Precipitation cleanses the air by capturing airborne pollutants and depositing them onto the ground. The efficiency of this process may be expressed by the fractional depletion rate of pollutant concentrations in the air, designated as the scavenging coefficient. It depends on the size distribution of the raindrops and snow crystals and is thereby related to quantities estimated by weather radar, namely, the radar reflectivity factor and the precipitation rate. On the other hand, there are no universal relationships between the scavenging coefficient and these two quantities; the relationships vary depending on the properties of the precipitation and pollutants. In the present thesis, a few estimates for them were derived theoretically and empirically, using in the latter case observations made in Finland either after the Chernobyl nuclear accident or during a wintertime case study near a coal-fired power plant. The greatest advantage in the use of weather radar in assessing precipitation scavenging arises from the fact that radar estimates the spatial distributions of precipitation in real time with good spatial and temporal resolution. Radar software usually used to create displays of the precipitation rate can easily be modified to show distributions of the scavenging coefficient. Such images can provide valuable information about the areas where a substantial portion of the pollutants is deposited onto the ground or, alternatively, remains airborne. Based on the movement of the precipitation areas, it is also possible to make short-term forecasts of those areas most likely to be exposed to wet deposition. A network of radars may hence form an important part of a real-time monitoring and warning system that can be immediately effective in the event of an accidental release of hazardous materials into the air.
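Relationships between the scavenging coefficient and the precipitation rate are commonly approximated by a power law, Λ = aR^b, with first-order depletion of the airborne concentration. The sketch below uses illustrative placeholder coefficients, not the thesis' fitted values, which vary with precipitation and pollutant properties.

```python
import math

def scavenging_coefficient(rain_rate_mm_h, a=1e-4, b=0.7):
    """Power-law scavenging coefficient, lambda = a * R**b (1/s).
    The coefficients a and b here are illustrative placeholders."""
    return a * rain_rate_mm_h ** b

def airborne_fraction(rain_rate_mm_h, duration_s):
    """Fraction of pollutant still airborne after first-order depletion:
    C(t)/C0 = exp(-lambda * t)."""
    lam = scavenging_coefficient(rain_rate_mm_h)
    return math.exp(-lam * duration_s)
```

For example, with these placeholder coefficients an hour of 10 mm/h rain would leave roughly 16% of the pollutant airborne; no rain leaves it untouched.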

  15. Potential Utility of the Real-Time TMPA-RT Precipitation Estimates in Streamflow Prediction

    Science.gov (United States)

    Su, Fengge; Gao, Huilin; Huffman, George J.; Lettenmaier, Dennis P.

    2010-01-01

We investigate the potential utility of the real-time Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA-RT) data for streamflow prediction, both through direct comparisons of TMPA-RT estimates with a gridded gauge product, and through evaluation of streamflow simulations over four tributaries of the La Plata Basin (LPB) in South America using the two precipitation products. Our assessments indicate that the relative accuracy and the hydrologic performance of TMPA-RT-based streamflow simulations generally improved after February 2005. The improvements in TMPA-RT since 2005 are closely related to upgrades to the TMPA-RT algorithm in early February 2005, which included the use of additional microwave sensors (AMSR-E and AMSU-B) and the implementation of different calibration schemes. Our work suggests considerable potential for hydrologic prediction using purely satellite-derived precipitation estimates (no adjustments by in situ gauges) in parts of the globe where in situ observations are sparse.
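Hydrologic performance of streamflow simulations such as these is commonly summarized with the Nash-Sutcliffe efficiency; the abstract does not name its metric, so the following is a generic sketch rather than the paper's exact evaluation.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the
    simulation is no better than predicting the observed mean."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

A simulation that always outputs the observed mean scores exactly 0, which is why NSE > 0 is the usual minimum bar for a useful hydrologic model.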

  16. Validating quantitative precipitation forecast for the Flood Meteorological Office, Patna region during 2011–2014

    Indian Academy of Sciences (India)

    R K Giri; Jagabandhu Panda; Sudhansu S Rath; Ravindra Kumar

    2016-06-01

In order to issue an accurate warning for flood, a better or appropriate quantitative forecasting of precipitation is required. In view of this, the present study intends to validate the quantitative precipitation forecast (QPF) issued during the southwest monsoon season for six river catchments (basins) under the Flood Meteorological Office, Patna region. The forecast is analysed statistically by computing various skill scores for six different precipitation ranges during the years 2011–2014. The analysis of QPF validation indicates that the multi-model ensemble (MME) based forecasting is more reliable in the precipitation ranges of 1–10 and 11–25 mm. However, the reliability decreases for higher ranges of rainfall and also for the lowest range, i.e., below 1 mm. In order to test the synoptic analogue method based MME forecasting for QPF during an extreme weather event, a case study of tropical cyclone Phailin is performed. It is realized that in case of extreme events like cyclonic storms, the MME forecasting is qualitatively useful for issuing flood warnings, though it may not be reliable for the QPF. However, QPF may be improved using satellite and radar products.
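Skill scores of the kind computed above come from a hit/miss/false-alarm contingency table per precipitation range. The sketch below shows three standard categorical scores; the threshold and data are hypothetical, and the study's own score definitions may differ.

```python
def skill_scores(forecast, observed, threshold):
    """Probability of detection (POD), false alarm ratio (FAR) and
    critical success index (CSI) for one event threshold."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        fy, oy = f >= threshold, o >= threshold
        hits += fy and oy
        misses += (not fy) and oy
        false_alarms += fy and (not oy)
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else float("nan")
    return pod, far, csi
```

For example, four forecast/observation pairs with a 10 mm threshold yield one hit and one false alarm, so POD = 1.0 while FAR and CSI are both 0.5.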

  17. Estimating the vertical structure of intense Mediterranean precipitation using two X-band weather radar systems

    NARCIS (Netherlands)

    Berne, A.D.; Delrieu, G.; Andrieu, H.

    2005-01-01

The present study aims at a preliminary approach to multiradar compositing applied to the estimation of the vertical structure of precipitation, an important issue for radar rainfall measurement and prediction. During the HYDROMET Integrated Radar Experiment (HIRE'98), the vertical profile of

  18. Estimation of regional intensity-duration-frequency curves for extreme precipitation

    DEFF Research Database (Denmark)

    Madsen, Henrik; Mikkelsen, Peter Steen; Rosbjerg, Dan;

    1998-01-01

    Regional estimation of extreme precipitation from a high resolution rain gauge network in Denmark is considered. The applied extreme value model is based on the partial duration series (PDS) approach in which all events above a certain threshold level are modelled. For a preliminary assessment...
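The PDS extraction described above (all exceedances over a threshold) can be sketched directly; the T-year quantile below assumes exponentially distributed exceedances with a Poisson annual arrival rate, which is a textbook simplification of the paper's regional model.

```python
import numpy as np

def partial_duration_series(series, threshold):
    """All exceedances above the threshold, as in the PDS approach."""
    x = np.asarray(series, float)
    return x[x > threshold] - threshold

def pds_t_year_event(series, threshold, n_years, T):
    """T-year return level assuming exponential exceedances:
    z_T = z0 + mean_exceedance * ln(lambda * T), lambda = events/year."""
    exc = partial_duration_series(series, threshold)
    lam = exc.size / n_years  # mean annual number of exceedances
    return threshold + exc.mean() * np.log(lam * T)
```

With one exceedance per year on average, the 1-year event collapses to the threshold itself, which is a quick sanity check on the formula.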

  19. Effects of the Forecasting Methods, Precipitation Character, and Satellite Resolution on the Predictability of Short-Term Quantitative Precipitation Nowcasting (QPN) from a Geostationary Satellite.

    Directory of Open Access Journals (Sweden)

    Yu Liu

The prediction of short-term quantitative precipitation nowcasting (QPN) from consecutive geostationary satellite images has important implications for hydro-meteorological modeling and forecasting. However, systematic analysis of the predictability of QPN is limited. The objective of this study is to evaluate the effects of the forecasting model, precipitation character, and satellite resolution on the predictability of QPN using images of the Chinese geostationary meteorological satellite Fengyun-2F (FY-2F), which covered all intensive observations since its launch, albeit totaling only approximately 10 days. In the first step, three methods were compared to evaluate the performance of the QPN methods: a pixel-based QPN using the maximum correlation method (PMC); the Horn-Schunck optical-flow scheme (PHS); and the Pyramid Lucas-Kanade Optical Flow method (PPLK), which is newly proposed here. Subsequently, the effect of the precipitation systems was assessed using 2338 images of 8 precipitation periods. Then, the resolution dependence was demonstrated by analyzing the QPN at six spatial resolutions (from 0.1° to 0.6°). The results show that the PPLK improves the predictability of QPN, with better performance than the other comparison methods. The predictability of the QPN is significantly determined by the precipitation system, and a coarse spatial resolution of the satellite reduces the predictability of QPN.

  20. Effects of the Forecasting Methods, Precipitation Character, and Satellite Resolution on the Predictability of Short-Term Quantitative Precipitation Nowcasting (QPN) from a Geostationary Satellite.

    Science.gov (United States)

    Liu, Yu; Xi, Du-Gang; Li, Zhao-Liang; Ji, Wei

    2015-01-01

The prediction of short-term quantitative precipitation nowcasting (QPN) from consecutive geostationary satellite images has important implications for hydro-meteorological modeling and forecasting. However, systematic analysis of the predictability of QPN is limited. The objective of this study is to evaluate the effects of the forecasting model, precipitation character, and satellite resolution on the predictability of QPN using images of the Chinese geostationary meteorological satellite Fengyun-2F (FY-2F), which covered all intensive observations since its launch, albeit totaling only approximately 10 days. In the first step, three methods were compared to evaluate the performance of the QPN methods: a pixel-based QPN using the maximum correlation method (PMC); the Horn-Schunck optical-flow scheme (PHS); and the Pyramid Lucas-Kanade Optical Flow method (PPLK), which is newly proposed here. Subsequently, the effect of the precipitation systems was assessed using 2338 images of 8 precipitation periods. Then, the resolution dependence was demonstrated by analyzing the QPN at six spatial resolutions (from 0.1° to 0.6°). The results show that the PPLK improves the predictability of QPN, with better performance than the other comparison methods. The predictability of the QPN is significantly determined by the precipitation system, and a coarse spatial resolution of the satellite reduces the predictability of QPN.
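Of the three schemes compared, the pixel-based maximum correlation method (PMC) is the simplest to sketch: exhaustively test integer displacements between two consecutive images, keep the best-matching one, and advect the latest field along it. This is a toy periodic-boundary implementation; the paper's PMC details (search window, block-wise matching) may differ.

```python
import numpy as np

def best_shift(prev, curr, max_shift=3):
    """Maximum-correlation motion estimate (the PMC idea): the integer
    shift of `prev` that best matches `curr`."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            score = np.sum(shifted * curr)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

def nowcast(curr, motion):
    """Extrapolate the latest field one time step along the motion vector
    (periodic boundaries via np.roll, for simplicity)."""
    dy, dx = motion
    return np.roll(np.roll(curr, dy, axis=0), dx, axis=1)
```

Given two frames where the rain pattern moved by (1, 2) pixels, `best_shift` recovers that vector and `nowcast` predicts the next frame by repeating it.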

  1. Energy distribution of precipitating electrons estimated from optical and cosmic noise absorption measurements

    Directory of Open Access Journals (Sweden)

    H. Mori

    2004-04-01

This study is a statistical analysis of the energy distribution of precipitating electrons, based on CNA (cosmic noise absorption) data obtained from the 256-element imaging riometer at Poker Flat, Alaska (65.11° N, 147.42° W), and optical data measured with an MSP (Meridian Scanning Photometer) over 79 days during the winter periods from 1996 to 1998. On the assumption that the energy distributions of precipitating electrons are Maxwellian, CNA is estimated from the observations of auroral 427.8-nm and 630.0-nm emissions together with an average atmospheric model, and compared with the actual observation data. Although the observation data have a broad distribution, they show systematically larger CNA than the model estimate. CNA determined using kappa or double-Maxwellian distributions, instead of Maxwellian distributions, better explains the distribution of the observed CNA data. Kappa distributions represent a typical energy distribution of electrons in the plasma sheet of the magnetosphere, the source region of precipitating electrons. Pure kappa distributions are more likely during quiet times, and quiet times are more common than active times. This result suggests that the energy distribution of precipitating electrons reflects the energy distribution of electrons in the plasma sheet.

Key words. Ionosphere (auroral ionosphere; particle precipitation; polar ionosphere)
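The Maxwellian-versus-kappa contrast above can be illustrated numerically. The expressions below are unnormalized differential-flux shapes with an illustrative characteristic energy; the point is only that the kappa form has a power-law high-energy tail and tends to the Maxwellian as kappa grows.

```python
import math

def maxwellian(E, E0):
    """Unnormalized Maxwellian flux shape ~ E * exp(-E/E0)."""
    return E * math.exp(-E / E0)

def kappa_dist(E, E0, kappa):
    """Unnormalized kappa flux shape ~ E * (1 + E/(kappa*E0))**-(kappa+1).
    Tends to the Maxwellian as kappa -> infinity; decays as a power law
    at high energy, so the tail dominates the Maxwellian's."""
    return E * (1.0 + E / (kappa * E0)) ** (-(kappa + 1.0))
```

At ten characteristic energies the kappa tail (kappa = 4) is already far above the exponentially suppressed Maxwellian, which is the qualitative reason kappa fits can explain the "systematically larger CNA" noted above.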

  2. Exploration of a Dynamic Merging Scheme for Precipitation Estimation over a Small Urban Catchment

    Science.gov (United States)

Al-Azerji, Sherien; Rico-Ramirez, Miguel; Han, Dawei

    2016-04-01

The accuracy of quantitative precipitation estimation is of significant importance for urban areas due to the potentially damaging consequences that can result from pluvial flooding. Improved accuracy can be achieved by merging rain gauge measurements with weather radar data through different merging methods. Several factors may affect the accuracy of the merged data, and the gauge density used for merging is one of the most important. However, if there are no gauges inside the research area, then a gauge network outside the research area can be used for the merging. Generally speaking, the denser the rain gauge network, the better the merging results that can be achieved. However, in practice, the rain gauge network around the research area is fixed, and the research question concerns the optimal merging area. The hypothesis is that if the merging area is too small, there are fewer gauges for merging and thus the result would be poor. If the merging area is too large, gauges far away from the research area are included in the merging; due to their large distances, those gauges provide little relevant information to the study and may even introduce noise. Therefore, an optimal merging area that produces the best merged rainfall estimate in the research area could exist. To test this hypothesis, the distance from the centre of the research area, and hence the number of merging gauges around it, was gradually increased, and merging with a new domain of radar data was then performed. The performance of the new merging scheme was compared with gridded interpolated rainfall from four experimental rain gauges installed inside the research area for validation. The result of this analysis shows that there is indeed an optimum distance from the centre of the research area, and consequently an optimum number of rain gauges, that produces the best merged rainfall data inside the research area. This study is of
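The optimal-merging-area search can be caricatured as follows. A simple mean-field bias factor stands in for the study's geostatistical merging, and the "validation ratio" stands in for the comparison against the four in-area gauges; all numbers are synthetic.

```python
import numpy as np

def mean_field_bias(radar, gauges, dists, radius):
    """Mean-field bias factor sum(G)/sum(R) using only gauges within
    `radius` of the catchment centre (a stand-in for kriging-based merging)."""
    sel = np.asarray(dists) <= radius
    if not sel.any():
        return 1.0
    g = np.asarray(gauges, float)[sel]
    r = np.asarray(radar, float)[sel]
    return g.sum() / r.sum()

def sweep_radii(radar, gauges, dists, radii, validation_ratio):
    """Pick the merging radius whose bias factor best matches a
    validation ratio derived from in-area gauges."""
    errs = [abs(mean_field_bias(radar, gauges, dists, r) - validation_ratio)
            for r in radii]
    return radii[int(np.argmin(errs))]
```

In the toy data below, the distant third gauge disagrees strongly with the radar, so the small radius that excludes it wins, mirroring the paper's finding of an optimum distance.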

  3. Principles of Quantitative Estimation of the Chaperone-Like Activity

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

Molecular chaperones are able to interact with unfolded states of a protein molecule, preventing aggregation and facilitating folding of the polypeptide chain into the native structure. An understanding of the mechanism of protein aggregation is required to estimate the efficiency of chaperone action in test systems based on the suppression of aggregation of protein substrates. The kinetic regimes of protein aggregation are discussed. Analysis of the aggregation kinetics of proteins shows that after passing the lag phase, aggregation follows, as a rule, first-order kinetics. Methods for the quantitative characterization of the ability of chaperones to prevent aggregation of protein substrates have been elaborated.
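Post-lag first-order kinetics of the kind described above, A(t) = A_lim (1 - exp(-k(t - t0))), can be fitted by a log-linear regression, since ln(1 - A/A_lim) is linear in time with slope -k. This is a generic sketch with synthetic data, not the paper's fitting procedure.

```python
import numpy as np

def first_order_rate(t, A, A_lim, t0=0.0):
    """Estimate the first-order rate constant k from post-lag aggregation
    data via the linearization ln(1 - A/A_lim) = -k (t - t0)."""
    t = np.asarray(t, float)
    A = np.asarray(A, float)
    y = np.log(1.0 - A / A_lim)       # exactly linear for ideal data
    slope, _ = np.polyfit(t - t0, y, 1)
    return -slope
```

On synthetic data generated with k = 0.5 the fit recovers the rate constant, which is the basic check before applying it to measured turbidity curves.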

  4. Quantitative Estimation of Transmitted and Reflected Lamb Waves at Discontinuity

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Hyung Jin; Sohn, Hoon [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2010-08-15

For the application of Lamb waves to structural health monitoring (SHM), understanding their physical characteristics and the interaction between Lamb waves and defects in the host structure is an important issue. In this study, reflected, transmitted, and mode-converted Lamb waves at a discontinuity of a plate structure were simulated, and the amplitude ratios were calculated theoretically using the modal decomposition method. The predicted results were verified by comparison with finite element method (FEM) simulations and experimental results with attached PZTs. The results show that the theoretical prediction agrees closely with the FEM and experimental verification. Moreover, a quantitative estimation method was suggested using the amplitude ratio of Lamb waves at the discontinuity.

  5. Analyzing Spatial and Temporal Variation in Precipitation Estimates in a Coupled Model

    Science.gov (United States)

    Tomkins, C. D.; Springer, E. P.; Costigan, K. R.

    2001-12-01

Integrated modeling efforts at the Los Alamos National Laboratory aim to simulate the hydrologic cycle and study the impacts of climate variability and land use changes on water resources and ecosystem function at the regional scale. The integrated model couples three existing models independently responsible for addressing the atmospheric, land surface, and ground water components: the Regional Atmospheric Model System (RAMS), the Los Alamos Distributed Hydrologic System (LADHS), and the Finite Element and Heat Mass (FEHM). The upper Rio Grande Basin, extending 92,000 km2 over northern New Mexico and southern Colorado, serves as the test site for this model. RAMS uses nested grids to simulate meteorological variables, with the smallest grid over the Rio Grande having 5-km horizontal grid spacing. As the LADHS grid spacing is 100 m, a downscaling approach is needed to estimate meteorological variables from the 5-km RAMS grid for input into LADHS. This study presents daily and cumulative precipitation predictions, in the month of October for water year 1993, and an approach to compare LADHS downscaled precipitation to RAMS-simulated precipitation. The downscaling algorithm is based on kriging, using topography as a covariate to distribute the precipitation and thereby incorporating the topographical resolution achieved at the 100-m grid resolution in LADHS. The results of the downscaling are analyzed in terms of the level of variance introduced into the model, the mean simulated precipitation, and the correlation between the LADHS and RAMS estimates. Previous work presented a comparison of RAMS-simulated and observed precipitation recorded at COOP and SNOTEL sites. The effects of downscaling the RAMS precipitation were evaluated using Spearman and linear correlations and by examining the variance of both populations. The study focuses on determining how the downscaling changes the distribution of precipitation compared to the RAMS estimates. 
Spearman correlations computed for

  6. Evaluation of satellite-based precipitation estimates in winter season using an object-based approach

    Science.gov (United States)

    Li, J.; Hsu, K.; AghaKouchak, A.; Sorooshian, S.

    2012-12-01

Verification has become an integral component of satellite precipitation algorithms and products. A number of object-based verification methods have been proposed to provide diagnostic information regarding a precipitation product's ability to capture the spatial pattern, intensity, and placement of precipitation. However, most object-based methods are not capable of investigating precipitation objects at the storm scale. In this study, an image processing approach known as watershed segmentation was adopted to detect storm-scale rainfall objects. Then, a fuzzy logic-based technique was utilized to diagnose and analyze storm-scale object attributes, including centroid distance, area ratio, intersection area ratio, and orientation angle difference. Three verification metrics (i.e., false alarm ratio, missing ratio, and overall membership score) were generated for validation and verification. Three satellite-based precipitation products, including PERSIANN, CMORPH, and 3B42RT, were evaluated against the NOAA stage IV MPE multi-sensor composite rain analysis at 0.25° by 0.25° on a daily scale in the winter season of 2010 over the contiguous United States. Winter season is dominated by frontal systems, which usually have larger area coverage. All three products and the stage IV observation tend to find large storm objects. With respect to the evaluation attributes, PERSIANN tends to obtain a larger area ratio and consequently has a larger centroid distance to the stage IV observations, while 3B42RT is found to be closer to stage IV for object size. All evaluated products give small orientation angle differences but vary significantly in missing ratio and false alarm ratio. This implies that satellite estimates can fail to detect storms in winter. 
The overall membership scores are close for all three products, which indicates that all three satellite-based precipitation products perform well in capturing the spatial and geometric characteristics of
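Once rainfall objects are segmented, attributes such as area and centroid (from which centroid distance is computed) fall out directly. The sketch below uses plain connected-component labeling as a simplified stand-in for the watershed segmentation named above.

```python
import numpy as np
from scipy import ndimage

def rain_objects(field, threshold=1.0):
    """Label contiguous rain areas above a threshold (a simplified
    stand-in for watershed segmentation) and return per-object
    area and centroid."""
    labels, n = ndimage.label(np.asarray(field) >= threshold)
    objs = []
    for i in range(1, n + 1):
        mask = labels == i
        ys, xs = np.nonzero(mask)
        objs.append({"area": int(mask.sum()),
                     "centroid": (float(ys.mean()), float(xs.mean()))})
    return objs
```

Centroid distances between matched observed and estimated objects, and ratios of their areas, then give exactly the kind of attributes the fuzzy-logic diagnosis above consumes.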

  7. South African Weather Service operational satellite based precipitation estimation technique: applications and improvements

    Directory of Open Access Journals (Sweden)

    E. de Coning

    2010-11-01

Extreme weather related to heavy or more frequent precipitation events seems likely in the future of our planet. While precipitation measurements can be made by means of rain gauges, the obvious disadvantages of point measurements are driving meteorologists towards remotely sensed precipitation methods. In South Africa, more sophisticated and expensive nowcasting technology such as radar and lightning networks is available, supported by a fairly dense rain gauge network of about 1500 gauges. In the rest of southern Africa, rainfall measurements are more difficult to obtain. The availability of the local version of the Unified Model and Meteosat Second Generation satellite data makes these products ideal components of precipitation measurement in data-sparse regions such as Africa. In this article the local version of the Hydroestimator (originally from NOAA/NESDIS) is discussed, as well as its applications for precipitation measurement in this region. Hourly accumulations of the Hydroestimator are currently used as a satellite-based precipitation estimator for the South African Flash Flood Guidance system. However, the Hydroestimator is by no means a perfect representation of the real rainfall. In this study the Hydroestimator and the stratiform rainfall field from the Unified Model are both bias corrected and then combined into a new precipitation field which can feed into the South African Flash Flood Guidance system. This new product should provide a more accurate and comprehensive input to the Flash Flood Guidance systems in South Africa as well as southern Africa. In this way the southern African region, where data are sparse and very few radars are available, can have access to more accurate flash flood guidance.

  8. Bias reduction for Satellite Based Precipitation Estimates using statistical transformations in Guiana Shield

    Science.gov (United States)

    Ringard, Justine; Becker, Melanie; Seyler, Frederique; Linguet, Laurent

    2016-04-01

Currently, satellite-based precipitation estimates exhibit considerable biases, and there have been many efforts to reduce these biases by merging surface gauge measurements with satellite-based estimates. In the Guiana Shield, all products exhibited better performance during the dry season (August–December). All products greatly overestimate very low intensities and underestimate high intensities (above 50 mm). Moreover, the response of each product differs according to the hydro-climatic regime. The aim of this study is to correct the bias of precipitation spatially, and to compare various correction methods in order to define the best method depending on the rainfall characteristic being corrected (intensity, frequency). Four satellite products are used: the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) research product (3B42V7) and real-time product (3B42RT), the Precipitation Estimation from Remotely-Sensed Information using Artificial Neural Network (PERSIANN), and the NOAA Climate Prediction Center (CPC) Morphing technique (CMORPH), for six hydro-climatic regimes between 2001 and 2012. Several statistical transformations are used to correct the bias. Statistical transformations attempt to find a function h that maps a simulated variable Ps such that its new distribution equals the distribution of the observed variable Po. The first is a distribution-derived transformation using a mixture of the Bernoulli and the Gamma distribution, where the Bernoulli distribution models the probability of precipitation occurrence and the Gamma distribution models precipitation intensities. The second is a quantile-quantile relation using a parametric transformation, and the last is a common approach using the empirical CDFs of observed and modelled values instead of assuming parametric distributions. For each correction, 30% of both the simulated and observed data sets are used for calibration and the remainder for validation. 
The validation is tested with statistical
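The empirical-CDF variant of the statistical transformations described above can be sketched with matched quantiles: sort both calibration samples and interpolate each satellite value through the resulting monotone transfer function. The calibration samples below are toy values, and the Bernoulli-Gamma and parametric variants are not shown.

```python
import numpy as np

def quantile_map(values, obs_calib, sat_calib):
    """Empirical quantile mapping: map satellite values through the
    transfer function h defined by matched sorted quantiles of the
    observed and satellite calibration samples (equal sample sizes)."""
    sat_q = np.sort(np.asarray(sat_calib, float))
    obs_q = np.sort(np.asarray(obs_calib, float))
    # same plotting positions on both sorted samples -> monotone h
    return np.interp(values, sat_q, obs_q)
```

If the observed sample is exactly twice the satellite sample, the fitted transfer function doubles every input, which makes the behaviour easy to verify by hand.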

  9. Development of Radar-Satellite Blended QPF (Quantitative Precipitation Forecast) Technique for heavy rainfall

    Science.gov (United States)

    Jang, Sangmin; Yoon, Sunkwon; Rhee, Jinyoung; Park, Kyungwon

    2016-04-01

Due to recent extreme weather and climate change, the frequency and size of localized heavy rainfall events are increasing, which may bring various hazards including sediment-related disasters, flooding, and inundation. To prevent and mitigate damage from such disasters, very short range forecasting and nowcasting of precipitation amounts are very important. Weather radar data are very useful in monitoring and forecasting because weather radar has high spatial and temporal resolution. Generally, extrapolation based on the motion vector is the best method of precipitation forecasting using radar rainfall data for a time frame within a few hours from the present. However, there is a need for improvement because radar rainfall estimates are less accurate than surface rain gauge measurements. To improve the radar rainfall estimates and to take advantage of COMS (Communication, Ocean and Meteorological Satellite) data, a technique to blend the different data types for very short range forecasting purposes was developed in the present study. The motion vector of precipitation systems is estimated from 1.5-km CAPPI (Constant Altitude Plan Position Indicator) reflectivity by a pattern matching method, which indicates the systems' direction and speed of movement, and the blended radar-COMS rain field is used as initial data. Since the original horizontal resolution of COMS is 4 km while that of radar is about 1 km, a spatial downscaling technique is used to downscale the COMS data from 4 to 1 km pixels in order to match the radar data. The accuracy of the rainfall forecasting data was verified using AWS (Automatic Weather System) observations for an extreme rainfall event that occurred in the southern part of the Korean Peninsula on 25 August 2014. The results of this study will be used as input data for an urban stream real-time flood early warning system and a landslide prediction model. Acknowledgement: This research was supported by a grant (13SCIPS04) from the Smart Civil Infrastructure Research Program funded by

  10. Vertical profiles of heating derived from IR-based precipitation estimates during FGGE SOP-1

    Science.gov (United States)

    Robertson, Franklin R.; Vincent, Dayton G.

    1988-01-01

This paper examines a technique for retrieving, from geostationary IR data, the vertical profiles of heating and cooling due to moist diabatic processes. First, GOES IR imagery is used to estimate precipitation fields which are independent of fields inferred from residuals in heat budget analysis based on the FGGE level III-b data. Vertical distributions of the associated heating are then obtained using thermodynamic data from the level III-b analysis, one-dimensional cloud models, and the satellite-estimated precipitation. The technique was applied to infer heating in the South Pacific convergence zone during a portion of FGGE SOP-1, and the results were compared with heat-budget calculations made using the ECMWF analyses.

  11. Operational Estimation of Accumulated Precipitation using Satellite Observation, by Eumetsat Satellite Application facility in Support to Hydrology (H-SAF Consortium).

    Science.gov (United States)

    di Diodato, A.; de Leonibus, L.; Zauli, F.; Biron, D.; Melfi, D.

    2009-04-01

compared with climatic thresholds obtained, basically, from the project "Climate Atlas of Europe" led by Meteo France within the project ECSN (European Climate Support Network) of EUMETNET. To reduce the bias errors introduced by satellite estimates, the rain gauge data are used to perform an intercalibration with the satellite estimates, using information obtained from the GTS network. Precipitation increments are estimated at each observation location from the observation and the interpolated background field. A field of increments is derived by a standard kriging method. The final precipitation analysis is obtained as the sum of the increments and the precipitation estimate at each grid point. It is also considered that major error sources in retrieving 15-minute instantaneous precipitation from cloud-top temperature come from high (cold) non-precipitating clouds and from the use of the same regression coefficients for both warm clouds (stratus) and cold clouds (convective). As that error is intrinsic to the blending technique applied, we are going to improve performance by making use of cloud-type-specific retrievals. To apply such a scheme to the products, we discriminate between convective and stratiform clouds and then retrieve precipitation in parallel for the two cloud classes; the two outputs are then merged into one product, resolving doubly retrieved pixels by keeping the convective retrieval. The basic tool for this is the computation of two different lookup tables associating precipitation with brightness temperature for the two kinds of cloudiness. Cloud discrimination will be done with the NWC-SAF product named "cloud type" for stratified clouds and with an application running operationally at the Italian Met Service, named NEFODINA, for automatic detection of convective phenomena. Results of studies to improve the accumulated precipitation are presented as well. 
The studies exploit the potential to use other source of information like quantitative precipitation
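The increment-based analysis above (observation minus background at gauge sites, interpolated onto the grid and added back) can be sketched compactly. Inverse-distance weighting stands in here for the kriging of increments, and all coordinates and values are invented.

```python
import numpy as np

def analyse(background, bg_at_gauges, gauge_obs, gauge_xy, grid_xy, p=2.0):
    """Observation-increment analysis: increments (obs - background) at
    gauge sites are spread onto the grid (IDW as a stand-in for kriging)
    and added back to the background field."""
    inc = np.asarray(gauge_obs, float) - np.asarray(bg_at_gauges, float)
    gxy = np.asarray(gauge_xy, float)
    out = np.asarray(background, float).copy()
    for i, (x, y) in enumerate(grid_xy):
        d = np.hypot(gxy[:, 0] - x, gxy[:, 1] - y)
        if np.any(d == 0):
            out[i] += inc[np.argmin(d)]   # grid point sits on a gauge
        else:
            w = 1.0 / d ** p
            out[i] += np.sum(w * inc) / w.sum()
    return out
```

With a single gauge the interpolated increment is constant everywhere, so the analysis simply shifts the background by that gauge's correction.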

  12. Quantitative estimation of Nipah virus replication kinetics in vitro

    Directory of Open Access Journals (Sweden)

    Hassan Sharifah

    2006-06-01

Abstract Background Nipah virus is a zoonotic virus isolated from an outbreak in Malaysia in 1998. The virus causes infections in humans, pigs, and several other domestic animals. It has also been isolated from fruit bats. The pathogenesis of Nipah virus infection is still not well described. In the present study, Nipah virus replication kinetics were estimated from infection of African green monkey kidney (Vero) cells using a one-step SYBR® Green I-based quantitative real-time reverse transcriptase-polymerase chain reaction (qRT-PCR) assay. Results The qRT-PCR had a dynamic range of at least seven orders of magnitude and can detect Nipah virus from as low as one PFU/μL. Following initiation of infection, it was estimated that Nipah virus RNA doubles every ~40 minutes and attained a peak intracellular virus RNA level of ~8.4 log PFU/μL at about 32 hours post-infection (PI). Significant extracellular Nipah virus RNA release occurred only after 8 hours PI, and the level peaked at ~7.9 log PFU/μL at 64 hours PI. The estimated rate of Nipah virus RNA release into the cell culture medium was ~0.07 log PFU/μL per hour, and less than 10% of the released Nipah virus RNA was infectious. Conclusion The SYBR® Green I-based qRT-PCR assay enabled quantitative assessment of Nipah virus RNA synthesis in Vero cells. A low rate of extracellular Nipah virus RNA release and a low infectious virus yield, together with extensive syncytium formation during infection, support a cell-to-cell spread mechanism for Nipah virus infection.
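The ~40-minute doubling time quoted above implies simple exponential-growth arithmetic on the log10 titres; a change of Δlog10 corresponds to Δlog10/log10(2) doublings. This is a generic sketch of that conversion, not the study's estimation procedure.

```python
import math

def time_to_grow(log10_start, log10_end, doubling_min=40.0):
    """Minutes of uninterrupted exponential growth needed to move between
    two log10 titres, given a doubling time (default: the ~40 min
    estimated in the abstract)."""
    doublings = (log10_end - log10_start) / math.log10(2.0)
    return doublings * doubling_min
```

For instance, one order of magnitude of growth takes 40/log10(2) ≈ 133 minutes under pure exponential growth.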

  13. Handling uncertainty in quantitative estimates in integrated resource planning

    Energy Technology Data Exchange (ETDEWEB)

    Tonn, B.E. [Oak Ridge National Lab., TN (United States); Wagner, C.G. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Mathematics

    1995-01-01

This report addresses uncertainty in Integrated Resource Planning (IRP). IRP is a planning and decision-making process employed by utilities, usually at the behest of Public Utility Commissions (PUCs), to develop plans to ensure that utilities have the resources necessary to meet consumer demand at reasonable cost. IRP has been used to assist utilities in developing plans that include not only traditional electricity supply options but also demand-side management (DSM) options. Uncertainty is a major issue for IRP. Future values for numerous important variables (e.g., future fuel prices, future electricity demand, stringency of future environmental regulations) cannot ever be known with certainty. Many economically significant decisions are so unique that statistically based probabilities cannot even be calculated. The entire utility strategic planning process, including IRP, encompasses different types of decisions that are made with different time horizons and at different points in time. Because of fundamental pressures for change in the industry, including competition in generation, gone is the time when utilities could easily predict increases in demand, enjoy long lead times to bring on new capacity, and bank on steady profits. The purpose of this report is to address in detail one aspect of uncertainty in IRP: dealing with uncertainty in quantitative estimates, such as the future demand for electricity or the cost to produce a megawatt (MW) of power. A theme which runs throughout the report is that every effort must be made to honestly represent what is known about a variable that can be used to estimate its value, what cannot be known, and what is not known due to operational constraints. Applying this philosophy to the representation of uncertainty in quantitative estimates, it is argued that imprecise probabilities are superior to classical probabilities for IRP.
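The practical consequence of imprecise probabilities is that an expected value becomes an interval rather than a point. A minimal two-scenario sketch, with entirely hypothetical costs and probability bounds, shows the idea; the report's treatment is more general.

```python
def expectation_bounds(v1, v2, p1_lo, p1_hi):
    """Bounds on the expected value of two mutually exclusive scenarios
    when P(scenario 1) is only known to lie in [p1_lo, p1_hi].
    The extremes occur at the interval endpoints."""
    candidates = [p * v1 + (1.0 - p) * v2 for p in (p1_lo, p1_hi)]
    return min(candidates), max(candidates)
```

For example, costs of 100 and 60 with P(scenario 1) in [0.2, 0.5] give an expected-cost interval of [68, 80] rather than a single number, which is the honest representation the report argues for.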

  14. Precipitation estimation in mountainous terrain using multivariate geostatistics. Part II: isohyetal maps

    Science.gov (United States)

    Hevesi, Joseph A.; Flint, Alan L.; Istok, Jonathan D.

    1992-01-01

    Values of average annual precipitation (AAP) may be important for hydrologic characterization of a potential high-level nuclear-waste repository site at Yucca Mountain, Nevada. Reliable measurements of AAP are sparse in the vicinity of Yucca Mountain, and estimates of AAP were needed for an isohyetal mapping over a 2600-square-mile watershed containing Yucca Mountain. Estimates were obtained with a multivariate geostatistical model developed using AAP and elevation data from a network of 42 precipitation stations in southern Nevada and southeastern California. An additional 1531 elevations were obtained to improve estimation accuracy. Isohyets representing estimates obtained using univariate geostatistics (kriging) defined a smooth and continuous surface. Isohyets representing estimates obtained using multivariate geostatistics (cokriging) defined an irregular surface that more accurately represented expected local orographic influences on AAP. Cokriging results included a maximum estimate within the study area of 335 mm at an elevation of 7400 ft, an average estimate of 157 mm for the study area, and an average estimate of 172 mm at eight locations in the vicinity of the potential repository site. Kriging estimates tended to be lower in comparison because the increased AAP expected for remote mountainous topography was not adequately represented by the available sample. Regression results between cokriging estimates and elevation were similar to regression results between measured AAP and elevation. The position of the cokriging 250-mm isohyet relative to the boundaries of pinyon pine and juniper woodlands provided indirect evidence of improved estimation accuracy because the cokriging result agreed well with investigations by others concerning the relationship between elevation, vegetation, and climate in the Great Basin. Calculated estimation variances were also mapped and compared to evaluate improvements in estimation accuracy. Cokriging estimation variances

  15. Quantitative Compactness Estimates for Hamilton-Jacobi Equations

    Science.gov (United States)

    Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.

    2016-02-01

    We study quantitative compactness estimates in $W^{1,1}_{loc}$ for the map $S_t$, $t > 0$, that associates with a given initial datum $u_0 \in \mathrm{Lip}(\mathbb{R}^N)$ the corresponding solution $S_t u_0$ of a Hamilton-Jacobi equation $u_t + H(\nabla_x u) = 0$, $t \ge 0$, $x \in \mathbb{R}^N$, with a uniformly convex Hamiltonian $H = H(p)$. We provide upper and lower estimates of order $1/\varepsilon^N$ on the Kolmogorov $\varepsilon$-entropy in $W^{1,1}$ of the image through the map $S_t$ of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws. XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.

  16. Flood forecasting in Niger-Benue basin using satellite and quantitative precipitation forecast data

    Science.gov (United States)

    Haile, Alemseged Tamiru; Tefera, Fekadu Teshome; Rientjes, Tom

    2016-10-01

    Availability of reliable, timely and accurate rainfall data is constraining the establishment of flood forecasting and early warning systems in many parts of Africa. We evaluated the potential of satellite and weather forecast data as input to a parsimonious flood forecasting model to provide information for flood early warning in the central part of Nigeria. We calibrated the HEC-HMS rainfall-runoff model using rainfall data from the post-real-time Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) product. Real-time TMPA satellite rainfall estimates and European Centre for Medium-Range Weather Forecasts (ECMWF) rainfall products were tested for flood forecasting. The implication of removing the systematic errors of the satellite rainfall estimates (SREs) was explored. Performance of the rainfall-runoff model was assessed using visual inspection of simulated and observed hydrographs and a set of performance indicators. The forecast skill was assessed for 1-6 day lead times using categorical verification statistics such as Probability Of Detection (POD), Frequency Of Hit (FOH) and Frequency Of Miss (FOM). The model satisfactorily reproduced the pattern and volume of the observed streamflow hydrograph of the Benue River. Overall, our results show that SREs and rainfall forecasts from weather models have great potential to serve as model inputs for real-time flood forecasting in data-scarce areas. For these data to receive application in African transboundary basins, we suggest (i) removing their systematic error to further improve flood forecast skill; (ii) improving rainfall forecasts; and (iii) improving data sharing between riparian countries.
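
The categorical skill measures named here are standard contingency-table statistics; a minimal sketch (threshold and data are illustrative, not from the study):

```python
def categorical_scores(forecast, observed, threshold):
    """POD, FOH and FOM from a 2x2 contingency table of threshold
    exceedances: POD = hits/(hits+misses), FOH = hits/(hits+false alarms),
    FOM = misses/(hits+misses) = 1 - POD."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        f_event, o_event = f >= threshold, o >= threshold
        if f_event and o_event:
            hits += 1
        elif o_event:
            misses += 1
        elif f_event:
            false_alarms += 1
    pod = hits / (hits + misses)
    foh = hits / (hits + false_alarms)
    fom = misses / (hits + misses)
    return pod, foh, fom

# Toy forecast/observation pairs scored against a hypothetical flood threshold.
pod, foh, fom = categorical_scores(
    [5.0, 0.0, 7.0, 2.0], [6.0, 4.0, 1.0, 0.0], threshold=3.0)
```

With one hit, one miss and one false alarm, each score comes out to 0.5.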

  17. Q Conversion Factor Models for Estimating Precipitable Water Vapor for Turkey

    Science.gov (United States)

    Deniz, Ilke; Mekik, Cetin; Gurbuz, Gokhan

    2015-04-01

    precipitable water vapor is the conversion factor Q, as used in the studies of Emardson and Derks and of Jade and Vijayan. Developing a regional model using either the Tm-Ts equation or the conversion factor Q will provide a basis for GNSS meteorology in Turkey, which depends on the analysis of radiosonde profile data. For this purpose, radiosonde profiles from the Istanbul, Ankara, Diyarbakır, Samsun, Erzurum, Izmir, Isparta and Adana stations are analyzed with the radiosonde analysis algorithm in the context of 'The Estimation of Atmospheric Water Vapour with GPS' project, funded by the Scientific and Technological Research Council of Turkey (TUBITAK). The project also contributes to COST Action ES1206: Advanced Global Navigation Satellite Systems tropospheric products for monitoring severe weather events and climate (GNSS4SWEC). In this study, regional models using the conversion factor Q are used for the determination of precipitable water vapor and applied to the GNSS-derived wet tropospheric zenith delays. The estimated precipitable water vapor is then compared with the precipitable water vapor obtained from the radiosonde stations. The averages of the differences between the radiosonde and model estimates for the Istanbul and Ankara stations are 2.0±1.6 mm and 1.6±1.6 mm, respectively.
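
The conversion factor Q links a GNSS-derived zenith wet delay (ZWD) to precipitable water vapor as PWV = ZWD / Q. A minimal sketch using the widely used mean-temperature formulation (the refractivity constants and the Tm value below are standard literature values, not taken from this study, whose regional Q models differ in detail):

```python
# Standard atmospheric refractivity constants (assumed literature values).
RHO_W = 1000.0   # density of liquid water, kg/m^3
R_V = 461.5      # specific gas constant of water vapor, J/(kg K)
K2P = 0.221      # refractivity constant k2', K/Pa
K3 = 3739.0      # refractivity constant k3, K^2/Pa

def conversion_factor_q(tm_kelvin):
    """Dimensionless Q such that PWV = ZWD / Q, given the water-vapor
    weighted mean temperature Tm of the atmospheric column."""
    pi = 1.0e6 / (RHO_W * R_V * (K2P + K3 / tm_kelvin))  # PWV = pi * ZWD
    return 1.0 / pi

def pwv_mm(zwd_mm, tm_kelvin):
    """Precipitable water vapor (mm) from zenith wet delay (mm)."""
    return zwd_mm / conversion_factor_q(tm_kelvin)

q275 = conversion_factor_q(275.0)   # Q is about 6.4 at Tm = 275 K
pwv = pwv_mm(130.0, 275.0)          # a 130 mm ZWD maps to roughly 20 mm PWV
```

Regional Q models of the kind developed in the study replace the fixed Tm with a function of surface observations, but the ZWD-to-PWV step has this shape.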

  18. Estimating return periods for daily precipitation extreme events over the Brazilian Amazon

    Science.gov (United States)

    Santos, Eliane Barbosa; Lucio, Paulo Sérgio; Santos e Silva, Cláudio Moisés

    2016-11-01

    This paper aims to model the occurrence of daily precipitation extreme events and to estimate the return periods of these events through extreme value theory, using the generalized extreme value (GEV) and generalized Pareto (GPD) distributions. The GEV and GPD were applied to precipitation series of homogeneous regions of the Brazilian Amazon. The GEV and GPD goodness of fit was evaluated by quantile-quantile (Q-Q) plots and by the Kolmogorov-Smirnov (KS) test, which compares the empirical cumulative distributions with the theoretical ones. The Q-Q plots suggest that the probability distributions of the studied series are appropriate, and these results were confirmed by the KS test, which demonstrates that the tested distributions fit well in all sub-regions of the Amazon and are thus adequate for studying daily precipitation extreme events. For all return levels studied, more intense precipitation extremes are expected to occur within the southern sub-regions and the coastal area of the Brazilian Amazon. The results may have practical application in local extreme weather forecasting.
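
Fitting a GEV to annual maxima and reading off return levels can be sketched with SciPy (synthetic stand-in data, not the Amazon series used in the study):

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic stand-in for an annual-maximum daily precipitation series (mm).
annual_max = genextreme.rvs(c=-0.1, loc=80.0, scale=15.0, size=60,
                            random_state=42)

# Maximum-likelihood GEV fit (SciPy's shape c has sign opposite to the
# climatological convention).
shape, loc, scale = genextreme.fit(annual_max)

def return_level(T_years):
    """Depth expected to be exceeded on average once every T_years,
    i.e. the (1 - 1/T) quantile of the annual-maximum distribution."""
    return genextreme.ppf(1.0 - 1.0 / T_years, shape, loc=loc, scale=scale)

rl10, rl100 = return_level(10.0), return_level(100.0)
```

The GPD analysis of the paper differs (it models threshold excesses), but the return-level inversion follows the same quantile logic.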

  19. Estimating Loess Plateau Average Annual Precipitation with Multiple Linear Regression Kriging and Geographically Weighted Regression Kriging

    Directory of Open Access Journals (Sweden)

    Qiutong Jin

    2016-06-01

    Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science. In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK) and geographically weighted regression Kriging (GWRK) methods were employed using precipitation data for the period 1980–2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM), normalized difference vegetation index (NDVI), solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps, interpolated at 500 m spatial resolution by MLRK and GWRK, had high accuracy and captured detailed spatial distributions; however, MLRK produced a lower prediction error and a higher explained variance than GWRK, although the differences were small, in contrast to conclusions from similar studies.
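
Regression Kriging sums a regression trend on covariates and a kriged interpolation of the trend residuals. A minimal 1-D sketch of that residual-kriging idea, with a fixed exponential covariance model (all coordinates, covariate values, and covariance parameters are illustrative, not from the Loess Plateau dataset):

```python
import numpy as np

def regression_kriging(x_obs, cov_obs, z_obs, x_new, cov_new, c0=1.0, a=2.0):
    """Trend by ordinary least squares on the covariate, residuals by
    simple kriging with an exponential covariance C(h) = c0*exp(-h/a)."""
    # 1) Linear trend z ~ b0 + b1 * covariate.
    A = np.column_stack([np.ones_like(cov_obs), cov_obs])
    beta, *_ = np.linalg.lstsq(A, z_obs, rcond=None)
    resid = z_obs - A @ beta

    # 2) Simple kriging of the residuals.
    h_obs = np.abs(x_obs[:, None] - x_obs[None, :])
    C = c0 * np.exp(-h_obs / a) + 1e-9 * np.eye(len(x_obs))  # jitter
    h_new = np.abs(x_new[:, None] - x_obs[None, :])
    c_vec = c0 * np.exp(-h_new / a)
    resid_new = (c_vec @ np.linalg.inv(C)) @ resid

    # 3) Prediction = trend at new covariate + kriged residual.
    trend_new = np.column_stack([np.ones_like(cov_new), cov_new]) @ beta
    return trend_new + resid_new

# Toy stations: position, covariate (e.g. elevation), observed precipitation.
x_obs = np.array([0.0, 1.0, 2.0, 3.0])
elev = np.array([100.0, 150.0, 120.0, 180.0])
z_obs = np.array([5.0, 8.5, 7.0, 10.0])
pred = regression_kriging(x_obs, elev, z_obs, x_obs, elev)
```

At the observation points the predictor honors the data (kriging is an exact interpolator up to the stabilizing jitter); MLRK and GWRK differ in how the trend coefficients are estimated, not in this overall decomposition.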

  20. The impact of assimilating radar-estimated rain rates on simulation of precipitation in the 17-18 July 1996 Chicago floods

    Science.gov (United States)

    Wang, Xingbao; Yau, M. K.; Nagarajan, B.; Fillion, Luc

    2010-03-01

    Rainfall prediction remains one of the most challenging problems in weather forecasting. In order to improve high-resolution quantitative precipitation forecasts (QPF), a new procedure for assimilating rain rates derived from radar composite reflectivity has been proposed and tested in a numerical simulation of the Chicago floods of 17-18 July 1996. The methodology is based on the one-dimensional variational (1DVAR) assimilation approach introduced by Fillion and Errico, but applied here using the Kain-Fritsch convective parameterization scheme (KF CPS). The novel feature of this work is the continuous assimilation of radar-estimated rain rate over a three-hour period, rather than a single assimilation at the initial (analysis) time. Most of the characteristics of this precipitation event, including the propagation and regeneration of mesoscale convective systems, the frontal boundary across the Midwest and the evolution of the low-level jet, are better captured in the simulation when the radar-estimated precipitation rate is assimilated. The results indicate that precipitation assimilation during the early stage can improve the simulated mesoscale features of the convective system and shorten the spin-up time significantly. Comparison of precipitation forecasts between the experiments with and without the 1DVAR indicates that the 1DVAR scheme has a positive impact on the QPF up to 36 hours in terms of the bias and bias-equalized threat scores.

  1. Impact of time displaced precipitation estimates for on-line updated models

    DEFF Research Database (Denmark)

    Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen

    2012-01-01

    When an online runoff model is updated from system measurements, the requirements to the precipitation estimates change. Using rain gauge data as precipitation input, there will be a displacement between the time where the rain intensity hits the gauge and the time where the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present for system measurements, the data assimilation scheme might already have updated the model to include the impact from the particular rain cell when the rain data is forced upon the model, which therefore will end up including the same rain twice in the model run. This paper compares the forecast accuracy of updated models using time-displaced rain input to that of rain input with constant biases. This is done using a simple time-area model and historic rain series...
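
The displacement effect can be reproduced with a toy time-area model: runoff is the convolution of the rain series with a time-area histogram, and a rain series displaced in time shifts the whole predicted hydrograph, which is exactly what an updating scheme then tries to "correct" with rain the model has already seen. (The unit histogram and rain series below are illustrative only, not the paper's model.)

```python
import numpy as np

# Time-area histogram: fraction of the catchment contributing at each lag.
time_area = np.array([0.2, 0.5, 0.3])

def runoff(rain):
    """Runoff hydrograph as convolution of rain with the time-area curve."""
    return np.convolve(rain, time_area)

rain_at_catchment = np.array([0.0, 4.0, 6.0, 2.0, 0.0])
# A gauge upwind sees the same rain cell one time step earlier.
rain_at_gauge = np.roll(rain_at_catchment, -1)

q_true = runoff(rain_at_catchment)
q_displaced = runoff(rain_at_gauge)
# q_displaced is q_true shifted one step early: the model rains too soon,
# and a data-assimilation update may then add the same water a second time.
```

Because convolution is volume-preserving, both hydrographs carry the same total runoff; only the timing differs.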

  2. A Parameter Estimation Scheme for Multiscale Kalman Smoother (MKS) Algorithm Used in Precipitation Data Fusion

    Science.gov (United States)

    Wang, Shugong; Liang, Xu

    2013-01-01

    A new approach is presented in this paper to effectively obtain parameter estimates for the Multiscale Kalman Smoother (MKS) algorithm. This new approach demonstrates promising potential for deriving better data products based on data of different spatial scales and precisions. Our new approach employs a multi-objective (MO) parameter estimation scheme (called the MO scheme hereafter) rather than the conventional maximum likelihood scheme (called the ML scheme) to estimate the MKS parameters. Unlike the ML scheme, the MO scheme is not simply built on strict statistical assumptions about prediction errors and observation errors; rather, it directly associates the fused data of multiple scales with multiple objective functions when searching for the best MKS parameter estimates through optimization. In the MO scheme, objective functions are defined to facilitate consistency between the fused data at multiple scales and the input data at their original scales in terms of spatial patterns and magnitudes. The new approach is evaluated through a Monte Carlo experiment and a series of comparison analyses using synthetic precipitation data. Our results show that the MKS-fused precipitation performs better using the MO scheme than using the ML scheme. In particular, improvements over the ML scheme are significant for the fused precipitation at fine spatial resolutions. This is mainly due to the MO scheme involving more criteria and constraints than the ML scheme. The weakness of the original ML scheme, which blindly puts more weight on the data associated with finer resolutions, is overcome in our new approach.

  3. Predicting cement distribution in geothermal sandstone reservoirs based on estimates of precipitation temperatures

    Science.gov (United States)

    Olivarius, Mette; Weibel, Rikke; Whitehouse, Martin; Kristensen, Lars; Hjuler, Morten L.; Mathiesen, Anders; Boyce, Adrian J.; Nielsen, Lars H.

    2016-04-01

    Exploitation of geothermal sandstone reservoirs is challenged by pore-cementing minerals since they reduce the fluid flow through the sandstones. Geothermal exploration aims at finding sandstone bodies located at depths that are adequate for sufficiently warm water to be extracted, but without being too cemented for warm water production. The amount of cement is highly variable in the Danish geothermal reservoirs, which mainly comprise the Bunter Sandstone, Skagerrak and Gassum formations. The present study involves bulk and in situ stable isotope analyses of calcite, dolomite, ankerite, siderite and quartz in order to estimate at what depth they were formed and to enable prediction of where they can be found. The δ18O values measured in the carbonate minerals and quartz overgrowths are related to depth since they result from the temperatures of the pore fluid. Thus the values indicate the precipitation temperatures, and they fit the relative diagenetic timing identified by petrographic observations. The sandstones deposited during arid climatic conditions contain calcite and dolomite cement that formed during early diagenesis. These carbonate minerals precipitated in response to different processes, and precipitation of macro-quartz took over at deeper burial. Siderite was the first carbonate mineral that formed in the sandstones that were deposited in a humid climate. Calcite began precipitating at increased burial depth, and ankerite formed during deep burial and replaced some of the other phases. Ankerite and quartz formed in the same temperature interval, so constraints on the isotopic composition of the pore fluid can be obtained. Differences in δ13C values exist between the sandstones that were deposited in arid versus humid environments, which suggests that different kinds of processes were active. The estimated precipitation temperatures of the different cement types are used to predict which of them are present in geothermal sandstone reservoirs in

  4. TRMM Science Highlights and Status of Precipitation Estimates on Monthly and Finer Time Scales

    Science.gov (United States)

    Adler, Robert; Einaudi, Franco (Technical Monitor)

    2001-01-01

    The Tropical Rainfall Measuring Mission (TRMM) has completed three years in orbit. A summary of research highlights will be presented, focusing on applications of TRMM data to topics ranging from climate analysis, through improving forecasts, to microphysical research. Monthly surface rainfall estimates over the ocean based on different instruments on TRMM currently differ by 20%. The difference is not surprising considering the different types of observations available for the first time from TRMM with both the passive and active microwave sensors. Resolving this difference will strengthen the validity and utility of ocean rainfall estimates and is the topic of ongoing research utilizing various facets of the TRMM validation and field experiment programs. The TRMM rainfall estimates are intercompared among themselves and with other estimates, including those of the standard monthly Global Precipitation Climatology Project (GPCP) analysis. The GPCP analysis agrees roughly in magnitude with the passive microwave-based TRMM estimates, which is not surprising considering that GPCP over-ocean estimates are based on passive microwave observations. A three-year TRMM rainfall climatology is presented based on the TRMM merged product, including anomaly fields related to the changing ENSO situation during the mission. Results of merging TRMM, other passive microwave observations, and geosynchronous infrared rainfall estimates into a global, tropical 3-hour time resolution analysis will also be described.

  5. Quantitative Estimates of Bio-Remodeling on Coastal Rock Surfaces

    Directory of Open Access Journals (Sweden)

    Marta Pappalardo

    2016-05-01

    Remodeling of rocky coasts and erosion rates have been widely studied in past years, but not all of the processes acting on rock surfaces have yet been quantitatively evaluated. The first goal of this paper is to review the different methodologies employed to quantify the effect of biotic agents on rocks exposed to coastal morphologic agents, comparing their efficiency. Secondly, we focus on geological methods to assess and quantify bio-remodeling, presenting case studies from an area of the Mediterranean Sea in which different geological methods, inspired by the reviewed literature, have been tested in order to provide a quantitative assessment of the effects some biological covers exert on rocky platforms in tidal and supratidal environments. In particular, different experimental designs based on Schmidt hammer test results have been applied in order to estimate rock hardness related to different orders of littoral platforms and the bio-erosive/bio-protective role of Chthamalus spp. and Verrucaria adriatica. All data collected have been analyzed using statistical tests to evaluate the significance of the measurements and methodologies. The effectiveness of this approach is analyzed, and its limits are highlighted. In order to overcome the latter, a strategy combining geological and experimental-computational approaches is proposed, potentially capable of revealing novel clues on bio-erosion dynamics. An experimental-computational proposal to assess the indirect effects of biofilm coverage of rocky shores is presented in this paper, focusing on the shear forces exerted during hydration-dehydration cycles. The results of computational modeling can be compared to experimental evidence, from nanoscopic to macroscopic scales.

  6. Comparison of recorded rainfall with quantitative precipitation forecast in a rainfall-runoff simulation for the Langat River Basin, Malaysia

    Science.gov (United States)

    Billa, Lawal; Assilzadeh, Hamid; Mansor, Shattri; Mahmud, Ahmed; Ghazali, Abdul

    2011-09-01

    Observed rainfall is used for runoff modeling in flood forecasting where possible; however, in cases where the response time of the watershed is too short for flood warning activities, a deterministic quantitative precipitation forecast (QPF) can be used. This is based on a limited-area meteorological model and can provide a forecasting horizon on the order of six hours or less. This study applies the results of a previously developed QPF based on a 1D cloud model using hourly NOAA-AVHRR (Advanced Very High Resolution Radiometer) and GMS (Geostationary Meteorological Satellite) datasets. Rainfall intensity values in the range of 3-12 mm/hr were extracted from these datasets based on the relation between cloud top temperature (CTT), cloud reflectance (CTR) and cloud height (CTH), using defined thresholds. The QPF, prepared for the rainstorm event of 27 September to 8 October 2000, was tested for rainfall runoff on the Langat River Basin, Malaysia, using the NAM rainfall-runoff model. The response of the basin to rainfall-runoff simulations using both the QPF estimate and the recorded rainfall is compared here on the basis of the corresponding discharge hydrographs. The comparison of the QPF and recorded rainfall showed R² = 0.9028 for the entire basin. The runoff hydrograph for the recorded rainfall in the Kajang sub-catchment showed R² = 0.9263 between the observed and the simulated, while that of the QPF rainfall showed R² = 0.819. This similarity in runoff suggests a high level of accuracy in the improved QPF, and that significant improvement of flood forecasting can be achieved through `Nowcasting', thus increasing the response time for flood early warnings.

  7. Quantitative TEM analysis of precipitation and grain boundary segregation in neutron irradiated EUROFER 97

    Science.gov (United States)

    Dethloff, Christian; Gaganidze, Ermile; Aktaa, Jarir

    2014-11-01

    Characterization of irradiation induced microstructural defects is essential for assessing the applicability of structural steels like the Reduced Activation Ferritic/Martensitic steel EUROFER 97 in upcoming fusion reactors. In this work Transmission Electron Microscopy (TEM) is used to analyze the types and structure of precipitates, and the evolution of their size distributions and densities caused by neutron irradiation to a dose of 32 displacements per atom (dpa) at 330-340 °C in the irradiation experiment ARBOR 1. A significant growth of MX and M23C6 type precipitates is observed after neutron irradiation, while the precipitate density remains unchanged. Hardening caused by MX and M23C6 precipitate growth is assessed by applying the Dispersed Barrier Hardening (DBH) model, and shown to be of minor importance when compared to other irradiation effects like dislocation loop formation. Additionally, grain boundary segregation of chromium induced by neutron irradiation was investigated and detected in irradiated specimens.
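
The dispersed barrier hardening estimate referred to has the standard form Δσ = M·α·μ·b·√(N·d). A sketch with illustrative values typical of the literature on ferritic/martensitic steels (none of the numbers are taken from this study):

```python
import math

def dbh_hardening(N, d, alpha, M=3.06, mu=82.0e9, b=0.248e-9):
    """Dispersed barrier hardening increment (Pa).

    N     -- obstacle number density (m^-3)
    d     -- mean obstacle diameter (m)
    alpha -- obstacle strength (dimensionless)
    M     -- Taylor factor; mu -- shear modulus (Pa); b -- Burgers vector (m)
    """
    return M * alpha * mu * b * math.sqrt(N * d)

# Illustrative weak-obstacle precipitate population after irradiation:
# N = 1e22 m^-3, d = 10 nm, alpha = 0.1 gives a few tens of MPa.
delta_sigma = dbh_hardening(N=1.0e22, d=10.0e-9, alpha=0.1)
```

Comparing such precipitate terms against the (much larger) dislocation-loop term is how the abstract's "minor importance" conclusion is quantified.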

  8. Evaluation and Correction of Quantitative Precipitation Forecast by Storm-Scale NWP Model in Jiangsu, China

    Directory of Open Access Journals (Sweden)

    Gaili Wang

    2016-01-01

    With the development of high-performance computer systems and data assimilation techniques, storm-scale numerical weather prediction (NWP) models are gradually being used for short-term deterministic forecasts. The primary objective of this study is to evaluate and correct precipitation forecasts of a storm-scale NWP model called the Advanced Regional Prediction System (ARPS). The evaluation and correction consider five heavy precipitation events that occurred in the summer of 2015 in Jiangsu, China. The performances of the original and corrected ARPS precipitation forecasts are evaluated as a function of lead time using standard measures and a spatial verification method called Structure-Amplitude-Location (SAL). In general, the ARPS could not produce optimal forecasts for very short lead times, and forecast accuracy improves with increasing lead time. The ARPS overestimates precipitation at all lead times, which is confirmed by large bias in many forecasts in the first and second quadrants of the SAL diagram, especially at the 1 h lead time. The amplitude correction is performed by matching percentile values of the ARPS precipitation forecasts and observations for each lead time. Amplitude correction significantly improved the ARPS precipitation forecasts in terms of the standard performance measures and the A- and S-components of SAL.
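
The percentile-matching amplitude correction described is essentially empirical quantile mapping; a minimal sketch (synthetic climatologies, not the ARPS forecasts):

```python
import numpy as np

def quantile_map(values, fc_sample, obs_sample, n_quantiles=101):
    """Replace each forecast value by the observed value at the same
    empirical percentile, estimated from training samples."""
    probs = np.linspace(0.0, 1.0, n_quantiles)
    fc_q = np.quantile(fc_sample, probs)     # forecast quantile curve
    obs_q = np.quantile(obs_sample, probs)   # observed quantile curve
    # Piecewise-linear map from forecast quantiles onto observed quantiles.
    return np.interp(values, fc_q, obs_q)

# Toy case: the "forecast" systematically doubles precipitation amounts.
obs_sample = np.linspace(0.0, 50.0, 201)
fc_sample = 2.0 * obs_sample
corrected = quantile_map(np.array([20.0, 40.0]), fc_sample, obs_sample)
```

For this toy overestimating forecast the mapping halves each value, removing the amplitude bias while preserving the rank order of events.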

  9. Probabilistic correction of precipitation measurement errors using a Bayesian Model Average Approach applied for the estimation of glacier accumulation

    Science.gov (United States)

    Moya Quiroga, Vladimir; Mano, Akira; Asaoka, Yoshihiro; Udo, Keiko; Kure, Shuichi; Mendoza, Javier

    2013-04-01

    Precipitation is a major component of the water cycle, returning atmospheric water to the ground; without it, rivers would eventually run dry. Although precipitation measurement seems an easy and simple procedure, it is affected by several systematic errors which lead to underestimation of the actual precipitation; hence, precipitation measurements should be corrected before their use. Different correction approaches have already been suggested for precipitation measurements. Nevertheless, focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this presentation we propose a Bayesian model average (BMA) approach for correcting rain gauge measurement errors. In the present study we used meteorological data recorded every 10 minutes at the Condoriri station in the Bolivian Andes. Comparing rain gauge measurements with totalisator measurements, it was possible to estimate the rain underestimation. First, different deterministic models were optimized for the correction of precipitation considering wind effects and precipitation intensities. Then, probabilistic BMA correction was performed. The corrected precipitation was then separated into rainfall and snowfall considering typical Andean temperature thresholds of -1°C and 3°C; hence, precipitation was separated into rainfall, snowfall and mixed precipitation. Then, relating the total snowfall to the glacier ice density, it was possible to estimate the glacier accumulation. Results show a yearly glacier accumulation of 1200 mm/year. Besides, results confirm that in tropical glaciers winter is not an accumulation period but one of low ablation. Results show that neglecting such correction may induce an underestimation greater than 35% of total precipitation. Besides, the uncertainty range may induce differences up
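
The temperature-threshold phase separation can be sketched as follows, with a linear mixed-phase fraction between the -1°C and 3°C thresholds cited (the linear ramp between the thresholds is a common assumption; the abstract only states the thresholds themselves):

```python
def snow_fraction(temp_c, t_snow=-1.0, t_rain=3.0):
    """Fraction of precipitation falling as snow at air temperature temp_c:
    all snow below t_snow, all rain above t_rain, linear in between."""
    if temp_c <= t_snow:
        return 1.0
    if temp_c >= t_rain:
        return 0.0
    return (t_rain - temp_c) / (t_rain - t_snow)

def partition(precip_mm, temp_c):
    """Split a precipitation amount into (snow_mm, rain_mm)."""
    f = snow_fraction(temp_c)
    return precip_mm * f, precip_mm * (1.0 - f)
```

Summing the snow component over a season and dividing by the glacier ice density gives the accumulation estimate described in the abstract.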

  10. Global estimate of lichen and bryophyte contributions to forest precipitation interception

    Science.gov (United States)

    Van Stan, John; Porada, Philipp; Kleidon, Axel

    2017-04-01

    Interception of precipitation by forest canopies plays an important role in its partitioning into evaporation, transpiration and runoff. Field observations show that arboreal lichens and bryophytes can substantially enhance forests' precipitation storage and evaporation. However, representations of canopy interception in global land surface models currently ignore arboreal lichen and bryophyte contributions. This study uses the lichen and bryophyte model (LiBry) to provide the first process-based modelling approach estimating these organisms' contributions to canopy water storage and evaporation. The global mean value of forest water storage capacity increased significantly, from 0.87 mm to 1.33 mm, with the inclusion of arboreal poikilohydric organisms. Global forest canopy evaporation of intercepted precipitation was also greatly enhanced, by 44%. The ratio of total to bare-canopy global evaporation exceeded 2 in many forested regions. This altered global patterns in canopy water storage, evaporation, and ultimately the proportion of rainfall evaporated. A sensitivity analysis was also performed. Results indicate that rainfall interception is of larger magnitude than previously reported by global land surface modelling work because of the important role of lichens and bryophytes in rainfall interception.

  11. Estimating the exceedance probability of extreme rainfalls up to the probable maximum precipitation

    Science.gov (United States)

    Nathan, Rory; Jordan, Phillip; Scorah, Matthew; Lang, Simon; Kuczera, George; Schaefer, Melvin; Weinmann, Erwin

    2016-12-01

    If risk-based criteria are used in the design of high hazard structures (such as dam spillways and nuclear power stations), then it is necessary to estimate the annual exceedance probability (AEP) of extreme rainfalls up to and including the Probable Maximum Precipitation (PMP). This paper describes the development and application of two largely independent methods to estimate the frequencies of such extreme rainfalls. One method is based on stochastic storm transposition (SST), which combines the "arrival" and "transposition" probabilities of an extreme storm using the total probability theorem. The second method, based on "stochastic storm regression" (SSR), combines frequency curves of point rainfalls with regression estimates of local and transposed areal rainfalls; rainfall maxima are generated by stochastically sampling the independent variates, where the required exceedance probabilities are obtained using the total probability theorem. The methods are applied to two large catchments (with areas of 3550 km² and 15,280 km²) located in inland southern Australia. Both methods were found to provide similar estimates of the frequency of extreme areal rainfalls for the two study catchments. The best estimates of the AEP of the PMP for the smaller and larger of the catchments were found to be 10⁻⁷ and 10⁻⁶, respectively, but the uncertainty of these estimates spans one to two orders of magnitude. Additionally, the SST method was applied to a range of locations within a meteorologically homogeneous region to investigate the nature of the relationship between the AEP of PMP and catchment area.
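
The total probability combination at the heart of stochastic storm transposition can be sketched numerically: the exceedance probability of a catchment depth is accumulated over historical storms as each storm's annual arrival probability times the probability that a random transposition of that storm drops at least the target depth on the catchment. (All numbers below are hypothetical, and the simple additive combination ignores multiple arrivals per year, which the published methods treat more carefully.)

```python
def aep_by_sst(storms, target_depth):
    """Annual exceedance probability of target_depth via total probability.

    storms: list of (arrival_prob, transposed_depths), where
    transposed_depths are equally likely catchment-average depths obtained
    by transposing the storm over the homogeneous region."""
    aep = 0.0
    for arrival_prob, depths in storms:
        p_exceed = sum(d >= target_depth for d in depths) / len(depths)
        aep += arrival_prob * p_exceed
    return aep

# Two hypothetical storms: annual arrival probabilities and catchment
# depths (mm) from a handful of transpositions each.
storms = [
    (0.01, [120.0, 90.0, 60.0, 30.0]),
    (0.002, [200.0, 150.0, 80.0, 40.0]),
]
aep = aep_by_sst(storms, target_depth=100.0)
```

Here the first storm exceeds 100 mm in 1 of 4 transpositions and the second in 2 of 4, giving an AEP of 0.01×0.25 + 0.002×0.5 = 0.0035.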

  12. Daily precipitation estimation through different microwave sensors: Verification study over Italy

    Science.gov (United States)

    Ciabatta, Luca; Marra, Anna Cinzia; Panegrossi, Giulia; Casella, Daniele; Sanò, Paolo; Dietrich, Stefano; Massari, Christian; Brocca, Luca

    2017-02-01

    The accurate estimation of rainfall from remote sensing is of paramount importance for many applications, for instance the mitigation of natural hazards such as floods, droughts, and landslides. Traditionally, microwave observations at frequencies between 10 and 183 GHz are used for estimating rainfall based on the direct interaction of radiation with the hydrometeors within precipitating clouds, in a so-called top-down approach. Recently, a bottom-up approach was proposed that uses satellite soil moisture products derived from microwave observations. In this study, we perform a long-term (3 year) assessment of different satellite rainfall products exploiting the full range of microwave frequencies over Italy. Specifically, we integrate two top-down algorithms (CDRD, Cloud Dynamics and Radiation Database, and PNPR, Passive microwave Neural network Precipitation Retrieval) for estimating rainfall from conically and cross-track scanning radiometers, and one bottom-up algorithm (SM2RAIN) applied to the Advanced SCATterometer soil moisture product. The performances of the products, individually and merged together, are assessed at the daily time scale. The integration of top-down and bottom-up approaches provides the highest performance in terms of both continuous and categorical scores (i.e., median correlation coefficient and root mean square error values equal to 0.71 and 6.62 mm, respectively). In such a combination the limitations of the two approaches are compensated: SM2RAIN allows a better estimation of ground-accumulated rainfall, while the CDRD-PNPR product overcomes the limitations of rainfall estimation for intense events during wet conditions. The accuracy and the reliability of the merged product open new possibilities for testing in hydrological applications, such as the monitoring and prediction of floods and droughts over large areas, including regions where ground-based measurements are sparse or not

  13. Quantitative Model for Estimating Soil Erosion Rates Using 137Cs

    Institute of Scientific and Technical Information of China (English)

    YANGHAO; GHANGQING; 等

    1998-01-01

    A quantitative model was developed to relate the amount of 137Cs lost from the soil profile to the rate of soil erosion. Following the mass balance model, the depth distribution pattern of 137Cs in the soil profile, the radioactive decay of 137Cs, the sampling year and the difference in 137Cs fallout amount among years were taken into consideration. By introducing typical depth distribution functions of 137Cs into the model, detailed equations were obtained for different soils. The model shows that the rate of soil erosion is mainly controlled by the depth distribution pattern of 137Cs, the year of sampling, and the percentage reduction in total 137Cs. The relationship between the rate of soil loss and 137Cs depletion is neither linear nor logarithmic. The depth distribution pattern of 137Cs is a major factor for estimating the rate of soil loss, and the soil erosion rate is directly related to the fraction of the 137Cs content near the soil surface. The influences of the radioactive decay of 137Cs, the sampling year and the 137Cs input fraction are small compared with the other factors.
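
    The non-linear dependence of erosion rate on 137Cs depletion can be illustrated with the textbook profile-distribution model for uncultivated soils. This is a generic sketch, not this paper's detailed equations, and the relaxation mass depth and reference period below are invented values:

```python
import math

def erosion_rate_profile_model(loss_percent, h0_kg_m2=4.0, years=35):
    """Profile-distribution model for uncultivated soil: the 137Cs depth
    distribution is assumed exponential with relaxation mass depth h0
    (kg/m^2), so an inventory loss of X percent implies an eroded mass
    depth d satisfying exp(-d / h0) = 1 - X/100."""
    d = h0_kg_m2 * math.log(100.0 / (100.0 - loss_percent))  # kg/m^2
    # convert the cumulative eroded mass depth to a mean annual rate, t/(ha yr)
    return 10.0 * d / years

rate_30 = erosion_rate_profile_model(30.0)  # 30% inventory loss
rate_60 = erosion_rate_profile_model(60.0)  # doubling the loss
```

    Note that rate_60 is more than twice rate_30: exactly the kind of relationship between soil loss and 137Cs depletion, neither linear nor logarithmic, that the abstract describes.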

  14. Novel whole brain segmentation and volume estimation using quantitative MRI

    Energy Technology Data Exchange (ETDEWEB)

    West, J. [Linkoeping University, Radiation Physics, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); SyntheticMR AB, Linkoeping (Sweden); Warntjes, J.B.M. [Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); SyntheticMR AB, Linkoeping (Sweden); Linkoeping University and Department of Clinical Physiology UHL, County Council of Oestergoetland, Clinical Physiology, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Lundberg, P. [Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); Linkoeping University and Department of Radiation Physics UHL, County Council of Oestergoetland, Radiation Physics, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Linkoeping University and Department of Radiology UHL, County Council of Oestergoetland, Radiology, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden)

    2012-05-15

    Brain segmentation and volume estimation of grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF) are important for many neurological applications. Volumetric changes are observed in multiple sclerosis (MS), Alzheimer's disease and dementia, and in normal aging. A novel method is presented to segment brain tissue based on quantitative magnetic resonance imaging (qMRI) of the longitudinal relaxation rate R{sub 1}, the transverse relaxation rate R{sub 2} and the proton density, PD. Previously reported qMRI values for WM, GM and CSF were used to define tissues, and a Bloch simulation was performed to investigate R{sub 1}, R{sub 2} and PD for tissue mixtures in the presence of noise. Based on the simulations, a lookup grid was constructed to relate tissue partial volume to the R{sub 1}-R{sub 2}-PD space. The method was validated in 10 healthy subjects. MRI data were acquired using six resolutions and three geometries. Repeatability for different resolutions was 3.2% for WM, 3.2% for GM, 1.0% for CSF and 2.2% for total brain volume. Repeatability for different geometries was 8.5% for WM, 9.4% for GM, 2.4% for CSF and 2.4% for total brain volume. We propose a new robust qMRI-based approach, which we demonstrate in a patient with MS. (orig.)

  15. Extreme Precipitation Estimation with Typhoon Morakot Using Frequency and Spatial Analysis

    Directory of Open Access Journals (Sweden)

    Hone-Jay Chu

    2011-01-01

    Full Text Available Typhoon Morakot lashed Taiwan and produced copious amounts of precipitation in 2009. From the point of view of hydrological statistics, the impact of the precipitation from Typhoon Morakot can be analyzed and discussed using a frequency analysis. The frequency curve, fitted mathematically to historical observed data, can be used to estimate the probability of exceedance for runoff events of a certain magnitude. The study integrates frequency analysis and spatial analysis to assess the effect of the Typhoon Morakot event on rainfall frequency in the Gaoping River basin of southern Taiwan. First, extreme rainfall data were collected at sixteen stations for durations of 1, 3, 6, 12, and 24 hours, and an appropriate probability distribution was selected to analyze the impact of the extreme hydrological event. Spatial rainfall patterns for a 200-year return period and 24-hour duration, with and without Typhoon Morakot, were then estimated. Results show that, for long durations, the estimated rainfall differs significantly depending on whether the event is included in the frequency analysis. Furthermore, the spatial analysis shows that extreme rainfall for a 200-year return period is highly dependent on topography and is smaller in the southwest than in the east. The results not only demonstrate the distinct effect of Typhoon Morakot on frequency analysis, but can also serve as a reference for future planning of hydrological engineering.
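
    The record does not name the probability distribution that was selected; purely as an illustration, the sketch below fits a Gumbel (EV1) distribution to invented annual maxima by the method of moments and evaluates the 200-year return level, the quantity mapped in the study:

```python
import math

def gumbel_return_level(annual_maxima, T):
    """Method-of-moments Gumbel fit to annual-maximum rainfall (mm)
    followed by evaluation of the T-year return level."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # scale parameter
    mu = mean - 0.5772 * beta               # location (Euler-Mascheroni constant)
    # invert the Gumbel CDF at non-exceedance probability 1 - 1/T
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Synthetic 24-h annual maxima (mm) at one hypothetical station
maxima = [120, 95, 210, 160, 140, 180, 130, 175, 155, 200]
x200 = gumbel_return_level(maxima, 200)
```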

  16. Comparison of spatial interpolation methods for the estimation of precipitation distribution in Distrito Federal, Brazil

    Science.gov (United States)

    Borges, Pablo de Amorim; Franke, Johannes; da Anunciação, Yumiko Marina Tanaka; Weiss, Holger; Bernhofer, Christian

    2016-01-01

    Available climatological information for Distrito Federal does not satisfy the requirements for detailed climate diagnosis, as it does not provide the spatial resolution necessary for water resources management purposes. Annual and seasonal precipitation climatologies (1971-2000) from 6 meteorological stations and 54 rain gauges in Central Brazil were used to test eight different spatial interpolation methods. Geographical factors (i.e., altitude, longitude and latitude) explain a large portion of precipitation in the region, and therefore multivariate models were included. The performance of the estimations was assessed through independent validation using the mean square error, the correlation coefficient and the Nash-Sutcliffe efficiency criterion. Inverse distance weighting (IDW), ordinary kriging (OK) and multivariate regression with interpolation of residuals by IDW (MRegIDW) and OK (MRegOK) yielded the lowest errors and the highest correlation and Nash-Sutcliffe efficiency values. In general, the interpolation methods provide similar spatial distributions of rainfall wherever the observation network is dense. However, the inclusion of geographical variables in the interpolation method should improve estimates in areas where the observation network density is low. Nevertheless, the assessment of uncertainties using a geostatistical method provides supplementary and qualitative information which should be considered when interpreting the spatial distribution of rainfall.
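
    Of the eight methods compared, IDW is the simplest to sketch. The snippet below is a minimal illustration with invented gauge coordinates and values, not the study's configuration (search radius, power and validation settings are not reported here):

```python
def idw(points, target, power=2.0):
    """Inverse distance weighting: `points` is a list of
    ((x, y), value) gauge records, `target` an (x, y) location."""
    num = den = 0.0
    for (x, y), value in points:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return value              # target coincides with a gauge
        w = d2 ** (-power / 2.0)      # w = 1 / d**power
        num += w * value
        den += w
    return num / den

# Annual precipitation (mm) at four hypothetical gauges
gauges = [((0, 0), 1400.0), ((1, 0), 1500.0), ((0, 1), 1300.0), ((1, 1), 1450.0)]
estimate = idw(gauges, (0.5, 0.5))
```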

  17. Use of objective analysis to estimate winter temperature and precipitation at different stations over western Himalaya

    Indian Academy of Sciences (India)

    Jagdish Chandra Joshi; Ashwagosha Ganju

    2010-10-01

    Temperature and fresh snow are essential inputs to an avalanche forecasting model. Without these parameters, prediction of avalanche occurrence for a region would be very difficult. In the complex terrain of the Himalaya, non-availability of snow and meteorological data from remote locations during winter snow storms is a common occurrence. In view of this persistent problem, the present study estimates maximum temperature, minimum temperature, ambient temperature and precipitation intensity for different regions of the Indian western Himalaya using the same parameters from neighbouring regions. The location for which parameters are required and its neighbouring locations should all fall in the same snow climatic zone. The first step in estimating the parameters at a location is to shift the parameters of the neighbouring regions to a reference height corresponding to the altitude of the target location. The parameters at this reference height are then spatially interpolated using Barnes objective analysis. The parameters estimated at different locations are compared with observations, and the root mean square errors (RMSE) between the observed and estimated values are discussed for the winters of 2007-2008.
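
    A single-pass Barnes scheme reduces to Gaussian distance weighting of the height-adjusted station values. The sketch below is a schematic of that step only; the smoothing parameter and station values are invented, and operational Barnes analysis typically adds a second correction pass:

```python
import math

def barnes_estimate(stations, target, kappa=2.0):
    """Single-pass Barnes objective analysis: Gaussian distance
    weighting w = exp(-d^2 / kappa) of station values that have
    already been shifted to a common reference height."""
    num = den = 0.0
    for (x, y), value in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        w = math.exp(-d2 / kappa)
        num += w * value
        den += w
    return num / den

# Minimum temperatures (degC) at three hypothetical neighbouring stations
stations = [((0, 0), -5.0), ((2, 0), -7.0), ((0, 2), -6.0)]
t_est = barnes_estimate(stations, (1, 1))
```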

  18. Capturing heterogeneity: The role of a study area's extent for estimating net precipitation

    Science.gov (United States)

    Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate

    2016-04-01

    Accurate and precise estimates of net precipitation are required for many hydrological applications. For instance, most interception models require high quality estimates of the canopy storage capacity and the free throughfall coefficient. Good estimates of these parameters, in turn, critically depend on the quality of throughfall estimates. Previous attempts to guide throughfall sampling focused on the selection of an appropriate sample size, support, and sampling design. Comparatively little attention has been given to the role of the extent, i.e. the size of the area under study. In this contribution we investigate the influence of the extent on the representativeness of mean throughfall estimates for simply structured and heterogeneous forest ecosystems. We based our investigation on stochastic simulations which we derived from large empirical throughfall datasets. Using the simulated throughfall fields, we conducted virtual sampling experiments using a number of typical extents. We ran these tests both for a range of event sizes and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the required temporal resolution of the throughfall data (i.e. event-based versus long-term) and to the complexity of the system under study.

  19. Sensitivity of quantitative precipitation forecasts to boundary layer parameterization: a flash flood case study in the Western Mediterranean

    Directory of Open Access Journals (Sweden)

    M. Zampieri

    2005-01-01

    Full Text Available The 'Montserrat-2000' severe flash flood event which occurred over Catalonia on 9 and 10 June 2000 is analyzed. Strong precipitation was generated by a mesoscale convective system associated with the development of a cyclone. The location of heavy precipitation depends on the position of the cyclone, which, in turn, is found to be very sensitive to various model characteristics and initial conditions. Numerical simulations of this case study using the hydrostatic BOLAM and the non-hydrostatic MOLOCH models are performed in order to test the effects of different formulations of the boundary layer parameterization: a modified version of the Louis (order 1) scheme and a custom version of the E-ℓ (order 1.5) scheme. Both of them require a diagnostic formulation of the mixing length, but the use of the turbulent kinetic energy equation in the E-ℓ scheme makes it possible to represent turbulence history and non-locality effects and to formulate a more physically based mixing length. The impact of the two schemes differs between the two models. The hydrostatic model, run at 1/5 degree resolution, is less sensitive, but its quantitative precipitation forecast is in any case unsatisfactory in terms of localization and amount. Conversely, the non-hydrostatic model, run at 1/50 degree resolution, is capable of realistically simulating the timing, position and amount of precipitation, with apparently superior results obtained with the E-ℓ parameterization.

  20. Precipitation estimation in mountainous terrain using multivariate geostatistics. Part I: structural analysis

    Science.gov (United States)

    Hevesi, Joseph A.; Istok, Jonathan D.; Flint, Alan L.

    1992-01-01

    Values of average annual precipitation (AAP) are desired for hydrologic studies within a watershed containing Yucca Mountain, Nevada, a potential site for a high-level nuclear-waste repository. Reliable values of AAP are not yet available for most areas within this watershed because of a sparsity of precipitation measurements and the need to obtain measurements over a sufficient length of time. To estimate AAP over the entire watershed, historical precipitation data and station elevations were obtained from a network of 62 stations in southern Nevada and southeastern California. Multivariate geostatistics (cokriging) was selected as an estimation method because of a significant (p = 0.05) correlation of r = 0.75 between the natural log of AAP and station elevation. A sample direct variogram for the transformed variable, TAAP = ln[(AAP) · 1000], was fitted with an isotropic, spherical model defined by a small nugget value of 5000, a range of 190 000 ft, and a sill value equal to the sample variance of 163 151. Elevations for 1531 additional locations were obtained from topographic maps to improve the accuracy of cokriged estimates. A sample direct variogram for elevation was fitted with an isotropic model consisting of a nugget value of 5500 and three nested transition structures: a Gaussian structure with a range of 61 000 ft, a spherical structure with a range of 70 000 ft, and a quasi-stationary, linear structure. The use of an isotropic, stationary model for elevation was considered valid within a sliding-neighborhood radius of 120 000 ft. The problem of fitting a positive-definite, nonlinear model of coregionalization to an inconsistent sample cross variogram for TAAP and elevation was solved by a modified use of the Cauchy-Schwarz inequality. A selected cross-variogram model consisted of two nested structures: a Gaussian structure with a range of 61 000 ft and a spherical structure with a range of 190 000 ft. 
Cross validation was used for model selection and for
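
    For reference, the spherical variogram model fitted to the TAAP data can be written as a small function; the parameters below are the nugget, sill and range reported above:

```python
def spherical_variogram(h, nugget, sill, range_):
    """Spherical variogram model: gamma(0) = 0, a nugget jump at the
    origin, then a smooth rise to the sill, reached at the range."""
    if h == 0.0:
        return 0.0
    if h >= range_:
        return sill
    r = h / range_
    return nugget + (sill - nugget) * (1.5 * r - 0.5 * r ** 3)

# Parameters reported for TAAP: nugget 5000, range 190 000 ft, sill 163 151
g = spherical_variogram(95_000.0, 5000.0, 163_151.0, 190_000.0)
```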

  1. Quantitative characterization of agglomerates and aggregates of pyrogenic and precipitated amorphous silica nanomaterials by transmission electron microscopy

    Directory of Open Access Journals (Sweden)

    De Temmerman Pieter-Jan

    2012-06-01

    Full Text Available Abstract Background The interaction of a nanomaterial (NM) with a biological system depends not only on the size of its primary particles but also on the size, shape and surface topology of its aggregates and agglomerates. A method based on transmission electron microscopy (TEM), to visualize the NM, and on image analysis, to measure detected features quantitatively, was assessed for its capacity to characterize the aggregates and agglomerates of precipitated and pyrogenic synthetic amorphous silicon dioxide (SAS, or silica) NM. Results Bright field (BF) TEM combined with systematic random imaging and semi-automatic image analysis allows the properties of SAS NM to be measured quantitatively. Automation allows multiple and arithmetically complex parameters to be measured simultaneously on high numbers of detected particles. This reduces operator-induced bias and assures a statistically relevant number of measurements, avoiding the tedious repetitive task of manual measurement. Access to multiple parameters further allows selection of the optimal parameter for a specific purpose. Using principal component analysis (PCA), twenty-three measured parameters were classified into three classes containing measures for size, shape and surface topology of the NM. Conclusion The presented method allows a detailed quantitative characterization of NM, such as dispersions of precipitated and pyrogenic SAS, based on the number-based distributions of their mean diameter, sphericity and shape factor.
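
    The PCA step can be sketched compactly. The snippet below finds the leading principal component of a toy parameter table by power iteration; the data are invented, with two correlated 'size-like' columns and one independent 'shape-like' column, so the first component loads on the size pair:

```python
import math

def first_principal_component(rows, iters=200):
    """Power-iteration sketch of PCA: returns the leading eigenvector of
    the covariance matrix of `rows` (each row = one particle's measured
    parameters, already centred and scaled)."""
    n, p = len(rows), len(rows[0])
    cov = [[sum(r[i] * r[j] for r in rows) / (n - 1) for j in range(p)]
           for i in range(p)]
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Two strongly correlated 'size' parameters and one independent 'shape' parameter
data = [(-2.0, -2.1, 0.3), (-1.0, -0.9, -0.2), (0.0, 0.1, 0.1),
        (1.0, 0.9, -0.3), (2.0, 2.0, 0.1)]
pc1 = first_principal_component(data)
```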

  2. Quantitative TEM analysis of precipitation and grain boundary segregation in neutron irradiated EUROFER 97

    Energy Technology Data Exchange (ETDEWEB)

    Dethloff, Christian, E-mail: christian.dethloff@kit.edu; Gaganidze, Ermile; Aktaa, Jarir

    2014-11-15

    Characterization of irradiation induced microstructural defects is essential for assessing the applicability of structural steels like the Reduced Activation Ferritic/Martensitic steel EUROFER 97 in upcoming fusion reactors. In this work Transmission Electron Microscopy (TEM) is used to analyze the types and structure of precipitates, and the evolution of their size distributions and densities caused by neutron irradiation to a dose of 32 displacements per atom (dpa) at 330–340 °C in the irradiation experiment ARBOR 1. A significant growth of MX and M{sub 23}C{sub 6} type precipitates is observed after neutron irradiation, while the precipitate density remains unchanged. Hardening caused by MX and M{sub 23}C{sub 6} precipitate growth is assessed by applying the Dispersed Barrier Hardening (DBH) model, and shown to be of minor importance when compared to other irradiation effects like dislocation loop formation. Additionally, grain boundary segregation of chromium induced by neutron irradiation was investigated and detected in irradiated specimens.
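
    The DBH model relates the yield stress increase to the obstacle density N and size d via Δσ = M·α·μ·b·√(Nd). The sketch below uses typical literature constants for bcc iron (Taylor factor, shear modulus, Burgers vector) and an invented precipitate population, not the measured EUROFER 97 values:

```python
import math

def dbh_hardening(N_m3, d_m, alpha=0.1, M=3.06, mu_Pa=82e9, b_m=0.248e-9):
    """Dispersed Barrier Hardening estimate of the yield stress increase
    (MPa) from obstacles of number density N (1/m^3) and mean diameter
    d (m): delta_sigma = M * alpha * mu * b * sqrt(N * d), where alpha
    is the obstacle strength, M the Taylor factor, mu the shear modulus
    and b the Burgers vector (typical bcc-iron defaults)."""
    return M * alpha * mu_Pa * b_m * math.sqrt(N_m3 * d_m) / 1e6

# Illustrative precipitate population: 1e21 m^-3, 10 nm mean diameter
delta_sigma = dbh_hardening(1e21, 10e-9)
```

    Because Δσ scales with √(Nd), size growth at unchanged density produces only modest hardening, consistent with the abstract's conclusion that precipitate growth matters less than dislocation loop formation.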

  3. Depth-area-duration characteristics of storm rainfall in Texas using Multi-Sensor Precipitation Estimates

    Science.gov (United States)

    McEnery, J. A.; Jitkajornwanich, K.

    2012-12-01

    This presentation will describe the methodology and overall system development by which a benchmark dataset of precipitation information has been used to characterize depth-area-duration relations in heavy rain storms occurring over regions of Texas. Over the past two years, project investigators, along with the National Weather Service (NWS) West Gulf River Forecast Center (WGRFC), have developed and operated a gateway data system to ingest, store, and disseminate NWS multi-sensor precipitation estimates (MPE). As a pilot project of the Integrated Water Resources Science and Services (IWRSS) initiative, this testbed uses a Structured Query Language (SQL) server to maintain a full archive of current and historic MPE values within the WGRFC service area. These time series values are made available for public access as web services in the standard WaterML format. Having this volume of information maintained in a comprehensive database now allows the use of relational analysis capabilities within SQL to leverage these multi-sensor precipitation values and produce a valuable derivative product. The area of focus for this study is North Texas, using values that originated from WGRFC, one of three River Forecast Centers currently represented in the holdings of this data system. Over the past two decades, NEXRAD radar has dramatically improved the ability to record rainfall. The resulting hourly MPE values, distributed over an approximately 4 km by 4 km grid, are considered by the NWS to be the "best estimate" of rainfall. The data server provides an accepted standard interface for internet access to the largest time-series dataset of NEXRAD-based MPE values ever assembled. An automated script has been written to search and extract storms over the 18-year period of record from the contents of this massive historical precipitation database. Not only can it extract site-specific storms, but also duration-specific storms and

  4. Developing Methodologies for Applying TRMM-Estimated Precipitation Data to Hydrological Modeling of a South TX Watershed - Initial Results

    Science.gov (United States)

    Tobin, K. J.; Bennett, M. E.

    2007-05-01

    Previous experience with hydrological modeling in South Texas, which is located along the Texas-Mexico border, suggests that NWS ground measurements are too widely scattered to provide reliable precipitation input for modeling. In addition, a significant fraction of the study region is located at the edge of the coverage envelopes of the NWS NEXRAD weather radars present in the region, limiting the ability of these systems to provide reliable precipitation estimates. Therefore, we are exploring whether TRMM-estimated precipitation data (3B42), in some form, can be used to support hydrological modeling in the Middle Rio Grande and Nueces River Basin watersheds. We have begun our modeling efforts by focusing on the middle Nueces watershed (7770 sq km). To model this largely rural watershed we selected the Soil and Water Assessment Tool (SWAT). Three precipitation datasets were selected for our initial model runs: (1) the nearest NWS cooperative hourly rain gauge data, (2) three-hourly TRMM 3B42 estimated precipitation, and (3) combined TRMM 3B42/NWS rain gauge datasets in which ground measurements are used for three-hourly periods lacking high-quality satellite microwave precipitation estimates, as determined from TRMM 3G68 data. The three datasets were aggregated into average daily precipitation estimates for each TRMM grid cell. Manual calibration was completed, achieving model results that yield realistic monthly and annual water balances with both gauge and satellite-estimated precipitation datasets. In the future, we plan to use the newly developed automatic calibration routine for SWAT, which is based on the Shuffled Complex Evolution algorithm, to optimize the modeled discharge results from this study.
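
    The aggregation step, from eight 3-hourly 3B42 rain rates to a daily accumulation per grid cell, is simple enough to sketch; the rate values below are invented:

```python
def three_hourly_to_daily(rates_mm_per_h):
    """Aggregate the eight 3-hourly TRMM 3B42 rain rates (mm/h) of one
    day into a daily accumulation (mm) for a single grid cell."""
    assert len(rates_mm_per_h) == 8, "one value per 3-h window"
    # each rate applies over a 3-hour window
    return sum(3.0 * r for r in rates_mm_per_h)

daily = three_hourly_to_daily([0.0, 0.0, 1.2, 4.0, 2.5, 0.3, 0.0, 0.0])
```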

  5. Quantitative estimates of past changes in ITCZ position and cross-equatorial atmospheric heat transport

    Science.gov (United States)

    McGee, D.; Donohoe, A.; Marshall, J.; Ferreira, D.

    2012-12-01

    The mean position and seasonal migration of the Intertropical Convergence Zone (ITCZ) govern the intensity, spatial distribution and seasonality of precipitation throughout the tropics as well as the magnitude and direction of interhemispheric atmospheric heat transport (AHT). As a result of these links to global tropical precipitation and hemispheric heat budgets, paleoclimate studies have commonly sought to use reconstructions of local precipitation and surface winds to identify past shifts in the ITCZ's mean position or seasonal extent. Records indicate close ties between ITCZ position and interhemispheric surface temperature gradients in past climates, with the ITCZ shifting toward the warmer hemisphere. This shift would increase AHT into the cooler hemisphere to at least partially compensate for cooling there. Despite widespread qualitative evidence consistent with ITCZ shifts, few proxy records offer quantitative estimates of the distance of these shifts or of the associated changes in AHT. Here we present a strategy for placing quantitative limits on past changes in mean annual ITCZ position and interhemispheric AHT based on explorations of the modern seasonal cycle and models of present and past climates. We use reconstructions of tropical sea surface temperature gradients to place bounds on globally averaged ITCZ position and interhemispheric AHT during the Last Glacial Maximum, Heinrich Stadial 1, and the Mid-Holocene (6 ka). Though limited by the small number of SST records available, our results suggest that past shifts in the global mean ITCZ were small, typically less than 1 degree of latitude. Past changes in interhemispheric AHT may have been substantial, with anomalies approximately equal to the magnitude of modern interhemispheric AHT. Using constraints on the invariance of the total (ocean+atmosphere) heat transport we suggest possible bounds on fluctuations of the OHT and AMOC during Heinrich Stadial 1. 
We also explore ITCZ shifts in models and

  6. Radar rainfall estimation for the identification of debris-flow precipitation thresholds

    Science.gov (United States)

    Marra, Francesco; Nikolopoulos, Efthymios I.; Creutin, Jean-Dominique; Borga, Marco

    2014-05-01

    Identification of rainfall thresholds for the prediction of debris-flow occurrence is a common approach in warning procedures. Traditionally, the debris-flow-triggering rainfall is derived from the closest available raingauge. However, the spatial and temporal variability of intense rainfall in mountainous areas, where debris flows take place, may lead to large uncertainty in point-based estimates. Nikolopoulos et al. (2014) have shown that this uncertainty translates into a systematic underestimation of the rainfall thresholds, leading to a steep degradation of the performance of the rainfall threshold for identification of debris-flow occurrence under operational conditions. A potential solution to this limitation lies in the use of rainfall estimates from weather radar. Thanks to their high spatial and temporal resolutions, these estimates offer the advantage of providing rainfall information over the actual debris-flow location. The aim of this study is to analyze the value of radar precipitation estimates for the identification of debris-flow precipitation thresholds. Seven rainfall events that triggered debris flows in the Adige river basin (Eastern Italian Alps) are analyzed using data from a dense raingauge network and a C-band weather radar. Radar data are elaborated using a set of correction algorithms specifically developed for weather radar rainfall application in mountainous areas. Rainfall thresholds for the triggering of debris flows are identified in the form of average intensity-duration power law curves, derived with a frequentist approach from both radar rainfall estimates and raingauge data. The sampling uncertainty associated with the derivation of the thresholds is assessed using a bootstrap technique (Peruccacci et al. 2012). Results show that radar-based rainfall thresholds largely exceed those obtained from raingauge data. 
Moreover, the differences between the two thresholds may be related to the spatial characteristics (i.e., spatial
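
    An average intensity-duration power law I = a·D^b is conventionally fitted by linear regression in log-log space. The sketch below illustrates that step only, on synthetic events generated from a known curve; the bootstrap resampling used for the uncertainty assessment is omitted:

```python
import math

def fit_id_threshold(events):
    """Least-squares fit of the intensity-duration power law I = a * D^b
    in log-log space; `events` is a list of (duration_h, intensity_mm_h)
    pairs for debris-flow-triggering rainfall."""
    xs = [math.log(d) for d, _ in events]
    ys = [math.log(i) for _, i in events]
    n = len(events)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic triggering events lying exactly on I = 20 * D^-0.6
events = [(d, 20.0 * d ** -0.6) for d in (1, 2, 4, 8, 16, 24)]
a, b = fit_id_threshold(events)
```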

  7. Quantitative estimates of the volatility of ambient organic aerosol

    Directory of Open Access Journals (Sweden)

    C. D. Cappa

    2010-01-01

    Full Text Available Measurements of the sensitivity of organic aerosol (OA) mass, and that of its components, to changes in temperature were recently reported by Huffman et al. (2009) using a tandem thermodenuder-aerosol mass spectrometer (TD-AMS) system in Mexico City and the Los Angeles area. Here, we use these measurements to derive quantitative estimates of aerosol volatility within the framework of absorptive partitioning theory, using a kinetic model of aerosol evaporation in the TD. OA volatility distributions (or "basis sets") are determined using several assumptions as to the enthalpy of vaporization (ΔHvap). We present two definitions of "non-volatile OA," one global and one local. Based on these definitions, our analysis indicates that a substantial fraction of the organic aerosol is comprised of non-volatile components that will not evaporate under any atmospheric conditions, on the order of 50–80% when the most realistic ΔHvap assumptions are considered. The sensitivity of the total OA mass to dilution and ambient changes in temperature has been assessed for the various ΔHvap assumptions. The temperature sensitivity is relatively independent of the particular ΔHvap assumptions, whereas the dilution sensitivity is found to be greatest for the low (ΔHvap = 50 kJ/mol) and lowest for the high (ΔHvap = 150 kJ/mol) assumptions. This difference arises because the high ΔHvap assumptions yield volatility distributions with a greater fraction of non-volatile material than the low ΔHvap assumptions. If the observations are fit using a 1- or 2-component model, the sensitivity of the OA to dilution is unrealistically high. An empirical method introduced by Faulhaber et al. (2009) has also been used to independently estimate a volatility distribution for the ambient OA and is found to give results consistent with the high and variable ΔHvap assumptions. Our

  8. Analysis of TRMM 3-Hourly Multi-Satellite Precipitation Estimates Computed in Both Real and Post-Real Time

    Science.gov (United States)

    Huffman, George J.; Adler, Robert F.; Stocker, Erich; Bolvin, David T.; Nelkin, Eric J.

    2002-01-01

    Satellite data form the core of the information available for estimating precipitation on a global basis. While it is possible to create such estimates solely from one sensor, researchers have increasingly moved to using combinations of sensors in an attempt to improve accuracy, coverage, and resolution. This poster updates a long-term project in which the authors are working to provide routine combined-sensor estimates of precipitation over the entire globe at relatively fine time and space intervals. The goal is to produce these globally complete precipitation estimates on a 25-km grid every 3 hours. Since late January 2002 we have been estimating precipitation for the latitude band 50 degrees N-S within about 6 hours of observation time. This work is 1 of only 2 or 3 such efforts in the world. Now we are preparing to provide similar estimates for the last 5 years. All of this work is being carried out as part of the Tropical Rainfall Measuring Mission (TRMM). Initially, TRMM was focused on providing excellent long-term averages of precipitation in tropical regions, but since its launch in November 1997 continued research has allowed the same satellite and data system to be used for addressing weather-scale problems as well.

  9. Densified GPS Estimates of Integrated Precipitable Water Vapor Improve Weather Forecasting during the North American Monsoon

    Science.gov (United States)

    Moore, A. W.; Small, I.; Gutman, S. I.; Bock, Y.; Dumas, J.; Haase, J. S.; Laber, J. L.

    2013-12-01

    Continuous GPS (CGPS) stations for observing crustal motion in the western U.S. now number more than 1200, with over 500 of them operating in real time. Tropospheric wet delay from real-time processing of the GPS data, along with co-located or nearby surface pressure and temperature measurements, is being operationally converted to Integrated Precipitable Water Vapor (IPW) for evaluation as a forecasting tool (Gutman, 2011). The available density of real-time GPS in southern California now allows us to explore usage of densified GPS IPW in operational weather forecasting during weather conditions involving moisture extremes. Under a NASA Advanced Information Systems Technology (AIST) project, 27 southern California stations have been added to the NOAA GPS-Met observing network, providing 30-minute estimates of IPW for ingestion into operational NOAA weather models, as well as for direct use by National Weather Service forecasters in monitoring developing weather conditions. The densified network proved advantageous in the 2013 North American Monsoon season, allowing forecasters to visualize rapid moisture increases at intervals between model runs and radiosonde observations and assisting in flood watch/warning decisions. We discuss the observed relationship between IPW and the onset of precipitation in monsoon events in southern California, and possibilities for additional decision support tools for forecasters.
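
    The conversion from GPS zenith wet delay to IPW follows the standard Bevis et al. formulation; the sketch below uses commonly quoted refractivity constants, which may differ slightly from those in the operational NOAA GPS-Met processing:

```python
def ipw_from_zwd(zwd_mm, Tm_K):
    """Convert a GPS zenith wet delay (mm) to integrated precipitable
    water vapor (mm): IPW = Pi(Tm) * ZWD, where Tm is the weighted mean
    temperature of the atmosphere (estimated from surface temperature)."""
    rho_w = 1000.0     # density of liquid water, kg/m^3
    Rv = 461.5         # specific gas constant of water vapor, J/(kg K)
    k2_prime = 0.221   # refractivity constant, K/Pa
    k3 = 3.739e3       # refractivity constant, K^2/Pa
    Pi = 1.0e6 / (rho_w * Rv * (k3 / Tm_K + k2_prime))  # ~0.15, dimensionless
    return Pi * zwd_mm

ipw = ipw_from_zwd(130.0, 273.0)  # roughly 20 mm of precipitable water
```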

  10. Probabilistic Quantitative Precipitation Forecasting using a Two-Stage Spatial Model

    Science.gov (United States)

    2008-04-08

    A two-stage spatial model for probabilistic quantitative precipitation forecasting is described: the first process drives the precipitation/no-precipitation decision via a truncation, while the second process drives precipitation amounts via a spatially varying anamorphosis, or transformation, function (Chilès and Delfiner 1999, p. 381). The anamorphosis has the advantage of retaining the appropriate spatial structure of the amounts.

  11. Quantitative Analysis on Carbide Precipitation in V-Ti Microalloyed TRIP Steel Containing Aluminum

    Directory of Open Access Journals (Sweden)

    Fu Shiyu

    2016-01-01

    Full Text Available Introducing fine precipitates is an important way to enhance the properties of transformation-induced plasticity (TRIP) steels. In the present work, two V-Ti microalloyed TRIP steels with different aluminum contents were compared. The average size, size distribution, and number of vanadium-titanium carbides were investigated in samples that were cold rolled, quenched after being held at 800°C, and quenched after intercritical annealing at 800°C followed by bainitic isothermal transformation at 400°C, using carbon extraction replicas, twin-jet chemical polishing, and transmission electron microscopy. The carbides were identified as (Ti,V)C precipitates in steel A and VC in steel B, respectively, precipitated mainly within ferrite grains. The average equivalent radius was 3-6 nm. Comparison of the experimental results for steels A and B revealed that the low carbon diffusion rate caused by aluminum inhibited the coarsening of the vanadium-titanium carbides. The experimental results also showed that VC carbide dissolution occurred during intercritical annealing at 800°C.

  12. Quantitative precipitation and streamflow forecast for two recent extreme hydro-meteorological events in Southern Italy with a fully-coupled model system

    Science.gov (United States)

    Mendicino, Giuseppe; Senatore, Alfonso

    2016-04-01

    Two severe hydro-meteorological events affected the Calabria region (Southern Italy) in the second half of 2015. The first event, on August 12th, was concentrated on a relatively small area near the northern Ionian coast and produced rainfall intensities of about 230 mm in 24 hours, causing flash flooding with several million Euros of damage. The second event mainly affected the southern Ionian coast, was more persistent (it lasted from October 30th to November 2nd), covered a wider area, and led to recorded rainfall values of up to 400 mm in 24 hours and 700 mm in 48 hours, resulting in severe flooding, landslides, and one fatality. The fully two-way dynamically coupled atmosphere-hydrology modeling system WRF-Hydro is used to reproduce both events, in order to assess its skill in forecasting both quantitative precipitation and streamflow, with initial and lateral atmospheric boundary conditions given by the recently available 0.25°-resolution GFS grid dataset. Precipitation estimates provided by the 2 km-resolution atmospheric model are compared with both ground-based data and observations from a National Civil Protection Department single-polarization Doppler radar. Discharge data from the rivers and creeks affected by heavy precipitation are not available; streamflow results are therefore compared with either official discharge estimates provided by the authorities (first event) or recorded river stages (second event). Results show good performance of the fully-coupled hydrometeorological prediction system, which allows an improved representation of the coupled atmospheric and terrestrial processes and provides an integrated solution for regional water cycle modeling, from atmospheric processes to river outlets.

  13. Quantifying uncertainty in modelled estimates of annual maximum precipitation: confidence intervals

    Science.gov (United States)

    Panagoulia, Dionysia; Economou, Polychronis; Caroni, Chrys

    2016-04-01

    The possible nonstationarity of the GEV distribution fitted to annual maximum precipitation under climate change is a topic of active investigation. Of particular significance is how best to construct confidence intervals for quantities of interest arising from stationary/nonstationary GEV models. We are usually interested not only in parameter estimates but also in quantiles of the GEV distribution, and estimates of extreme upper quantiles can be far from normally distributed even for moderate sample sizes. Therefore, we consider constructing confidence intervals for all quantities of interest by bootstrap methods based on resampling techniques. To this end, we examined three bootstrapping approaches to constructing confidence intervals for parameters and quantiles: random-t resampling, fixed-t resampling, and the parametric bootstrap. Each approach was used in combination with the normal approximation method, the percentile method, the basic bootstrap method, and the bias-corrected method for constructing confidence intervals. We found that all the confidence intervals for the stationary model parameters have similar coverage and mean length. Confidence intervals for the more extreme quantiles tend to become very wide for all bootstrap methods. For nonstationary GEV models with linear time dependence of location or log-linear time dependence of scale, confidence interval coverage probabilities are reasonably accurate for the parameters. For the extreme percentiles, the bias-corrected and accelerated method is best overall, and the fixed-t method also has good average coverage probabilities. Reference: Panagoulia D., Economou P. and Caroni C., Stationary and non-stationary GEV modeling of extreme precipitation over a mountainous area under climate change, Environmetrics, 25 (1), 29-43, 2014.
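    As an illustration of the parametric-bootstrap/percentile combination described in this record, here is a minimal sketch on synthetic data. The GEV parameters, sample size, and quantile level are assumptions for the example, not the study's values or its exact resampling variants.

```python
# Parametric bootstrap percentile confidence interval for an upper GEV
# quantile (a sketch of one of the approaches discussed above, using
# synthetic "annual maximum precipitation" data).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Synthetic stationary annual-maximum sample (mm); parameters assumed.
data = genextreme.rvs(c=-0.1, loc=60.0, scale=15.0, size=60, random_state=rng)

# Fit a stationary GEV by maximum likelihood.
c_hat, loc_hat, scale_hat = genextreme.fit(data)

p = 0.99  # quantile of interest (the "100-year" level)
q_hat = genextreme.ppf(p, c_hat, loc=loc_hat, scale=scale_hat)

# Parametric bootstrap: resample from the fitted model, refit, re-estimate.
B = 100
q_boot = np.empty(B)
for b in range(B):
    resample = genextreme.rvs(c_hat, loc=loc_hat, scale=scale_hat,
                              size=len(data), random_state=rng)
    cb, lb, sb = genextreme.fit(resample)
    q_boot[b] = genextreme.ppf(p, cb, loc=lb, scale=sb)

# Percentile-method 95% confidence interval for the 0.99 quantile.
lo, hi = np.percentile(q_boot, [2.5, 97.5])
print(f"point estimate: {q_hat:.1f} mm, 95% CI: ({lo:.1f}, {hi:.1f}) mm")
```

    As the record notes, intervals for extreme quantiles built this way tend to be wide; the percentile method shown here is only one of the four interval constructions the study compares.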

  14. Using damage data to estimate the risk from summer convective precipitation extremes

    Science.gov (United States)

    Schroeer, Katharina; Tye, Mari

    2017-04-01

    model to test whether the relationship between extreme rainfall events and damages is robust enough to estimate a potential underrepresentation of high intensity rainfall events in ungauged areas. Risk-relevant factors of socio-economic vulnerability, land cover, streamflow data, and weather type information are included to improve and sharpen the analysis. Within this study, we first aim to identify which rainfall events are most damaging and which factors affect the damages - seen as a proxy for the vulnerability - related to summer convective rainfall extremes in different catchment types. Secondly, we aim to detect potentially unreported damaging rainfall events and estimate the likelihood of such cases. We anticipate this damage perspective on summertime extreme convective precipitation to be beneficial for risk assessment, uncertainty management, and decision making with respect to weather and climate extremes on the regional-to-local level.

  15. Relevance of the correlation between precipitation and the 0 °C isothermal altitude for extreme flood estimation

    Science.gov (United States)

    Zeimetz, Fraenz; Schaefli, Bettina; Artigue, Guillaume; García Hernández, Javier; Schleiss, Anton J.

    2017-08-01

    Extreme floods are commonly estimated with the help of design storms and hydrological models. In this paper, we propose a new method to take into account the relationship between precipitation intensity (P) and air temperature (T), in order to account for potential snow accumulation and melt processes during the elaboration of design storms. The proposed method is based on a detailed analysis of this P-T relationship in the Swiss Alps. For this region, no upper precipitation intensity limit is detectable for increasing temperature. However, a relationship between the highest measured temperature before a precipitation event and the duration of the subsequent event could be identified. An explanation for this relationship is proposed here based on the temperature gradient measured before the precipitation events. The relevance of these results is discussed for an example of Probable Maximum Precipitation-Probable Maximum Flood (PMP-PMF) estimation for the high mountainous Mattmark dam catchment in the Swiss Alps. The proposed method of associating a critical air temperature with a PMP is easily transposable to similar alpine settings where meteorological soundings as well as ground temperature and precipitation measurements are available. In the future, the analyses presented here might be further refined by distinguishing between precipitation event types (frontal versus orographic).

  16. Assessing the role of uncertain precipitation estimates on the robustness of hydrological model parameters under highly variable climate conditions

    Directory of Open Access Journals (Sweden)

    B. Bisselink

    2016-12-01

    New hydrological insights: Results indicate large discrepancies in terms of the linear correlation (r), bias (β), and variability (γ) between the observed and simulated streamflows when using different precipitation estimates as model input. The best model performance was obtained with products that ingest gauge data for bias correction. However, catchment behavior was difficult to capture with a single parameter set, and a single robust parameter set could not be obtained for each catchment, which indicates that transposing model parameters should be carried out with caution. Model parameters depend on the precipitation characteristics of the calibration period and should therefore only be used in target periods with similar precipitation characteristics (wet/dry).

  17. Assessment of extreme quantitative precipitation forecasts and development of regional extreme event thresholds using data from HMT-2006 and COOP observers

    Science.gov (United States)

    Ralph, F.M.; Sukovich, E.; Reynolds, D.; Dettinger, M.; Weagle, S.; Clark, W.; Neiman, P.J.

    2010-01-01

    Extreme precipitation events, and the quantitative precipitation forecasts (QPFs) associated with them, are examined. The study uses data from the Hydrometeorology Testbed (HMT), which conducted its first field study in California during the 2005/06 cool season. National Weather Service River Forecast Center (NWS RFC) gridded QPFs for 24-h periods at 24-h (day 1), 48-h (day 2), and 72-h (day 3) forecast lead times plus 24-h quantitative precipitation estimates (QPEs) from sites in California (CA) and Oregon-Washington (OR-WA) are used. During the 172-day period studied, some sites received more than 254 cm (100 in.) of precipitation. The winter season produced many extreme precipitation events, including 90 instances when a site received more than 7.6 cm (3.0 in.) of precipitation in 24 h (i.e., an "event") and 17 events that exceeded 12.7 cm (24 h)-1 [5.0 in. (24 h)-1]. For the 90 extreme events [≥7.6 cm (24 h)-1, i.e., ≥3.0 in. (24 h)-1], almost 90% of all the 270 QPFs (days 1-3) were biased low, increasingly so with greater lead time. Of the 17 observed events exceeding 12.7 cm (24 h)-1 [5.0 in. (24 h)-1], only 1 was predicted to be that extreme. Almost all of the extreme events correlated with the presence of atmospheric river conditions. Total seasonal QPF biases for all events [i.e., ≥0.025 cm (24 h)-1, or 0.01 in. (24 h)-1] were sensitive to local geography and were generally biased low in the California-Nevada River Forecast Center (CNRFC) region and high in the Northwest River Forecast Center (NWRFC) domain. The low bias in CA QPFs improved with shorter forecast lead time and worsened for extreme events. Differences were also noted between the CNRFC and NWRFC in terms of QPF and the frequency of extreme events. A key finding from this study is that there were more precipitation events ≥7.6 cm (24 h)-1 [3.0 in. (24 h)-1] in CA than in OR-WA. Examination of 422 Cooperative Observer Program (COOP) sites in the NWRFC domain and 400 in the CNRFC domain

  18. Numerical Research on Effects Upon Precipitation Forecast of Doppler-Radar Estimated Precipitation and Retrieved Wind Field Under Different Model Initial Schemes

    Institute of Scientific and Technical Information of China (English)

    WANG Yehong; ZHAO Yuchun; CUI Chunguang

    2007-01-01

    On the basis of the jointly estimated 1-h precipitation from the Changde, Jingzhou, and Yichang Doppler radars as well as the Wuhan digital radar, and the retrieved wind fields from the Yichang and Jingzhou Doppler radars, a series of numerical experiments with an advanced regional η-coordinate model (AREM) under different model initial schemes, i.e., Grapes-3DVAR, Barnes objective analysis, and Barnes-3DVAR, are carried out for a torrential rain event that occurred along the Yangtze River in the 24-h period from 2000 BT 22 July 2002, to investigate the effects of the Doppler-radar estimated rainfall and retrieved winds on the rainfall forecast. The main results are as follows: (1) The simulations differ markedly under the three initial schemes with the same data source (the radiosounding and T213L31 analysis). On the whole, Barnes-3DVAR, which combines the advantages of the Barnes objective analysis and the Grapes-3DVAR method, gives the best simulations: a well-simulated rain band with clear mesoscale structures, whose location and intensity are close to observations. (2) Both the Barnes-3DVAR and Grapes-3DVAR schemes are able to assimilate the Doppler-radar estimated rainfall and retrieved winds, but the differences in simulation results are very large, with Barnes-3DVAR's simulation much better than Grapes-3DVAR's. (3) Under the Grapes-3DVAR scheme, the simulation of 24-h rainfall is improved appreciably when the Doppler-radar estimated precipitation is assimilated into the model, compared with the control experiment; it becomes somewhat worse when the Doppler-radar retrieved winds are assimilated, and markedly worse when both the estimated precipitation and the retrieved winds are assimilated. The behavior is different under the Barnes-3DVAR scheme: the simulation is improved to a certain degree whether the estimated precipitation, the retrieved winds, or both are assimilated. The result is best when assimilating both

  19. New estimates of tropical temperature and precipitation changes during the last 42ka

    Science.gov (United States)

    Grauel, A.; Hodell, D. A.; Bernasconi, S. M.; Correa-Metrio, A.

    2013-12-01

    The amount of cooling in the tropics during the last Ice Age has been a longstanding problem, with large discrepancies between terrestrial and marine estimates. Here we present a reconstruction of temperature and precipitation changes over the last 42 ka from a lake sediment core from Lake Petén Itzá, Guatemala, located at 17°N in lowland Central America. Previous studies of sediment cores from Lake Petén Itzá showed that alternating layers of clay- and gypsum-rich sediment reflect times of wetter and drier conditions, respectively. The most arid conditions coincide with stadials, especially those associated with Heinrich events (HEs), when pollen assemblages are dominated by xeric-tolerant taxa. In contrast, interstadials and the last glacial maximum (LGM) are characterized by clay deposition and pollen indicative of temperate pine-oak forest, indicating more humid conditions in the lowland Neotropics. We compared three independent methods of reconstructing glacial temperatures: tandem measurements of δ18O in biogenic carbonate and gypsum hydration water, clumped isotope thermometry, and pollen-based temperature estimates using the Modern Analog Technique (MAT). The temperatures derived by the three methods generally agree during interstadials and some stadials (e.g., HE2 and 3), but diverge during other stadial events (e.g., HE1 and 4). For example, the gypsum hydration and clumped isotope methods indicate a severe cooling of 6 to 10°C during HE1 and 4, whereas the pollen MAT suggests more moderate cooling of 3 to 6°C. The likely reason for this divergence is that no modern analogs exist for the pollen assemblages of these cold, arid stadials, so the MAT is not applicable there. Although the temperature decrease is similar (6-10°C) for HE1 and 4, the deuterium excess is distinctly different (-19 and -14, respectively), perhaps indicating a change in the source and/or seasonality of precipitation. The δ18O and δD of the lake water indicate HE1 was the most arid

  20. Precipitable Water Vapor Estimates in the Australian Region from Ground-Based GPS Observations

    Directory of Open Access Journals (Sweden)

    Suelynn Choy

    2015-01-01

    Full Text Available We present a comparison of atmospheric precipitable water vapor (PWV) derived from ground-based global positioning system (GPS) receivers with traditional radiosonde measurements and the very long baseline interferometry (VLBI) technique over a five-year period (2008-2012) using Australian GPS stations. These stations were selectively chosen to provide a representative regional distribution of sites while ensuring conventional meteorological observations were available. Good agreement was found between the GPS and VLBI PWV estimates, with a mean difference of less than 1 mm and a standard deviation of 3.5 mm, and between the GPS and radiosonde measurements, with a mean difference and standard deviation of 0.1 mm and 4.0 mm, respectively. Systematic errors were also discovered during the course of this study, which highlights the benefit of using GPS as a supplementary atmospheric PWV sensor and calibration system. The selected eight GPS sites sample different climates across Australia, covering an area of approximately 30° NS/EW. The results also show that the magnitude and variation of PWV estimates depend on the amount of moisture in the atmosphere, which is a function of season, topography, and other regional climate conditions.

  1. Rainfall-runoff modelling using different estimators of precipitation data in the Carpathian mountain catchments (South Poland)

    Science.gov (United States)

    Kasina, Michal; Ziemski, Michal; Niedbala, Jerzy; Malota, Agnieszka

    2013-04-01

    Precipitation observations are an essential element of flood forecasting systems. Rain gauges, radars, satellite sensors, and forecasts from high-resolution numerical weather prediction models form part of precipitation monitoring networks. These networks collect rainfall data that are then provided to hydrological models to produce forecasts. The main goal of this work is to assess the usage of different precipitation data sources in rainfall-runoff modelling with reference to a Flash Flood Early Warning System. STUDY AREA Research was carried out in the upper parts of the Sola and Raba river catchments. Both rivers begin their course in the southern part of the Western Beskids (Outer Eastern Carpathians; southern Poland). For the purpose of this study, both rivers are taken to comprise the catchments upstream of the gauging stations at Zywiec (Sola) and Stroza (Raba). The upper Sola river catchment encompasses an area of 785 sq. km with an altitude ranging from 342 to 1236 m above sea level, while the Raba river catchment occupies an area of 644 sq. km with an altitude ranging from 300 to 1266 m above sea level. The catchments are underlain mainly by flysch sediments. The average annual precipitation for the Sola river catchment is between 750 and 1300 mm, and for the Raba river catchment it is in the range of 800-1000 mm. METHODS AND RESULTS This work assesses the sensitivity of a lumped hydrological model, DHI's Nedbør-Afstrømnings-Model (NAM), to different sources of rainfall estimates: rain gauges, radar, and satellite, as well as predicted precipitation amounts from high-resolution numerical weather prediction models (e.g. ALADIN). The main steps of the validation procedure are: i) comparison of rain gauge data with other precipitation data sources, ii) calibration of the hydrological model (using historical, long time series of rain gauge data treated as "ground truth"), iii) validation using different precipitation data sources as an input

  2. Observational estimates of detrainment and entrainment in non-precipitating shallow cumulus

    Science.gov (United States)

    Norgren, M. S.; Small, J. D.; Jonsson, H. H.; Chuang, P. Y.

    2016-01-01

    Vertical transport associated with cumulus clouds is important to the redistribution of gases, particles, and energy, with subsequent consequences for many aspects of the climate system. Previous studies have suggested that detrainment from clouds can be comparable to the updraft mass flux and thus represents an important contribution to vertical transport. In this study, we describe a new method to deduce the amounts of gross detrainment and entrainment experienced by non-precipitating cumulus clouds using aircraft observations. The method utilizes equations for three conserved variables: cloud mass, total water, and moist static energy. Optimizing these three equations leads to estimates of the mass fractions of adiabatic mixed-layer air, entrained air, and detrained air that the sampled cloud has experienced. The method is applied to six flights of the CIRPAS Twin Otter during the Gulf of Mexico Atmospheric Composition and Climate Study (GoMACCS), which took place in the Houston, Texas, region during the summer of 2006 and during which 176 small, non-precipitating cumuli were sampled. Using our novel method, we find that, on average, these clouds were composed of 30 to 70% mixed-layer air, with entrained air comprising most of the remainder. The mass fraction of detrained air was usually very small, less than 2%, although values larger than 10% were found in 15% of clouds. Entrained and detrained air mass fractions both increased with altitude, consistent with some previous observational studies. The largest detrainment events were almost all associated with air that was at its level of neutral buoyancy, which has been hypothesized in previous modeling studies. This new method could be readily used with data from other previous aircraft campaigns to expand our understanding of detrainment for a variety of cloud systems.
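    The conserved-variable idea in this record can be illustrated with a small least-squares sketch: given end-member properties of mixed-layer and environmental air, solve for the mass fractions that reproduce the observed in-cloud values. All numbers below are invented for illustration, and the sketch omits the detrained-air term and the constraints used in the actual study.

```python
# Hedged sketch: estimate mass fractions of mixed-layer and entrained
# (environmental) air in a sampled cloud from two conserved variables,
# total water q and moist static energy h, plus mass conservation.
# End-member and observed values are assumptions, not campaign data.
import numpy as np

q_ml, h_ml = 16.0, 345.0   # adiabatic mixed-layer air: q (g/kg), h (kJ/kg)
q_env, h_env = 6.0, 340.0  # entrained environmental air
q_obs, h_obs = 12.5, 343.0 # observed in-cloud values for one cloud

# Overdetermined system A f = b for fractions f = [f_ml, f_env]:
# rows are the q budget, the h budget, and mass conservation (sum = 1).
A = np.array([[q_ml, q_env],
              [h_ml, h_env],
              [1.0,  1.0]])
b = np.array([q_obs, h_obs, 1.0])
f, *_ = np.linalg.lstsq(A, b, rcond=None)
f = np.clip(f, 0.0, 1.0)
print(f"mixed-layer fraction: {f[0]:.2f}, entrained fraction: {f[1]:.2f}")
```

    With these assumed numbers the least-squares solution lands near the 30-70% mixed-layer range the record reports; the real method additionally solves for a detrained-air fraction.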

  3. Study and Tests of Improved Rain Estimates from the TRMM Precipitation Radar.

    Science.gov (United States)

    Ferreira, Franck; Amayenc, Paul; Oury, Stéphane; Testud, Jacques

    2001-11-01

    Rain rate (R) estimation from the 2A-25 profiling algorithm of the Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) is analyzed in two ways. Standard results from the operating version-5 algorithm are compared with those from the previous version 4. Also, various adjustments of the rain relationships involved in version 4 are explored, which leads to the proposal of two alternatives to the standard rain rate (Rstd-V4). The first one, RN0, is based on N0-scaled relations exploiting the concept of normalized gamma-shaped drop size distributions; the second one, RkR, relies on using a constant R-k relation instead of a constant R-Z relation as in the standard, where Z is reflectivity and k is the attenuation coefficient. Error analysis points out a lower sensitivity of the alternative estimates to errors in radar calibration, or in the initial relations, than the standard. Results from a set of PR data, over ocean and land, show that the version-4 alternatives and the version-5 standard (Rstd-V5) produce more rain than the version-4 standard, which may correct for some reported underestimation. These approaches are tested via point-to-point comparisons of 3D PR-derived Z and R fields (versions 4 and 5) with "reference" fields derived from an airborne dual-beam radar on board a National Oceanic and Atmospheric Administration P3-42 aircraft in Hurricanes Bonnie and Brett, for good cases of TRMM overpasses over the ocean. In the comparison domains, Bonnie is dominated by stratiform rain, and Brett includes convective and stratiform rain. In stratiform rain, the mean difference in Z, accounting for the different frequencies and scanning geometries of the two radars, lies within the uncertainty margin of residual errors in the radar calibrations. Also, the PR mean rain-rate estimates RkR and Rstd-V5 agree fairly well with the P3 estimate, RP3, whereas Rstd-V4 and RN0 respectively underestimate and overestimate RP3. 
In convective rain (Brett case), the PR estimates of Z and R largely exceed
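    The regime-dependent Z-R power laws that this record (and the radar QPE record above) revolve around can be sketched as follows. The coefficients below are commonly cited published defaults (Marshall-Palmer, the WSR-88D convective default, and the Rosenfeld tropical relation), used as illustrative assumptions rather than the relations tuned in either paper.

```python
# Hedged sketch: convert radar reflectivity (dBZ) to rain rate R (mm/h)
# with regime-dependent Z-R power laws, Z = a * R^b. Coefficients are
# commonly published defaults (assumptions, not the papers' values).
import numpy as np

ZR_COEFFS = {
    "convective": (300.0, 1.4),  # WSR-88D default convective relation
    "stratiform": (200.0, 1.6),  # Marshall-Palmer relation
    "tropical":   (250.0, 1.2),  # Rosenfeld tropical relation
}

def rain_rate(dbz, regime: str) -> np.ndarray:
    """Invert the Z-R power law: R = (Z / a)^(1/b), Z in mm^6 m^-3."""
    a, b = ZR_COEFFS[regime]
    z_linear = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)  # dBZ -> linear
    return (z_linear / a) ** (1.0 / b)

dbz = np.array([20.0, 35.0, 50.0])
for regime in ZR_COEFFS:
    print(regime, np.round(rain_rate(dbz, regime), 2))
```

    The same reflectivity maps to noticeably different rain rates per regime, which is why the precipitation-classification step matters before applying a Z-R relation.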

  4. Estimation of Phase Delay due to Precipitable Water for Dinsarbased Land Deformation Monitoring

    Science.gov (United States)

    Susaki, J.; Maeda, N.; Akatsuka, S.

    2017-09-01

    In this paper, we present a method for using the estimated precipitable water (PW) to mitigate atmospheric phase delay in order to improve the accuracy of land-deformation assessment with differential interferometric synthetic aperture radar (DInSAR). The phase difference obtained from multi-temporal synthetic aperture radar images contains errors of several types, and the atmospheric phase delay can be an obstacle to estimating surface subsidence. In this study, we calculate PW from external meteorological data. Firstly, we interpolate the data with regard to their spatial and temporal resolutions. Then, assuming a range direction between a target pixel and the sensor, we derive the cumulative amount of differential PW at the height of the slant range vector at pixels along that direction. The atmospheric phase delay of each interferogram is acquired by taking a residual after a preliminary determination of the linear deformation velocity and digital elevation model (DEM) error, and by applying high-pass temporal and low-pass spatial filters. Next, we estimate a regression model that connects the cumulative amount of PW and the atmospheric phase delay. Finally, we subtract the contribution of the atmospheric phase delay from the phase difference of the interferogram, and determine the linear deformation velocity and DEM error. The experimental results show a consistent relationship between the cumulative amount of differential PW and the atmospheric phase delay. An improvement in land-deformation accuracy is observed at a point at which the deformation is relatively large. Although further investigation is necessary, we conclude at this stage that the proposed approach has the potential to improve the accuracy of the DInSAR technique.

  5. ESTIMATION OF PHASE DELAY DUE TO PRECIPITABLE WATER FOR DINSARBASED LAND DEFORMATION MONITORING

    Directory of Open Access Journals (Sweden)

    J. Susaki

    2017-09-01

    Full Text Available In this paper, we present a method for using the estimated precipitable water (PW) to mitigate atmospheric phase delay in order to improve the accuracy of land-deformation assessment with differential interferometric synthetic aperture radar (DInSAR). The phase difference obtained from multi-temporal synthetic aperture radar images contains errors of several types, and the atmospheric phase delay can be an obstacle to estimating surface subsidence. In this study, we calculate PW from external meteorological data. Firstly, we interpolate the data with regard to their spatial and temporal resolutions. Then, assuming a range direction between a target pixel and the sensor, we derive the cumulative amount of differential PW at the height of the slant range vector at pixels along that direction. The atmospheric phase delay of each interferogram is acquired by taking a residual after a preliminary determination of the linear deformation velocity and digital elevation model (DEM) error, and by applying high-pass temporal and low-pass spatial filters. Next, we estimate a regression model that connects the cumulative amount of PW and the atmospheric phase delay. Finally, we subtract the contribution of the atmospheric phase delay from the phase difference of the interferogram, and determine the linear deformation velocity and DEM error. The experimental results show a consistent relationship between the cumulative amount of differential PW and the atmospheric phase delay. An improvement in land-deformation accuracy is observed at a point at which the deformation is relatively large. Although further investigation is necessary, we conclude at this stage that the proposed approach has the potential to improve the accuracy of the DInSAR technique.
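    The regression-and-subtraction step these two records describe can be sketched with synthetic numbers: fit a linear model connecting cumulative differential PW to the residual atmospheric phase delay, then remove the modeled delay. The values, units, and linear model form below are assumptions for illustration, not the paper's actual model.

```python
# Hedged sketch of the PW-vs-phase-delay regression step: fit
# delay = a * PW + b by least squares, then subtract the modeled delay
# from the interferogram phase. All data are synthetic (assumptions).
import numpy as np

rng = np.random.default_rng(0)
pw_cum = rng.uniform(0.0, 30.0, size=50)   # cumulative differential PW (mm)
true_slope, true_intercept = 0.4, 1.0      # rad/mm, rad (assumed)
phase_delay = (true_slope * pw_cum + true_intercept
               + rng.normal(0.0, 0.2, size=50))  # observed residual phase

# Least-squares fit of the linear regression model.
a, b = np.polyfit(pw_cum, phase_delay, deg=1)

# Correct the phase by removing the modeled atmospheric contribution.
corrected = phase_delay - (a * pw_cum + b)
print(f"slope={a:.3f} rad/mm, intercept={b:.3f} rad, "
      f"residual std={corrected.std():.3f} rad")
```

    After subtraction, the remaining scatter reflects only the noise term, which is the sense in which the correction "mitigates" the atmospheric delay before re-estimating deformation velocity and DEM error.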

  6. Estimation of Drainage and Evapotranspiration from Time Series of Soil Moisture, Potential Evaporation, and Precipitation

    Science.gov (United States)

    Salvucci, G. D.; Gioioso, M.

    2003-12-01

    A previous study demonstrated that the dependence of soil water outflow on soil moisture can be estimated by averaging precipitation conditioned on soil moisture. The methodology is nonparametric and relies only on the assumed stationarity of the soil moisture time series. Here we present a method for partitioning out the evapotranspiration component of the total outflow. One goal is to structure the model with as few assumptions about model form as possible. For example, we constrain evapotranspiration efficiency to increase monotonically with moisture and to be concave down, while the net drainage (capillary rise to, or percolation from, the root zone) is made to depend on moisture in a concave-upward fashion. The functions used to represent these behaviors are piecewise-continuous polynomials or line segments. After generating a set of feasible partitions using a linear programming technique, we evaluate the relative likelihood of each by estimating the entropy of the time series of soil water storage that results from integrating the fluxes. We show that the entropy of the series is proportional to the likelihood that the increments that make it up come from a stationary process, and use this as a basis for model selection. We also estimate the growth of variance of the time series and decompose it into an equilibrium process (which saturates with time due to a negative correlation among increments) and an error process which (for white-noise model, measurement, and sampling errors) leads to a random-walk term. A unique feature of the method is that it does not fit model predictions to soil moisture, but instead evaluates the stationarity of the running series of soil water storage values implied by the partitioning. Because of this feature, the method can be driven with indices of soil moisture (such as brightness temperatures) rather than site-specific water contents.
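    The conditional-averaging idea cited in this record's first sentence can be sketched on synthetic data: under stationarity, the mean total outflow as a function of soil moisture can be estimated by bin-averaging precipitation conditioned on the soil moisture state. The bucket model, loss function, and rainfall statistics below are invented for illustration, not the study's estimator or data.

```python
# Hedged sketch of the conditional-averaging estimator: simulate a simple
# soil-moisture bucket, then estimate E[precip | soil moisture] by binning.
# All model choices here are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 20000
loss = lambda x: 0.3 * x**2                     # assumed outflow function
# Intermittent rainfall: wet on ~30% of steps, exponential depths.
precip = rng.exponential(0.02, n) * (rng.random(n) < 0.3)

s = np.empty(n)                                 # relative soil moisture
s[0] = 0.5
for t in range(1, n):                           # simple bucket water balance
    s[t] = np.clip(s[t - 1] + precip[t] - 0.1 * loss(s[t - 1]), 0.0, 1.0)

# Conditional average of precipitation within soil-moisture bins: at
# stationarity this traces the mean loss as a function of moisture.
bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(s[:-1], bins)
est = np.array([precip[1:][idx == i].mean() if np.any(idx == i) else np.nan
                for i in range(1, len(bins))])
for c, e in zip(0.5 * (bins[:-1] + bins[1:]), est):
    if not np.isnan(e):
        print(f"s≈{c:.2f}: mean conditional precip {e:.4f}")
```

    Nothing is fitted to the moisture series itself, which is the nonparametric property the record highlights; the paper's contribution is the further partition of this total outflow into drainage and evapotranspiration.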

  7. Quantitative reconstruction of summer precipitation using a mid-Holocene δ13C common millet record from Guanzhong Basin, northern China

    Science.gov (United States)

    Yang, Qing; Li, Xiaoqiang; Zhou, Xinying; Zhao, Keliang; Sun, Nan

    2016-12-01

    To quantitatively reconstruct Holocene precipitation for particular geographical areas, suitable proxies and reliable dating controls are required. The fossilized seeds of common millet (Panicum miliaceum) are found throughout the sedimentary strata of northern China and are well suited to producing quantitative Holocene precipitation reconstructions: their carbon isotope composition (δ13C) provides a measure of the precipitation received during the summer growing season (here the interval from mid-June to September), and the seeds themselves can be directly dated. We therefore used a regression function, as part of a systematic study of the δ13C of common millet, to produce a quantitative reconstruction of mid-Holocene summer precipitation in the Guanzhong Basin (107°40'-107°49' E, 33°39'-34°45' N). Our results showed that mean summer precipitation at 7.7-3.4 ka BP was 353 mm, ~50 mm or 17% higher than present levels, and that the variability increased, especially after 5.2 ka BP. Mean summer precipitation peaked at 414 mm during the period 6.1-5.5 ka BP, ~109 mm (or 36%) higher than today, indicating that the East Asian summer monsoon (EASM) peaked at this time. This work provides a new proxy for further research into continuous paleoprecipitation sequences and the variability of summer precipitation, which will promote further research into the relationship between early human activity and environmental change.

  8. Quantitative CT: technique dependence of volume estimation on pulmonary nodules.

    Science.gov (United States)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan

    2012-03-07

    Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.

  9. Quantitative estimation of sampling uncertainties for mycotoxins in cereal shipments.

    Science.gov (United States)

    Bourgeois, F S; Lyman, G J

    2012-01-01

    Many countries receive shipments of bulk cereals from primary producers. A substantial body of ongoing work seeks to establish appropriate standards for the quality of these shipments and the means to assess them as they are out-loaded. Of concern are mycotoxin and heavy metal levels, pesticide and herbicide residue levels, and contamination by genetically modified organisms (GMOs). As improved analytical techniques sharpen the ability to quantify these contaminants, the sampling methodologies applied to the shipments must keep pace to ensure that the uncertainties attached to the sampling procedures do not overwhelm the analytical uncertainties. There is a need to understand and quantify sampling uncertainties under varying conditions of contamination. The required analysis is statistical and challenging, as the distribution of contaminants within a shipment is not well understood and very limited data exist. Limited work has been undertaken to quantify the variability of contaminant concentrations in the flow of grain coming from a ship and the impact this has on the variance of sampling. Relatively recent work by Paoletti et al. in 2006 [Paoletti C, Heissenberger A, Mazzara M, Larcher S, Grazioli E, Corbisier P, Hess N, Berben G, Lübeck PS, De Loose M, et al. 2006. Kernel lot distribution assessment (KeLDA): a study on the distribution of GMO in large soybean shipments. Eur Food Res Tech. 224:129-139] provides some insight into the variation in GMO concentrations in soybeans on cargo out-turn. Paoletti et al. analysed the data using correlogram analysis with the objective of quantifying the sampling uncertainty (variance) that attaches to the final cargo analysis, but this is only one possible means of quantifying sampling uncertainty. It is possible that in many cases the levels of contamination passing the sampler on out-loading are essentially random, negating the value of variographic quantitation of

  10. Estimation of Thermodynamic and Dynamic Contribution on Regional Precipitation Intensity and Frequency Changes under Global Warming

    Science.gov (United States)

    Chen, C.-A.; Chou, C.; Chen, C.-T.

    2012-04-01

    From a global point of view, an increasing trend in mean precipitation, associated with a shift toward more intense and extreme precipitation, has been found in observations and global warming simulations. However, changes in regional precipitation might differ due to the contributions of thermodynamic and dynamic components. This implies that changes in regional rainfall intensity and frequency, which are connected to regional mean precipitation changes, should be more complicated under global warming. To understand how regional intensity and frequency will change under global warming, the global warming simulations from the World Climate Research Programme (WCRP) Coupled Model Intercomparison Project phase 3 (CMIP3) multimodel dataset in the A1B scenario were examined in this study. Over regions with increased mean precipitation, the positive precipitation anomaly is usually contributed by more frequent heavy rain and enhanced rainfall intensity, even though there are fewer light rain events in the future. On the other hand, over regions with decreased mean precipitation, the negative precipitation anomaly is associated with decreased frequency for almost all rain events and weakened rainfall intensity, even though there are more very heavy and light rain events. The thermodynamic component is uniform across regions and tends to enhance precipitation frequency and intensity, while the dynamic component varies with region and can either enhance or reduce precipitation frequency and intensity.

  11. The Contribution Of Sampling Errors In Satellite Precipitation Estimates To High Flood Uncertainty In Subtropical South America

    Science.gov (United States)

    Demaria, E. M.; Valdes, J. B.; Nijssen, B.; Rodriguez, D.; Su, F.

    2009-12-01

    Satellite precipitation estimates are becoming increasingly available at temporal and spatial scales of interest for hydrological applications. Unfortunately, precipitation estimated from global satellites is prone to errors from different sources. The impact of sampling errors on the hydrological cycle of a large basin was assessed with a macroscale hydrological model. Synthetic precipitation fields were generated in a Monte Carlo fashion by perturbing observed precipitation fields with sampling errors. Three sampling intervals were chosen to generate the precipitation fields: one hour, three hours (the canonical Global Precipitation Mission (GPM) sampling interval), and six hours. The Variable Infiltration Capacity (VIC) model was used to assess the impact of sampling errors on hydrological fluxes and states in the Iguazu basin in South America for the period 1982-2005. The propagation of sampling errors through the hydrological cycle was evaluated for high-flow events that have a 2% chance of being exceeded at any given time. Results show that observed event volumes are underestimated for small volumes at the three- and six-hour sampling intervals, but at the one-hour sampling interval the difference is almost negligible. The timing of the hydrograph is not affected by the uncertainty present in satellite-derived precipitation as it propagates through the hydrological cycle. Results of two non-parametric tests, the Kruskal-Wallis test on the mean ranks of the population and the Ansari-Bradley test on the equality of the variances, indicate that sampling errors do not affect the occurrence of high flows, since their probability distribution is not affected. The applicability of these results is limited to a humid climate. However, the Iguazu basin is representative of several basins located in subtropical regions around the world, many of which are under-instrumented catchments, where satellite precipitation might be one of the few available data
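
    The two non-parametric tests named above are available in SciPy; a minimal sketch on synthetic high-flow volumes (all data here are illustrative, not the Iguazu simulations):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Illustrative high-flow event volumes (arbitrary units): a reference
# series, and the same series perturbed by a multiplicative sampling
# error, mimicking the Monte Carlo experiments described above.
reference = rng.gamma(shape=4.0, scale=25.0, size=200)
perturbed = reference * rng.normal(loc=1.0, scale=0.05, size=200)

# Kruskal-Wallis: do the two samples share the same mean rank?
kw_stat, kw_p = stats.kruskal(reference, perturbed)

# Ansari-Bradley: do the two samples share the same dispersion?
ab_stat, ab_p = stats.ansari(reference, perturbed)

# Large p-values indicate the sampling error leaves the high-flow
# distribution statistically indistinguishable from the reference.
```

    Both tests are rank-based, so they make no assumption about the (skewed) distribution of flood volumes.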

  12. Methodology significantly affects genome size estimates: quantitative evidence using bryophytes.

    Science.gov (United States)

    Bainard, Jillian D; Fazekas, Aron J; Newmaster, Steven G

    2010-08-01

    Flow cytometry (FCM) is commonly used to determine plant genome size estimates. Methodology has improved and changed during the past three decades, and researchers are encouraged to optimize protocols for their specific application. However, this step is typically omitted or undescribed in the current plant genome size literature, and this omission could have serious consequences for the genome size estimates obtained. Using four bryophyte species (Brachythecium velutinum, Fissidens taxifolius, Hedwigia ciliata, and Thuidium minutulum), three methodological approaches to the use of FCM in plant genome size estimation were tested. These included nine different buffers (Baranyi's, de Laat's, Galbraith's, General Purpose, LB01, MgSO(4), Otto's, Tris.MgCl(2), and Woody Plant), seven propidium iodide (PI) staining periods (5, 10, 15, 20, 45, 60, and 120 min), and six PI concentrations (10, 25, 50, 100, 150, and 200 microg ml(-1)). Buffer, staining period and staining concentration all had a statistically significant effect (P = 0.05) on the genome size estimates obtained for all four species. Buffer choice and PI concentration had the greatest effect, altering the 1C-values by as much as 8% and 14%, respectively. As well, the quality of the data varied with the different methodology used. Using the methodology determined to be the most accurate in this study (LB01 buffer and PI staining for 20 min at 150 microg ml(-1)), three new genome size estimates were obtained: B. velutinum: 0.46 pg, H. ciliata: 0.30 pg, and T. minutulum: 0.46 pg. While the peak quality of flow cytometry histograms is important, researchers must consider that changes in methodology can also affect the relative peak positions and therefore the genome size estimates obtained for plants using FCM.

  13. Introducing uncertainty of radar-rainfall estimates to the verification of mesoscale model precipitation forecasts

    Directory of Open Access Journals (Sweden)

    M. P. Mittermaier

    2008-05-01

    Full Text Available A simple measure of the uncertainty associated with using radar-derived rainfall estimates as "truth" has been introduced to the Numerical Weather Prediction (NWP) verification process to assess the effect on forecast skill and errors. Deterministic precipitation forecasts from the mesoscale version of the UK Met Office Unified Model for a two-day high-impact event and for a month were verified at the daily and six-hourly time scale using a spatially-based intensity-scale method and various traditional skill scores such as the Equitable Threat Score (ETS) and log-odds ratio. Radar-rainfall accumulations from the UK Nimrod radar-composite were used.

    The results show that the inclusion of uncertainty has some effect, shifting the forecast errors and skill. The study also allowed for the comparison of results from the intensity-scale method and traditional skill scores. It showed that the two methods complement each other, one detailing the scale and rainfall accumulation thresholds where the errors occur, the other showing how skillful the forecast is. It was also found that for the six-hourly forecasts the error distributions remain similar with forecast lead time but skill decreases. This highlights the difference between forecast error and forecast skill, and that they are not necessarily the same.

  14. New estimations of precipitation and surface sublimation in East Antarctica from snow accumulation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Frezzotti, Massimo; Gragnani, Roberto; Proposito, Marco [l' Energia e l' Ambiente, ' Progetto Clima Globale' , Ente per le Nuove Tecnologie, Rome (Italy); Pourchet, Michel; Gay, Michel; Vincent, Christian; Fily, Michel [CNRS, Laboratoire de Glaciologie et Geophysique de l' Environnement, Saint Martin d' Heres (France); Flora, Onelio [University of Trieste, Dipartimento di Scienze Geologiche, Ambientali e Marine, Trieste (Italy); Gandolfi, Stefano [University of Bologna, Dipartimento di Ingegneria delle Strutture, dei Trasporti, delle Acque, del Rilevamento, del Territorio, Bologna (Italy); Urbini, Stefano [Istituto Nazionale di Geofisica e Vulcanologia, Rome (Italy); Becagli, Silvia; Severi, Mirko; Traversi, Rita; Udisti, Roberto [University of Florence, Dipartimento di Chimica, Florence (Italy)

    2004-12-01

    Surface mass balance (SMB) distribution and its temporal and spatial variability are essential input parameters in mass balance studies. Different methods were used, compared and integrated (stake farms, ice cores, snow radar, surface morphology, remote sensing) at eight sites along a transect from Terra Nova Bay (TNB) to Dome C (DC) (East Antarctica) to provide detailed information on the SMB. Spatial variability measurements show that the measured maximum snow accumulation (SA) in a 15 km area is well correlated with firn temperature. Wind-driven sublimation processes, controlled by the surface slope in the wind direction, have a huge impact (up to 85% of snow precipitation) on SMB and are significant in terms of past, present and future SMB evaluations. The snow redistribution process is local and has a strong impact on the annual variability of accumulation. The spatial variability of SMB at the kilometre scale is one order of magnitude higher than its temporal variability (20-30%) at the centennial time scale. This high spatial variability is due to wind-driven sublimation. Compared with our SMB calculations, previous compilations generally overestimate SMB, by up to 65% in some areas. (orig.)

  15. Introducing uncertainty of radar-rainfall estimates to the verification of mesoscale model precipitation forecasts

    Science.gov (United States)

    Mittermaier, M. P.

    2008-05-01

    A simple measure of the uncertainty associated with using radar-derived rainfall estimates as "truth" has been introduced to the Numerical Weather Prediction (NWP) verification process to assess the effect on forecast skill and errors. Deterministic precipitation forecasts from the mesoscale version of the UK Met Office Unified Model for a two-day high-impact event and for a month were verified at the daily and six-hourly time scale using a spatially-based intensity-scale method and various traditional skill scores such as the Equitable Threat Score (ETS) and log-odds ratio. Radar-rainfall accumulations from the UK Nimrod radar-composite were used. The results show that the inclusion of uncertainty has some effect, shifting the forecast errors and skill. The study also allowed for the comparison of results from the intensity-scale method and traditional skill scores. It showed that the two methods complement each other, one detailing the scale and rainfall accumulation thresholds where the errors occur, the other showing how skillful the forecast is. It was also found that for the six-hourly forecasts the error distributions remain similar with forecast lead time but skill decreases. This highlights the difference between forecast error and forecast skill, and that they are not necessarily the same.

  16. Estimation of extreme daily precipitation: comparison between regional and geostatistical approaches.

    Science.gov (United States)

    Hellies, Matteo; Deidda, Roberto; Langousis, Andreas

    2016-04-01

    We study the extreme rainfall regime of the Island of Sardinia in Italy, based on annual maxima of daily precipitation. The statistical analysis is conducted using 229 daily rainfall records with at least 50 complete years of observations, collected at different sites by the Hydrological Survey of the Sardinia Region. Preliminary analysis, and the L-skewness and L-kurtosis diagrams, show that the Generalized Extreme Value (GEV) distribution model performs best in describing daily rainfall extremes. The GEV distribution parameters are estimated using the method of Probability Weighted Moments (PWM). To obtain extreme rainfall estimates at ungauged sites, while minimizing uncertainties due to sampling variability, a regional and a geostatistical approach are compared. The regional approach merges information from different gauged sites, within homogeneous regions, to obtain GEV parameter estimates at ungauged locations. The geostatistical approach infers the parameters of the GEV distribution model at locations where measurements are available, and then spatially interpolates them over the study region. In both approaches we use local rainfall means as index-rainfall. In the regional approach we define homogeneous regions by applying a hierarchical cluster analysis based on Ward's method, with L-moment ratios (i.e. L-CV and L-Skewness) as metrics. The analysis results in four contiguous regions, which satisfy the Hosking and Wallis (1997) homogeneity tests. The latter have been conducted using a Monte-Carlo approach based on a 4-parameter Kappa distribution model, fitted to each station cluster. Note that the 4-parameter Kappa model includes the GEV distribution as a sub-case, when the fourth parameter h is set to 0. 
In the geostatistical approach we apply kriging for uncertain data (KUD), which accounts for the error variance in local parameter estimation and, therefore, may serve as a useful tool for spatial interpolation of metrics affected by high uncertainty. In
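
    The PWM/L-moment fit of the GEV distribution described above can be sketched in a few lines, using the standard Hosking approximation for the shape parameter; the annual maxima below are synthetic Gumbel draws, not the Sardinian records.

```python
import math
import numpy as np

def sample_lmoments(x):
    """First three sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    return b0, 2.0 * b1 - b0, 6.0 * b2 - 6.0 * b1 + b0

def gev_fit_pwm(x):
    """GEV parameters (mu, sigma, k) from sample L-moments, with the
    Hosking approximation for the shape k (k > 0: bounded upper tail)."""
    l1, l2, l3 = sample_lmoments(x)
    t3 = l3 / l2                      # L-skewness
    c = 2.0 / (3.0 + t3) - math.log(2.0) / math.log(3.0)
    k = 7.8590 * c + 2.9554 * c ** 2
    g = math.gamma(1.0 + k)
    sigma = l2 * k / ((1.0 - 2.0 ** (-k)) * g)
    mu = l1 - sigma * (1.0 - g) / k
    return mu, sigma, k

# Synthetic annual maxima from a Gumbel (GEV with k = 0) with
# location 30 mm and scale 8 mm -- placeholders, not the study's data.
rng = np.random.default_rng(0)
u = rng.uniform(size=5000)
annual_max = 30.0 - 8.0 * np.log(-np.log(u))

mu, sigma, k = gev_fit_pwm(annual_max)
```

    In a regional setting, the same fit is applied to rescaled records pooled within each homogeneous region rather than to a single station.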

  17. The impact of uncertain precipitation data on insurance loss estimates using a Flood Catastrophe Model

    Directory of Open Access Journals (Sweden)

    C. C. Sampson

    2014-01-01

    Full Text Available Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, they would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and re-insurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland, in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory, and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than commercial products. The model consists of four components: a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module, and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge-corrected rainfall radar, meteorological re-analysis data (ERA-Interim), and a satellite rainfall product (CMORPH). Catastrophe models are unusual because they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated. We find these loss estimates to be highly sensitive to uncertainties propagated from the driving observational datasets, suggesting that the range of uncertainty within catastrophe model structures may be greater than commonly believed.

  18. The Centiloid Project: Standardizing Quantitative Amyloid Plaque Estimation by PET

    Science.gov (United States)

    Klunk, William E.; Koeppe, Robert A.; Price, Julie C.; Benzinger, Tammie; Devous, Michael D.; Jagust, William; Johnson, Keith; Mathis, Chester A.; Minhas, Davneet; Pontecorvo, Michael J.; Rowe, Christopher C.; Skovronsky, Daniel; Mintun, Mark

    2014-01-01

    Although amyloid imaging with PiB-PET, and now with F-18-labelled tracers, has produced remarkably consistent qualitative findings across a large number of centers, there has been considerable variability in the exact numbers reported as quantitative outcome measures of tracer retention. In some cases this is as trivial as the choice of units, in some cases it is scanner dependent, and of course, different tracers yield different numbers. Our working group was formed to standardize quantitative amyloid imaging measures by scaling the outcome of each particular analysis method or tracer to a 0 to 100 scale, anchored by young controls (≤45 years) and typical Alzheimer’s disease patients. The units of this scale have been named “Centiloids.” Basically, we describe a “standard” method of analyzing PiB PET data and then a method for scaling any “non-standard” method of PiB PET analysis (or any other tracer) to the Centiloid scale. PMID:25443857
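
    The anchoring described above amounts to a linear rescaling of each method's native outcome measure; a sketch, with anchor SUVR values that are purely illustrative:

```python
def to_centiloid(suvr, suvr_yc_mean, suvr_ad_mean):
    """Rescale a tracer uptake value (e.g. a PiB SUVR) so the young-control
    anchor maps to 0 and the typical-AD anchor maps to 100."""
    return 100.0 * (suvr - suvr_yc_mean) / (suvr_ad_mean - suvr_yc_mean)

# Illustrative anchor means (not values from the paper):
yc_mean, ad_mean = 1.05, 2.10

cl_yc = to_centiloid(yc_mean, yc_mean, ad_mean)   # 0 by construction
cl_ad = to_centiloid(ad_mean, yc_mean, ad_mean)   # 100 by construction
cl_mid = to_centiloid(1.575, yc_mean, ad_mean)
```

    Any "non-standard" analysis or tracer is calibrated by estimating its own anchor means on the same reference groups and reusing this transformation.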

  19. On-line estimation of the dissolved zinc concentration during ZnS precipitation in a CSTR

    NARCIS (Netherlands)

    Grootscholten, T.I.M.; Keesman, K.J.; Lens, P.N.L.

    2007-01-01

    In this paper a method is presented to estimate the reaction term of zinc sulphide precipitation and the zinc concentration in a CSTR, using the read-out signal of a sulphide selective electrode. The reaction between zinc and sulphide is described by a non-linear model and therefore classic

  1. Where Does the Irrigation Water Go? An Estimate of the Contribution of Irrigation to Precipitation Using MERRA

    Science.gov (United States)

    Wei, Jiangfeng; Dirmeyer, Paul A.; Wisser, Dominik; Bosilovich, Michael G.; Mocko, David M.

    2013-01-01

    Irrigation is an important human activity that may impact local and regional climate, but current climate model simulations and data assimilation systems generally do not explicitly include it. The European Centre for Medium-Range Weather Forecasts (ECMWF) Interim Re-Analysis (ERA-Interim) shows more of an irrigation signal in surface evapotranspiration (ET) than the Modern-Era Retrospective Analysis for Research and Applications (MERRA), because ERA-Interim adjusts soil moisture according to the observed surface temperature and humidity while MERRA has no explicit consideration of irrigation at the surface. However, when compared with the results from a hydrological model with detailed representation of agriculture, the ET from both reanalyses shows large deficiencies in capturing the impact of irrigation. Here, a back-trajectory method is used to estimate the contribution of irrigation to precipitation over local and surrounding regions, using MERRA with observation-based corrections and the irrigation-caused ET increase added from the hydrological model. Results show substantial contributions of irrigation to precipitation over heavily irrigated regions in Asia, but the precipitation increase is much less than the ET increase over most areas, indicating that irrigation could lead to water deficits over these regions. For the same increase in ET, precipitation increases are larger over wetter areas where convection is more easily triggered, but the percentage increase in precipitation is similar across areas. There are substantial regional differences in the patterns of irrigation impact, but, for all the studied regions, the highest percentage contribution to precipitation is over local land.

  2. Modelling and on-line estimation of zinc sulphide precipitation in

    NARCIS (Netherlands)

    Grootscholten, T.I.M.; Keesman, K.J.; Lens, P.N.L.

    2008-01-01

    In this paper the ZnS precipitation in a continuously stirred tank reactor (CSTR) is modelled using mass balances. The dynamic analysis of the model reveals that the ZnS precipitation shows a two-time-scale behaviour with inherent numerical stability problems, which therefore needs special attenti

  4. Quantitative reconstruction of precipitation and runoff during MIS 5a, MIS 3a, and Holocene, arid China

    Science.gov (United States)

    Liu, Yuan; Li, Yu

    2016-09-01

    Marine oxygen isotope stage 5a (MIS 5a), MIS 3a, and the Holocene have been focal periods in paleoclimate studies. Many studies have been published in this regard, but most have emphasized qualitative research, and quantitative data are often lacking. In this paper, based on chronological evidence from a paleolake in arid China, the MIS 5a, MIS 3a, and Holocene lake areas, the precipitation over the drainage area, and the runoff of the rivers flowing into the lake were reconstructed with ArcGIS spatial analysis software and an improved water and energy balance model calibrated with modern meteorological and hydrological data from the Shiyang River drainage basin. The results showed that the paleolake areas were 1824, 1124, and 628 km2 for MIS 5a, MIS 3a, and the Holocene, respectively; the corresponding paleoprecipitation was 293.992-297.433, 271.105-274.294, and 249.431-252.373 mm, and the runoff was 29.103 × 10^8-29.496 × 10^8, 18.810 × 10^8-18.959 × 10^8, and 10.637 × 10^8-10.777 × 10^8 mm. These quantitative data can help us not only strengthen the understanding of paleoclimatic characteristics but also recognize the complexity and diversity of the climate system.

  5. Validation of NASA-TRMM MPA Precipitation Estimates During Tropical Storms Using Gauge and Radar-Based Estimates

    Science.gov (United States)

    Henschke, A. E.; Habib, E.

    2008-05-01

    The purpose of this study is the validation of the 3B42 and 3B42-RT rainfall products from NASA's Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) during major tropical rainfall events throughout the state of Louisiana. The 3B42-RT product, a near-real-time dataset, and the 3B42 product, a gauge-calibrated dataset, are available at 0.25° × 0.25°, 3-hourly resolution, covering the globe from 50°N latitude to 50°S latitude. In order to investigate the validity of the TMPA data, radar-based and rain gauge datasets were used as reference. The radar-based dataset, a product of the NWS Stage IV multi-sensor precipitation estimation (MPE) algorithm, is available at 1-hourly intervals on a 4 km × 4 km spatial scale. The rain gauge dataset was obtained on an hourly scale from a national gauge network maintained by the National Climatic Data Center (NCDC). During the study, six tropical storm periods between 2002 and 2005, ranging in length from three to five days, were examined (Hurricane Lili, October 2002; Tropical Storm Bill, June 2003; Hurricane Ivan, September 2004; Tropical Storm Matthew, October 2004; Hurricane Katrina, August 2005; and Hurricane Rita, September 2005). For each analyzed storm, the radar and rain gauge data were averaged spatially and temporally to match the resolution of the TMPA pixels. The number of pixels studied during each storm varied from three to six, with a minimum requirement of three gauges per 3B42 pixel, depending on the gauge density at the landfall location of the storm. Evaluation of the 3B42/3B42-RT error was performed on a storm-by-storm basis as well as on the accumulated data from all six storms, using error metrics including the relative mean difference, relative standard deviation, correlation coefficient, and probability of detection. Significant variability in the performance metrics was observed between the different analyzed storms. Enhanced performance in terms of
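
    The error metrics listed above can be computed as follows; the definitions are common ones and may differ in detail from the study's, and the rain/no-rain threshold and sample accumulations are illustrative:

```python
import numpy as np

def validation_metrics(sat, ref, rain_thresh=0.1):
    """Bulk error metrics for satellite rainfall against a reference.
    Definitions are common ones and may differ in detail from the study;
    the rain/no-rain threshold (mm) is an assumption."""
    sat = np.asarray(sat, dtype=float)
    ref = np.asarray(ref, dtype=float)
    diff = sat - ref
    rel_mean_diff = diff.mean() / ref.mean()
    rel_std = diff.std(ddof=1) / ref.mean()
    corr = np.corrcoef(sat, ref)[0, 1]
    # Probability of detection: fraction of rainy reference samples
    # that the satellite product also flags as rainy.
    rainy = ref >= rain_thresh
    pod = np.mean(sat[rainy] >= rain_thresh)
    return rel_mean_diff, rel_std, corr, pod

# Illustrative pixel-accumulation pairs (mm), satellite vs. reference.
ref = np.array([0.0, 1.2, 3.4, 0.0, 5.1, 2.2])
sat = np.array([0.0, 1.0, 3.0, 0.3, 5.5, 2.0])
rmd, rstd, corr, pod = validation_metrics(sat, ref)
```

    In the study design above, `ref` would hold the gauge or Stage IV accumulations averaged to the 3B42 pixel resolution.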

  6. Precipitation estimates from MSG SEVIRI daytime, night-time and twilight data with random forests

    Science.gov (United States)

    Kühnlein, Meike; Appelhans, Tim; Thies, Boris; Nauss, Thomas

    2014-05-01

    We introduce a new rainfall retrieval technique based on MSG SEVIRI data which aims to retrieve rainfall rates in a continuous manner (day, twilight and night) at high temporal resolution. Due to the deficiencies of existing optical rainfall retrievals, the focus of this technique is on assigning rainfall rates to precipitating cloud areas in connection with extra-tropical cyclones in the mid-latitudes, including both convective and advective-stratiform precipitating cloud areas. The technique is realized in three steps: (i) precipitating cloud areas are identified; (ii) the precipitating cloud areas are separated into convective and advective-stratiform precipitating areas; (iii) rainfall rates are assigned to the convective and advective-stratiform precipitating areas, respectively. Considering the dominant precipitation processes of convective and advective-stratiform precipitation areas within extra-tropical cyclones, satellite-based information on cloud top height, cloud top temperature, cloud phase and cloud water path is used to retrieve information about precipitation. The approach uses the ensemble classification and regression technique random forests to develop the prediction algorithms. Random forest models contain a combination of characteristics that make them well suited for application in precipitation remote sensing. One of the key advantages is the ability to capture non-linear associations between predictors and response, which becomes important when dealing with complex non-linear events like precipitation. The use of a machine learning approach differentiates the proposed technique from most state-of-the-art satellite-based rainfall retrievals, which generally use conventional parametric approaches. To train and validate the model, the radar-based RADOLAN RW product from the German Weather Service (DWD) is used, which provides area-wide gauge-adjusted hourly precipitation information. Besides the overall performance of the
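
    A random-forest regression of rain rate on cloud-property predictors, as described above, can be sketched with scikit-learn; the predictors and the synthetic "radar" target below are stand-ins, not SEVIRI or RADOLAN data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 2000

# Synthetic stand-ins for the SEVIRI-derived predictors named above:
# cloud top temperature (K), cloud water path (g m^-2), cloud phase flag.
ctt = rng.uniform(200.0, 280.0, size=n)
cwp = rng.uniform(0.0, 800.0, size=n)
phase = rng.integers(0, 2, size=n).astype(float)

# Synthetic "radar" rain rate (mm/h) with a non-linear dependence on the
# predictors, purely so the forest has structure to learn.
rain = np.maximum(0.0, 0.01 * cwp - 0.05 * (ctt - 240.0) + rng.normal(0.0, 0.5, size=n))

X = np.column_stack([ctt, cwp, phase])

# Train on the first 1500 samples, evaluate on the held-out 500.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:1500], rain[:1500])
pred = model.predict(X[1500:])
```

    In the retrieval described above, separate models of this kind would be trained for the convective and advective-stratiform regimes.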

  7. A Quantitative Model to Estimate Drug Resistance in Pathogens

    Directory of Open Access Journals (Sweden)

    Frazier N. Baker

    2016-12-01

    Full Text Available Pneumocystis pneumonia (PCP) is an opportunistic infection that occurs in humans and other mammals with debilitated immune systems. These infections are caused by fungi in the genus Pneumocystis, which are not susceptible to standard antifungal agents. Despite decades of research and drug development, the primary treatment and prophylaxis for PCP remains a combination of trimethoprim (TMP) and sulfamethoxazole (SMX) that targets two enzymes in folic acid biosynthesis, dihydrofolate reductase (DHFR) and dihydropteroate synthase (DHPS), respectively. There is growing evidence of emerging resistance by Pneumocystis jirovecii (the species that infects humans) to TMP-SMX associated with mutations in the targeted enzymes. In the present study, we report the development of an accurate quantitative model to predict changes in the binding affinity of inhibitors (Ki, IC50) to the mutated proteins. The model is based on evolutionary information and amino acid covariance analysis. Predicted changes in binding affinity upon mutation correlate highly with the experimentally measured data. While trained on Pneumocystis jirovecii DHFR/TMP data, the model shows similar or better performance when evaluated on the resistance data for a different inhibitor of PjDHFR, another drug/target pair (PjDHPS/SMX), and another organism (Staphylococcus aureus DHFR/TMP). Therefore, we anticipate that the developed prediction model will be useful in the evaluation of possible resistance of newly sequenced variants of the pathogen and can be extended to other drug targets and organisms.

  8. Spatial estimation of mean temperature and precipitation in areas of scarce meteorological information

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, J.D. [Universidad Autonoma Chapingo, Chapingo (Mexico)]. E-mail: dgomez@correo.chapingo.mx; Etchevers, J.D. [Instituto de Recursos Naturales, Colegio de Postgraduados, Montecillo, Edo. de Mexico (Mexico); Monterroso, A.I. [departamento de Suelos, Universidad Autonoma Chapingo, Chapingo (Mexico); Gay, G. [Centro de Ciencias de la Atmosfera, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico); Campo, J. [Instituto de Ecologia, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico); Martinez, M. [Instituto de Recursos Naturales, Montecillo, Edo. de Mexico (Mexico)

    2008-01-15

    In regions of complex relief and scarce meteorological information, it is difficult to apply numerical interpolation techniques and models to produce the reliable maps of climatic variables essential for the study of natural resources with the new tools of geographic information systems. This paper presents a method for estimating annual and monthly mean values of temperature and precipitation, taking elements from simple interpolation methods and complementing them with some characteristics of more sophisticated methods. To determine temperature, simple linear regression equations were generated associating temperature with the altitude of weather stations in the study region, which had previously been subdivided according to humidity conditions; these equations were then applied to the area's digital elevation model to obtain temperatures. The estimation of precipitation was based on the graphic method, through analysis of the meteorological systems that affect the regions of the study area throughout the year, and considering the influence of mountain ridges on the movement of prevailing winds. Weather stations with data in nearby regions were analyzed according to their position in the landscape, exposure to humid winds, and false color associated with vegetation types. Weather station sites were used to reference the amount of rainfall; interpolation was attained using analogies with false-color satellite images, to which a digital elevation model was incorporated to find similar conditions within the study area.
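
    The temperature step described above, a per-subregion linear regression of temperature on station altitude applied cell-by-cell to a DEM, can be sketched as follows; the station data and toy DEM are illustrative, not from the study region:

```python
import numpy as np

# Illustrative station records for one humidity subregion:
# elevation (m) and annual mean temperature (deg C).
elev = np.array([50.0, 400.0, 900.0, 1500.0, 2200.0, 2800.0])
temp = np.array([24.1, 22.0, 18.9, 15.2, 10.8, 7.3])

# Per-subregion lapse-rate regression T = a * z + b.
a, b = np.polyfit(elev, temp, 1)

# Apply the fitted equation cell-by-cell to a toy digital elevation
# model to map temperature across the subregion.
dem = np.array([[100.0, 600.0],
                [1200.0, 2500.0]])
temp_map = a * dem + b
```

    One such regression is fitted per humidity subregion, so the lapse rate can differ between humid and dry zones.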

  9. Investigation of Atmospheric Modelling Framework for Better Reconstruction on Historical Extreme Precipitation Event in PMP Estimation

    Science.gov (United States)

    Chen, X.; Hossain, F.; Leung, L. R.

    2015-12-01

    During May 1-2, 2010, a record-breaking storm hit Nashville and caused enormous human and societal losses. The event underscores the importance of forecasting and reconstructing this type of extreme weather system, while also providing an excellent case for atmospheric modelling studies. However, earlier studies suggest that a successful reconstruction of this event depends on, and is sensitive to, a number of model options, making it difficult to establish a better modelling framework with confidence. In this study we employed the Weather Research and Forecasting (WRF) model to investigate how sensitive this extreme precipitation event is to the model configuration, and identified options that produce better results. We tested several combinations of model grid sizes and initial/boundary conditions (IC/BC). At each grid size, we conducted a set of tests on combinations of microphysics (Morrison, new Thompson and WSM5) and cumulus (Kain-Fritsch, Grell-Devenyi and Grell-Freitas) parameterization schemes. The model results were evaluated using bias analysis as well as other metrics (probability of detection, bias, false alarms, HSS, ETS). The evaluation suggests that, in general, simulation results benefit from finer model grids (5 km). At the 5 km level, NCEP2 or NAM IC/BCs are more representative for the 2010 Nashville storm. No parameterization scheme is universally good, but the WSM5 microphysics scheme and the Kain-Fritsch and Grell-Freitas cumulus schemes are recommended over the other tested schemes. These better-performing schemes should help produce better PMP estimates for the region.
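    The categorical metrics named above (probability of detection, frequency bias, false alarms, HSS, ETS) all derive from a 2x2 contingency table of forecast versus observed rain. A sketch with invented counts (the function name and numbers are illustrative, not from the study):

```python
def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Standard 2x2 contingency-table scores used to rank model configurations."""
    n = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                      # probability of detection
    far = false_alarms / (hits + false_alarms)        # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses)    # frequency bias
    # Equitable Threat Score: hits corrected for those expected by chance.
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    # Heidke Skill Score: accuracy corrected for chance agreement.
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses) * (correct_negatives + false_alarms)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return pod, far, bias, ets, hss

pod, far, bias, ets, hss = verification_scores(40, 10, 20, 130)
```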

  10. Quantitative measurement of precipitation using radar in comparison with ground-level measurements, taking orographic influences into account; Quantitative Niederschlagsmessung mit Radar im Vergleich mit Bodenmessungen in orographisch gegliedertem Gelaende

    Energy Technology Data Exchange (ETDEWEB)

    Gysi, H. [Radar-Info, Karlsruhe (Germany)

    1998-01-01

    The methods of correction applied to the determination of the spatial distribution of precipitation on the basis of the volumes established by the Karlsruhe C-band precipitation radar distinctly enhance the quality of statements regarding precipitation intensities and their time integration both in summer and winter. (orig./KW)

  11. Estimation of effects of quantitative trait loci in large complex pedigrees

    NARCIS (Netherlands)

    Meuwissen, T.H.E.; Goddard, M.E.

    1997-01-01

    A method was derived to estimate effects of quantitative trait loci (QTL) using incomplete genotype information in large outbreeding populations with complex pedigrees. The method accounts for background genes by estimating polygenic effects. The basic equations used are very similar to the usual li

  12. AI-based (ANN and SVM) statistical downscaling methods for precipitation estimation under climate change scenarios

    Science.gov (United States)

    Mehrvand, Masoud; Baghanam, Aida Hosseini; Razzaghzadeh, Zahra; Nourani, Vahid

    2017-04-01

    Since statistical downscaling methods are the most widely used models in hydrologic impact studies under climate change scenarios, nonlinear regression models known as Artificial Intelligence (AI)-based models, such as Artificial Neural Networks (ANN) and Support Vector Machines (SVM), have been used to spatially downscale the precipitation outputs of Global Climate Models (GCMs). The study was carried out using GCM and station data over GCM grid points located around the Peace-Tampa Bay watershed weather stations. Before downscaling with the AI-based model, correlation coefficients were computed between a set of large-scale predictor variables and the local-scale predictands to select the most effective predictors. The selected predictors were then assessed considering the grid location of the site in question. To increase the accuracy of the AI-based downscaling model, pre-processing was applied to the precipitation time series: the precipitation data derived from the various GCMs were analyzed thoroughly to find the highest correlation coefficient between GCM-based historical data and station precipitation data. Both GCM and station precipitation time series were assessed by comparing means and variances over specific intervals. The results indicated a similar trend between GCM and station precipitation data; however, the station data form a non-stationary time series while the GCM data do not. Finally, AI-based downscaling models were applied to several GCMs with the selected predictors, targeting the local precipitation time series as predictand. The outcomes of this step were used to produce multiple ensembles of downscaled AI-based models.

  13. Estimate of Precipitation from the Dual-Beam Airborne Radars in TOGA COARE. Part II: Precipitation Efficiency in the 9 February 1993 MCS.

    Science.gov (United States)

    Oury, Stéphane; Dou, Xiankang; Testud, Jacques

    2000-12-01

    Dual-beam airborne Doppler radars are commonly used in convection experiments for their ability to describe the dynamical structure of weather systems. However, instrumental limitations impose the use of wavelengths such as X band, which are strongly attenuated by heavy rain. This paper is the second in a series of two that aim at developing schemes for attenuation correction; the authors' final objective is to improve the estimation of precipitation sampled from airborne radars. The first paper dealt with the application of `differential algorithms' (`stereoradar' and `quad beam') to the independent retrieval of the specific attenuation and nonattenuated reflectivity, which shed some light on the physics of the precipitation. This second paper develops a more extensive procedure based upon the hybridization of a `differential' and an `integral' algorithm. It is much more flexible than the methods proposed in Part I and allows full rainfall-rate retrievals in single-aircraft experiments. This procedure is applied to the 9 February mesoscale convective system (MCS) case from the Tropical Ocean and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE), and the impact of the reflectivity correction on the water budget at the cloud-system scale is discussed. As expected, the production of water in the 9 February squall line is maximum below the freezing level and is located in the updraft resulting from the interaction between the warm inflow and the rear-to-front cold flow. The authors' analysis shows that the precipitation efficiency in the convective region of the system is 31%. Therefore, the large majority of the water vapor condensed into cloud droplets and ice crystals does not immediately reach the surface as precipitation; it travels toward the rear of the system at the speed of the horizontal air motion, which suggests a large contribution of the stratiform area to the global water budget.
The same calculation performed using raw

  14. Probable maximum precipitation 24 hours estimation: A case study of Zanjan province of Iran

    Directory of Open Access Journals (Sweden)

    Azim Shirdeli

    2012-10-01

    Full Text Available One of the primary concerns in designing civil structures such as water storage dams and irrigation and drainage networks is to find an economic scale based on the possibility of natural incidents such as floods, earthquakes, etc. Probable maximum precipitation (PMP) is one of the well-known methods that help design a civil structure properly. In this paper, we study the maximum one-day precipitation using 17 to 50 years of records at 13 stations located in the province of Zanjan, Iran. The study uses two Hershfield methods: the first yields values of 18.17 to 18.48, with PMP24 between 170.14 mm and 255.28 mm; the second yields values between 2.29 and 4.95, with PMP24 between 62.33 mm and 92.08 mm. In addition, when the out-of-range data were removed in the second method, the values ranged from 2.29 to 4.31 and PMP24 was between 76.08 mm and 117.28 mm. The preliminary results indicate that the second Hershfield method provides more stable results than the first.
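    The Hershfield statistical approach estimates PMP from the annual-maximum rainfall series as mean plus a frequency factor Km times the standard deviation. A minimal sketch with made-up data (the series and the Km value are illustrative, not taken from the Zanjan study):

```python
import statistics

def hershfield_pmp(annual_max_mm, km):
    """Hershfield estimate: PMP = mean + Km * std of the annual 1-day maxima."""
    mean = statistics.mean(annual_max_mm)
    std = statistics.stdev(annual_max_mm)  # sample standard deviation
    return mean + km * std

# Hypothetical 10-year annual-maximum 1-day rainfall series (mm).
series = [42.0, 55.0, 38.0, 61.0, 47.0, 70.0, 52.0, 44.0, 58.0, 49.0]
pmp24 = hershfield_pmp(series, km=15.0)
```

In practice Km is itself estimated (or taken from Hershfield's envelope of station maxima), which is where the two method variants in the abstract differ.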

  15. Does GPM-based multi-satellite precipitation enhance rainfall estimates over Pakistan and Bolivia arid regions?

    Science.gov (United States)

    Hussain, Y.; Satgé, F.; Bonnet, M. P.; Pillco, R.; Molina, J.; Timouk, F.; Roig, H.; Martinez-Carvajal, H., Sr.; Gulraiz, A.

    2016-12-01

    Arid regions are sensitive to rainfall variations, which are expressed in the form of floods and droughts. Unfortunately, those regions are poorly monitored, and high-quality rainfall estimates are still needed. The Global Precipitation Measurement (GPM) mission released two new satellite rainfall products, the Integrated Multi-satellitE Retrievals for GPM (IMERG) and the Global Satellite Mapping of Precipitation version 6 (GSMaP-v6), bringing the possibility of accurate rainfall monitoring over these countries. This study assessed both products at the monthly scale over Pakistan, considering the dry and wet seasons over the 4 main climatic zones from 2014 to 2016. With similar climatic conditions, the Altiplano region of Bolivia was considered to quantify the influence of large lakes (Titicaca and Poopó) on rainfall estimates. For comparison, the widely used TRMM Multi-satellite Precipitation Analysis 3B43 (TMPA-3B43) version 7 was also included in the analysis to observe the potential enhancement in rainfall estimation brought by the GPM products. Rainfall estimates derived from 110 rain gauges were used as the reference to compare IMERG, GSMaP-v6 and TMPA-3B43 at 0.1° and 0.25° spatial resolution. Over both regions, IMERG and GSMaP-v6 capture the spatial pattern of precipitation as well as TMPA-3B43 does. All products tend to overestimate rainfall over very arid regions, a feature even more marked during the dry season; however, during this season both reference and estimated rainfall remain very low and do not affect the seasonal water budget computation. In general, IMERG slightly outperforms TMPA-3B43, while GSMaP-v6 provides the least accurate rainfall estimates. The TMPA-3B43 rainfall underestimation previously found over Lake Titicaca is still observed in the IMERG estimates; however, GSMaP-v6 considerably reduces this underestimation, providing the most accurate rainfall estimates over the lake. MOD11C3 Land Surface Temperature (LST) and the ASTER Global Emissivity Dataset reveal strong
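    Product skill against a rain-gauge reference is typically summarized with bias, RMSE and the Pearson correlation coefficient. A minimal sketch with made-up monthly gauge and satellite totals (names and values are illustrative, not from the study):

```python
import math

def validation_stats(gauge, sat):
    """Mean bias, RMSE and Pearson correlation of satellite vs. gauge rainfall."""
    n = len(gauge)
    bias = sum(s - g for g, s in zip(gauge, sat)) / n
    rmse = math.sqrt(sum((s - g) ** 2 for g, s in zip(gauge, sat)) / n)
    mg, ms = sum(gauge) / n, sum(sat) / n
    cov = sum((g - mg) * (s - ms) for g, s in zip(gauge, sat))
    cc = cov / math.sqrt(sum((g - mg) ** 2 for g in gauge)
                         * sum((s - ms) ** 2 for s in sat))
    return bias, rmse, cc

gauge = [10.0, 25.0, 5.0, 60.0, 15.0]   # hypothetical monthly totals (mm)
sat = [14.0, 22.0, 9.0, 55.0, 20.0]     # co-located product estimates (mm)
bias, rmse, cc = validation_stats(gauge, sat)
```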

  16. Improving quantitative precipitation nowcasting with a local ensemble transform Kalman filter radar data assimilation system: observing system simulation experiments

    Directory of Open Access Journals (Sweden)

    Chih-Chien Tsai

    2014-03-01

    Full Text Available This study develops a Doppler radar data assimilation system, which couples the local ensemble transform Kalman filter with the Weather Research and Forecasting model. The benefits of this system for quantitative precipitation nowcasting (QPN) are evaluated with observing system simulation experiments on Typhoon Morakot (2009), which brought record-breaking rainfall and extensive damage to central and southern Taiwan. The results indicate that the assimilation of radial velocity and reflectivity observations improves the three-dimensional winds and rain-mixing ratio most significantly because of the direct relations in the observation operator. The patterns of the spiral rainbands become more consistent between different ensemble members after radar data assimilation. The rainfall intensity and distribution during the 6-hour deterministic nowcast are also improved, especially for the first 3 hours. The nowcasts with and without radar data assimilation show similar evolution trends driven by synoptic-scale conditions. Furthermore, we carry out a series of sensitivity experiments to develop proper assimilation strategies, in which a mixed localisation method is proposed for the first time and found to give further QPN improvement in this typhoon case.

  17. Predictive Uncertainty Estimation on a Precipitation and Temperature Reanalysis Ensemble for Shigar Basin, Central Karakoram

    Directory of Open Access Journals (Sweden)

    Paolo Reggiani

    2016-06-01

    Full Text Available The Upper Indus Basin (UIB) and the Karakoram Range are the subject of ongoing hydro-glaciological studies investigating possible glacier mass balance shifts due to climatic change. Because of the high altitude and remote location, the Karakoram Range is difficult to access and therefore remains scarcely monitored. In situ precipitation and temperature measurements are only available at valley locations, and high-altitude observations exist only for very limited periods. Gridded precipitation and temperature data generated by spatial interpolation of in situ observations are unreliable for this region because of the extreme topography. Besides satellite measurements, which offer spatial coverage but underestimate precipitation in this area, atmospheric reanalyses remain one of the few alternatives. Here, we apply a proven approach to quantify the uncertainty associated with an ensemble of monthly precipitation and temperature reanalysis data for 1979–2009 in Shigar Basin, Central Karakoram. A Model-Conditional Processor (MCP) of uncertainty is calibrated on precipitation and temperature in situ data measured in the proximity of the study region. An ensemble of independent reanalyses is processed to determine the predictive uncertainty of monthly observations. As is to be expected, the informative gain achieved by post-processing temperature reanalyses is considerable, whereas significantly less is gained by precipitation post-processing. The proposed approach constitutes a systematic assessment procedure for predictive uncertainty through probabilistic weighting of multiple re-forecasts, which are bias-corrected against ground observations. The approach also supports an educated reconstruction of gap-filling for missing in situ observations.

  18. A model for estimating understory vegetation response to fertilization and precipitation in loblolly pine plantations

    Science.gov (United States)

    Curtis L. VanderSchaaf; Ryan W. McKnight; Thomas R. Fox; H. Lee Allen

    2010-01-01

    A model form is presented, where the model contains regressors selected for inclusion based on biological rationale, to predict how fertilization, precipitation amounts, and overstory stand density affect understory vegetation biomass. Due to time, economic, and logistic constraints, datasets of large sample sizes generally do not exist for understory vegetation. Thus...

  19. Estimates of run off, evaporation and precipitation for the Bay of Bengal on seasonal basis

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.; Sastry, J.S.

    Mean seasonal river discharge rates (R) of the major rivers along the east coast of India, Bangladesh and Burma; evaporation rates (E) computed for 5-degree latitude-longitude squares from data on heat loss; and mean yearly precipitation (P) values at 5...

  20. Contributions of Precipitation and Soil Moisture Observations to the Skill of Soil Moisture Estimates in a Land Data Assimilation System

    Science.gov (United States)

    Reichle, Rolf H.; Liu, Qing; Bindlish, Rajat; Cosh, Michael H.; Crow, Wade T.; deJeu, Richard; DeLannoy, Gabrielle J. M.; Huffman, George J.; Jackson, Thomas J.

    2011-01-01

    The contributions of precipitation and soil moisture observations to the skill of soil moisture estimates from a land data assimilation system are assessed. Relative to baseline estimates from the Modern Era Retrospective-analysis for Research and Applications (MERRA), the study investigates soil moisture skill derived from (i) model forcing corrections based on large-scale, gauge- and satellite-based precipitation observations and (ii) assimilation of surface soil moisture retrievals from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E). Soil moisture skill is measured against in situ observations in the continental United States at 44 single-profile sites within the Soil Climate Analysis Network (SCAN) for which skillful AMSR-E retrievals are available and at four CalVal watersheds with high-quality distributed sensor networks that measure soil moisture at the scale of land model and satellite estimates. The average skill (in terms of the anomaly time series correlation coefficient R) of AMSR-E retrievals is R=0.39 versus SCAN and R=0.53 versus CalVal measurements. The skill of MERRA surface and root-zone soil moisture is R=0.42 and R=0.46, respectively, versus SCAN measurements, and MERRA surface moisture skill is R=0.56 versus CalVal measurements. Adding information from either precipitation observations or soil moisture retrievals increases surface soil moisture skill levels by DeltaR=0.06-0.08, and root zone soil moisture skill levels by DeltaR=0.05-0.07. Adding information from both sources increases surface soil moisture skill levels by DeltaR=0.13, and root zone soil moisture skill by DeltaR=0.11, demonstrating that precipitation corrections and assimilation of satellite soil moisture retrievals contribute similar and largely independent amounts of information.

  1. Observed and blended gauge-satellite precipitation estimates perspective on meteorological drought intensity over South Sulawesi, Indonesia

    Science.gov (United States)

    Setiawan, A. M.; Koesmaryono, Y.; Faqih, A.; Gunawan, D.

    2017-01-01

    South Sulawesi province, one of the rice production centers for national food security, is highly influenced by climate phenomena that lead to drought conditions. This paper quantifies meteorological drought based on the Standardized Precipitation Index (SPI) recommended by the World Meteorological Organization (WMO) and on Consecutive Dry Days (CDD), one of the extreme indices recommended by the Expert Team on Climate Change Detection and Indices (ETCCDI). The indices were calculated using (i) quality-controlled daily and monthly observational precipitation data from 23 weather stations of various record lengths within the 1967-2015 period, and (ii) the 0.05° x 0.05° blended gauge-satellite daily and monthly precipitation estimates of the Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) dataset. Meteorological drought intensity, represented by the Average Duration of Drought Intensity (ADI) from the three-monthly SPI (SPI3), shows spatially differing characteristics between the eastern and western regions. Observations and CHIRPS give a relatively similar perspective on meteorological drought intensity over South Sulawesi. Relatively high ADI values and the longest CDD are observed mainly over the southwestern part of the study area.
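    The CDD index mentioned above is straightforward to compute; a sketch using the common ETCCDI convention of a 1 mm wet-day threshold (the example series is made up):

```python
def longest_cdd(daily_precip_mm, wet_threshold=1.0):
    """Longest run of consecutive dry days (precip < 1 mm), the ETCCDI CDD index."""
    longest = run = 0
    for p in daily_precip_mm:
        run = run + 1 if p < wet_threshold else 0
        longest = max(longest, run)
    return longest

# Hypothetical 10-day series: dry, dry, wet, dry, dry, dry, dry, wet, dry, dry.
series = [0.0, 0.2, 5.0, 0.0, 0.0, 0.5, 0.9, 3.2, 0.0, 0.0]
cdd = longest_cdd(series)
```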

  2. (In)Consistent estimates of changes in relative precipitation in an European domain over the last 350 years

    Science.gov (United States)

    Bothe, Oliver; Wagner, Sebastian; Zorita, Eduardo

    2015-04-01

    How did regional precipitation change in past centuries? We have potentially three sources of information to answer this question: there are, especially for Europe, a number of long records of local station precipitation; documentary records and natural archives of past environmental variability serve as proxy records for empirical reconstructions; and simulations with coupled climate models or Earth System Models provide estimates of the spatial structure of precipitation variability. However, instrumental records rarely extend back to the 18th century, reconstructions include large uncertainties, and simulation skill for precipitation is often still unsatisfactory. Thus, we can only seek to answer to what extent the three sources provide a consistent picture of past regional precipitation changes. This presentation describes the (lack of) consistency between the different data sources in describing changes in the distributional properties of seasonal precipitation. We concentrate on England and Wales, since two recent reconstructions and a long observation-based record are available for this domain. The season of interest is an extended spring (March, April, May, June, July; MAMJJ) over the past 350 years. The main simulated data stem from a regional simulation for the European domain with CCLM, driven at its lateral boundaries by conditions from an MPI-ESM COSMOS simulation of the last millennium using high-amplitude solar forcing. A number of simulations of the past 1000 years from the Paleoclimate Modelling Intercomparison Project Phase III provide additional information. We fit a Weibull distribution to the available data sets, following the approach used for calculating standardized precipitation indices, over 51-year moving windows, to assess the consistency of changes in the distributional properties.
Changes in the percentiles for severe (and extreme) dry or wet conditions and in the Weibull standard deviations of precipitation
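    The moving-window analysis of dry/wet percentiles can be sketched as follows; this uses empirical percentiles as a simple stand-in for the Weibull fit the authors apply, with synthetic data (all numbers illustrative):

```python
import numpy as np

def moving_window_percentiles(seasonal_precip, window=51, q=(5, 95)):
    """Dry/wet percentile pairs in moving windows over a seasonal-total series.

    Empirical percentiles stand in here for the fitted-distribution quantiles
    used when building standardized precipitation indices.
    """
    out = []
    for start in range(len(seasonal_precip) - window + 1):
        chunk = seasonal_precip[start:start + window]
        out.append(tuple(np.percentile(chunk, q)))
    return out

# 350 synthetic MAMJJ precipitation totals drawn from a gamma distribution.
rng = np.random.default_rng(0)
series = rng.gamma(shape=2.0, scale=50.0, size=350)
pcts = moving_window_percentiles(series, window=51)
```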

  3. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    Science.gov (United States)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)-1, cardiac output = 3, 5, 8 L min-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by 47.5% on average, while the quantitative models provided estimates with less than 6.5% average bias and variance that increased with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods across the range of techniques evaluated.
This suggests that
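    As a rough illustration of the qualitative slope method mentioned above, the flow estimate is essentially the peak tissue upslope divided by the peak of the arterial input function. This is a simplified, hypothetical sketch with invented curves and units; real implementations operate on registered, beam-hardening-corrected time-attenuation curves:

```python
def slope_mbf(tissue_hu, aif_hu, dt_s):
    """Slope-based MBF proxy: peak tissue upslope over peak arterial enhancement.

    Known to underestimate true flow (as the simulations above quantify).
    """
    upslopes = [(b - a) / dt_s for a, b in zip(tissue_hu, tissue_hu[1:])]
    return max(upslopes) / max(aif_hu) * 60.0  # convert per-second to per-minute

# Hypothetical sampled curves (HU above baseline), 1 s sampling.
tissue = [0.0, 5.0, 15.0, 30.0, 40.0, 45.0]
aif = [0.0, 100.0, 400.0, 250.0, 120.0, 60.0]
mbf = slope_mbf(tissue, aif, dt_s=1.0)
```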

  4. Comparing the impact of time displaced and biased precipitation estimates for online updated urban runoff models

    DEFF Research Database (Denmark)

    2013-01-01

    When an online runoff model is updated from system measurements, the requirements of the precipitation input change. Using rain gauge data as precipitation input there will be a displacement between the time when the rain hits the gauge and the time where the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present for system measurements the data assimilation scheme might already have updated the model to include the impact from the particular rain cell when the rain data is forced upon the model, which therefore will end up including the same rain twice in the model run. This paper compares forecast accuracy of updated models when using time displaced rain input to that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either...

  5. Combining C- and X-band Weather Radars for Improving Precipitation Estimates over Urban Areas

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk

    The topic of this thesis is weather radar precipitation measurements. Measuring the spatial and temporal variations of the precipitation by weather radars has significant advantages compared to point measurements from rain gauges within urban drainage applications. Knowledge on how the rainfall ... of future system state. Accurate and reliable weather radar measurements are, therefore, important for future developments and achievements within urban drainage. This PhD study investigates two types of weather radars. Both systems are in operational use in Denmark today. A network of meteorological C-band weather radars provides a basic coverage of almost the entire country. In addition, the larger cities are also covered by small Local Area Weather Radars (LAWR). Whereas the large C-band network is operated and owned by the Danish Meteorological Institute (DMI), the smaller urban radars are operated...

  6. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization-sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes the stochastic property of SNR into account. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and with in vivo measurements of anterior and

  7. Quantitative Estimates of the Social Benefits of Learning, 1: Crime. Wider Benefits of Learning Research Report.

    Science.gov (United States)

    Feinstein, Leon

    The cost benefits of lifelong learning in the United Kingdom were estimated, based on quantitative evidence. Between 1975 and 1996, 43 police force areas in England and Wales were studied to determine the effect of wages on crime. It was found that a 10 percent rise in the average pay of those on low pay reduces the overall area property crime rate by…

  8. A hybrid Bayesian-SVD based method to detect false alarms in PERSIANN precipitation estimation product using related physical parameters

    Science.gov (United States)

    Ghajarnia, Navid; Arasteh, Peyman D.; Araghinejad, Shahab; Liaghat, Majid A.

    2016-07-01

    Incorrect estimation of rainfall occurrence, a so-called False Alarm (FA), is one of the major sources of bias error in satellite-based precipitation estimation products and can cause problems during the bias reduction and calibration processes. In this paper, a hybrid statistical method is introduced to detect FA events in the PERSIANN dataset over the Urmia Lake basin in northwest Iran. The main FA detection model is based on Bayes' theorem, with four predictor parameters as its input dataset: PERSIANN rainfall estimates, brightness temperature (Tb), precipitable water (PW) and near-surface air temperature (Tair). To reduce the dimensionality of the input dataset by summarizing its most important modes of variability and its correlations with the reference dataset, a technique named singular value decomposition (SVD) is used. Applying the Bayesian-SVD method to FA detection over the Urmia Lake basin results in a trade-off between FA detection and the loss of Hit events. The results show that the proposed method detects about 30% of FA events in return for a loss of about 12% of Hit events, and that it performs better in cold seasons.
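    For a single event, the Bayesian core of such a detector reduces to Bayes' theorem over the predictor likelihoods. A toy sketch (the prior and likelihood values are invented, and the paper's actual model operates on SVD-reduced predictor fields):

```python
def bayes_false_alarm_prob(p_fa_prior, likelihood_fa, likelihood_rain):
    """Posterior P(false alarm | predictors) for one satellite rain detection.

    likelihood_fa and likelihood_rain are P(predictors | FA) and
    P(predictors | true rain), respectively.
    """
    num = likelihood_fa * p_fa_prior
    den = num + likelihood_rain * (1.0 - p_fa_prior)
    return num / den

# Hypothetical event: the observed predictors are twice as likely under a
# false alarm as under true rain, with a 30% prior FA rate.
p = bayes_false_alarm_prob(p_fa_prior=0.3, likelihood_fa=0.2, likelihood_rain=0.1)
```

An event would then be flagged as an FA when the posterior exceeds a chosen threshold, which sets the trade-off between FA detection and Hit loss described above.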

  9. Estimation of titers of antibody against Pasteurella multocida in cattle vaccinated with haemorrhagic septicemia alum precipitated vaccine

    Directory of Open Access Journals (Sweden)

    Sabia Qureshi

    2014-04-01

    Full Text Available Aim: The present study was carried out in 100 cattle to assess the antibody response to Haemorrhagic Septicaemia alum precipitated vaccine by Microtiter Agglutination Test (MAT, Indirect Haemaaglutination Assay (IHA and Monoclonal Antibody based Indirect Enzyme Linked Immunosorbent Assay (ELISA. Materials and Methods: One hundred cattle from a local gaushala of Ludhiana were vaccinated with alum precipitated Haemorrhagic Septicaemia vaccine. Serum was collected at 0, 42, 84 and 128 days post immunization and antibody titers at different stages were estimated by MAT, IHA and ELISA. Results: The animals exhibited the classical pattern of humoral immune response with gradual increase and achievement of peak antibody titers plateau by 42DPI and gradual decline by 128 DPI. The IHA titers in cattle were significantly higher (P<0.05 at 42 days post immunization than the corresponding MAT titers on the same day. ELISA titers were significantly higher (P<0.05 than MAT and IHA titers at 42 DPI. IHA was found to be more sensitive than MAT, and the titers were higher by ELISA than by MAT and IHA throughout the observation period. Conclusion: The results indicate that animals vaccinated with commercial alum precipitated HS vaccine could not develop and sustain adequate levels of antibody for long duration.

  10. A comparison of NEXRAD WSR-88D rain estimates with gauge measurements for high and low reflectivity gradient precipitation events.

    Energy Technology Data Exchange (ETDEWEB)

    Jendrowski, P.; Kelly, D. S.; Klazura, G. E.; Thomale, J. M.

    1999-04-14

    Rain gauge measurements were compared with radar-estimated storm total precipitation for 43 rain events that occurred at ten locations. Gauge-to-radar ratios (G/R) were computed for each case. The G/R ratio is strongly related to precipitation type, with the mean G/R slightly less than 1.00 for high-reflectivity gradient cases and greater than 2.00 (a factor-of-2 radar underestimation) for low-reflectivity gradient cases. Both precipitation types indicated radar underestimation at the nearest ranges. However, the high-reflectivity gradient cases indicated radar overestimation at farther ranges, while the low-reflectivity gradient cases indicated significant radar underestimation at all ranges. Occurrences of radar overestimates may have been related to high reflectivity returns from melting ice, bright-band effects in stratiform systems and hail from convective systems. Bright-band effects probably were responsible for improving the radar underestimates in the second range interval (50-99.9 km) for the low-reflectivity gradient cases. Another possible cause of radar overestimates is anomalous propagation (AP) of the radar beam. Smith et al. (1996) concluded that bright band and AP lead to systematic overestimation of rainfall at intermediate ranges.
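
    The G/R verification statistic itself is straightforward to compute. A minimal sketch with illustrative storm totals (not the study's data):

```python
# For each event, G/R = (gauge storm total) / (radar-estimated storm total).
# A mean G/R near 1 indicates unbiased radar estimates; G/R > 1 indicates
# radar underestimation.

def gauge_radar_ratios(events):
    """events: list of (gauge_mm, radar_mm) storm totals -> list of G/R."""
    return [g / r for g, r in events if r > 0]

low_gradient = [(40.0, 18.0), (25.0, 12.0), (30.0, 16.0)]   # stratiform-like
high_gradient = [(50.0, 52.0), (35.0, 33.0), (60.0, 64.0)]  # convective-like

mean_low = sum(gauge_radar_ratios(low_gradient)) / len(low_gradient)
mean_high = sum(gauge_radar_ratios(high_gradient)) / len(high_gradient)
print(round(mean_low, 2), round(mean_high, 2))
```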

  11. Comparing the impact of time displaced and biased precipitation estimates for online updated urban runoff models.

    Science.gov (United States)

    Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen

    2013-01-01

    When an online runoff model is updated from system measurements, the requirements of the precipitation input change. When rain gauge data are used as precipitation input, there will be a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present for system measurements, the data assimilation scheme might already have updated the model to include the impact from the particular rain cell by the time the rain data are forced upon the model, which will therefore end up including the same rain twice in the model run. This paper compares the forecast accuracy of updated models using time-displaced rain input to that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected by a bias. The results show that for a 10 minute forecast, time displacements of 5 and 10 minutes compare to biases of 60 and 100%, respectively, independent of the catchment's time of concentration.
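
    A toy experiment along these lines can be sketched with a time-area (unit hydrograph) model; the weights, rain series, shift size and bias below are illustrative assumptions, not the paper's setup:

```python
# Runoff = rain convolved with a time-area unit hydrograph. A time-shifted
# input misplaces the whole hydrograph, while a biased input only scales it,
# which is why a small displacement can hurt as much as a large bias.

def convolve(rain, uh):
    n = len(rain)
    out = [0.0] * n
    for t in range(n):
        for k, w in enumerate(uh):
            if t - k >= 0:
                out[t] += w * rain[t - k]
    return out

uh = [0.2, 0.4, 0.3, 0.1]            # time-area weights (sum to 1)
rain = [0, 0, 5, 10, 5, 0, 0, 0]     # mm per time step

truth = convolve(rain, uh)
shifted = convolve([0] + rain[:-1], uh)          # 1-step time displacement
biased = convolve([1.3 * r for r in rain], uh)   # +30% constant bias

rmse = lambda a, b: (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
print(round(rmse(truth, shifted), 2), round(rmse(truth, biased), 2))
```

    In this toy case the one-step displacement produces a larger forecast error than the 30% bias, mirroring the paper's comparison.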

  12. CMORPH 8 Km: A Method that Produces Global Precipitation Estimates from Passive Microwave and Infrared Data at High Spatial and Temporal Resolution

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A new technique is presented in which half-hourly global precipitation estimates derived from passive microwave satellite scans are propagated by motion vectors...

  13. A method for estimating and removing streaking artifacts in quantitative susceptibility mapping.

    Science.gov (United States)

    Li, Wei; Wang, Nian; Yu, Fang; Han, Hui; Cao, Wei; Romero, Rebecca; Tantiwongkosi, Bundhit; Duong, Timothy Q; Liu, Chunlei

    2015-03-01

    Quantitative susceptibility mapping (QSM) is a novel MRI method for quantifying tissue magnetic properties. In the brain, it reflects the molecular composition and microstructure of the local tissue. However, susceptibility maps reconstructed from single-orientation data still suffer from streaking artifacts that obscure structural details and small lesions. We propose and have developed a general method for estimating streaking artifacts and subtracting them from susceptibility maps. Specifically, this method uses a sparse linear equation and least-squares (LSQR)-algorithm-based method to derive an initial estimation of magnetic susceptibility, a fast quantitative susceptibility mapping method to estimate the susceptibility boundaries, and an iterative approach to estimate the susceptibility artifact from ill-conditioned k-space regions only. With a fixed set of parameters for the initial susceptibility estimation and subsequent streaking artifact estimation and removal, the method provides an unbiased estimate of tissue susceptibility with negligible streaking artifacts, as compared to multi-orientation QSM reconstruction. This method allows for improved delineation of white matter lesions in patients with multiple sclerosis and small structures of the human brain with excellent anatomical details. The proposed methodology can be extended to other existing QSM algorithms.

  14. Spatiotemporal variability of modern precipitation δ18O in the central Andes and implications for paleoclimate and paleoaltimetry estimates

    Science.gov (United States)

    Fiorella, Richard P.; Poulsen, Christopher J.; Pillco Zolá, Ramiro S.; Barnes, Jason B.; Tabor, Clay R.; Ehlers, Todd A.

    2015-05-01

    Understanding the patterns of rainfall isotopic composition in the central Andes is hindered by sparse observations. Despite limited observational data, stable isotope tracers have been commonly used to constrain modern-to-ancient Andean atmospheric processes, as well as to reconstruct paleoclimate and paleoaltimetry histories. Here, we present isotopic compositions of precipitation (δ18Op and δDp) from 11 micrometeorological stations located throughout the Bolivian Altiplano and along its eastern flank at ~21.5°S. We collected and isotopically analyzed 293 monthly bulk precipitation samples (August 2008 to April 2013). δ18Op values ranged from -28.0‰ to 9.6‰, with prominent seasonal cycles expressed at all stations. We observed a strong relationship between δ18Op and elevation, though it varies widely in time and space. Constraints on air sourcing estimated from atmospheric back trajectory calculations indicate that continental-scale climate dynamics control the interannual variability in δ18Op, with upwind precipitation anomalies having the largest effect. The impact on the central Andes of precipitation anomalies in distant air source regions is in turn modulated by the Bolivian High. The importance of the Bolivian High is most clearly observed on the southern Bolivian Altiplano. However, monthly variability among Altiplano stations can exceed 10‰ in δ18Op on the plateau and cannot be explained by elevation or source variability, indicating a nontrivial role for local-scale effects on short timescales. The strong influence of atmospheric circulation on central Andean δ18Op requires that paleoclimate and paleoaltimetry studies consider the role of South American atmospheric paleocirculation in their interpretation of stable isotopic values as proxies.

  15. Quantitative Risk reduction estimation Tool For Control Systems, Suggested Approach and Research Needs

    Energy Technology Data Exchange (ETDEWEB)

    Miles McQueen; Wayne Boyer; Mark Flynn; Sam Alessi

    2006-03-01

    For the past year we have applied a variety of risk assessment technologies to evaluate the risk to critical infrastructure from cyber attacks on control systems. More recently, we identified the need for a stand-alone control system risk reduction estimation tool to provide owners and operators of control systems with a more usable, reliable, and credible method for managing the risks from cyber attack. Risk is defined as the probability of a successful attack times the value of the resulting loss, typically measured in lives and dollars. Qualitative and ad hoc techniques for measuring risk do not provide sufficient support for cost-benefit analyses associated with cyber security mitigation actions. To address the need for better quantitative risk reduction models, we surveyed previous quantitative risk assessment research; evaluated currently available tools; developed new quantitative techniques [17] [18]; implemented a prototype analysis tool to demonstrate how such a tool might be used; used the prototype to test a variety of underlying risk calculation engines (e.g. attack tree, attack graph); and identified technical and research needs. We concluded that significant gaps still exist and difficult research problems remain for quantitatively assessing the risk to control system components and networks, but that a usable quantitative risk reduction estimation tool is not beyond reach.
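
    The underlying risk arithmetic is simple. A hedged sketch with placeholder probabilities and loss values (not figures from the report):

```python
# risk = P(successful attack) x value of resulting loss, so the risk
# reduction from a mitigation is the difference in that product before
# and after the mitigation is applied.

def risk(p_attack, loss):
    return p_attack * loss

def risk_reduction(p_before, p_after, loss):
    return risk(p_before, loss) - risk(p_after, loss)

loss_usd = 2_000_000.0   # assumed loss from one successful attack
delta = risk_reduction(p_before=0.10, p_after=0.02, loss=loss_usd)
print(round(delta, 2))   # expected loss avoided by the mitigation
```

    A cost-benefit analysis would then compare this expected loss avoided against the cost of the mitigation itself.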

  16. Estimating statistics of European wet and dry spells and associated precipitation extremes - interannual variability and trends

    Science.gov (United States)

    Zolina, O.; Simmer, C.; Belyaev, K.; Gulev, S.; Koltermann, K. P.

    2013-12-01

    Probability distributions of the durations of wet and dry spells were modeled by applying a truncated geometric distribution. This was also extended to the fractional truncated geometric distribution, which allows discrimination between the roles of a changing number of wet days and of a regrouping of wet and dry days in forming the synoptic structure of precipitation. Analyses were performed using two collections of daily rain gauge data, namely ECA (about 1000 stations) and the regional German DWD network (more than 6000 stations), for the period from 1950 to 2009. Wet spells exhibit a statistically significant lengthening over northern Europe and central European Russia, which is especially pronounced in winter, when the mean duration of wet periods increased by 15%-20%. In summer, wet spells become shorter over Scandinavia and northern Russia. The duration of dry spells decreases over Scandinavia and southern Europe in both winter and summer. Climate tendencies in extreme wet and dry spell durations may not necessarily follow those in mean characteristics. The changing numbers of wet days cannot explain the long-term variability in the duration of wet and dry periods; the observed changes are mainly due to the regrouping of wet and dry days. The tendencies in the duration of wet and dry spells have been analyzed for a number of European areas. Over the Netherlands, both wet and dry periods lengthen during the cold and the warm season. A simultaneous shortening of wet and dry periods is found in southern Scandinavia in summer. Over France and central southern Europe during both winter and summer, and over the Scandinavian Atlantic coast in summer, opposite tendencies in the duration of wet and dry spells were identified. Growing durations of wet spells are associated with more intense precipitation events, while precipitation during shorter wet spells becomes weaker.
Both analyses of relatively coarse resolution ECA data and high resolution DWD station network
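
    The basic distributional idea can be sketched by fitting a plain geometric law to wet-spell lengths; the truncated and fractional variants used above add a finite-length cutoff, and the spell data below are illustrative:

```python
# Wet-spell durations modeled as geometric: P(L = k) = p * (1 - p)**(k - 1).
# The maximum-likelihood estimate of p is simply 1 / (mean spell length).

def fit_geometric(spell_lengths):
    mean_len = sum(spell_lengths) / len(spell_lengths)
    return 1.0 / mean_len        # MLE for the geometric parameter p

def pmf(k, p):
    return p * (1.0 - p) ** (k - 1)

wet_spells = [1, 1, 2, 1, 3, 2, 1, 4, 2, 1, 1, 5]   # illustrative, in days
p = fit_geometric(wet_spells)
print(round(p, 3), round(pmf(1, p), 3), round(pmf(3, p), 3))
```

    Trends in the fitted p over time then separate changes in spell structure from changes in the simple count of wet days.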

  17. Multi-scale Quantitative Precipitation Forecasting Using Nonlinear and Nonstationary Teleconnection Signals and Artificial Neural Network Models

    Science.gov (United States)

    Global sea surface temperature (SST) anomalies can affect terrestrial precipitation via ocean-atmosphere interaction known as climate teleconnection. Non-stationary and non-linear characteristics of the ocean-atmosphere system make the identification of the teleconnection signals...

  18. A simulation study of the recession coefficient for antecedent precipitation index. [soil moisture and water runoff estimation

    Science.gov (United States)

    Choudhury, B. J.; Blanchard, B. J.

    1981-01-01

    The antecedent precipitation index (API) is a useful indicator of soil moisture conditions for watershed runoff calculations, and recent attempts to correlate this index with spaceborne microwave observations have been fairly successful. It is shown that the prognostic equation for soil moisture used in some atmospheric general circulation models, together with the Thornthwaite-Mather parameterization of actual evapotranspiration, leads to API equations. The recession coefficient for API is found to depend on climatic factors through potential evapotranspiration and on soil texture through the field capacity and the permanent wilting point. Climatological data for Wisconsin, together with a recently developed model for global insolation, are used to simulate the annual trend of the recession coefficient. Good quantitative agreement is shown with the observed trend at the Fennimore and Colby watersheds in Wisconsin. It is suggested that API could be a unifying vocabulary for watershed and atmospheric general circulation modelers.
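
    The API itself follows the classic recursion API_t = k * API_(t-1) + P_t, where k is the recession coefficient whose climatic and soil dependence the abstract discusses. A minimal sketch with an assumed k:

```python
# Daily antecedent precipitation index: yesterday's API decays by the
# recession coefficient k, and today's precipitation P is added.

def api_series(precip, k, api0=0.0):
    """Daily API from daily precipitation (mm) and recession coefficient k."""
    api = api0
    out = []
    for p in precip:
        api = k * api + p
        out.append(api)
    return out

daily_p = [0.0, 12.0, 0.0, 0.0, 5.0, 0.0]   # illustrative daily totals, mm
series = api_series(daily_p, k=0.9)
print([round(a, 2) for a in series])
```

    On dry days the index decays geometrically, so a seasonally varying k (as simulated in the study) directly controls how quickly the soil moisture memory fades.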

  19. Estimating quantitative genetic parameters in wild populations: a comparison of pedigree and genomic approaches.

    Science.gov (United States)

    Bérénos, Camillo; Ellis, Philip A; Pilkington, Jill G; Pemberton, Josephine M

    2014-07-01

    The estimation of quantitative genetic parameters in wild populations is generally limited by the accuracy and completeness of the available pedigree information. Using relatedness at genomewide markers can potentially remove this limitation and lead to less biased and more precise estimates. We estimated heritability, maternal genetic effects and genetic correlations for body size traits in an unmanaged long-term study population of Soay sheep on St Kilda using three increasingly complete and accurate estimates of relatedness: (i) Pedigree 1, using observation-derived maternal links and microsatellite-derived paternal links; (ii) Pedigree 2, using SNP-derived assignment of both maternity and paternity; and (iii) whole-genome relatedness at 37 037 autosomal SNPs. In initial analyses, heritability estimates were strikingly similar for all three methods, while standard errors were systematically lower in analyses based on Pedigree 2 and genomic relatedness. Genetic correlations were generally strong, differed little between the three estimates of relatedness and the standard errors declined only very slightly with improved relatedness information. When partitioning maternal effects into separate genetic and environmental components, maternal genetic effects found in juvenile traits increased substantially across the three relatedness estimates. Heritability declined compared to parallel models where only a maternal environment effect was fitted, suggesting that maternal genetic effects are confounded with direct genetic effects and that more accurate estimates of relatedness were better able to separate maternal genetic effects from direct genetic effects. We found that the heritability captured by SNP markers asymptoted at about half the SNPs available, suggesting that denser marker panels are not necessarily required for precise and unbiased heritability estimates. Finally, we present guidelines for the use of genomic relatedness in future quantitative genetics

  20. ESOLIP – estimate of solid and liquid precipitation at sub-daily time resolution by combining snow height and rain gauge measurements

    Directory of Open Access Journals (Sweden)

    E. Mair

    2013-07-01

    Full Text Available Measuring precipitation in mountain areas is a demanding task, but essential for hydrological and environmental applications. Especially in small Alpine catchments with short hydrological response times, precipitation data with high temporal resolution are required for a better understanding of the hydrological cycle. Since most climate/meteorological stations are situated at the easily accessible bottoms of valleys, and the few heated rain gauges installed at higher elevation sites are problematic in winter conditions, an accurate quantification of winter (snow) precipitation at high elevations remains difficult. However, there is an increasing number of micro-meteorological stations and snow height sensors at high elevation locations in Alpine catchments. To benefit from data of such stations, an improved approach to estimate solid and liquid precipitation (ESOLIP) is proposed. ESOLIP allows gathering hourly precipitation data throughout the year by using unheated rain gauge data, careful filtering of snow height sensors, and standard meteorological data (air temperature, relative humidity, global shortwave radiation, wind speed). ESOLIP was validated at a well-equipped test site in Stubai Valley (Tyrol, Austria), comparing results to winter precipitation measured with a snow pillow and a heated rain gauge. The snow height filtering routine and indicators for possible precipitation were tested at a field site in Matsch Valley (South Tyrol, Italy). Results show a good match with measured data because variable snow density is taken into account, which is important when working with freshly fallen snow. Furthermore, the results show the need for accurate filtering of the noise in the snow height signal, and they confirm the unreliability of heated rain gauges for estimating winter precipitation. The described improved precipitation estimate ESOLIP at sub-daily time resolution is helpful for precipitation analysis and for several hydrological applications
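
    The core idea can be sketched as a per-step rain/snow split. This is a heavily simplified illustration (the real ESOLIP also filters snow-height noise and uses humidity, radiation and wind), and the temperature threshold and fresh-snow density here are assumptions:

```python
# Per time step: when it is warm, take liquid precipitation from the unheated
# gauge; when it is cold, infer solid precipitation from the snow-height
# increase times an assumed fresh-snow density.

def precip_step(t_air_c, gauge_mm, dh_snow_m, rho_snow=100.0, rho_water=1000.0):
    """Return precipitation for one step in mm water equivalent."""
    if t_air_c > 1.0:                       # assumed rain/snow threshold, deg C
        return gauge_mm
    if dh_snow_m > 0.0:                     # fresh snow accumulated
        return dh_snow_m * 1000.0 * rho_snow / rho_water
    return 0.0

rain = precip_step(5.0, 2.4, 0.0)     # warm step: use the gauge reading
snow = precip_step(-3.0, 0.0, 0.02)   # cold step: 2 cm new snow at 100 kg/m3
print(rain, round(snow, 2))
```

    Using a variable (rather than fixed) fresh-snow density is precisely the refinement the abstract credits for the good match with snow-pillow data.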

  1. EPSAT-SG: a satellite method for precipitation estimation; its concepts and implementation for the AMMA experiment

    Directory of Open Access Journals (Sweden)

    J. C. Bergès

    2010-01-01

    Full Text Available This paper presents a new rainfall estimation method, EPSAT-SG, which is a framework for method design. The first implementation has been carried out to meet the requirements of the AMMA database over a West African domain. The rainfall estimation relies on two intermediate products: a rainfall probability and a rainfall potential intensity. The first is computed from MSG/SEVIRI by a feed-forward neural network. First evaluation results show better properties than direct precipitation intensity assessment by geostationary satellite infrared sensors. The second product can be interpreted as a conditional rainfall intensity and, in the described implementation, it is extracted from GPCP-1dd. Various implementation options are discussed, and comparison of this embedded product with 3B42 estimates demonstrates the importance of properly managing the temporal discontinuity. The resulting accumulated rainfall field can be presented as a GPCP downscaling. A validation based on ground data supplied by AGRHYMET (Niamey) indicates that the estimation error has been reduced in this process. The described method could easily be adapted to other geographical areas and operational environments.

  2. Study on quantile estimates of extreme precipitation and their spatiotemporal consistency adjustment over the Huaihe River basin

    Science.gov (United States)

    Shao, Yuehong; Wu, Junmei; Li, Min

    2017-01-01

    The quantile estimates and spatiotemporal consistency of extreme precipitation are studied by regional linear frequency analysis for the Huaihe River basin in China. First, the study area is categorized into six homogeneous regions by using cluster analysis, a heterogeneity measure, and a discordancy measure. In the next step, we determine the optimum distribution for each homogeneous region by using two criteria: Monte Carlo simulations and the root-mean-square error (RMSE) of the sample L-moments. A diagram of L-moment ratios is used to further judge and validate the optimum distribution. The generalized extreme value (GEV), generalized normal (GNO), and generalized logistic (GLO) distributions for the 24-h duration are determined to be the more appropriate distributions based on the two criteria, the L-moment ratio plot, and the tail thickness of the curve in adjacent regions. A summary assessment can provide a more reasonable distribution, which avoids arbitrary results from a single test. An important practical element of this study that was missing from previous works is the quantile spatiotemporal consistency analysis, which helps identify non-monotonicity among quantiles at different durations and reduces the gradient of estimates in the adjacent regions. Abnormality and spatial discontinuity can be removed by distributing the surplus of the ratio and two different interpolations. A complete set of spatiotemporally consistent quantile estimates for various durations (24 h, 3 days, 5 days, and 7 days) and return periods (from 2 to 1000 years) can be obtained by using the abovementioned method in the study area, in agreement with the observed precipitation extremes. This will provide an important basis for hydrometeorological research, which is of significant scientific and practical merit.
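
    The main building blocks (sample L-moments via probability-weighted moments, a GEV fit using Hosking's widely used approximation for the shape parameter, and quantiles for chosen return periods) can be sketched as follows on synthetic data; this is a generic illustration, not the basin analysis itself:

```python
import math
import random

def sample_lmoments(data):
    """First two sample L-moments and L-skewness t3 (x sorted ascending)."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(j * x[j] for j in range(n)) / (n * (n - 1))
    b2 = sum(j * (j - 1) * x[j] for j in range(n)) / (n * (n - 1) * (n - 2))
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2

def fit_gev(l1, l2, t3):
    """GEV parameters from L-moments (Hosking's shape approximation)."""
    c = 2.0 / (3.0 + t3) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c * c
    alpha = l2 * k / ((1 - 2.0 ** (-k)) * math.gamma(1 + k))
    xi = l1 - alpha * (1 - math.gamma(1 + k)) / k
    return xi, alpha, k

def gev_quantile(xi, alpha, k, return_period):
    f = 1.0 - 1.0 / return_period        # non-exceedance probability
    return xi + alpha * (1.0 - (-math.log(f)) ** k) / k

# Synthetic annual-maximum sample drawn from a known GEV by inverse transform
random.seed(1)
xi0, a0, k0 = 50.0, 10.0, 0.1
sample = [xi0 + a0 * (1.0 - (-math.log(random.random())) ** k0) / k0
          for _ in range(2000)]

xi, alpha, k = fit_gev(*sample_lmoments(sample))
quantiles = {t: gev_quantile(xi, alpha, k, t) for t in (2, 100, 1000)}
print(round(quantiles[2], 1), round(quantiles[100], 1), round(quantiles[1000], 1))
```

    In a regional analysis, the L-moments would be computed from normalized records pooled across a homogeneous region rather than from a single site.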

  3. Enhancing Global Land Surface Hydrology Estimates from the NASA MERRA Reanalysis Using Precipitation Observations and Model Parameter Adjustments

    Science.gov (United States)

    Reichle, Rolf; Koster, Randal; DeLannoy, Gabrielle; Forman, Barton; Liu, Qing; Mahanama, Sarith; Toure, Ally

    2011-01-01

    The Modern-Era Retrospective analysis for Research and Applications (MERRA) is a state-of-the-art reanalysis that provides, in addition to atmospheric fields, global estimates of soil moisture, latent heat flux, snow, and runoff for 1979-present. This study introduces a supplemental and improved set of land surface hydrological fields ('MERRA-Land') generated by replaying a revised version of the land component of the MERRA system. Specifically, the MERRA-Land estimates benefit from corrections to the precipitation forcing with the Global Precipitation Climatology Project pentad product (version 2.1) and from revised parameters in the rainfall interception model, changes that effectively correct for known limitations in the MERRA land surface meteorological forcings. The skill (defined as the correlation coefficient of the anomaly time series) in land surface hydrological fields from MERRA and MERRA-Land is assessed here against observations and compared to the skill of the state-of-the-art ERA-Interim reanalysis. MERRA-Land and ERA-Interim root zone soil moisture skills (against in situ observations at 85 US stations) are comparable and significantly greater than that of MERRA. Throughout the northern hemisphere, MERRA and MERRA-Land agree reasonably well with in situ snow depth measurements (from 583 stations) and with snow water equivalent from an independent analysis. Runoff skill (against naturalized stream flow observations from 15 basins in the western US) of MERRA and MERRA-Land is typically higher than that of ERA-Interim. With a few exceptions, the MERRA-Land data appear more accurate than the original MERRA estimates and are thus recommended for those interested in using MERRA output for land surface hydrological studies.
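
    The skill metric defined above (the correlation coefficient of anomaly time series) can be sketched as follows; the monthly-climatology removal and the short illustrative series are assumptions, not MERRA data:

```python
import statistics

def anomalies(series, months):
    """Remove each calendar month's climatological mean from the series."""
    clim = {m: statistics.mean(v for v, mm in zip(series, months) if mm == m)
            for m in set(months)}
    return [v - clim[m] for v, m in zip(series, months)]

def anomaly_correlation(model, obs, months):
    """Pearson correlation of the two anomaly time series."""
    a, b = anomalies(model, months), anomalies(obs, months)
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return cov / var

months = [1, 2, 1, 2, 1, 2, 1, 2]
obs = [0.30, 0.20, 0.34, 0.26, 0.28, 0.18, 0.36, 0.24]    # e.g. soil moisture
model = [0.28, 0.22, 0.33, 0.27, 0.26, 0.20, 0.34, 0.25]
r = anomaly_correlation(model, obs, months)
print(round(r, 3))
```

    Removing the climatology first means the score rewards getting the interannual variations right, not merely the seasonal cycle.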

  6. Estimating spatially and temporally varying recharge and runoff from precipitation and urban irrigation in the Los Angeles Basin, California

    Science.gov (United States)

    Hevesi, Joseph A.; Johnson, Tyler D.

    2016-10-17

    A daily precipitation-runoff model, referred to as the Los Angeles Basin watershed model (LABWM), was used to estimate recharge and runoff for a 5,047 square kilometer study area that included the greater Los Angeles area and all surface-water drainages potentially contributing recharge to a 1,450 square kilometer groundwater-study area underlying the greater Los Angeles area, referred to as the Los Angeles groundwater-study area. The recharge estimates for the Los Angeles groundwater-study area included spatially distributed recharge in response to the infiltration of precipitation, runoff, and urban irrigation, as well as mountain-front recharge from surface-water drainages bordering the groundwater-study area. The recharge and runoff estimates incorporated a new method for estimating urban irrigation, consisting of residential and commercial landscape watering, based on land use and the percentage of pervious land area. The LABWM used a 201.17-meter gridded discretization of the study area to represent spatially distributed climate and watershed characteristics affecting the surface and shallow sub-surface hydrology for the Los Angeles groundwater study area. Climate data from a local network of 201 monitoring sites and published maps of 30-year-average monthly precipitation and maximum and minimum air temperature were used to develop the climate inputs for the LABWM. Published maps of land use, land cover, soils, vegetation, and surficial geology were used to represent the physical characteristics of the LABWM area. The LABWM was calibrated to available streamflow records at six streamflow-gaging stations. Model results for a 100-year target-simulation period, from water years 1915 through 2014, were used to quantify and evaluate the spatial and temporal variability of water-budget components, including evapotranspiration (ET), recharge, and runoff. The largest outflow of water from the LABWM was ET; the 100-year average ET rate of 362 millimeters per year (mm

  7. Empirical estimates of size-resolved precipitation scavenging coefficients for ultrafine particles

    Science.gov (United States)

    Pryor, S. C.; Joerger, V. M.; Sullivan, R. C.

    2016-10-01

    Below-cloud scavenging coefficients for ultrafine particles (UFP) exhibit comparatively large uncertainties, in part because of the limited availability of observational data sets from which robust parameterizations can be derived or that can be used to evaluate output from numerical models. Long time series of measured near-surface UFP size distributions and precipitation intensity from the Midwestern USA are used here to explore uncertainties in scavenging coefficients and to test both the generalizability of a previous empirical parameterization developed using similar data from a boreal forest in Finland (Laakso et al., 2003) and whether a more parsimonious formulation can be developed. Scavenging coefficients (λ) over an ensemble of 95 rain events (with a median intensity of 1.56 mm h^-1) and 104 particle diameter (Dp) classes (from 10 to 400 nm) indicate a mean value of 3.4 × 10^-5 s^-1 (with a standard error of 1.1 × 10^-6 s^-1) and a median of 1.9 × 10^-5 s^-1 (interquartile range: -2.0 × 10^-5 to 7.5 × 10^-5 s^-1). The median scavenging coefficients for Dp: 10-400 nm computed over all 95 rain events exhibit close agreement with the empirical parameterization proposed by Laakso et al. (2003). They decline from ∼4.1 × 10^-5 s^-1 for Dp of 10-19 nm to ∼1.6 × 10^-5 s^-1 for Dp of 80-113 nm, and show an increasing tendency for Dp > 200 nm.
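
    A common way to obtain such empirical scavenging coefficients is from the decay of particle number concentration during rain, N(t) = N0 · exp(-λt). A hedged sketch with made-up concentrations:

```python
import math

# If scavenging removes particles exponentially, N(t) = N0 * exp(-lam * t),
# then lam = ln(N_before / N_after) / dt for a rain period of duration dt.

def scavenging_coefficient(n_before, n_after, dt_s):
    """Scavenging coefficient lam in s^-1 from concentrations across a rain period."""
    return math.log(n_before / n_after) / dt_s

# Illustrative: 80 nm particle concentration drops from 5200 to 4600 cm^-3
# over a 1-hour rain event.
lam = scavenging_coefficient(5200.0, 4600.0, 3600.0)
print(f"{lam:.2e}")   # order 10^-5 s^-1, comparable to the values above
```

    Noise in the before/after concentrations (advection, new particle formation) is what produces the occasional negative values reflected in the interquartile range quoted above.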

  8. Continuous photometric observations at ENEA base in Lampedusa to estimate precipitable water

    Directory of Open Access Journals (Sweden)

    S. Teggi

    2003-06-01

    Full Text Available Water vapour is a variable component of the atmosphere both in space and time. It is one of the most important components because of its effects in many fields: meteorology, climatology, remote sensing, energy budget, hydrology, etc. This work compares radiometric (sun photometer) readings, Global Positioning System (GPS) data, and data forecasted by a meteorological model. The aim is to understand if GPS measurements may help Numerical Weather Prediction (NWP) models. It is well known that GPS measurements are affected by the so-called tropospheric delay. Part of it, the so-called wet delay, is related mainly to the amount of water vapour along the path of the GPS signal through the troposphere. Precise knowledge of the abundance of water vapour, in space and time, is important for NWP models because water vapour is the precursor of precipitation. Despite the high variability of water vapour compared to other meteorological fields, like pressure and wind, water vapour observations are scarce, so that additional measurements of water vapour are expected to benefit meteorology. A new sun photometer, which is part of the AERONET (AErosol and RObotic NETwork) program, has been installed at the ENEA (Ente per le Nuove tecnologie, l'Energia e l'Ambiente) base on Lampedusa Island. The sun photometer is quite close (less than 4 km) to an ASI (Agenzia Spaziale Italiana) GPS permanent receiver. A long record (summer period of the year 2000) of sun photometric measurements is available for the station at Lampedusa. We found that the GPS and sun photometric data are better correlated (std. dev. about 10 mm) for the wet delay than are the GPS measurements with the NWP model predictions. This is an indication that GPS delay data may contain information useful for weather prediction.

  9. Comparison of the TRMM Precipitation Radar rainfall estimation with ground-based disdrometer and radar measurements in South Greece

    Science.gov (United States)

    Ioannidou, Melina P.; Kalogiros, John A.; Stavrakis, Adrian K.

    2016-11-01

    The performance of the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) rainfall estimation algorithm is assessed, locally, on the island of Crete, south Greece, using data from a 2D-video disdrometer and a ground-based, X-band, polarimetric radar. A three-parameter, normalized Gamma drop size distribution is fitted to the disdrometer rain spectra; the latter are classified into stratiform and convective rain types characterized by different relations between distribution parameters. The method of moments estimates the distribution parameters more accurately than the best-fit technique, which agrees better with, but is also more biased by, the observed droplet distribution at large diameter values. Power laws between the radar reflectivity factor (Z) and the rainfall rate (R) are derived from the disdrometer data. A significant diversity of the prefactor and the exponent of the estimated power laws is observed, depending on the scattering model and the regression technique. The Z-R relationships derived from the disdrometer data are compared to those obtained from TRMM-PR data. Generally, the power laws estimated from the two datasets are different. Specifically, the greater prefactor found for the disdrometer data suggests an overestimation of rainfall rate by the TRMM-PR algorithm for light and moderate stratiform rain, which was the main rain type in the disdrometer dataset. Finally, contemporaneous data from the TRMM-PR and a ground-based, X-band, polarimetric radar are analyzed. Comparison of the corresponding surface rain rates for a rain event with convective characteristics indicates a large variability of R within a single TRMM-PR footprint, which typically comprises several hundred radar pixels. Thus, the coarse spatial resolution of TRMM-PR may cause significant local peaks of convective rain to be missed.
    Also, it was found that the high temporal variability of convective rain may introduce significant errors in the estimation of bias of
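
    Deriving a Z-R power law Z = a·R^b from disdrometer pairs is commonly done by least squares in log-log space. A sketch with synthetic, Marshall-Palmer-like data (real spectra would supply the Z and R values):

```python
import numpy as np

# Hypothetical disdrometer-derived pairs of reflectivity Z (mm^6 m^-3)
# and rain rate R (mm h^-1)
R = np.array([0.5, 1.2, 2.5, 5.0, 10.0, 20.0, 45.0])
Z = 200.0 * R ** 1.6  # synthetic data following a Marshall-Palmer-like law

# Fit Z = a * R^b by linear regression in log-log space:
# ln Z = ln a + b * ln R
b, log_a = np.polyfit(np.log(R), np.log(Z), 1)
a = np.exp(log_a)
print(f"Z = {a:.0f} * R^{b:.2f}")
```

On this noise-free synthetic data the fit recovers the generating prefactor and exponent; with real spectra, the scatter in the recovered (a, b) is exactly the diversity the abstract describes.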

  10. Temporal disaggregation of satellite-derived monthly precipitation estimates and the resulting propagation of error in partitioning of water at the land surface

    Directory of Open Access Journals (Sweden)

    S.A. Margulis

    2001-01-01

    Full Text Available Global estimates of precipitation can now be made using data from a combination of geosynchronous and low-Earth-orbit satellites. However, the revisit patterns of polar-orbiting satellites and the need to sample mixed-cloud scenes from geosynchronous satellites coarsen the temporal resolution to the monthly scale. There are prohibitive limitations to the applicability of monthly-scale aggregated precipitation estimates in many hydrological applications. The nonlinear and threshold dependencies of surface hydrological processes on precipitation may cause the hydrological response of the surface to vary considerably with the intermittent temporal structure of the forcing. Therefore, to make the monthly satellite data useful for hydrological applications (i.e. water balance studies, rainfall-runoff modelling, etc.), it is necessary to disaggregate the monthly precipitation estimates into shorter time intervals so that they may be used in surface hydrology models. In this study, two simple statistical disaggregation schemes are developed for use with monthly precipitation estimates provided by satellites. The two techniques are shown to perform relatively well in introducing a reasonable temporal structure into the disaggregated time series. An ensemble of disaggregated realisations was routed through two land surface models of varying complexity so that the error propagation that takes place over the course of the month could be characterised. Results suggest that one of the proposed disaggregation schemes can be used in hydrological applications without introducing significant error. Keywords: precipitation, temporal disaggregation, hydrological modelling, error propagation
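
    One of many possible simple statistical disaggregation schemes (not necessarily either of the two developed in the paper) can be sketched as a random wet-day pattern with amounts rescaled to preserve the monthly total; all parameter values below are illustrative:

```python
import numpy as np

def disaggregate_month(monthly_total_mm, n_days=30, wet_fraction=0.3, seed=0):
    """Toy statistical disaggregation: draw wet/dry days, give wet days
    exponentially distributed amounts, then rescale so the monthly
    total is preserved exactly."""
    rng = np.random.default_rng(seed)
    wet = rng.random(n_days) < wet_fraction
    amounts = np.where(wet, rng.exponential(1.0, n_days), 0.0)
    if amounts.sum() == 0:                    # guard: force one wet day
        amounts[rng.integers(n_days)] = 1.0
    return amounts * (monthly_total_mm / amounts.sum())

daily = disaggregate_month(90.0)              # 90 mm monthly total
print(round(daily.sum(), 6), int((daily > 0).sum()))
```

The rescaling step guarantees mass conservation, so only the intermittent structure (not the monthly amount) is stochastic.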

  11. Estimates of increased black carbon emissions from electrostatic precipitators during powdered activated carbon injection for mercury emissions control.

    Science.gov (United States)

    Clack, Herek L

    2012-07-03

    The behavior of mercury sorbents within electrostatic precipitators (ESPs) is not well understood, despite a decade or more of full-scale testing. Recent laboratory results suggest that powdered activated carbon exhibits somewhat different collection behavior than fly ash in an ESP, and particulate filters located at the outlet of ESPs have shown evidence of powdered activated carbon penetration during full-scale tests of sorbent injection for mercury emissions control. The present analysis considers a range of assumed differential ESP collection efficiencies for powdered activated carbon as compared to fly ash. Estimated emission rates of submicrometer powdered activated carbon are compared to estimated emission rates of particulate carbon on submicrometer fly ash, each corresponding to its respective collection efficiency. To the extent that any emitted powdered activated carbon exhibits size and optical characteristics similar to black carbon, such emissions could effectively constitute an increase in black carbon emissions from coal-based stationary power generation. The results reveal that even at the low injection rates associated with chemically impregnated carbons, submicrometer particulate carbon emissions can easily double if the submicrometer fraction of the native fly ash has a low carbon content. Increasing sorbent injection rates, larger collection-efficiency differentials relative to fly ash, and decreasing sorbent particle size all increase the estimated submicrometer particulate carbon emissions.
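
    The underlying mass balance is simple: whatever the ESP does not collect is emitted. With hypothetical feed rates and collection efficiencies (none taken from the paper), even a small capture differential multiplies the emitted submicrometer carbon:

```python
def emitted_rate(feed_rate_kg_h, collection_efficiency):
    """Mass emission rate past the ESP for a given collection efficiency."""
    return feed_rate_kg_h * (1.0 - collection_efficiency)

# Hypothetical numbers: activated carbon injected at 50 kg/h with 99.5%
# capture, vs submicrometer fly-ash carbon at 20 kg/h with 99.9% capture.
pac_out = emitted_rate(50.0, 0.995)     # powdered activated carbon out
ash_c_out = emitted_rate(20.0, 0.999)   # fly-ash particulate carbon out
print(pac_out / ash_c_out)              # ratio of the two emission rates
```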

  12. Applications of TRMM-based Multi-Satellite Precipitation Estimation for Global Runoff Simulation: Prototyping a Global Flood Monitoring System

    Science.gov (United States)

    Hong, Yang; Adler, Robert F.; Huffman, George J.; Pierce, Harold

    2008-01-01

    Advances in flood monitoring/forecasting have been constrained by the difficulty of estimating rainfall continuously over space (catchment-, national-, continental-, or even global-scale areas) and at flood-relevant time scales. With the recent availability of satellite rainfall estimates at fine time and space resolution, this paper describes a prototype research framework for global flood monitoring that combines real-time satellite observations with a database of global terrestrial characteristics through a hydrologically relevant modeling scheme. The four major components of the framework are (1) real-time precipitation input from the NASA TRMM-based Multi-satellite Precipitation Analysis (TMPA); (2) a central geospatial database to preprocess the land surface characteristics: water divides, slopes, soils, land use, flow directions, flow accumulation, drainage network, etc.; (3) a modified distributed hydrological model to convert rainfall to runoff and route the flow through the stream network in order to predict the timing and severity of the flood wave; and (4) an open-access web interface to quickly disseminate flood alerts for potential decision-making. Retrospective simulations for 1998-2006 demonstrate that the Global Flood Monitor (GFM) system performs consistently at both station and catchment levels. The GFM website (experimental version) has been running in near real time in an effort to offer a cost-effective solution to the ultimate challenge of building natural-disaster early warning systems for the data-sparse regions of the world. The interactive GFM website shows close-up maps of the flood risks overlaid on topography/population or integrated with the Google Earth visualization tool. One additional capability, which extends forecast lead time by assimilating quantitative precipitation forecasts (QPF) into the GFM, will also be implemented in the future.

  13. Intercomparison of PERSIANN-CDR and TRMM-3B42V7 precipitation estimates at monthly and daily time scales

    Science.gov (United States)

    Katiraie-Boroujerdy, Pari-Sima; Akbari Asanjan, Ata; Hsu, Kuo-lin; Sorooshian, Soroosh

    2017-09-01

    In the first part of this paper, monthly precipitation data from Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR) and the Tropical Rainfall Measuring Mission 3B42 algorithm Version 7 (TRMM-3B42V7) are evaluated over Iran using the Generalized Three-Cornered Hat (GTCH) method, which does not require reference data as input. The Climatic Research Unit (CRU) dataset is added to the GTCH evaluation as an independent gauge-based dataset; thus, the minimum requirement of three datasets for the method is satisfied. To ensure consistency of all datasets, the two satellite products were aggregated to 0.5° spatial resolution, the native resolution of CRU. The results show that PERSIANN-CDR has a higher signal-to-noise ratio (SNR) than TRMM-3B42V7 for monthly rainfall estimation, especially in the northern half of the country. All datasets showed low SNR in the mountainous area of southwestern Iran, as well as in the arid parts of the southeast region of the country. Additionally, in order to evaluate the efficacy of PERSIANN-CDR and TRMM-3B42V7 in capturing extreme daily-precipitation amounts, an in-situ rain-gauge dataset collected by the Islamic Republic of Iran Meteorological Organization (IRIMO) was employed. Given the sparsity of the rain gauges, only 0.25° pixels containing three or more gauges were used for this evaluation. There were 228 such pixels where daily and extreme rainfall from PERSIANN-CDR and TRMM-3B42V7 could be compared. TRMM-3B42V7 overestimates most of the intensity indices (correlation coefficient R between 0.7648 and 0.8311; root mean square error (RMSE) between 3.29 mm/day and 21.2 mm/5day), while PERSIANN-CDR underestimates these extremes (R between 0.6349 and 0.7791; RMSE between 3.59 mm/day and 30.56 mm/5day). Both satellite products show higher correlation coefficients and lower RMSEs for the annual mean of consecutive dry spells than for wet spells. The results show that TRMM-3B42V7
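
    The GTCH method generalizes the classic three-cornered hat idea, in which each of three datasets' error variances is recovered from the variances of their pairwise differences, assuming uncorrelated errors. A sketch of the classic (non-generalized) estimator on synthetic data:

```python
import numpy as np

def three_cornered_hat(x1, x2, x3):
    """Classic three-cornered hat: estimate each dataset's error variance
    from pairwise difference variances, assuming uncorrelated errors."""
    s12 = np.var(x1 - x2)
    s13 = np.var(x1 - x3)
    s23 = np.var(x2 - x3)
    return ((s12 + s13 - s23) / 2,
            (s12 + s23 - s13) / 2,
            (s13 + s23 - s12) / 2)

# Synthetic "monthly rainfall" observed by three products with error
# standard deviations of roughly 5, 10 and 15 mm.
rng = np.random.default_rng(1)
truth = rng.gamma(2.0, 30.0, 5000)
obs = [truth + rng.normal(0.0, s, 5000) for s in (5.0, 10.0, 15.0)]
v1, v2, v3 = three_cornered_hat(*obs)
print(np.sqrt([v1, v2, v3]).round(1))  # recovered error std devs
```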

  14. Optimizing Satellite-Based Precipitation Estimation for Nowcasting of Rainfall and Flash Flood Events over the South African Domain

    OpenAIRE

    Estelle de Coning

    2013-01-01

    The South African Weather Service is mandated to issue warnings of hazardous weather events, including those related to heavy precipitation, in order to safeguard life and property. Flooding and flash flood events are common in South Africa. Frequent updates and real-time availability of precipitation data are crucial to support hydrometeorological warning services. Satellite rainfall estimation provides a very important data source for flash flood guidance systems as well as nowcasting of pr...

  15. Estimating spatially and temporally varying recharge and runoff from precipitation and urban irrigation in the Los Angeles Basin, California

    Science.gov (United States)

    Hevesi, Joseph A.; Johnson, Tyler D.

    2016-10-17

    A daily precipitation-runoff model, referred to as the Los Angeles Basin watershed model (LABWM), was used to estimate recharge and runoff for a 5,047 square kilometer study area that included the greater Los Angeles area and all surface-water drainages potentially contributing recharge to a 1,450 square kilometer groundwater-study area underlying the greater Los Angeles area, referred to as the Los Angeles groundwater-study area. The recharge estimates for the Los Angeles groundwater-study area included spatially distributed recharge in response to the infiltration of precipitation, runoff, and urban irrigation, as well as mountain-front recharge from surface-water drainages bordering the groundwater-study area. The recharge and runoff estimates incorporated a new method for estimating urban irrigation, consisting of residential and commercial landscape watering, based on land use and the percentage of pervious land area. The LABWM used a 201.17-meter gridded discretization of the study area to represent spatially distributed climate and watershed characteristics affecting the surface and shallow sub-surface hydrology for the Los Angeles groundwater study area. Climate data from a local network of 201 monitoring sites and published maps of 30-year-average monthly precipitation and maximum and minimum air temperature were used to develop the climate inputs for the LABWM. Published maps of land use, land cover, soils, vegetation, and surficial geology were used to represent the physical characteristics of the LABWM area. The LABWM was calibrated to available streamflow records at six streamflow-gaging stations. Model results for a 100-year target-simulation period, from water years 1915 through 2014, were used to quantify and evaluate the spatial and temporal variability of water-budget components, including evapotranspiration (ET), recharge, and runoff. The largest outflow of water from the LABWM was ET; the 100-year average ET rate of 362 millimeters per year (mm

  16. Improvement and quantitative performance estimation of the back support muscle suit.

    Science.gov (United States)

    Muramatsu, Y; Umehara, H; Kobayashi, H

    2013-01-01

    We have been developing a wearable muscle suit for direct and physical motion support. The use of the McKibben artificial muscle has opened the way to the introduction of "muscle suits": compact, lightweight, reliable, wearable "assist-bots" enabling manual workers to lift and carry weights. Since back pain is the most serious problem for manual workers, this paper presents improvements to the back support muscle suit made during a feasibility study, together with a quantitative performance estimation. The improvements comprise the structure of the upper body frame, the method of attaching the suit to the body, and the addition of axes. In the experiments, we investigated the quantitative performance and efficiency of the back support muscle suit in vertical lifting of heavy weights by employing integral electromyography (IEMG). The results indicated that IEMG values were reduced by about 40% when using the muscle suit.
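
    IEMG is the time integral of the rectified EMG signal, so a uniformly lower muscle activation shows up directly as a proportionally lower IEMG. A sketch with synthetic traces (the 40% scaling is imposed here for illustration, not measured):

```python
import numpy as np

def iemg(emg_signal, dt):
    """Integrated EMG: integral of the rectified signal over the trial."""
    return np.sum(np.abs(emg_signal)) * dt

# Hypothetical EMG traces (arbitrary units, 1 kHz sampling)
rng = np.random.default_rng(0)
without_suit = rng.normal(0.0, 1.0, 5000)
with_suit = 0.6 * without_suit          # uniformly 40% lower activation
reduction = 1 - iemg(with_suit, 1e-3) / iemg(without_suit, 1e-3)
print(f"IEMG reduction: {reduction:.0%}")
```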

  17. Direct Estimation of Optical Parameters From Photoacoustic Time Series in Quantitative Photoacoustic Tomography.

    Science.gov (United States)

    Pulkkinen, Aki; Cox, Ben T; Arridge, Simon R; Goh, Hwan; Kaipio, Jari P; Tarvainen, Tanja

    2016-11-01

    Estimation of the optical absorption and scattering of a target is an inverse problem associated with quantitative photoacoustic tomography. Conventionally, the problem is solved in two stages. First, images of the initial pressure distribution created by absorption of a light pulse are formed based on acoustic boundary measurements. Then, the optical properties are determined based on these photoacoustic images. The optical stage of the inverse problem can thus suffer from, for example, artefacts caused by the acoustic stage. These could be caused by imperfections in the acoustic measurement setting, an example of which is a limited-view acoustic measurement geometry. In this work, the forward model of quantitative photoacoustic tomography is treated as a coupled acoustic and optical model, and the inverse problem is solved using a Bayesian approach. The spatial distributions of the optical properties of the imaged target are estimated directly from the photoacoustic time series in varying acoustic detection and optical illumination configurations. It is numerically demonstrated that estimation of the optical properties of the imaged target is feasible in a limited-view acoustic detection setting.

  18. Modified DTW for a quantitative estimation of the similarity between rainfall time series

    Science.gov (United States)

    Djallel Dilmi, Mohamed; Barthès, Laurent; Mallet, Cécile; Chazottes, Aymeric

    2017-04-01

    Full Text Available Precipitation results from complex meteorological phenomena and can be described as an intermittent process. The spatial and temporal variability of this phenomenon is significant and covers large scales. To analyze and model this variability and/or structure, several studies use a network of rain gauges providing multiple time series of precipitation measurements. To compare these time series, the authors compute, for each series, parameters such as the PDF, rain peak intensity, occurrence, amount, duration and intensity. However, despite the calculation of these parameters, the comparison between two series of measurements remains qualitative. Because of advection processes, when different sensors of an observation network measure precipitation time series that are identical in terms of intermittency or intensities, there is a time lag between the measured series. Analyzing and extracting relevant information on physical phenomena from these precipitation time series implies the development of automatic analytical methods capable of comparing two time series of precipitation measured by different sensors or at two different locations, and thus quantifying their difference/similarity. The limits of the Euclidean distance for measuring the similarity between precipitation time series have been well demonstrated and explained (e.g. the Euclidean distance is very sensitive to phase-shift effects: between two identical but slightly shifted time series, this distance is not negligible). To quantify and analyze these time lags, correlation functions are well established, normalized and commonly used to measure the spatial dependences required by many applications. However, authors generally observe that there is always considerable scatter in the inter-rain-gauge correlation coefficients obtained from individual pairs of rain gauges. Because of a substantial dispersion of estimated time lag, the
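
    The standard (unmodified) DTW recurrence that such work builds on aligns two series through a cumulative cost matrix, so a pure time shift between otherwise identical rain events costs nothing, unlike the Euclidean distance. A minimal sketch:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D series; tolerant of
    the time lags that advection introduces between rain-gauge records."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

x = np.array([0, 0, 2, 5, 3, 0, 0], dtype=float)
y = np.array([0, 2, 5, 3, 0, 0, 0], dtype=float)   # same event, shifted
print(dtw_distance(x, y), np.abs(x - y).sum())     # DTW vs L1 distance
```

On this pair, DTW returns 0 while the point-wise L1 distance is 10, illustrating exactly the phase-shift sensitivity described above.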

  19. Detection and parameter estimation for quantitative trait loci using regression models and multiple markers

    Directory of Open Access Journals (Sweden)

    Schook Lawrence B

    2000-07-01

    Full Text Available Abstract A strategy of multi-step minimal conditional regression analysis has been developed to provide statistical testing and parameter estimation for a quantitative trait locus (QTL) that are unaffected by linked QTLs. The estimation of marker-QTL recombination frequency needs to consider only three cases: (1) the chromosome has only one QTL, (2) one side of the target QTL has one or more QTLs, and (3) either side of the target QTL has one or more QTLs. Analytical formulae were derived to estimate marker-QTL recombination frequency for each of the three cases. The formulae involve two flanking markers for case 1, two flanking markers plus a conditional marker for case 2, and two flanking markers plus two conditional markers for case 3. Each QTL variance and effect, and the total QTL variance, were also estimated using analytical formulae. Simulation data show that the formulae for estimating marker-QTL recombination frequency could be a useful statistical tool for fine QTL mapping. With 1,000 observations, a QTL could be mapped to a narrow chromosome region of 1.5 cM if no linked QTL is present, and to a 2.8 cM chromosome region if either side of the target QTL has at least one linked QTL.

  20. Novel Sessile Drop Software for Quantitative Estimation of Slag Foaming in Carbon/Slag Interactions

    Science.gov (United States)

    Khanna, Rita; Rahman, Mahfuzur; Leow, Richard; Sahajwalla, Veena

    2007-08-01

    Novel video-processing software has been developed for the sessile drop technique for rapid and quantitative estimation of slag foaming. The data processing was carried out in two stages: the first stage involved the initial transformation of digital video/audio signals into a format compatible with the computing software, and the second stage involved the computation of slag droplet volume and area of contact in a chosen video frame. Experimental results are presented on slag foaming from a synthetic graphite/slag system at 1550 °C. This technique can be used for determining the extent and stability of the foam as a function of time.

  1. Diagnosis and quantitative estimation of pulmonary congestion or edema by pulmonary CT numbers

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Shiro; Nakamoto, Takaaki

    1987-12-01

    Pulmonary computed tomography (CT) was performed in 25 patients with left heart failure and 10 healthy persons to diagnose pulmonary congestion or edema associated with left heart failure. In an analysis of histograms of pulmonary CT numbers obtained from CT scans, CT numbers from -650 to -750 H.U. were defined as indicating pulmonary edema. This allowed pulmonary edema to be estimated quantitatively at an early stage, when abnormal findings were not yet available on chest X-ray films or from pulmonary circulation studies. Histograms of CT numbers could be displayed in colors on CT scans. (Namekawa, K.).

  2. RESEARCH CONCERNING THE QUANTITATIVE AND QUALITATIVE ESTIMATION OF PHYSIOLOGICAL GROUPS OF BACTERIA IN PEAT SAMPLES

    Directory of Open Access Journals (Sweden)

    ADRIANA CRISTE

    2013-12-01

    Full Text Available The total aerobic microflora can be determined on solid media for aerobic bacteria, which reveals the quantity of microorganisms in the peat samples. The quantitative evaluation was done using solid nutritive media, which allow estimation of the number of CFU/g as well as observation of colony morphology, and the morphological and biochemical characterization of isolated strains. The evaluations were done by the dilution method, using selective liquid media. Every day the characteristic reaction of the respective group was observed, either through metabolization of the substrate or through the appearance of a catabolic product in the medium.

  3. Application of quantitative structure-property relationship analysis to estimate the vapor pressure of pesticides.

    Science.gov (United States)

    Goodarzi, Mohammad; Coelho, Leandro dos Santos; Honarparvar, Bahareh; Ortiz, Erlinda V; Duchowicz, Pablo R

    2016-06-01

    The application of molecular descriptors in describing Quantitative Structure Property Relationships (QSPR) for the estimation of vapor pressure (VP) of pesticides is of ongoing interest. In this study, QSPR models were developed using multiple linear regression (MLR) methods to predict the vapor pressure values of 162 pesticides. Several feature selection methods, namely the replacement method (RM), genetic algorithms (GA), stepwise regression (SR) and forward selection (FS), were used to select the most relevant molecular descriptors from a pool of variables. The optimum subset of molecular descriptors was used to build a QSPR model to estimate the vapor pressures of the selected pesticides. The replacement method improved the predictive ability for vapor pressures and was more reliable for feature selection on this set of pesticides. The results provided MLR models with satisfactory predictive ability that will be important for predicting vapor pressure values for compounds with unknown values. This study may open new opportunities for designing and developing new pesticides.
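
    The MLR core of such QSPR models is ordinary least squares on a descriptor matrix. A sketch with synthetic descriptors and log vapor pressures (real values would come from descriptor-calculation software and measured VP data):

```python
import numpy as np

# Hypothetical descriptor matrix (rows: pesticides, cols: descriptors)
# and log10 vapor pressures generated from known coefficients plus noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(40, 3))
true_coef = np.array([1.5, -0.8, 0.3])
y = X @ true_coef + 2.0 + rng.normal(0.0, 0.05, 40)

# Ordinary least squares fit of the MLR model y = b0 + X b
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef.round(2))  # intercept followed by descriptor coefficients
```

Feature selection (RM, GA, SR, FS) then amounts to choosing which descriptor columns enter X before this fit.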

  4. Quantitative estimation of activity and quality for collections of functional genetic elements.

    Science.gov (United States)

    Mutalik, Vivek K; Guimaraes, Joao C; Cambray, Guillaume; Mai, Quynh-Anh; Christoffersen, Marc Juul; Martin, Lance; Yu, Ayumi; Lam, Colin; Rodriguez, Cesar; Bennett, Gaymon; Keasling, Jay D; Endy, Drew; Arkin, Adam P

    2013-04-01

    The practice of engineering biology now depends on the ad hoc reuse of genetic elements whose precise activities vary across changing contexts. Methods are lacking for researchers to affordably coordinate the quantification and analysis of part performance across varied environments, as needed to identify, evaluate and improve problematic part types. We developed an easy-to-use analysis of variance (ANOVA) framework for quantifying the performance of genetic elements. For proof of concept, we assembled and analyzed combinations of prokaryotic transcription and translation initiation elements in Escherichia coli. We determined how estimation of part activity relates to the number of unique element combinations tested, and we show how to estimate expected ensemble-wide part activity from just one or two measurements. We propose a new statistic, biomolecular part 'quality', for tracking quantitative variation in part performance across changing contexts.

  5. Estimation of genetic parameters and detection of quantitative trait loci for metabolites in Danish Holstein milk

    DEFF Research Database (Denmark)

    Buitenhuis, Albert Johannes; Sundekilde, Ulrik; Poulsen, Nina Aagaard;

    2013-01-01

    Small components and metabolites in milk are significant for the utilization of milk, not only in dairy food production but also as disease predictors in dairy cattle. This study focused on estimation of genetic parameters and detection of quantitative trait loci for metabolites in bovine milk. For this purpose, milk samples were collected in mid lactation from 371 Danish Holstein cows in first to third parity. A total of 31 metabolites were detected and identified in bovine milk by using 1H nuclear magnetic resonance (NMR) spectroscopy. Cows were genotyped using a bovine high-density single nucleotide polymorphism (SNP) chip. Based on the SNP data, a genomic relationship matrix was calculated and used as a random factor in a model together with 2 fixed factors (herd and lactation stage) to estimate the heritability and breeding value for individual metabolites in the milk. Heritability was in the range of 0

  6. Quantitative Cyber Risk Reduction Estimation Methodology for a Small SCADA Control System

    Energy Technology Data Exchange (ETDEWEB)

    Miles A. McQueen; Wayne F. Boyer; Mark A. Flynn; George A. Beitel

    2006-01-01

    We propose a new methodology for obtaining a quick quantitative measurement of the risk reduction achieved when a control system is modified with the intent to improve cyber security defense against external attackers. The proposed methodology employs a directed graph called a compromise graph, where the nodes represent stages of a potential attack and the edges represent the expected time-to-compromise for differing attacker skill levels. Time-to-compromise is modeled as a function of known vulnerabilities and attacker skill level. The methodology was used to calculate risk reduction estimates for a specific SCADA system and for a specific set of control system security remedial actions. Despite an 86% reduction in the total number of vulnerabilities, the estimated time-to-compromise was increased only by about 3 to 30% depending on target and attacker skill level.
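
    The minimum expected time-to-compromise over such a directed compromise graph can be computed with Dijkstra's algorithm; the attack stages and edge times (in days, for one assumed skill level) below are hypothetical, not taken from the paper's SCADA case study:

```python
import heapq

def min_time_to_compromise(graph, start, target):
    """Shortest expected attack time over a compromise graph whose edge
    weights are expected times-to-compromise for a given skill level."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == target:
            return t
        if t > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, w in graph.get(node, []):
            nt = t + w
            if nt < dist.get(nxt, float("inf")):
                dist[nxt] = nt
                heapq.heappush(heap, (nt, nxt))
    return float("inf")

# Hypothetical compromise graph: stage -> [(next stage, expected days)]
graph = {
    "internet": [("dmz", 5.0), ("vpn", 12.0)],
    "dmz": [("historian", 4.0)],
    "vpn": [("historian", 1.0)],
    "historian": [("plc", 7.0)],
}
print(min_time_to_compromise(graph, "internet", "plc"))  # 16.0
```

Re-running with edge weights recomputed after remedial actions gives the before/after comparison that the risk-reduction estimate is built on.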

  7. Estimation of financial loss ratio for E-insurance:a quantitative model

    Institute of Scientific and Technical Information of China (English)

    钟元生; 陈德人; 施敏华

    2002-01-01

    In view of the risks of E-commerce and the insurance industry's response to them, this paper addresses one important element of insurance: estimation of the financial loss ratio, which is one of the most difficult problems facing the E-insurance industry. The paper proposes a quantitative model for estimating the E-insurance financial loss ratio. The model is based on gross income per enterprise and the CSI/FBI computer crime and security survey. The analysis results presented are reasonable and valuable for both the insurer and the insured, and thus can be accepted by both. We must point out that, under our assumptions, the financial loss ratio varied very little (0.233% in 1999 and 0.236% in 2000), although there was much variation in the main data of the CSI/FBI survey.

  8. Estimation of the interrelation of qualitative and quantitative characteristics affecting tourist numbers in the hospitality industry

    Directory of Open Access Journals (Sweden)

    Tatyana P. Levchenko

    2011-01-01

    Full Text Available The article considers methods for estimating the interrelation of qualitative and quantitative characteristics that affect tourist numbers in the hospitality industry, and presents up-to-date techniques for calculating these indicators.

  9. A hierarchical statistical model for estimating population properties of quantitative genes

    Directory of Open Access Journals (Sweden)

    Wu Rongling

    2002-06-01

    Full Text Available Abstract Background Earlier methods for detecting major genes responsible for a quantitative trait rely critically upon a well-structured pedigree in which the segregation pattern of genes exactly follow Mendelian inheritance laws. However, for many outcrossing species, such pedigrees are not available and genes also display population properties. Results In this paper, a hierarchical statistical model is proposed to monitor the existence of a major gene based on its segregation and transmission across two successive generations. The model is implemented with an EM algorithm to provide maximum likelihood estimates for genetic parameters of the major locus. This new method is successfully applied to identify an additive gene having a large effect on stem height growth of aspen trees. The estimates of population genetic parameters for this major gene can be generalized to the original breeding population from which the parents were sampled. A simulation study is presented to evaluate finite sample properties of the model. Conclusions A hierarchical model was derived for detecting major genes affecting a quantitative trait based on progeny tests of outcrossing species. The new model takes into account the population genetic properties of genes and is expected to enhance the accuracy, precision and power of gene detection.

  10. The quantitative estimation of the vulnerability of brick and concrete wall impacted by an experimental boulder

    Science.gov (United States)

    Zhang, J.; Guo, Z. X.; Wang, D.; Qian, H.

    2016-02-01

    There is little historic data about the vulnerability of damaged elements due to debris flow events in China. Therefore, it is difficult to quantitatively estimate the vulnerable elements suffered by debris flows. This paper is devoted to the research of the vulnerability of brick and concrete walls impacted by debris flows. An experimental boulder (an iron sphere) was applied to be the substitute of debris flow since it can produce similar shape impulse load on elements as debris flow. Several walls made of brick and concrete were constructed in prototype dimensions to physically simulate the damaged structures in debris flows. The maximum impact force was measured, and the damage conditions of the elements (including cracks and displacements) were collected, described and compared. The failure criterion of brick and concrete wall was proposed with reference to the structure characteristics as well as the damage pattern caused by debris flows. The quantitative estimation of the vulnerability of brick and concrete wall was finally established based on fuzzy mathematics and the proposed failure criterion. Momentum, maximum impact force and maximum impact bending moment were compared to be the best candidate for disaster intensity index. The results show that the maximum impact bending moment seems to be most suitable for the disaster intensity index in establishing vulnerability curve and formula.

  11. How reliable are satellite precipitation estimates for driving hydrological models: a verification study over the Mediterranean area

    Science.gov (United States)

    Camici, Stefania; Ciabatta, Luca; Massari, Christian; Brocca, Luca

    2017-04-01

    Floods are one of the most common and dangerous natural hazards, causing thousands of casualties and widespread damage worldwide every year. The main tool for assessing flood risk and reducing damage is hydrologic early warning systems, which forecast flood events by using real-time data obtained through ground monitoring networks (e.g., rain gauges and radars). However, the use of such data, mainly rainfall, presents some issues, primarily related to network density and the limited spatial representativeness of local measurements. A way to overcome these issues may be the use of satellite-based rainfall products (SRPs), which are nowadays available on a global scale at ever-increasing spatial/temporal resolution and accuracy. However, despite the large availability and increased accuracy of SRPs (e.g., the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA); the Satellite Application Facility on Support to Operational Hydrology and Water Management (H-SAF); and the recent Global Precipitation Measurement (GPM) mission), remotely sensed rainfall data are scarcely used in hydrological modelling, and only a small number of studies have been carried out to outline guidelines for using satellite data as input for hydrological modelling. Reasons may be related to: 1) the large bias characterizing satellite precipitation estimates, which depends on rainfall intensity and season; 2) the spatial/temporal resolution; 3) the timeliness, which is often insufficient for operational purposes; and 4) a general (often unjustified) skepticism of the hydrological community towards the use of satellite products for land applications. The objective of this study is to explore the feasibility of using SRPs in a lumped hydrologic model (MISDc, "Modello Idrologico Semi-Distribuito in continuo", Masseroni et al., 2017) over 10 basins in the Mediterranean area with different sizes and physiographic characteristics. Specifically

  12. A quantitative approach for comparing modeled biospheric carbon flux estimates across regional scales

    Directory of Open Access Journals (Sweden)

    D. N. Huntzinger

    2010-10-01

    Full Text Available Given the large differences between biospheric model estimates of regional carbon exchange, there is a need to understand and reconcile the predicted spatial variability of fluxes across models. This paper presents a set of quantitative tools that can be applied for comparing flux estimates in light of the inherent differences in model formulation. The presented methods include variogram analysis, variable selection, and geostatistical regression. These methods are evaluated in terms of their ability to assess and identify differences in spatial variability in flux estimates across North America among a small subset of models, as well as differences in the environmental drivers that appear to have the greatest control over the spatial variability of predicted fluxes. The examined models are the Simple Biosphere model (SiB 3.0), the Carnegie Ames Stanford Approach (CASA), and CASA coupled with the Global Fire Emissions Database (CASA GFEDv2), and the analyses are performed on model-predicted net ecosystem exchange, gross primary production, and ecosystem respiration. Variogram analysis reveals consistent seasonal differences in spatial variability among modeled fluxes at a 1°×1° spatial resolution. However, significant differences are observed in the overall magnitude of the carbon flux spatial variability across models, in both net ecosystem exchange and component fluxes. Results of the variable selection and geostatistical regression analyses suggest fundamental differences between the models in terms of the factors that control the spatial variability of predicted flux. For example, carbon flux is more strongly correlated with percent land cover in CASA GFEDv2 than in SiB or CASA. Some of these factors can be linked back to model formulation, and would have been difficult to identify simply by comparing net fluxes between models. Overall, the quantitative approach presented here provides a set of tools for comparing predicted grid-scale fluxes across
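    The variogram analysis mentioned above can be sketched with a generic empirical semivariogram estimator (not the authors' exact implementation; lag bins and inputs are illustrative):

```python
import numpy as np

def empirical_semivariogram(coords, values, bin_edges):
    """gamma(h) = 0.5 * mean((z_i - z_j)^2) over pairs whose separation
    distance falls within each lag bin."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    # pairwise separation distances and half squared differences
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sqd = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)   # count each pair once
    dist, sqd = dist[iu], sqd[iu]
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (dist >= lo) & (dist < hi)
        gamma.append(sqd[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)
```

    Applied to gridded monthly fluxes, the shape and sill of the resulting curve summarize how spatially variable each model's flux field is.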

  13. Application of short-wave infrared (SWIR) spectroscopy in quantitative estimation of clay mineral contents

    Science.gov (United States)

    You, Jinfeng; Xing, Lixin; Liang, Liheng; Pan, Jun; Meng, Tao

    2014-03-01

    Clay minerals are significant constituents of soil, which is necessary for life. This paper studied three types of clay minerals, kaolinite, illite, and montmorillonite, since they are not only the most common soil-forming materials but also important indicators of soil expansion and shrinkage potential. These clay minerals show diagnostic absorption bands resulting from vibrations of hydroxyl groups and structural water molecules in the SWIR wavelength region. The short-wave infrared reflectance spectra of the soils were obtained with a Portable Near Infrared Spectrometer (PNIS; spectral range: 1300~2500 nm, interval: 2 nm). Owing to its simplicity, speed, and non-destructive nature, SWIR spectroscopy has been widely used in geological prospecting, chemical engineering and many other fields. The aim of this study was to use multiple linear regression (MLR) and partial least squares (PLS) regression to establish optimized quantitative estimation models of the kaolinite, illite and montmorillonite contents from soil reflectance spectra. Here, the soil reflectance spectra mainly refer to the spectral reflectivity of soil (SRS) corresponding to the absorption-band positions (AP) of representative kaolinite, illite, and montmorillonite spectra from the USGS spectral library, the SRS corresponding to the AP of the soil spectra, and the overall soil spectrum reflectance values. The optimal estimation models of the three clay mineral contents showed satisfactory retrieval accuracy (kaolinite content: a Root Mean Square Error of Calibration (RMSEC) of 1.671 with a coefficient of determination (R2) of 0.791; illite content: an RMSEC of 1.126 with an R2 of 0.616; montmorillonite content: an RMSEC of 1.814 with an R2 of 0.707). Thus, the reflectance spectra of soil obtained from the PNIS can be used for quantitative estimation of the kaolinite, illite and montmorillonite contents in soil.
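    The MLR branch of such a calibration can be sketched with ordinary least squares (a generic illustration; the band selection and the PLS variant are omitted, and the data are synthetic):

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares with intercept: y ~ b0 + X @ b."""
    A = np.hstack([np.ones((len(X), 1)), np.asarray(X, dtype=float)])
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return coef

def predict_mlr(coef, X):
    A = np.hstack([np.ones((len(X), 1)), np.asarray(X, dtype=float)])
    return A @ coef

def rmsec(y, y_hat):
    """Root mean square error of calibration."""
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2)))
```

    In the study's setting, the columns of X would be reflectances at the diagnostic absorption-band positions and y the laboratory-measured mineral content.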

  14. A Novel Method of Quantitative Anterior Chamber Depth Estimation Using Temporal Perpendicular Digital Photography

    Science.gov (United States)

    Zamir, Ehud; Kong, George Y.X.; Kowalski, Tanya; Coote, Michael; Ang, Ghee Soon

    2016-01-01

    Purpose We hypothesize that: (1) Anterior chamber depth (ACD) is correlated with the relative anteroposterior position of the pupillary image, as viewed from the temporal side. (2) Such a correlation may be used as a simple quantitative tool for estimation of ACD. Methods Two hundred sixty-six phakic eyes had lateral digital photographs taken from the temporal side, perpendicular to the visual axis, and underwent optical biometry (Nidek AL scanner). The relative anteroposterior position of the pupillary image was expressed using the ratio between: (1) lateral photographic temporal limbus to pupil distance (“E”) and (2) lateral photographic temporal limbus to cornea distance (“Z”). In the first chronological half of patients (Correlation Series), the E:Z ratio (EZR) was correlated with optical biometric ACD. The correlation equation was then used to predict ACD in the second half of patients (Prediction Series) and compared to their biometric ACD for agreement analysis. Results A strong linear correlation was found between EZR and ACD, R = −0.91, R2 = 0.81. Bland-Altman analysis showed good agreement between the ACD predicted by this method and the optical biometric ACD. The mean error was −0.013 mm (range −0.377 to 0.336 mm), standard deviation 0.166 mm. The 95% limits of agreement were ±0.33 mm. Conclusions Lateral digital photography with EZR calculation is a novel method to quantitatively estimate ACD, requiring minimal equipment and training. Translational Relevance The EZ ratio may be employed in screening for angle closure glaucoma. It may also be helpful in outpatient medical clinic settings, where doctors need to weigh the safety of topical or systemic pupil-dilating medications against the risk of triggering acute angle closure glaucoma. Similarly, non-ophthalmologists may use it to estimate the likelihood of acute angle closure glaucoma in emergency presentations. PMID:27540496

  15. Empirical model for mean temperature for Indian zone and estimation of precipitable water vapor from ground based GPS measurements

    Directory of Open Access Journals (Sweden)

    C. Suresh Raju

    2007-10-01

    Full Text Available Estimation of precipitable water (PW) in the atmosphere from the ground-based Global Positioning System (GPS) essentially involves modeling the zenith hydrostatic delay (ZHD) in terms of surface pressure (Ps) and subtracting it from the corresponding zenith tropospheric delay (ZTD) to estimate the zenith wet (non-hydrostatic) delay (ZWD). This further involves establishing an appropriate model connecting PW and ZWD, which in its simplest form is assumed to be similar to that of ZHD. But when temperature variations are large, the variation of the proportionality constant connecting PW and ZWD must be accounted for to estimate PW accurately. For this purpose a water-vapor-weighted mean temperature (Tm) has been defined in many investigations, which has to be modeled on a regional basis. For estimating PW over the Indian region from GPS data, a region-specific model for Tm in terms of surface temperature (Ts) is developed using radiosonde measurements from eight India Meteorological Department (IMD) stations spread over the subcontinent within a latitude range of 8.5°–32.6° N. Following a similar procedure, Tm-based models are also evolved for each of these stations, and the features of these site-specific models are compared with those of the region-specific model. The applicability of the region-specific and site-specific Tm-based models in retrieving PW from GPS data recorded at the IGS sites Bangalore and Hyderabad is tested by comparing the retrieved values of PW with those estimated from the altitude profile of water vapor measured by radiosonde. The values of ZWD estimated at 00:00 UTC and 12:00 UTC are used to test the validity of the models by estimating PW with the models and comparing it with that obtained from radiosonde data. The region-specific Tm-based model is found to be on par with if not better than a
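    The retrieval chain above can be sketched end to end. The sketch uses the Saastamoinen ZHD model and the global Bevis et al. Tm relation and refractivity constants as stand-ins; the paper's point is precisely that the Tm coefficients should be refit for the Indian region, so treat these numbers as placeholders:

```python
import math

RV = 461.5       # J kg^-1 K^-1, specific gas constant of water vapour
RHO_W = 1000.0   # kg m^-3, density of liquid water
K2P = 0.221      # K Pa^-1   (k2'), Bevis et al. value
K3 = 3.739e3     # K^2 Pa^-1 (k3),  Bevis et al. value

def zhd_saastamoinen(ps_hpa, lat_deg=0.0, h_km=0.0):
    """Zenith hydrostatic delay (m) from surface pressure."""
    phi = math.radians(lat_deg)
    return 0.0022768 * ps_hpa / (1 - 0.00266 * math.cos(2 * phi) - 0.00028 * h_km)

def tm_bevis(ts_k):
    """Global Tm(Ts) model; the paper fits region-specific coefficients."""
    return 70.2 + 0.72 * ts_k

def precipitable_water(ztd_m, ps_hpa, ts_k, lat_deg=0.0, h_km=0.0):
    """PW (m) = Pi * ZWD, with Pi depending on Tm."""
    zwd = ztd_m - zhd_saastamoinen(ps_hpa, lat_deg, h_km)
    tm = tm_bevis(ts_k)
    pi = 1.0e6 / (RHO_W * RV * (K2P + K3 / tm))
    return pi * zwd
```

    For typical tropical values (ZTD = 2.5 m, Ps = 1000 hPa, Ts = 300 K) the conversion factor Pi is about 0.16, giving roughly 35 mm of precipitable water.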

  16. Evaluation of TMPA 3B42 Precipitation Estimates during the Passage of Tropical Cyclones over New Caledonia

    Science.gov (United States)

    Deo, Anil; Walsh, Kevin J. E.; Peltier, Alexandre

    2017-08-01

    This study evaluates the Tropical Rainfall Measuring Mission (TRMM) Multi-Satellite Precipitation Analysis (TMPA) 3B42 version 7 (V7) estimates of tropical cyclone (TC) rainfall over New Caledonia using the island rain gauge observations as the ground-truth reference. Several statistical measures and techniques are utilised to characterise the difference and similarity between TMPA and the gauge observations. The results show that TMPA has skill in representing the observed rainfall during the passage of TCs. TMPA overestimates light rainfall events and underestimates moderate to higher rainfall events. The skill deteriorates with increasing elevation, as underestimation by TMPA is greater at higher altitudes. The ability of TMPA also varies with TC intensity and distance from the TC centre, whereby it is more skilful for less intense TCs (category 1-2) and near the TC centre than in the outer rainbands. The ability of TMPA varies from case to case but a better performance is shown for TCs with a higher average rainfall. Finally, case studies of TC Vania (2011), TC Innis (2009), and TC Erica (2003) show that TMPA has the ability to represent the spatial distribution of the observed rainfall, but it tends to underestimate the higher rainfall events.
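    Gauge-versus-satellite verification of this kind typically combines continuous scores (bias, RMSE, correlation) with categorical ones (probability of detection, false alarm ratio). A generic sketch, where the event threshold is an arbitrary illustrative choice rather than the paper's:

```python
import numpy as np

def eval_stats(sat, gauge, thresh=1.0):
    """Continuous and categorical verification scores for paired totals."""
    sat = np.asarray(sat, dtype=float)
    gauge = np.asarray(gauge, dtype=float)
    bias = float(sat.mean() - gauge.mean())
    rmse = float(np.sqrt(np.mean((sat - gauge) ** 2)))
    r = float(np.corrcoef(sat, gauge)[0, 1])
    hits = int(np.sum((sat >= thresh) & (gauge >= thresh)))
    misses = int(np.sum((sat < thresh) & (gauge >= thresh)))
    false_alarms = int(np.sum((sat >= thresh) & (gauge < thresh)))
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return {"bias": bias, "rmse": rmse, "r": r, "pod": pod, "far": far}
```

    Stratifying such scores by elevation band, TC category, or distance from the TC centre reproduces the kind of breakdown reported in the abstract.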

  17. Recursive estimators of mean-areal and local bias in precipitation products that account for conditional bias

    Science.gov (United States)

    Zhang, Yu; Seo, Dong-Jun

    2017-03-01

    This paper presents novel formulations of mean field bias (MFB) and local bias (LB) correction schemes that incorporate a conditional bias (CB) penalty. These schemes are based on the operational MFB and LB algorithms in the National Weather Service (NWS) Multisensor Precipitation Estimator (MPE). By incorporating the CB penalty in the cost function of exponential smoothers, we are able to derive augmented versions of recursive estimators of MFB and LB. Two extended versions of the MFB algorithm are presented, one incorporating spatial variation of gauge locations only (MFB-L), and the second integrating both gauge locations and the CB penalty (MFB-X). These two MFB schemes and the extended LB scheme (LB-X) are assessed relative to the original MFB and LB algorithms (referred to as MFB-O and LB-O, respectively) through a retrospective experiment over a radar domain in north-central Texas, and through a synthetic experiment over the Mid-Atlantic region. The outcome of the former experiment indicates that introducing the CB penalty to the MFB formulation leads to small but consistent improvements in bias and CB, while its impacts on hourly correlation and root mean square error (RMSE) are mixed. Incorporating the CB penalty in the LB formulation tends to improve the RMSE at high rainfall thresholds, but its impacts on bias are also mixed. The synthetic experiment suggests that beneficial impacts are more conspicuous at low gauge density (9 per 58,000 km2), and tend to diminish at higher gauge density. The improvement at high rainfall intensity is partly an outcome of the conservativeness of the extended LB scheme. This conservativeness arises in part from the more frequent presence of negative eigenvalues in the extended covariance matrix, which leads to no, or smaller, incremental changes to the smoothed rainfall amounts.
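    Stripped of the CB penalty and the spatial terms, a recursive MFB estimator reduces to a pair of exponential smoothers over hourly gauge and radar totals whose ratio is the multiplicative bias. A minimal sketch (the smoothing weight alpha is illustrative, not the MPE operational value):

```python
def mfb_recursive(hourly_pairs, alpha=0.05, eps=1e-6):
    """Recursive mean field bias: exponentially smoothed gauge and radar
    totals; MFB = smoothed gauge total / smoothed radar total."""
    g_smooth = r_smooth = 0.0
    mfb = []
    for gauge_sum, radar_sum in hourly_pairs:
        g_smooth = (1 - alpha) * g_smooth + alpha * gauge_sum
        r_smooth = (1 - alpha) * r_smooth + alpha * radar_sum
        mfb.append(g_smooth / max(r_smooth, eps))
    return mfb
```

    The radar field for a given hour is then corrected by multiplying it by the current MFB value; the paper's extension adds a CB penalty term to the cost function that this smoother implicitly minimizes.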

  18. On-line estimation of the dissolved zinc concentration during ZnS precipitation in a continuous stirred tank reactor (CSTR)

    NARCIS (Netherlands)

    Grootscholten, T.I.M.; Keesman, K.J.; Lens, P.N.L.

    2008-01-01

    In this paper a method is presented to estimate the reaction term of zinc sulphide precipitation and the zinc concentration in a CSTR, using the read-out signal of a sulphide selective electrode. The reaction between zinc and sulphide is described by a non-linear model and therefore classical observ

  19. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits.

    Science.gov (United States)

    Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L

    2013-05-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays.

  20. Estimating the number of integrations in transformed plants by quantitative real-time PCR

    Directory of Open Access Journals (Sweden)

    Vaira Anna Maria

    2002-10-01

    Full Text Available Abstract Background When generating transformed plants, a first step in their characterization is to obtain, for each new line, an estimate of how many copies of the transgene have been integrated in the plant genome because this can deeply influence the level of transgene expression and the ease of stabilizing expression in following generations. This task is normally achieved by Southern analysis, a procedure that requires relatively large amounts of plant material and is both costly and labour-intensive. Moreover, in the presence of rearranged copies the estimates are not correct. New approaches to the problem could be of great help for plant biotechnologists. Results By using a quantitative real-time PCR method that requires limited preliminary optimisation steps, we achieved statistically significant estimates of 1, 2 and 3 copies of a transgene in the primary transformants. Furthermore, by estimating the copy number of both the gene of interest and the selectable marker gene, we show that rearrangements of the T-DNA are not the exception, and probably happen more often than usually recognised. Conclusions We have developed a rapid and reliable method to estimate the number of integrated copies following genetic transformation. Unlike other similar procedures, this method is not dependent on identical amplification efficiency between the PCR systems used and does not need preliminary information on a calibrator. Its flexibility makes it appropriate in those situations where an accurate optimisation of all reaction components is impossible or impractical. Finally, the quality of the information produced is higher than what can be obtained by Southern blot analysis.
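    For context, the widely used efficiency-corrected ratio model (Pfaffl 2001) estimates relative copy number from Ct values; the paper's contribution is a method that relaxes its assumptions, notably identical amplification efficiencies and a characterized calibrator. A sketch of the standard model, not the authors' exact procedure:

```python
def relative_copies(e_goi, dct_goi, e_ref, dct_ref):
    """Efficiency-corrected relative quantification:
    ratio = E_goi**dCt_goi / E_ref**dCt_ref,
    where dCt = Ct(calibrator) - Ct(sample) and E is the amplification
    efficiency per cycle (2.0 = perfect doubling)."""
    return (e_goi ** dct_goi) / (e_ref ** dct_ref)
```

    For example, a transgene that crosses threshold one cycle earlier in the sample than in a single-copy calibrator (dCt = 1), with identical reference-gene Cts, suggests roughly two integrated copies.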

  1. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits.

    Directory of Open Access Journals (Sweden)

    Noah Zaitlen

    2013-05-01

    Full Text Available Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays.

  2. Developing Daily Quantitative Damage Estimates From Geospatial Layers To Support Post Event Recovery

    Science.gov (United States)

    Woods, B. K.; Wei, L. H.; Connor, T. C.

    2014-12-01

    With the growth of natural hazard data available in near real-time, it is increasingly feasible to deliver estimates of the damage caused by natural disasters. These estimates can be used in a disaster management setting or by commercial entities to optimize the deployment of resources and/or the routing of goods and materials. This work outlines an end-to-end, modular process to generate estimates of damage caused by severe weather. The processing stream consists of five generic components: 1) Hazard modules that provide quantitative data layers for each peril. 2) Standardized methods to map the hazard data to an exposure layer based on atomic geospatial blocks. 3) Peril-specific damage functions that compute damage metrics at the atomic geospatial block level. 4) Standardized data aggregators, which map damage to user-specific geometries. 5) Data dissemination modules, which provide the resulting damage estimates in a variety of output forms. This presentation provides a description of this generic tool set, and an illustrated example using HWRF-based hazard data for Hurricane Arthur (2014). In this example, the Python-based real-time processing ingests GRIB2 output from the HWRF numerical model and dynamically downscales it in conjunction with a land cover database, using a multiprocessing pool and a just-in-time compiler (JIT). The resulting wind fields are contoured and ingested into a PostGIS database using OGR. Finally, the damage estimates are calculated at the atomic block level and aggregated to user-defined regions using PostgreSQL queries to construct application-specific tabular and graphics output.
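    The five-component stream can be mocked end to end in a few lines. Everything below is a toy stand-in (the block IDs, hazard values, exposure figures, and damage curve are invented for illustration; the real chain ingests HWRF GRIB2 and writes to PostGIS):

```python
def hazard_layer():
    """1) Hazard module: peak wind speed (m/s) per atomic geospatial block."""
    return {"b1": 20.0, "b2": 35.0, "b3": 50.0}

def exposure_layer():
    """2) Exposure mapped onto the same atomic blocks (replacement value, $)."""
    return {"b1": 1.0e6, "b2": 5.0e5, "b3": 2.0e6}

def wind_damage_ratio(v):
    """3) Peril-specific damage function: fraction of value lost (toy curve)."""
    return 0.0 if v < 25.0 else min(1.0, ((v - 25.0) / 45.0) ** 2)

def aggregate_damage(region_of):
    """4) Aggregate block-level damage to user-defined regions."""
    hazard, exposure = hazard_layer(), exposure_layer()
    totals = {}
    for block, wind in hazard.items():
        region = region_of[block]
        loss = wind_damage_ratio(wind) * exposure[block]
        totals[region] = totals.get(region, 0.0) + loss
    return totals  # 5) dissemination would format these as tables/graphics
```

    The modular split matters because each component can be swapped independently: a flood hazard module with a depth-damage curve slots into the same aggregation and dissemination stages.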

  3. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    Science.gov (United States)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less
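    Given a reconstructed 3D dose-rate map and an organ mask, computing the cumulative DVH itself is straightforward; a generic sketch:

```python
import numpy as np

def cumulative_dvh(organ_doses, dose_levels):
    """Cumulative DVH: fraction of organ volume receiving >= each dose level."""
    d = np.asarray(organ_doses, dtype=float).ravel()
    return np.array([(d >= level).mean() for level in dose_levels])
```

    The study's point is that the quality of this curve hinges on the upstream QSPECT reconstruction choices (iterations, filtering), since noise and partial volume effects distort the voxel dose values that feed it.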

  4. SPECTRAL FEATURE ANALYSIS FOR QUANTITATIVE ESTIMATION OF CYANOBACTERIA CHLOROPHYLL-A

    Directory of Open Access Journals (Sweden)

    Y. Lin

    2016-06-01

    Full Text Available In recent years, lake eutrophication has caused large Cyanobacteria blooms which not only bring serious ecological disaster but also restrict the sustainable development of the regional economy in our country. Chlorophyll-a is a very important environmental factor for monitoring water quality, especially for lake eutrophication. Remote sensing techniques have been widely utilized in estimating the concentration of chlorophyll-a using different kinds of vegetation indices and in monitoring its distribution in lakes, rivers or along coastlines. For each vegetation index, the quantitative estimation accuracy for different satellite data might change, since there might be a discrepancy in spectral resolution and channel center between satellites. The purpose of this paper is to analyze the spectral features of chlorophyll-a with hyperspectral data (totally 651 bands) and use the result to choose the optimal band combination for different satellites. The analysis method developed in this study could be useful for recognizing and monitoring cyanobacteria blooms automatically and accurately. In our experiment, the reflectance (from 350 nm to 1000 nm) of wild cyanobacteria at different concentrations (from 0 to 1362.11 ug/L) and the corresponding chlorophyll-a concentration were measured simultaneously. Two kinds of hyperspectral vegetation indices were applied in this study: the simple ratio (SR) and the narrow-band normalized difference vegetation index (NDVI), both of which consist of any two bands among the entire 651 narrow bands. Then multivariate statistical analysis was used to construct linear, power and exponential models. After analyzing the correlation between chlorophyll-a and single-band reflectance, SR and NDVI respectively, the optimal spectral index for quantitative estimation of cyanobacteria chlorophyll-a, as well as the corresponding central wavelength and band width, were extracted. 
Results show that: Under the condition of water disturbance, SR and NDVI are both suitable
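    The exhaustive two-band search described above can be sketched as follows (generic; the real study also fits linear, power and exponential models for each candidate pair, and the test data here are synthetic):

```python
import numpy as np

def ndvi(a, b):
    """Narrow-band NDVI from two band reflectances."""
    return (a - b) / (a + b)

def best_band_pair(refl, chl, index=ndvi):
    """refl: (n_samples, n_bands) reflectances; chl: chlorophyll-a values.
    Returns (|r|, (band_i, band_j)) maximizing correlation with chl."""
    n_bands = refl.shape[1]
    best_r, best_pair = 0.0, None
    for i in range(n_bands):
        for j in range(i + 1, n_bands):
            idx = index(refl[:, i], refl[:, j])
            if np.std(idx) == 0:      # constant index: correlation undefined
                continue
            r = abs(np.corrcoef(idx, chl)[0, 1])
            if r > best_r:
                best_r, best_pair = r, (i, j)
    return best_r, best_pair
```

    With 651 bands the loop covers about 211,000 pairs per index, which is why the search is run once offline to pick the central wavelengths before transferring the index to a given satellite sensor.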

  5. Spectral Feature Analysis for Quantitative Estimation of Cyanobacteria Chlorophyll-A

    Science.gov (United States)

    Lin, Yi; Ye, Zhanglin; Zhang, Yugan; Yu, Jie

    2016-06-01

    In recent years, lake eutrophication has caused large Cyanobacteria blooms which not only bring serious ecological disaster but also restrict the sustainable development of the regional economy in our country. Chlorophyll-a is a very important environmental factor for monitoring water quality, especially for lake eutrophication. Remote sensing techniques have been widely utilized in estimating the concentration of chlorophyll-a using different kinds of vegetation indices and in monitoring its distribution in lakes, rivers or along coastlines. For each vegetation index, the quantitative estimation accuracy for different satellite data might change, since there might be a discrepancy in spectral resolution and channel center between satellites. The purpose of this paper is to analyze the spectral features of chlorophyll-a with hyperspectral data (totally 651 bands) and use the result to choose the optimal band combination for different satellites. The analysis method developed in this study could be useful for recognizing and monitoring cyanobacteria blooms automatically and accurately. In our experiment, the reflectance (from 350 nm to 1000 nm) of wild cyanobacteria at different concentrations (from 0 to 1362.11 ug/L) and the corresponding chlorophyll-a concentration were measured simultaneously. Two kinds of hyperspectral vegetation indices were applied in this study: the simple ratio (SR) and the narrow-band normalized difference vegetation index (NDVI), both of which consist of any two bands among the entire 651 narrow bands. Then multivariate statistical analysis was used to construct linear, power and exponential models. After analyzing the correlation between chlorophyll-a and single-band reflectance, SR and NDVI respectively, the optimal spectral index for quantitative estimation of cyanobacteria chlorophyll-a, as well as the corresponding central wavelength and band width, were extracted. 
Results show that: Under the condition of water disturbance, SR and NDVI are both suitable for quantitative

  6. The overall impact of testing on medical student learning: quantitative estimation of consequential validity.

    Science.gov (United States)

    Kreiter, Clarence D; Green, Joseph; Lenoch, Susan; Saiki, Takuya

    2013-10-01

    Given medical education's longstanding emphasis on assessment, it seems prudent to evaluate whether our current research and development focus on testing makes sense. Since any intervention within medical education must ultimately be evaluated based upon its impact on student learning, this report seeks to provide a quantitative accounting of the learning gains attained through educational assessments. To approach this question, we estimate achieved learning within a medical school environment that optimally utilizes educational assessments. We compare this estimate to the learning that might be expected in a medical school that employs no educational assessments. Effect sizes are used to estimate testing's total impact on learning by summarizing three effects: the direct effect, the indirect effect, and the selection effect. The literature is far from complete, but the available evidence strongly suggests that each of these effects is large and that the net cumulative impact on learning in medical education is over two standard deviations. While additional evidence is required, the current literature shows that testing within medical education makes a strong positive contribution to learning.

  7. A method for estimating the effective number of loci affecting a quantitative character.

    Science.gov (United States)

    Slatkin, Montgomery

    2013-11-01

    A likelihood method is introduced that jointly estimates the number of loci and the additive effect of alleles that account for the genetic variance of a normally distributed quantitative character in a randomly mating population. The method assumes that measurements of the character are available from one or both parents and an arbitrary number of full siblings. The method uses the fact, first recognized by Karl Pearson in 1904, that the variance of a character among offspring depends on both the parental phenotypes and the number of loci. Simulations show that the method performs well provided that data from a sufficient number of families (on the order of thousands) are available. This method assumes that the loci are in Hardy-Weinberg and linkage equilibrium but does not assume anything about the linkage relationships. It performs equally well if all loci are on the same non-recombining chromosome, provided they are in linkage equilibrium. The method can be adapted to take account of loci already identified as being associated with the character of interest. In that case, the method estimates the number of loci not already known to affect the character. The method applied to measurements of crown-rump length in 281 family trios in a captive colony of African green monkeys (Chlorocebus aethiops sabaeus) estimates the number of loci to be 112 and the additive effect to be 0.26 cm. A parametric bootstrap analysis shows that a rough confidence interval has a lower bound of 14 loci.

  8. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques

    Science.gov (United States)

    Koeppe, Robert Allen

    Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert, freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements.
The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations

  9. Annual and average estimates of water-budget components based on hydrograph separation and PRISM precipitation for gaged basins in the Appalachian Plateaus Region, 1900-2011

    Science.gov (United States)

    Nelms, David L.; Messinger, Terence; McCoy, Kurt J.

    2015-07-14

    As part of the U.S. Geological Survey’s Groundwater Resources Program study of the Appalachian Plateaus aquifers, annual and average estimates of water-budget components based on hydrograph separation and precipitation data from the parameter-elevation regressions on independent slopes model (PRISM) were determined at 849 continuous-record streamflow-gaging stations from Mississippi to New York, covering the period 1900 to 2011. Only complete calendar years (January to December) of streamflow record at each gage were used to determine estimates of base flow, which is the part of streamflow attributed to groundwater discharge; such estimates can serve as a proxy for annual recharge. For each year, estimates of annual base flow, runoff, and base-flow index were determined using computer programs—PART, HYSEP, and BFI—that have automated the separation procedures. These streamflow-hydrograph analysis methods are provided with version 1.0 of the U.S. Geological Survey Groundwater Toolbox, which is a new program that provides graphing, mapping, and analysis capabilities in a Windows environment. Annual values of precipitation were estimated by calculating the average of cell values intercepted by basin boundaries as previously defined in the GAGES–II dataset. Estimates of annual evapotranspiration were then calculated from the difference between precipitation and streamflow.
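
    The hydrograph-separation idea can be sketched with a simple one-parameter recursive digital filter (the Lyne-Hollick filter, shown here for illustration only; it is not the PART, HYSEP, or BFI algorithm used in the study, and the streamflow series is invented):

```python
def lyne_hollick(q, alpha=0.925):
    """Split a streamflow series q into base flow using one forward pass of
    the Lyne-Hollick recursive digital filter: the filtered quickflow is
    removed from total flow, constrained so 0 <= base flow <= streamflow."""
    quick = [0.0]
    for t in range(1, len(q)):
        f = alpha * quick[-1] + 0.5 * (1.0 + alpha) * (q[t] - q[t - 1])
        quick.append(max(f, 0.0))
    return [min(max(qt - ft, 0.0), qt) for qt, ft in zip(q, quick)]

# Synthetic daily hydrograph: slow recession with one storm peak on day 5.
flow = [10.0, 9.5, 9.0, 8.6, 40.0, 95.0, 60.0, 30.0, 18.0, 12.0, 10.5, 10.0]
baseflow = lyne_hollick(flow)
bfi = sum(baseflow) / sum(flow)   # base-flow index, a proxy for recharge
```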

  10. Estimating preseason irrigation losses by characterizing evaporation of effective precipitation under bare soil conditions using large weighing lysimeters

    Science.gov (United States)

    Irrigation scheduling is one of the most cost effective means of conserving limited groundwater resources, particularly in semi-arid regions. Effective precipitation, or the net amount of water from precipitation that can be used in field water balance equations, is essential to accurate and effecti...

  11. A novel HPTLC method for quantitative estimation of biomarkers in polyherbal formulation

    Institute of Scientific and Technical Information of China (English)

    Zeeshan Ahmed Sheikh; Sadia Shakeel; Somia Gul; Aqib Zahoor; Saleha Suleman Khan; Faisal Haider Zaidi; Khan Usmanghani

    2015-01-01

    Objective: To explore the quantitative estimation of the biomarkers gallic acid and berberine in the polyherbal formulation Entoban syrup. Methods: High performance thin layer chromatography was performed to evaluate the presence of gallic acid and berberine, employing toluene:ethyl acetate:formic acid:methanol (12:9:4:0.5, v/v/v/v) and ethanol:water:formic acid (90:9:1, v/v/v) as mobile phases, respectively. Results: The Rf values (0.58) for gallic acid and (0.76) for berberine in both sample and reference standard were found comparable under UV light at 273 nm and 366 nm, respectively. The high performance thin layer chromatography method developed for quantitation was simple, accurate and specific. Conclusions: The present standardization provides a specific and accurate tool to develop qualifications for identity, transparency and reproducibility of biomarkers in Entoban syrup.
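
    The reported Rf values follow directly from the defining ratio; the migration distances below are hypothetical, back-calculated to match the abstract's values on an assumed 8.0 cm development path.

```python
def retention_factor(analyte_distance_cm, solvent_front_cm):
    """Rf = distance travelled by the analyte / distance travelled by the
    solvent front, both measured from the application point."""
    return analyte_distance_cm / solvent_front_cm

# Hypothetical distances chosen to reproduce the abstract's Rf values:
rf_gallic_acid = retention_factor(4.64, 8.0)
rf_berberine = retention_factor(6.08, 8.0)
```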

  13. Estimation of the patient monitor alarm rate for a quantitative analysis of new alarm settings.

    Science.gov (United States)

    de Waele, Stijn; Nielsen, Larry; Frassica, Joseph

    2014-01-01

    In many critical care units, default patient monitor alarm settings are not fine-tuned to the vital signs of the patient population. As a consequence there are many alarms. A large fraction of the alarms are not clinically actionable, thus contributing to alarm fatigue. Recent attention to this phenomenon has resulted in attempts in many institutions to decrease the overall alarm load of clinicians by altering the trigger thresholds for monitored parameters. Typically, new alarm settings are defined based on clinical knowledge and patient population norms and tried empirically on new patients without quantitative knowledge about the potential impact of these new settings. We introduce alarm regeneration as a method to estimate the alarm rate of new alarm settings using recorded patient monitor data. This method enables evaluation of several alarm setting scenarios prior to using these settings in the clinical setting. An expression for the alarm rate variance is derived for the calculation of statistical confidence intervals on the results.
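
    The regeneration idea, replaying recorded vitals against candidate limits and counting the alarms that would have fired, can be sketched as follows (the limit-alarm model, threshold values, and samples are hypothetical, not the paper's algorithm):

```python
def alarm_episodes(samples, low, high):
    """Count the alarm episodes a simple limit alarm would raise: one
    episode per excursion outside [low, high], however long it lasts."""
    episodes, active = 0, False
    for value in samples:
        violating = value < low or value > high
        if violating and not active:
            episodes += 1
        active = violating
    return episodes

# Recorded heart-rate samples (hypothetical, beats per minute).
heart_rate = [72, 75, 130, 132, 128, 90, 88, 45, 70, 71]
episodes_default = alarm_episodes(heart_rate, 50, 120)    # current limits
episodes_candidate = alarm_episodes(heart_rate, 45, 135)  # proposed limits
```

    Comparing the two counts over a large archive of recordings estimates the alarm-rate change before the new settings ever reach a patient.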

  14. Method for quantitative estimation of position perception using a joystick during linear movement.

    Science.gov (United States)

    Wada, Y; Tanaka, M; Mori, S; Chen, Y; Sumigama, S; Naito, H; Maeda, M; Yamamoto, M; Watanabe, S; Kajitani, N

    1996-12-01

    We designed a method for quantitatively estimating self-motion perceptions during passive body movement on a sled. The subjects were instructed to tilt a joystick in proportion to perceived displacement from a given starting position during linear movement with varying displacements of 4 m, 10 m and 16 m induced by constant acceleration of 0.02 g, 0.05 g and 0.08 g along the antero-posterior axis. With this method, we could monitor not only subjective position perceptions but also response latencies for the beginning (RLbgn) and end (RLend) of the linear movement. Perceived body position fitted Stevens' power law, R = kS^n (where R is the output of the joystick, k is a constant, S is the displacement of the linear movement, and n is an exponent). RLbgn decreased as linear acceleration increased. We conclude that this method is useful in analyzing the features and sensitivities of self-motion perceptions during movement.
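
    Fitting Stevens' power law R = kS^n reduces to ordinary least squares in log-log space, since log R = log k + n log S. A minimal sketch with the displacements from the abstract and synthetic, noise-free joystick responses (k = 0.9, n = 0.8 are invented values):

```python
import math

def fit_power_law(S, R):
    """Fit R = k * S**n by ordinary least squares on (log S, log R)."""
    xs = [math.log(s) for s in S]
    ys = [math.log(r) for r in R]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    n = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    k = math.exp(my - n * mx)
    return k, n

S = [4.0, 10.0, 16.0]             # sled displacements (m), from the abstract
R = [0.9 * s ** 0.8 for s in S]   # synthetic joystick output
k, n = fit_power_law(S, R)        # recovers k = 0.9, n = 0.8
```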

  15. Accuracy in the estimation of quantitative minimal area from the diversity/area curve.

    Science.gov (United States)

    Vives, Sergi; Salicrú, Miquel

    2005-05-01

    The problem of representativity is fundamental in ecological studies. A qualitative minimal area that gives a good representation of the species pool [C.M. Bouderesque, Methodes d'etude qualitative et quantitative du benthos (en particulier du phytobenthos), Tethys 3(1) (1971) 79] can be discerned from a quantitative minimal area which reflects the structural complexity of the community [F.X. Niell, Sobre la biologia de Ascophyllum nodosum (L.) Le Jolis en Galicia, Invest. Pesq. 43 (1979) 501]. This suggests that the populational diversity can be considered as the value of the horizontal asymptote of the sample diversity/biomass curve [F.X. Niell, Les applications de l'index de Shannon a l'etude de la vegetation interdidale, Soc. Phycol. Fr. Bull. 19 (1974) 238]. In this study we develop an expression to determine minimal areas and use it to obtain certain information about the community structure based on diversity/area curve graphs. This expression is based on the functional relationship between the expected value of the diversity and the sample size used to estimate it. In order to establish the quality of the estimation process, we obtained the confidence intervals as a particularization of the (h,phi)-entropies proposed in [M. Salicru, M.L. Menendez, D. Morales, L. Pardo, Asymptotic distribution of (h,phi)-entropies, Commun. Stat. (Theory Methods) 22 (7) (1993) 2015]. As an example to demonstrate the possibilities of this method, and only for illustrative purposes, data from a study of the rocky intertidal seaweed populations in the Ria of Vigo (N.W. Spain) are analyzed [F.X. Niell, Estudios sobre la estructura, dinamica y produccion del Fitobentos intermareal (Facies rocosa) de la Ria de Vigo. Ph.D. Mem. University of Barcelona, Barcelona, 1979].
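
    The diversity/sample-size curve at the heart of the method can be reproduced with a toy community: sampled Shannon diversity rises with sampling effort toward the populational diversity, whose asymptote defines the quantitative minimal area. The community, abundances, and sample sizes below are invented for illustration.

```python
import math
import random

random.seed(42)

species = list(range(20))                   # 20-species toy community
weights = [1.0 / (i + 1) for i in species]  # uneven relative abundances

def shannon(counts):
    """Shannon diversity H' of a dict of species counts."""
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

def sampled_diversity(n):
    """Diversity estimated from a random sample of n individuals."""
    counts = {}
    for s in random.choices(species, weights=weights, k=n):
        counts[s] = counts.get(s, 0) + 1
    return shannon(counts)

# Small samples underestimate diversity; the curve climbs toward the
# populational value as effort increases.
curve = [(n, sampled_diversity(n)) for n in (10, 50, 250, 1250, 6250)]
```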

  16. Estimation of multipath transmission parameters for quantitative ultrasound measurements of bone.

    Science.gov (United States)

    Dencks, Stefanie; Schmitz, Georg

    2013-09-01

    When applying quantitative ultrasound (QUS) measurements to bone for predicting osteoporotic fracture risk, the multipath transmission of sound waves frequently occurs. In the last 10 years, the interest in separating multipath QUS signals for their analysis awoke, and led to the introduction of several approaches. Here, we compare the performances of the two fastest algorithms proposed for QUS measurements of bone: the modified least-squares Prony method (MLSP), and the space alternating generalized expectation maximization algorithm (SAGE) applied in the frequency domain. In both approaches, the parameters of the transfer functions of the sound propagation paths are estimated. To provide an objective measure, we also analytically derive the Cramér-Rao lower bound of variances for any estimator and arbitrary transmit signals. In comparison with results of Monte Carlo simulations, this measure is used to evaluate both approaches regarding their accuracy and precision. Additionally, with simulations using typical QUS measurement settings, we illustrate the limitations of separating two superimposed waves for varying parameters with focus on their temporal separation. It is shown that for good SNRs around 100 dB, MLSP yields better results when two waves are very close. Additionally, the parameters of the smaller wave are more reliably estimated. If the SNR decreases, the parameter estimation with MLSP becomes biased and inefficient. Then, the robustness to noise of the SAGE clearly prevails. Because a clear influence of the interrelation between the wavelength of the ultrasound signals and their temporal separation is observable on the results, these findings can be transferred to QUS measurements at other sites. The choice of the suitable algorithm thus depends on the measurement conditions.
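
    The core Prony idea behind MLSP, that a sum of exponentials obeys a linear recurrence whose characteristic roots are the decay ratios, can be shown noise-free for two real exponentials (a minimal sketch under idealized assumptions; the paper's MLSP operates on noisy, band-limited ultrasound signals):

```python
import math

def prony_two_terms(x):
    """Recover the decay ratios r1, r2 of x[t] = A1*r1**t + A2*r2**t.
    Such a signal satisfies x[t] = a*x[t-1] + b*x[t-2]; four samples give
    two linear equations in (a, b), and the ratios are the roots of
    z**2 - a*z - b = 0."""
    x0, x1, x2, x3 = x[:4]
    det = x1 * x1 - x0 * x2
    a = (x1 * x2 - x0 * x3) / det
    b = (x1 * x3 - x2 * x2) / det
    disc = math.sqrt(a * a + 4.0 * b)
    return (a + disc) / 2.0, (a - disc) / 2.0

# Two superimposed decays with ratios 0.9 and 0.4 (amplitudes 1.0 and 0.5).
signal = [1.0 * 0.9 ** t + 0.5 * 0.4 ** t for t in range(4)]
r_slow, r_fast = prony_two_terms(signal)
```

    With noise, the recurrence coefficients are instead fitted by least squares over many samples, which is where MLSP and SAGE differ in robustness.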

  17. Fatalities in high altitude mountaineering: a review of quantitative risk estimates.

    Science.gov (United States)

    Weinbruch, Stephan; Nordby, Karl-Christian

    2013-12-01

    Quantitative estimates for mortality in high altitude mountaineering are reviewed. Special emphasis is placed on the heterogeneity of the risk estimates and on confounding. Crude estimates for mortality are on the order of 1/1000 to 40/1000 persons above base camp, for both expedition members and high altitude porters. High altitude porters have mostly a lower risk than expedition members (risk ratio for all Nepalese peaks requiring an expedition permit: 0.73; 95 % confidence interval 0.59-0.89). The summit bid is generally the most dangerous part of an expedition for members, whereas most high altitude porters die during route preparation. On 8000 m peaks, the mortality during descent from summit varies between 4/1000 and 134/1000 summiteers (members plus porters). The risk estimates are confounded by human and environmental factors. Information on confounding by gender and age is contradictory and requires further work. There are indications for safety segregation of men and women, with women being more risk averse than men. Citizenship appears to be a significant confounder. Prior high altitude mountaineering experience in Nepal has no protective effect. Commercial expeditions in the Nepalese Himalayas have a lower mortality than traditional expeditions, though after controlling for confounding, the difference is not statistically significant. The overall mortality is increasing with increasing peak altitude for expedition members but not for high altitude porters. In the Nepalese Himalayas and in Alaska, a significant decrease of mortality with calendar year was observed. A few suggestions for further work are made at the end of the article.
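
    A risk ratio with a Wald confidence interval, the kind of estimate reviewed here, is computed on the log scale. The counts below are hypothetical (the abstract reports only the derived ratio of 0.73, 95% CI 0.59-0.89):

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of group 1 (a deaths of n1) vs group 2 (b deaths of n2),
    with a Wald 95% confidence interval computed on the log scale."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(rr)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical porter vs. member counts chosen to give a similar ratio:
rr, lo, hi = risk_ratio_ci(110, 30000, 200, 40000)
```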

  18. Proficiency testing as a basis for estimating uncertainty of measurement: application to forensic alcohol and toxicology quantitations.

    Science.gov (United States)

    Wallace, Jack

    2010-05-01

    While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
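
    Approach (ii) can be sketched numerically: the lab's bias and scatter relative to the participant consensus means combine into an expanded uncertainty. This is one common recipe (bias and scatter combined in quadrature at k = 2), not necessarily the article's exact equations, and the proficiency values are hypothetical.

```python
import math

# Blood-alcohol proficiency results vs. participant consensus means
# (g/100 mL; all values hypothetical).
lab = [0.081, 0.152, 0.248, 0.119, 0.201]
consensus = [0.080, 0.150, 0.250, 0.120, 0.200]

diffs = [(l - c) / c for l, c in zip(lab, consensus)]   # relative differences
bias = sum(diffs) / len(diffs)
sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1))
u_expanded = 2.0 * math.sqrt(bias ** 2 + sd ** 2)       # k=2, relative
```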

  19. Estimating precipitation on early Mars using a radiative-convective model of the atmosphere and comparison with inferred runoff from geomorphology

    CERN Document Server

    von Paris, P; Grenfell, J L; Hauber, E; Breuer, D; Jaumann, R; Rauer, H; Tirsch, D

    2014-01-01

    We compare estimates of atmospheric precipitation during the Martian Noachian-Hesperian boundary 3.8 Gyr ago as calculated in a radiative-convective column model of the atmosphere with runoff values estimated from a geomorphological analysis of dendritic valley network discharge rates. In the atmospheric model, we assume CO2-H2O-N2 atmospheres with surface pressures varying from 20 mb to 3 bar with input solar luminosity reduced to 75% of the modern value. Results from the valley network analysis are of the order of a few mm d^-1 of liquid water precipitation (1.5-10.6 mm d^-1, with a median of 3.1 mm d^-1). Atmospheric model results are much lower, from about 0.001-1 mm d^-1 of snowfall (depending on CO2 partial pressure). Hence, the atmospheric model predicts a significantly lower amount of precipitated water than estimated from the geomorphological analysis. Furthermore, global mean surface temperatures are below freezing, i.e. runoff is most likely not directly linked to precipitation. Therefore, our results strong...

  20. Quantitative estimation of carbonation and chloride penetration in reinforced concrete by laser-induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Eto, Shuzo, E-mail: eto@criepi.denken.or.jp [Central Research Institute of Electric Power Industry, 2-6-1 Nagasaka, Yokosuka, Kanagawa 240-0196 (Japan); Matsuo, Toyofumi; Matsumura, Takuro; Fujii, Takashi [Central Research Institute of Electric Power Industry, 2-6-1 Nagasaka, Yokosuka, Kanagawa 240-0196 (Japan); Tanaka, Masayoshi Y. [Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, 2-6-1 Nagasaka, Yokosuka, Kanagawa 240-0196 (Japan)

    2014-11-01

    The penetration profile of chlorine in a reinforced concrete (RC) specimen was determined by laser-induced breakdown spectroscopy (LIBS). The concrete core was prepared from RC beams with cracking damage induced by bending load and salt water spraying. LIBS was performed using a specimen that was obtained by splitting the concrete core, and the line scan of laser pulses gave the two-dimensional emission intensity profiles of 100 × 80 mm{sup 2} within one hour. The two-dimensional profile of the emission intensity suggests that the presence of the crack had less effect on the emission intensity when the measurement interval was larger than the crack width. The chlorine emission spectrum was measured without using the buffer gas, which is usually used for chlorine measurement, by collinear double-pulse LIBS. The apparent diffusion coefficient, which is one of the most important parameters for chloride penetration in concrete, was estimated using the depth profile of chlorine emission intensity and Fick's law. The carbonation depth was estimated on the basis of the relationship between carbon and calcium emission intensities. When the carbon emission intensity was statistically higher than the calcium emission intensity at the measurement point, we determined that the point was carbonated. The estimation results were consistent with the spraying test results using phenolphthalein solution. These results suggest that the quantitative estimation by LIBS of carbonation depth and chloride penetration can be performed simultaneously. - Highlights: • We estimated the carbonation depth and the apparent diffusion coefficient of chlorine and sodium in the reinforced concrete with cracking damage by LIBS. • Two-dimensional profile measurement of the emission intensity in each element was performed to visualize the chloride penetration and the carbonation in the reinforced concrete. • Apparent diffusion coefficient of chlorine and sodium can be estimated using the Fick
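
    Fitting Fick's law to a depth profile, as done here with the chlorine emission intensities, can be sketched with Crank's error-function solution for diffusion from a constant surface source. The profile, exposure time, and true coefficient below are synthetic assumptions, and the fit is a simple brute-force least squares rather than the paper's procedure.

```python
import math

def erfc_profile(x, cs, d, t):
    """Relative chloride content at depth x (cm) after t seconds, for surface
    content cs and apparent diffusion coefficient d (cm^2/s):
    C(x, t) = cs * erfc(x / (2 * sqrt(d * t)))."""
    return cs * math.erfc(x / (2.0 * math.sqrt(d * t)))

t = 5 * 365 * 24 * 3600.0                # 5 years of exposure (assumed)
d_true = 2.0e-8                          # cm^2/s, a typical order for concrete
depths = [0.5 * i for i in range(1, 9)]  # measurement depths (cm)
profile = [erfc_profile(x, 1.0, d_true, t) for x in depths]

def sse(d):
    """Sum of squared residuals for a candidate diffusion coefficient."""
    return sum((erfc_profile(x, 1.0, d, t) - y) ** 2
               for x, y in zip(depths, profile))

# Recover D by scanning a geometric grid of candidate values.
d_hat = min((2.0e-9 * 1.1 ** k for k in range(60)), key=sse)
```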

  1. A probabilistic approach for assessing landslide-triggering event rainfall in Papua New Guinea, using TRMM satellite precipitation estimates

    Science.gov (United States)

    Robbins, J. C.

    2016-10-01

    Large and numerous landslides can result in widespread impacts which are felt particularly strongly in the largely subsistence-orientated communities residing in the most landslide-prone areas of Papua New Guinea (PNG). Understanding the characteristics of rainfall preceding these landslide events is essential for the development of appropriate early warning systems and forecasting models. Relationships between rainfall and landslides are frequently complex and uncertainties tend to be amplified by inconsistent and incomplete landslide catalogues and sparse rainfall data availability. To address some of these uncertainties a modified Bayesian technique has been used, in conjunction with the multiple time frames method, to produce thresholds of landslide probability associated with rainfall events of specific magnitude and duration. Satellite-derived precipitation estimates have been used to derive representative rainfall accumulations and intensities over a range of different rainfall durations (5, 10, 15, 30, 45, 60, 75 and 90 days) for rainfall events which resulted in landslides and those which did not result in landslides. Of the two parameter combinations (accumulation-duration and intensity-duration) analysed, rainfall accumulation and duration provide the best scope for identifying probabilistic thresholds for use in landslide warning and forecasting in PNG. Analysis of historical events and rainfall characteristics indicates that high accumulation (>250 mm), shorter duration (75 days), high accumulation (>1200 mm) rainfall events are more likely to lead to moderate- to high-impact landslides. This analysis has produced the first proxy probability thresholds for landslides in PNG and their application within an early warning framework has been discussed.
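
    The Bayesian step behind such probabilistic thresholds is a simple update from event counts: the probability of a landslide given that a rainfall event falls in a particular accumulation-duration bin. All counts below are synthetic, chosen only to show the arithmetic.

```python
# Counts for one rainfall duration window (synthetic):
n_events = 500          # all rainfall events of this duration
n_in_bin = 60           # events exceeding the accumulation threshold
n_slides = 25           # events followed by landslides
n_slides_in_bin = 18    # landslide events exceeding the threshold

p_slide = n_slides / n_events                    # prior P(landslide)
p_bin = n_in_bin / n_events                      # P(rain in bin)
p_bin_given_slide = n_slides_in_bin / n_slides   # likelihood
# Bayes' rule: P(landslide | rain in bin)
p_slide_given_bin = p_bin_given_slide * p_slide / p_bin
```

    Repeating this over a grid of accumulation and duration bins yields the probability thresholds described in the abstract.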

  2. Comparison of different statistical downscaling methods to estimate changes in hourly extreme precipitation using RCM projections from ENSEMBLES

    DEFF Research Database (Denmark)

    Sunyer Pinya, Maria Antonia; Gregersen, Ida Bülow; Rosbjerg, Dan;

    2015-01-01

    Changes in extreme precipitation are expected to be one of the most important impacts of climate change in cities. Urban floods are mainly caused by short duration extreme events. Hence, robust information on changes in extreme precipitation at high-temporal resolution is required for the design of climate change adaptation measures. However, the quantification of these changes is challenging and subject to numerous uncertainties. This study assesses the changes and uncertainties in extreme precipitation at hourly scale over Denmark. It explores three statistical downscaling approaches: a delta...

  3. Improved radar data processing algorithms for quantitative rainfall estimation in real time.

    Science.gov (United States)

    Krämer, S; Verworn, H R

    2009-01-01

    This paper describes a new methodology to process C-band radar data for direct use as rainfall input to hydrologic and hydrodynamic models and in real time control of urban drainage systems. In contrast to the adjustment of radar data with the help of rain gauges, the new approach accounts for the microphysical properties of current rainfall. In a first step radar data are corrected for attenuation. This phenomenon has been identified as the main cause for the general underestimation of radar rainfall. Systematic variation of the attenuation coefficients within predefined bounds allows robust reflectivity profiling. Secondly, event specific R-Z relations are applied to the corrected radar reflectivity data in order to generate quantitative reliable radar rainfall estimates. The results of the methodology are validated by a network of 37 rain gauges located in the Emscher and Lippe river basins. Finally, the relevance of the correction methodology for radar rainfall forecasts is demonstrated. It has become clearly obvious, that the new methodology significantly improves the radar rainfall estimation and rainfall forecasts. The algorithms are applicable in real time.
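
    The final step of any such processing chain is the R-Z conversion. A minimal sketch using the classic Marshall-Palmer coefficients (the paper instead fits event-specific R-Z relations after attenuation correction, so a and b here are illustrative defaults):

```python
def rain_rate(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate R (mm/h) by inverting
    Z = a * R**b, with Z in linear units (mm^6 m^-3)."""
    z = 10.0 ** (dbz / 10.0)       # dBZ -> linear Z
    return (z / a) ** (1.0 / b)

light = rain_rate(25.0)   # light rain, on the order of 1 mm/h
heavy = rain_rate(45.0)   # convective rain, on the order of 25 mm/h
```

    Because R grows as Z^(1/b), an uncorrected attenuation loss of a few dB translates into a substantial rainfall underestimate, which is why the attenuation correction precedes the conversion.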

  4. Improved quantitative visualization of hypervelocity flow through wavefront estimation based on shadow casting of sinusoidal gratings.

    Science.gov (United States)

    Medhi, Biswajit; Hegde, Gopalakrishna M; Gorthi, Sai Siva; Reddy, Kalidevapura Jagannath; Roy, Debasish; Vasu, Ram Mohan

    2016-08-01

    A simple noninterferometric optical probe is developed to estimate wavefront distortion suffered by a plane wave in its passage through density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a continuous-tone sinusoidal grating. Through a geometrical optics, eikonal approximation to the distorted wavefront, a bilinear approximation to it is related to the location-dependent shift (distortion) suffered by the grating, which can be read out space-continuously from the projected grating image. The processing of the grating shadow is done through an efficient Fourier fringe analysis scheme, either with a windowed or global Fourier transform (WFT and FT). For comparison, wavefront slopes are also estimated from shadows of random-dot patterns, processed through cross correlation. The measured slopes are suitably unwrapped by using a discrete cosine transform (DCT)-based phase unwrapping procedure, and also through iterative procedures. The unwrapped phase information is used in an iterative scheme, for a full quantitative recovery of density distribution in the shock around the model, through refraction tomographic inversion. Hypersonic flow field parameters around a missile-shaped body at a free-stream Mach number of ∼8 measured using this technique are compared with the numerically estimated values. It is shown that, while processing a wavefront with small space-bandwidth product (SBP) the FT inversion gave accurate results with computational efficiency; computation-intensive WFT was needed for similar results when dealing with larger SBP wavefronts.

  5. Quantitative estimation of brain atrophy and function with PET and MRI two-dimensional projection images

    Energy Technology Data Exchange (ETDEWEB)

    Saito, Reiko; Uemura, Koji; Uchiyama, Akihiko [Waseda Univ., Tokyo (Japan). School of Science and Engineering; Toyama, Hinako; Ishii, Kenji; Senda, Michio

    2001-05-01

    The purpose of this paper is to estimate the extent of atrophy and the decline in brain function objectively and quantitatively. Two-dimensional (2D) projection images of three-dimensional (3D) transaxial images of positron emission tomography (PET) and magnetic resonance imaging (MRI) were made by means of the Mollweide method, which preserves the area of the brain surface. A correlation image was generated between 2D projection images of MRI and cerebral blood flow (CBF) or {sup 18}F-fluorodeoxyglucose (FDG) PET images, and the sulcus was extracted from the correlation image clustered by the K-means method. Furthermore, the extent of atrophy was evaluated from the extracted sulcus on the 2D-projection MRI, the cerebral cortical function such as blood flow or glucose metabolic rate was assessed in the cortex excluding the sulcus on the 2D-projection PET image, and the relationship between cerebral atrophy and function was then evaluated. This method was applied to two groups, young and aged normal subjects, and the relationship between age and the rate of atrophy or the cerebral blood flow was investigated. This method was also applied to FDG-PET and MRI studies in normal controls and in patients with corticobasal degeneration. The mean rate of atrophy in the aged group was found to be higher than that in the young. The mean value and the variance of the cerebral blood flow for the young are greater than those of the aged. The sulci were similarly extracted using either CBF or FDG PET images. The proposed method using 2D projection images of MRI and PET is clinically useful for quantitative assessment of atrophic change and functional disorder of the cerebral cortex. (author)

  6. Quantitative Analysis of VIIRS DNB Nightlight Point Source for Light Power Estimation and Stability Monitoring

    Directory of Open Access Journals (Sweden)

    Changyong Cao

    2014-12-01

    Full Text Available The high sensitivity and advanced onboard calibration of the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) enables accurate measurements of low light radiances, which leads to enhanced quantitative applications at night. The finer spatial resolution of the DNB also allows users to examine social economic activities at urban scales. Given the growing interest in the use of DNB data, there is a pressing need for better understanding of the calibration stability and absolute accuracy of the DNB at low radiances. The low light calibration accuracy was previously estimated at a moderate 15% using extended sources, while the long-term stability has yet to be characterized. There are also several science related questions to be answered, for example: how the Earth’s atmosphere and surface variability contribute to the stability of the DNB measured radiances; how to separate them from instrument calibration stability; whether or not SI (International System of Units) traceable active light sources can be designed and installed at selected sites to monitor the calibration stability, radiometric and geolocation accuracy, and point spread functions of the DNB; and furthermore, whether or not such active light sources can be used for detecting environmental changes, such as aerosols. This paper explores the quantitative analysis of nightlight point sources, such as those from fishing vessels, bridges, and cities, using fundamental radiometry and radiative transfer, which would be useful for a number of applications including search and rescue in severe weather events, as well as calibration/validation of the DNB. Time series of the bridge light data are used to assess the stability of the light measurements and the calibration of the VIIRS DNB. It was found that the light radiant power computed from the VIIRS DNB data matched relatively well with independent assessments based on the in situ light installations, although estimates have to be

  7. Relationships between statistics of rainfall extremes and mean annual precipitation: an application for design-storm estimation in northern central Italy

    Directory of Open Access Journals (Sweden)

    G. Di Baldassarre

    2006-01-01

    Full Text Available Several hydrological analyses need to be founded on a reliable estimate of the design storm, which is the expected rainfall depth corresponding to a given duration and probability of occurrence, usually expressed in terms of return period. The annual series of precipitation maxima for storm durations ranging from 15 min to 1 day, observed at a dense network of raingauges sited in northern central Italy, are analyzed using an approach based on L-moments. The analysis investigates the statistical properties of rainfall extremes and detects significant relationships between these properties and the mean annual precipitation (MAP). On the basis of these relationships, we developed a regional model for estimating the rainfall depth for a given storm duration and recurrence interval in any location of the study region. The applicability of the regional model was assessed through Monte Carlo simulations. The uncertainty of the model for ungauged sites was quantified through an extensive cross-validation.
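
    A single-site version of the L-moment approach can be sketched as follows: estimate the first two sample L-moments from an annual-maximum series, fit a Gumbel distribution by the method of L-moments, and read off the T-year design depth. The series is synthetic, and the regional MAP-based relationships of the paper are not reproduced here.

```python
import math

# Annual maximum rainfall depths for one duration at one gauge (synthetic, mm).
annual_max = [42.0, 55.0, 38.0, 61.0, 47.0, 70.0, 52.0, 44.0,
              58.0, 49.0, 66.0, 40.0]

x = sorted(annual_max)
n = len(x)
b0 = sum(x) / n                                        # first sample L-moment
b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
l1, l2 = b0, 2.0 * b1 - b0                             # L-mean and L-scale

# Method-of-L-moments fit of a Gumbel distribution, then the T-year quantile.
alpha = l2 / math.log(2.0)
xi = l1 - 0.5772156649 * alpha                         # Euler-Mascheroni const
T = 100.0
design_depth = xi - alpha * math.log(-math.log(1.0 - 1.0 / T))
```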

  8. Combining weather radar nowcasts and numerical weather prediction models to estimate short-term quantitative precipitation and uncertainty

    DEFF Research Database (Denmark)

    Jensen, David Getreuer

    -TREC based REM. The filter is calibrated against atmospheric observations of radial velocity measured by a Doppler radar. The results from pooled skill scores from 16 events show only a slight improvement. The positive contribution from applying Kalman filtering is increased stability computed...

  9. Quantitative reconstruction of precipitation changes on the NE Tibetan Plateau since the Last Glacial Maximum – extending the concept of pollen source-area to pollen-based climate reconstructions from large lakes

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2013-06-01

    Full Text Available Pollen records from large lakes have been used for quantitative palaeoclimate reconstruction, but the influence that lake size (through species-specific variations in pollen dispersal patterns and taphonomy) has on these climatic signals has not previously been systematically investigated. We introduce the concept of pollen source-area to pollen-based climate calibration, using the climate history of the north-eastern Tibetan Plateau as our study area. We present a pollen data-set collected from large lakes in the arid to semi-arid region of Central Asia. The influences that lake size and the inferred pollen source-areas have on pollen compositions have been investigated through comparisons with pollen assemblages in neighbouring lakes of various sizes. Modern pollen samples collected from different parts of Lake Donggi Cona (in the north-eastern part of the Tibetan Plateau) reveal variations in pollen assemblages within this large lake, which are interpreted in terms of the species-specific dispersal and depositional patterns for different types of pollen, and in terms of fluvial input components. We have estimated the pollen source-area for each lake individually and used this information to infer modern climate data with which to develop a modern calibration data-set, using both the Multivariate Regression Tree (MRT) and Weighted-Averaging Partial Least Squares (WA-PLS) approaches. Fossil pollen data from Lake Donggi Cona have been used to reconstruct the climate history of the north-eastern part of the Tibetan Plateau since the Last Glacial Maximum (LGM). The mean annual precipitation was quantitatively reconstructed using WA-PLS: extremely dry conditions are found to have dominated the LGM, with annual precipitation of around 100 mm, which is only 32% of present-day precipitation. 
A gradually increasing trend in moisture conditions during the Late Glacial is terminated by an abrupt reversion to a dry phase that lasts for about 1000

  10. Quantitative estimation of groundwater recharge ratio along the riparian of the Yellow River.

    Science.gov (United States)

    Yan, Zhang; Fadong, Li; Jing, Li; Qiang, Liu; Guangshuai, Zhao

    2013-01-01

    Quantitative estimation of groundwater recharge is crucial for the management of limited water resources. A combination of isotopic and chemical indicators has been used to evaluate the relationship between surface water, groundwater, and rainfall around the riparian zone of the Yellow River in the North China Plain (NCP). The ion molar ratios of sodium to chloride in surface water and groundwater are 0.6 and 0.9, respectively, indicating cation exchange of Ca(2+) and/or Mg(2+) for Na(+) in groundwater. The δD and δ(18)O values in rainfall varied from -64.4 to -33.4‰ and from -8.39 to -4.49‰. The groundwater samples have δD values in the range of -68.7 to -58.0‰ and δ(18)O from -9.29 to -6.85‰. The δ(18)O and δD in surface water varied from -8.51 to -7.23‰ and from -64.42 to -53.73‰. The average δD and δ(18)O values of surface water are, respectively, 3.92‰ and 0.57‰ higher than those of groundwater. The isotopic composition indicated that the groundwater in the riparian area of the Yellow River was influenced by heavy rainfall events and by seepage of surface water. A mass balance was applied for the first time to estimate the recharge contributions, which are approximately 6% from rainfall and 94% from surface water.
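The mass-balance step described above can be sketched as a two-endmember δ18O mixing calculation, inverted for the recharge fractions; the endmember values below are illustrative stand-ins, not the paper's measured means.

```python
# Two-endmember isotope mass balance: groundwater delta-18O is modeled as a
# mixture of the rainfall and surface-water endmembers, and the mixing
# equation is inverted for the recharge fractions. Values are hypothetical.

def mixing_fractions(d18o_gw, d18o_rain, d18o_sw):
    """Solve d18o_gw = f_rain*d18o_rain + (1 - f_rain)*d18o_sw for the fractions."""
    f_rain = (d18o_gw - d18o_sw) / (d18o_rain - d18o_sw)
    return f_rain, 1.0 - f_rain

f_rain, f_sw = mixing_fractions(d18o_gw=-7.88, d18o_rain=-6.0, d18o_sw=-8.0)
print(f"rainfall: {f_rain:.0%}, surface water: {f_sw:.0%}")  # rainfall: 6%, surface water: 94%
```

With endmembers chosen this way, the inversion reproduces the 6%/94% split reported in the abstract; with real data the endmember means would come from the measured rainfall and surface-water samples.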

  11. Precipitation-induced runoff and leaching from milled peat mining mires by peat types: a comparative method for estimating the loading of water bodies during peat production

    OpenAIRE

    Svahnbäck, Lasse

    2007-01-01

    Precipitation-induced runoff and leaching from milled peat mining mires by peat types: a comparative method for estimating the loading of water bodies during peat production. This research project in environmental geology has arisen out of an observed need to be able to predict more accurately the loading of watercourses with detrimental organic substances and nutrients from already existing and planned peat production areas, since the authorities' capacity for insisting on such predicti...

  12. A new TLC bioautographic assay for qualitative and quantitative estimation of lipase inhibitors.

    Science.gov (United States)

    Tang, Jihe; Zhou, Jinge; Tang, Qingjiu; Wu, Tao; Cheng, Zhihong

    2016-01-01

    Lipase inhibitory assays based on TLC bioautography have made recent progress; however, an assay with greater substrate specificity and quantitative capabilities would advance the efficacy of this particular bioassay. To address these limitations, a new TLC bioautographic assay for detecting lipase inhibitors was developed and validated in this study. The new assay is based on the reaction of lipase with β-naphthyl myristate and the subsequent formation of a purple dye between β-naphthol and Fast Blue B salt (FBB). The relative lipase inhibitory capacity (RLIC) was determined by TLC densitometry with fluorescence detection, expressed as orlistat equivalents in millimoles on a per-sample-weight basis. Six pure compounds and three natural extracts were evaluated for their potential lipase inhibitory activities by this TLC bioautographic assay. β-Naphthyl myristate as the substrate improved the detection sensitivity and specificity significantly. The limit of detection (LOD) of this assay was 0.01 ng for orlistat, the current treatment for obesity. The assay has acceptable accuracy (92.07-105.39%), intra-day and inter-day precision [relative standard deviation (RSD), 2.64-4.40%], as well as intra-plate and inter-plate precision (RSD, 1.8-4.9%). The developed method is rapid, simple, stable, and specific for the screening and estimation of potential lipase inhibitors. Copyright © 2015 John Wiley & Sons, Ltd.

  13. Estimating background-subtracted fluorescence transients in calcium imaging experiments: a quantitative approach.

    Science.gov (United States)

    Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe

    2013-08-01

    Calcium imaging has become a routine technique in neuroscience for subcellular- to network-level investigations. Rapid progress in the development of new indicators and imaging techniques calls for dedicated, reliable analysis methods. In particular, efficient and quantitative background-fluorescence subtraction routines would benefit most of the calcium imaging research field. A method for estimating background-subtracted fluorescence transients that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performance of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail. This method, which applies to recordings of both single cells and bulk-stained tissues, should help improve the statistical comparison of fluorescence calcium signals between experiments and studies.

  14. Estimating the Quantitative Demand of NOAC Antidote Doses on Stroke Units.

    Science.gov (United States)

    Pfeilschifter, Waltraud; Farahmand, Dana; Niemann, Daniela; Ikenberg, Benno; Hohmann, Carina; Abruscato, Mario; Thonke, Sven; Strzelczyk, Adam; Hedtmann, Günther; Neumann-Haefelin, Tobias; Kollmar, Rainer; Singer, Oliver C; Ferbert, Andreas; Steiner, Thorsten; Steinmetz, Helmuth; Reihs, Anke; Misselwitz, Björn; Foerch, Christian

    2016-01-01

    The first specific antidote for non-vitamin K antagonist oral anticoagulants (NOAC) has recently been approved. NOAC antidotes will allow specific treatment for 2 hitherto problematic patient groups: patients with oral anticoagulant therapy (OAT)-associated intracerebral hemorrhage (ICH) and possibly also thrombolysis candidates presenting on OAT. We aimed to estimate the frequency of these events and hence the quantitative demand for antidote doses on a stroke unit. We extracted data on patients with acute ischemic stroke and ICH to estimate the demand for NOAC antidote doses on stroke units. Eighteen percent of ICH patients within 6 h of symptom onset or with an unknown time of symptom onset were on OAT. Given a NOAC share at admission of 40%, about 7% of all ICH patients may qualify for NOAC reversal therapy. Thirteen percent of ischemic stroke patients admitted within 4 h presented on anticoagulation. Given the availability of an appropriate antidote, a NOAC share of 50% could lead to a 6.1% increase in the thrombolysis rate. Stroke units serving populations with a comparable demographic structure should prepare to treat up to 1% of all acute ischemic stroke patients and 7% of all acute ICH patients with NOAC antidotes. These numbers may increase with the mounting prevalence of atrial fibrillation and the increasing use of NOAC. © 2016 S. Karger AG, Basel.

  15. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade.

    Science.gov (United States)

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-06-22

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1-3.9 ppm or 3-9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990-2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations.

  16. Quantitative estimation of concentrations of dissolved rare earth elements using reflectance spectroscopy

    Science.gov (United States)

    Dai, Jingjing; Wang, Denghong; Wang, Runsheng; Chen, Zhenghui

    2013-01-01

    Characteristic spectral parameters such as the wavelength and depth of absorption bands are widely used to quantitatively estimate the composition of samples from hyperspectral reflectance data in soil science and mineralogy as well as in vegetation studies. However, little research has been conducted on the spectral characteristics of rare earth elements (REE) and their relationship with the chemical composition of aqueous solutions. Reflectance spectra of ore leachate solutions and contaminated stream water from several REE mines in Jiangxi Province, China, are studied for the first time in this work. The results demonstrate that six diagnostic absorption features of the rare earths are recognized at visible and near-infrared wavelengths of 574, 790, 736, 520, 861, and 443 nm. The intensity of each of these six absorption bands is linearly correlated with the abundance of total REE, with r2 values >0.95 and a detection limit of ≥75,000 μg/L. It is suggested that reflectance spectroscopy provides an ideal routine analytical tool for characterizing leachate samples. The outcome of this study also has implications for monitoring the environmental effects of REE mining, in particular in stream-water systems, by hyperspectral remote sensing.

  17. The estimation of quantitative parameters of oligonucleotides immobilization on mica surface

    Science.gov (United States)

    Sharipov, T. I.; Bakhtizin, R. Z.

    2017-05-01

    Immobilization of nucleic acids on the surfaces of various materials is increasingly being used in research and in some practical applications, and DNA chip technology in particular is developing rapidly. The immobilization process can be based on either physical adsorption or chemisorption. A useful way to control the immobilization of nucleic acids on a surface is atomic force microscopy, which allows the surface topography to be investigated by direct, high-resolution imaging. Usually, cations are used to fix DNA on the surface of mica; they mediate the interaction between the mica surface and the DNA molecules. In our work we have developed a method for estimating a quantitative parameter of oligonucleotide immobilization, namely the degree of aggregation, as a function of the fixation conditions on the mica surface. Results on the aggregation of oligonucleotides immobilized on the mica surface will be presented. Single oligonucleotide molecules have been imaged clearly, their surface areas have been calculated, and a calibration curve has been plotted.

  18. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade

    Science.gov (United States)

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-06-01

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1–3.9 ppm or 3–9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990–2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations.

  19. Utilization of gel electrophoreses for the quantitative estimation of digestive enzyme papain

    Directory of Open Access Journals (Sweden)

    Magdy M. Muharram

    2017-03-01

    Full Text Available An SDS-PAGE densitometric method for the analysis of papain in pharmaceutical formulations was developed and validated for the first time. Standards and samples were mixed with SDS sample buffer and denatured at 95 °C for 5 min, and the gel was run at 20 mA and 200 V for 30–40 min in SDS-PAGE buffer. Gels were stained in Coomassie blue solution and destained with 5% methanol and 10% acetic acid. Destained gels were imaged and analyzed using the ChemiDoc™ XRS+ System. Bands of papain appeared at an Rf value of 0.78 ± 0.03, corresponding to a molecular weight of 23,406 Da, between the 31,000 and 21,500 Da proteins of the broad-range protein standard. The generated calibration curve was used for quantitative estimation of papain in pharmaceutical formulations. The developed method was validated for precision, accuracy, specificity, and robustness as described by the ICH guidelines. The proposed method offers an alternative approach for enzyme and protein analysis.

  20. An Experimental Study for Quantitative Estimation of Rebar Corrosion in Concrete Using Ground Penetrating Radar

    Directory of Open Access Journals (Sweden)

    Md Istiaque Hasan

    2016-01-01

    Full Text Available Corrosion of steel rebar in reinforced concrete is one of the most important durability issues in the service life of a structure. In this paper, an investigation is conducted to find the relationship between the amount of reinforced concrete corrosion and the GPR maximum positive amplitude. Accelerated corrosion was simulated in the lab by impressing direct current into steel rebars that were submerged in a 5% salt water solution. The amount of corrosion was varied in the rebars, with levels of mass loss ranging from 0% to 45%. The corroded rebars were then placed into three different oil emulsion tanks having different dielectric properties similar to concrete. The maximum amplitudes from the corroded bars were recorded. A linear relationship between the maximum positive amplitudes and the amount of corrosion, in terms of percentage loss of area, was observed. It was proposed that the relationship between the GPR maximum amplitude and the amount of corrosion can be used as the basis of an NDE technique for quantitative estimation of corrosion.
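The linear amplitude-corrosion relationship described above can be sketched as an ordinary least-squares fit that is then inverted to estimate mass loss from a measured amplitude; the calibration points below are hypothetical, not the paper's measurements.

```python
# Fit a line to (mass-loss %, max GPR amplitude) calibration points, then
# invert it to estimate corrosion from a new amplitude reading.
# All data values here are hypothetical illustrations.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

mass_loss = [0.0, 15.0, 30.0, 45.0]   # % loss of rebar cross-sectional area
amplitude = [1.00, 0.85, 0.70, 0.55]  # normalized max positive amplitude

slope, intercept = fit_line(mass_loss, amplitude)
estimated_loss = (0.80 - intercept) / slope  # invert for a reading of 0.80
print(f"estimated mass loss: {estimated_loss:.0f}%")
```

In practice the calibration would be repeated per tank, since the dielectric properties of the surrounding medium shift the amplitude baseline.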

  1. Improved infrared precipitation estimation approaches based on k-means clustering: Application to north Algeria using MSG-SEVIRI satellite data

    Science.gov (United States)

    Mokdad, Fatiha; Haddad, Boualem

    2017-06-01

    In this paper, two new infrared precipitation estimation approaches based on the concept of k-means clustering are first proposed, named the NAW-Kmeans and the GPI-Kmeans methods. They are then adapted to the southern Mediterranean basin, where the subtropical climate prevails. The infrared data (10.8 μm channel) acquired by the MSG-SEVIRI sensor in winter and spring 2012 are used. Tests are carried out in eight areas distributed over northern Algeria: Sebra, El Bordj, Chlef, Blida, Bordj Menael, Sidi Aich, Beni Ourthilane, and Beni Aziz. The validation is performed by comparing the estimated rainfall to rain gauge observations collected by the National Office of Meteorology in Dar El Beida (Algeria). Despite the complexity of the subtropical climate, the obtained results indicate that the NAW-Kmeans and GPI-Kmeans approaches give satisfactory results for the considered rain rates. The proposed schemes also lead to improved precipitation estimation performance when compared to the original NAW (Negri, Adler, and Wetzel) and GPI (GOES Precipitation Index) algorithms.
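A minimal sketch of the clustering idea (not the NAW-Kmeans or GPI-Kmeans algorithms themselves): 1-D k-means groups 10.8 μm brightness temperatures into classes, and colder cloud-top classes would then be assigned higher rain rates. The temperatures and the deterministic initialization are illustrative assumptions.

```python
# 1-D k-means over IR brightness temperatures (K). Initialization is
# deterministic: starting centers are spread across the sorted value range.

def kmeans_1d(values, k, iters=50):
    vals = sorted(values)
    centers = [vals[i * (len(vals) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vals:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # keep the old center if a cluster ends up empty
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Cold convective tops, mid-level cloud, and warm/clear pixels (hypothetical).
tb = [195, 200, 205, 230, 235, 240, 280, 285, 290]
centers = kmeans_1d(tb, k=3)
print(centers)  # [200.0, 235.0, 285.0]; coldest class -> highest rain rate
```

The published methods operate on full satellite scenes and calibrate the class-to-rain-rate mapping against gauges; this sketch only shows the clustering core.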

  2. Advantages of using satellite soil moisture estimates over precipitation products to assess regional vegetation water availability and activity

    Science.gov (United States)

    Chen, Tiexi

    2017-04-01

    To improve the understanding of water-vegetation relationships, direct comparative studies assessing the utility of satellite remotely sensed soil moisture, gridded precipitation products, and land surface model output are needed. A case study was conducted for a water-limited, lateral-inflow-receiving area in northeastern Australia from December 2008 to May 2009. In January 2009, monthly precipitation showed strong positive anomalies, which led to strong positive soil moisture anomalies. The precipitation anomalies disappeared within a month; in contrast, the soil moisture anomalies persisted for months. Positive anomalies of the Normalized Difference Vegetation Index (NDVI) appeared in February, in response to the water supply, and then persisted for several months. In addition to these temporal characteristics, the spatial patterns of the NDVI anomalies were more similar to the soil moisture patterns than to those of precipitation or land surface model output. The long memory of soil moisture mainly relates to the presence of clay-rich soils. Modeled soil moisture from four of five global land surface models failed to capture the memory length of soil moisture, and all five models failed to represent the influence of lateral inflow. This case study indicates that satellite-based soil moisture is a better predictor of vegetation water availability than precipitation in environments where soil moisture has a memory of several months and can thus persistently affect vegetation dynamics. These results illustrate the usefulness of satellite remotely sensed soil moisture in ecohydrology studies. This case study has the potential to be used as a benchmark for global land surface model evaluations. The advantages of using satellite remotely sensed soil moisture over gridded precipitation products are mainly expected in lateral-inflow and/or clay-rich regions worldwide.

  3. Fusing enhanced radar precipitation, in-situ hydrometeorological measurements and airborne LIDAR snowpack estimates in a hyper-resolution hydrologic model to improve seasonal water supply forecasts

    Science.gov (United States)

    Gochis, D. J.; Busto, J.; Howard, K.; Mickey, J.; Deems, J. S.; Painter, T. H.; Richardson, M.; Dugger, A. L.; Karsten, L. R.; Tang, L.

    2015-12-01

    Scarcity of spatially- and temporally-continuous observations of precipitation and snowpack conditions in remote mountain watersheds results in fundamental limitations in water supply forecasting. These limitations in observational capabilities can result in strong biases in total snowmelt-driven runoff amount, the elevational distribution of runoff, river basin tributary contributions to total basin runoff and, equally important for water management, the timing of runoff. The Upper Rio Grande River basin in Colorado and New Mexico is one basin where observational deficiencies are hypothesized to have significant adverse impacts on estimates of snowpack melt-out rates and on water supply forecasts. We present findings from a coordinated observational-modeling study within the Upper Rio Grande River basin whose aim was to quantify the impact of enhanced precipitation, meteorological, and snowpack measurements on the simulation and prediction of snowmelt-driven streamflow. The Rio Grande SNOwpack and streamFLOW (RIO-SNO-FLOW) Prediction Project conducted enhanced observing activities during the 2014-2015 water year. Measurements from a gap-filling polarimetric radar (NOXP) and from in-situ meteorological and snowpack measurement stations were assimilated into the WRF-Hydro modeling framework to provide continuous analyses of snowpack and streamflow conditions. Airborne lidar estimates of snowpack conditions from the NASA Airborne Snow Observatory during mid-April and mid-May were used as additional independent validation of the various model simulations and forecasts of snowpack conditions during the melt-out season. Uncalibrated WRF-Hydro model performance from simulations and forecasts driven by the enhanced observational analyses was compared against results driven by currently operational data inputs. Precipitation estimates from the NOXP research radar validate significantly better against independent in situ observations of precipitation and snow-pack increases

  4. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    Institute of Scientific and Technical Information of China (English)

    Frank M. You; Qijian Song; Gaofeng Jia; Yanzhao Cheng; Scott Duguid; Helen Booker; Sylvie Cloutier

    2016-01-01

    The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic correlation of quantitative traits, the two conventional genetic parameters used for breeding selection. We propose a method of estimating the error variance of unreplicated genotypes that uses replicated controls, and then of estimating the genetic parameters. Using the Delta method, we also derived formulas for estimating the sampling variances of the genetic parameters. Computer simulations indicated that the proposed method for estimating genetic parameters and their sampling variances was feasible and the reliability of the estimates was positively associated with the level of heritability of the trait. A case study of estimating the genetic parameters of three quantitative traits, iodine value, oil content, and linolenic acid content, in a biparental recombinant inbred line population of flax with 243 individuals, was conducted using our statistical models. A joint analysis of data over multiple years and sites was suggested for genetic parameter estimation. A pipeline module using SAS and Perl was developed to facilitate data analysis and appended to the previously developed MAD data analysis pipeline (http://probes.pw.usda.gov/bioinformatics_tools/MADPipeline/index.html).
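The Delta-method step can be illustrated for heritability h² = Vg/(Vg + Ve): the sampling variances of the variance components are propagated through the partial derivatives of h². This is a generic sketch of the technique, not the paper's derived formulas, and the variance-component values are hypothetical, not estimates from the flax population.

```python
# Delta-method approximation of the sampling variance of heritability
# h2 = Vg / (Vg + Ve). Inputs (variance components and their sampling
# variances/covariance) are hypothetical illustration values.

def h2_delta_var(vg, ve, var_vg, var_ve, cov=0.0):
    vp = vg + ve
    h2 = vg / vp
    d_vg = ve / vp ** 2    # partial derivative dh2/dVg
    d_ve = -vg / vp ** 2   # partial derivative dh2/dVe
    var_h2 = d_vg ** 2 * var_vg + d_ve ** 2 * var_ve + 2 * d_vg * d_ve * cov
    return h2, var_h2

h2, var_h2 = h2_delta_var(vg=4.0, ve=1.0, var_vg=0.5, var_ve=0.1)
print(f"h2 = {h2:.2f}, approx. SE = {var_h2 ** 0.5:.3f}")
```

In the MAD2 setting, Ve would come from the replicated controls and Vg from subtracting it from the total variance of the unreplicated genotypes, as the abstract describes.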

  5. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    Institute of Scientific and Technical Information of China (English)

    Frank M. You; Qijian Song; Gaofeng Jia; Yanzhao Cheng; Scott Duguid; Helen Booker; Sylvie Cloutier

    2016-01-01

    The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic correlation of quantitative traits, the two conventional genetic parameters used for breeding selection. We propose a method of estimating the error variance of unreplicated genotypes that uses replicated controls, and then of estimating the genetic parameters. Using the Delta method, we also derived formulas for estimating the sampling variances of the genetic parameters. Computer simulations indicated that the proposed method for estimating genetic parameters and their sampling variances was feasible and the reliability of the estimates was positively associated with the level of heritability of the trait. A case study of estimating the genetic parameters of three quantitative traits, iodine value, oil content, and linolenic acid content, in a biparental recombinant inbred line population of flax with 243 individuals, was conducted using our statistical models. A joint analysis of data over multiple years and sites was suggested for genetic parameter estimation. A pipeline module using SAS and Perl was developed to facilitate data analysis and appended to the previously developed MAD data analysis pipeline (http://probes.pw.usda.gov/bioinformatics_tools/MADPipeline/index.html).

  6. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    Directory of Open Access Journals (Sweden)

    Frank M. You

    2016-04-01

    Full Text Available The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic correlation of quantitative traits, the two conventional genetic parameters used for breeding selection. We propose a method of estimating the error variance of unreplicated genotypes that uses replicated controls, and then of estimating the genetic parameters. Using the Delta method, we also derived formulas for estimating the sampling variances of the genetic parameters. Computer simulations indicated that the proposed method for estimating genetic parameters and their sampling variances was feasible and the reliability of the estimates was positively associated with the level of heritability of the trait. A case study of estimating the genetic parameters of three quantitative traits, iodine value, oil content, and linolenic acid content, in a biparental recombinant inbred line population of flax with 243 individuals, was conducted using our statistical models. A joint analysis of data over multiple years and sites was suggested for genetic parameter estimation. A pipeline module using SAS and Perl was developed to facilitate data analysis and appended to the previously developed MAD data analysis pipeline (http://probes.pw.usda.gov/bioinformatics_tools/MADPipeline/index.html).

  7. Comparison analysis of sampling methods to estimate regional precipitation based on the Kriging interpolation methods:A case of northwestern China

    Institute of Scientific and Technical Information of China (English)

    JinKui Wu; ShiWei Liu; LePing Ma; Jia Qin; JiaXin Zhou; Hong Wei

    2016-01-01

    The accuracy of spatial interpolation of precipitation data is determined by the actual spatial variability of the precipitation, the interpolation method, and the distribution of observation stations, whose selection is particularly important. In this paper, three spatial sampling programs, including spatial random sampling, spatial stratified sampling, and spatial sandwich sampling, are used to analyze data from meteorological stations of northwestern China. We compared the accuracy of ordinary Kriging interpolation methods on the basis of the sampling results. The error values of the regional annual precipitation interpolation based on spatial sandwich sampling, including ME (0.1513), RMSE (95.91), ASE (101.84), MSE (−0.0036), and RMSSE (1.0397), were optimal under the premise of abundant prior knowledge. The result of spatial stratified sampling was poor, and that of spatial random sampling was even worse. Spatial sandwich sampling was the best sampling method, minimizing the error of regional precipitation estimation. It had a higher degree of accuracy compared with the other two methods and a wider scope of application.
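The error statistics quoted above (ME, RMSE, ASE, MSE, RMSSE) are standard kriging cross-validation diagnostics. A sketch of how they are computed from held-out observations, predictions, and kriging prediction standard errors; all data values below are hypothetical.

```python
import math

# Kriging cross-validation diagnostics: mean error (ME), root mean square
# error (RMSE), average standard error (ASE), mean standardized error (MSE),
# and root mean square standardized error (RMSSE).

def cv_metrics(obs, pred, pred_se):
    err = [p - o for o, p in zip(obs, pred)]
    n = len(err)
    me = sum(err) / n
    rmse = math.sqrt(sum(e * e for e in err) / n)
    ase = sum(pred_se) / n
    mse = sum(e / s for e, s in zip(err, pred_se)) / n
    rmsse = math.sqrt(sum((e / s) ** 2 for e, s in zip(err, pred_se)) / n)
    return me, rmse, ase, mse, rmsse  # RMSSE near 1: standard errors well calibrated

obs = [120.0, 95.0, 210.0, 160.0]   # held-out annual precipitation (mm)
pred = [118.0, 99.0, 205.0, 166.0]  # kriging predictions at those stations
pred_se = [5.0, 4.0, 6.0, 5.0]      # kriging prediction standard errors

me, rmse, ase, mse, rmsse = cv_metrics(obs, pred, pred_se)
print(f"ME={me:.2f} RMSE={rmse:.2f} ASE={ase:.2f} MSE={mse:.3f} RMSSE={rmsse:.3f}")
```

ME and MSE near zero indicate low bias, ASE close to RMSE and RMSSE close to 1 indicate that the kriging standard errors are consistent with the actual prediction errors, which is how the paper ranks the three sampling schemes.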

  8. Influence of Changing Hydrology on Pedogenic Calcite Precipitation in Vertisols, Dance Bayou, Brazoria County, Tx: Implications for Estimating Paleoatmospheric PCO2

    Science.gov (United States)

    Mintz, J. S.; Driese, S. G.; Ludvigson, G. A.; Breecker, D. O.

    2010-12-01

Pedogenic (soil-formed) calcites preserved in the sedimentary record enable estimation of paleoatmospheric pCO2 using the calcite paleobarometer. A fundamental assumption for applying this paleobarometer is that the calcite precipitated while the soil was in communication with the atmosphere, so that atmospheric CO2 concentrations had a direct influence on the calcite δ13C value. Here we address the timing of calcite precipitation in relation to the soil saturation state and atmosphere connectivity in a modern Vertisol (smectitic, clay-rich soil, seasonally saturated) in Brazoria County, Texas. Luminescent phases of calcite growth have more negative δ13C values (avg. δ13C = -11.1 ‰ PDB) than the non-luminescent phases (avg. δ13C = -2.8 ‰ PDB). The luminescent phase formed during the water-saturated portion of the year, limiting the soil connectivity with the atmosphere, minimizing the incorporation of atmospheric CO2, and negating its use for pCO2 estimations. The non-luminescent phase formed during the well-drained portion of the year when atmospheric CO2 mixed with soil-respired CO2 and is therefore useful for pCO2 estimation. From these results we present a model to independently test the saturation state of a paleosol at the time of pedogenic carbonate precipitation. Finally, we calculate soil respiration rates that are an order of magnitude lower than those typically assumed in the paleobarometer equation. Many of the pCO2 estimates through the Phanerozoic are based on carbonate in paleo-Vertisols and may therefore be overestimated or potentially invalid. [Figure: (A) comparison of typical morphotype isotope values with interpreted soil conditions at the time of calcite precipitation; (B) model of carbonate precipitation of non-luminescent calcite nodules (dark circles) in the well-drained portion of the season (top) and luminescent calcite nodules (light circles) in the saturated portion of the season (bottom), in a Vertisol at Dance Bayou, Brazoria Co]

  9. Scoping a field experiment: error diagnostics of TRMM precipitation radar estimates in complex terrain as a basis for IPHEx2014

    Directory of Open Access Journals (Sweden)

    Y. Duan

    2014-10-01

in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the Southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter), and especially in the inner region. Although UND dominates the magnitude error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total, consistent with regional hydrometeorology. The 2A25 V7 product underestimates low-level orographic enhancement of rainfall associated with fog, cap clouds and cloud-to-cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the terrain topography mask used to remove ground clutter effects. Precipitation associated with small-scale systems (2 and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is, raingauge underestimation errors due to under-catch and precipitation phase.
Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non-uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and

  10. Real-Time Global Flood Estimation Using Satellite-Based Precipitation and a Coupled Land Surface and Routing Model

    Science.gov (United States)

    Wu, Huan; Adler, Robert F.; Tian, Yudong; Huffman, George J.; Li, Hongyi; Wang, JianJian

    2014-01-01

    A widely used land surface model, the Variable Infiltration Capacity (VIC) model, is coupled with a newly developed hierarchical dominant river tracing-based runoff-routing model to form the Dominant river tracing-Routing Integrated with VIC Environment (DRIVE) model, which serves as the new core of the real-time Global Flood Monitoring System (GFMS). The GFMS uses real-time satellite-based precipitation to derive flood monitoring parameters for the latitude band 50 deg. N - 50 deg. S at relatively high spatial (approximately 12 km) and temporal (3 hourly) resolution. Examples of model results for recent flood events are computed using the real-time GFMS (http://flood.umd.edu). To evaluate the accuracy of the new GFMS, the DRIVE model is run retrospectively for 15 years using both research-quality and real-time satellite precipitation products. Evaluation results are slightly better for the research-quality input and significantly better for longer duration events (3 day events versus 1 day events). Basins with fewer dams tend to provide lower false alarm ratios. For events longer than three days in areas with few dams, the probability of detection is approximately 0.9 and the false alarm ratio is approximately 0.6. In general, these statistical results are better than those of the previous system. Streamflow was evaluated at 1121 river gauges across the quasi-global domain. Validation using real-time precipitation across the tropics (30 deg. S - 30 deg. N) gives positive daily Nash-Sutcliffe Coefficients for 107 out of 375 (28%) stations with a mean of 0.19 and 51% of the same gauges at monthly scale with a mean of 0.33. There were poorer results in higher latitudes, probably due to larger errors in the satellite precipitation input.
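The daily and monthly Nash-Sutcliffe coefficients quoted for the streamflow validation follow the standard NSE definition, sketched here independently of the GFMS/DRIVE code:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the
    simulation is no better than the observed mean, and negative
    values mean it is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

By this measure, the positive daily coefficients at 107 of 375 tropical stations indicate the model beat the climatological-mean baseline there.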

  11. First Evaluation of the Climatological Calibration Algorithm in the Real-time TMPA Precipitation Estimates over Two Basins at High and Low Latitudes

    Science.gov (United States)

    Yong, Bin; Ren, Liliang; Hong, Yang; Gourley, Jonathan; Tian, Yudong; Huffman, George J.; Chen, Xi; Wang, Weiguang; Wen, Yixin

    2013-01-01

The TRMM Multi-satellite Precipitation Analysis (TMPA) system underwent a crucial upgrade in early 2009 to include a climatological calibration algorithm (CCA) in its real-time product 3B42RT, and this algorithm will continue to be applied to constellation precipitation products in the future Global Precipitation Measurement era. In this study, efforts are focused on the comparison and validation of the Version 6 3B42RT estimates before and after the climatological calibration is applied. The evaluation is accomplished using independent rain gauge networks located within the high-latitude Laohahe basin and the low-latitude Mishui basin, both in China. The analyses indicate that the CCA can effectively reduce the systematic errors over the low-latitude Mishui basin but misrepresents the intensity distribution pattern of medium-high rain rates. This behavior could adversely affect TMPA's hydrological applications, especially for extreme events (e.g., floods and landslides). Results also show that the CCA tends to perform slightly worse, in particular during summer and winter, over the high-latitude Laohahe basin. This is possibly due to the simplified calibration-processing scheme in the CCA, which directly applies the climatological calibrators developed within 40 degrees latitude to the latitude belts of 40 degrees N-50 degrees N. Caution should therefore be exercised when using the calibrated 3B42RT for heavy rainfall-related flood forecasting (or landslide warning) over high-latitude regions, as the employment of the smooth-fill scheme in the CCA bias correction could homogenize the varying rainstorm characteristics. Finally, this study highlights that accurate detection and estimation of snow at high latitudes is still a challenging task for the future development of satellite precipitation retrievals.

  12. Precipitation rates and atmospheric heat transport during the Cenomanian greenhouse warming in North America: Estimates from a stable isotope mass-balance model

    Science.gov (United States)

    Ufnar, David F.; Ludvigson, Greg A.; Gonzalez, L.; Grocke, D.R.

    2008-01-01

Stable isotope mass-balance modeling results of meteoric δ18O values from the Cenomanian Stage of the Cretaceous Western Interior Basin (KWIB) suggest that precipitation and evaporation fluxes were greater than those of the present and significantly different from simulations of Albian KWIB paleohydrology. Sphaerosiderite meteoric δ18O values have been compiled from the Lower Tuscaloosa Formation of southwestern Mississippi (25°N paleolatitude), the Dakota Formation Rose Creek Pit, Fairbury, Nebraska (35°N), and the Dunvegan Formation of eastern British Columbia (55°N paleolatitude). These paleosol siderite δ18O values define a paleolatitudinal gradient ranging from −4.2‰ VPDB at 25°N to −12.5‰ VPDB at 55°N. This trend is significantly steeper and more depleted than a modern theoretical siderite gradient (25°N: −1.7‰; 65°N: −5.6‰ VPDB) and a Holocene meteoric calcite trend (27°N: −3.6‰; 67°N: −7.4‰ VPDB). The Cenomanian gradient is also comparatively steeper than the Albian trend determined for the KWIB in the mid- to high latitudes. The steep latitudinal trend in meteoric δ18O values may be the result of increased precipitation and evaporation fluxes (amount effects) under a more vigorous greenhouse-world hydrologic cycle. A stable-isotope mass-balance model has been used to generate estimates of precipitation and evaporation fluxes and precipitation rates. Estimates of Cenomanian precipitation rates based upon the mass-balance modeling of the KWIB range from 1400 mm/yr at 25°N paleolatitude to 3600 mm/yr at 45°N paleolatitude. The precipitation-evaporation (P-E) flux values were used to delineate zones of moisture surplus and moisture deficit. Comparisons between Cenomanian P-E and modern theoretical siderite and Holocene calcite latitudinal trends show an amplification of low-latitude moisture deficits between 5-25°N paleolatitude and moisture surpluses between 40-60°N paleolatitude. The low-latitude moisture deficits

  13. Recovering the primary geochemistry of Jack Hills zircons through quantitative estimates of chemical alteration

    Science.gov (United States)

    Bell, Elizabeth A.; Boehnke, Patrick; Harrison, T. Mark

    2016-10-01

    Despite the robust nature of zircon in most crustal and surface environments, chemical alteration, especially associated with radiation damaged regions, can affect its geochemistry. This consideration is especially important when drawing inferences from the detrital record where the original rock context is missing. Typically, alteration is qualitatively diagnosed through inspection of zircon REE patterns and the style of zoning shown by cathodoluminescence imaging, since fluid-mediated alteration often causes a flat, high LREE pattern. Due to the much lower abundance of LREE in zircon relative both to other crustal materials and to the other REE, disturbance to the LREE pattern is the most likely first sign of disruption to zircon trace element contents. Using a database of 378 (148 new) trace element and 801 (201 new) oxygen isotope measurements on zircons from Jack Hills, Western Australia, we propose a quantitative framework for assessing chemical contamination and exchange with fluids in this population. The Light Rare Earth Element Index is scaled on the relative abundance of light to middle REE, or LREE-I = (Dy/Nd) + (Dy/Sm). LREE-I values vary systematically with other known contaminants (e.g., Fe, P) more faithfully than other suggested proxies for zircon alteration (Sm/La, various absolute concentrations of LREEs) and can be used to distinguish primary compositions when textural evidence for alteration is ambiguous. We find that zircon oxygen isotopes do not vary systematically with placement on or off cracks or with degree of LREE-related chemical alteration, suggesting an essentially primary signature. By omitting zircons affected by LREE-related alteration or contamination by mineral inclusions, we present the best estimate for the primary igneous geochemistry of the Jack Hills zircons. This approach increases the available dataset by allowing for discrimination of on-crack analyses (and analyses with ambiguous or no information on spot placement or
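The proposed index has a simple closed form taken directly from the abstract; a one-function sketch (the function name is mine):

```python
def lree_index(dy, nd, sm):
    """Light Rare Earth Element Index, LREE-I = (Dy/Nd) + (Dy/Sm),
    with all concentrations in the same units (e.g., ppm). As the
    abstract describes, higher values indicate a steep, likely primary
    LREE pattern; low values flag possible fluid-mediated LREE
    enrichment or inclusion contamination."""
    return dy / nd + dy / sm
```

For example, a grain with Dy = 10, Nd = 2, and Sm = 5 ppm gives LREE-I = 5 + 2 = 7.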

  14. A quantitative framework for estimating risk of collision between marine mammals and boats

    Science.gov (United States)

    Martin, Julien; Sabatier, Quentin; Gowan, Timothy A.; Giraud, Christophe; Gurarie, Eliezer; Calleson, Scott; Ortega-Ortiz, Joel G.; Deutsch, Charles J.; Rycyk, Athena; Koslovsky, Stacie M.

    2016-01-01

    Speed regulations of watercraft in protected areas are designed to reduce lethal collisions with wildlife but can have economic consequences. We present a quantitative framework for investigating the risk of deadly collisions between boats and wildlife.

  15. On-line estimation of the dissolved zinc concentration during ZnS precipitation in a continuous stirred tank reactor (CSTR).

    Science.gov (United States)

    Grootscholten, T I M; Keesman, K J; Lens, P N L

    2008-01-01

    In this paper a method is presented to estimate the reaction term of zinc sulphide precipitation and the zinc concentration in a CSTR, using the read-out signal of a sulphide selective electrode. The reaction between zinc and sulphide is described by a non-linear model and therefore classical observer theory cannot be applied directly, as this theory was initially developed for linear systems. However, by linear reparametrization of this non-linear system, the linear observer theory can be applied in an effective way. This is illustrated by a zinc sulphide example using real data.

  16. Quantitative analysis of low-density SNP data for parentage assignment and estimation of family contributions to pooled samples.

    Science.gov (United States)

    Henshall, John M; Dierens, Leanne; Sellars, Melony J

    2014-09-02

    While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are
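One element of the approach, treating genotypes as quantitative allele dosages so that the same machinery applies to individuals and to pooled DNA, can be illustrated with a toy allele-frequency estimator. This is a deliberate simplification of the paper's matrix maximum-likelihood formulation:

```python
import numpy as np

def allele_freq_from_dosages(dosages):
    """Allele frequency from quantitative genotypes expressed as
    expected allele dosages in [0, 2]. Illustrative helper: the mean
    dosage divided by 2 estimates the allele frequency, whether the
    dosages come from called individuals or from a pooled-DNA signal."""
    d = np.asarray(dosages, float)
    return d.mean() / 2.0
```

With discrete genotypes {0, 1, 2} this reduces to the usual allele count; with uncertain genotypes the expected dosages carry the quantitative information forward.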

  17. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    Science.gov (United States)

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. (©) RSNA, 2016 Online supplemental material is available for this article.
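The Bland-Altman limits of agreement used to compare density estimates across dose protocols are the bias ± 1.96 standard deviations of the paired differences; a generic sketch (not the study's software):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement between two measurement
    methods (paired readings a and b of the same quantity)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)               # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Narrow limits centered near zero, as reported here across dose protocols, indicate the two acquisitions can be used interchangeably for density estimation.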

  18. Rainout assessment: the ACRA system and summaries of simulation results. [Computer systems to estimate threats from precipitation scavenging of radioactive debris clouds from nuclear weapons

    Energy Technology Data Exchange (ETDEWEB)

    Watson, C.W.; Barr, S.; Allenson, R.E.

    1977-09-01

    A generalized, three-dimensional, integrated computer code system was developed to estimate collateral-damage threats from precipitation-scavenging (rainout) of airborne debris-clouds from defensive tactical nuclear engagements. This code system, called ACRA for Atmospheric-Contaminant Rainout Assessment, is based on Monte Carlo statistical simulation methods that allow realistic, unbiased simulations of probabilistic storm, wind, and precipitation fields that determine actual magnitudes and probabilities of rainout threats. Detailed models (or data bases) are included for synoptic-scale storm and wind fields; debris transport and dispersal (with the roles of complex flow fields, time-dependent diffusion, and multidimensional shear effects accounted for automatically); microscopic debris-precipitation interactions and scavenging probabilities; air-to-ground debris transport; local demographic features, for assessing actual threats to populations; and nonlinear effects accumulations from multishot scenarios. We simulated several hundred representative shots for West European scenarios and climates to study single-shot and multishot sensitivities of rainout effects to variations in pertinent physical variables.

  19. Estimation of extreme floods : At-site flood estimates by application of a hydrological model and long synthetic series of precipitation and temperature

    OpenAIRE

    Barkved, Line Johanne

    2007-01-01

Estimation of extreme floods, based on statistical analyses of observed flood series, is associated with several problems leading to high uncertainty in the flood estimates. In particular, this is related to the length of the observation series and the extrapolation outside the range of observed values when estimating rare extreme floods. In many cases, short hydrological records are the rule rather than the exception. This thesis targets at-site, single station, fr...

  20. Bone Structure and Estimated Bone Strength in Obese Patients Evaluated by High-Resolution Peripheral Quantitative Computed Tomography

    DEFF Research Database (Denmark)

    Andersen, Stine; Frederiksen, Katrine Diemer; Hansen, Stinus;

    2014-01-01

    Obesity is associated with high bone mineral density (BMD), but whether obesity-related higher bone mass increases bone strength and thereby protect against fractures is uncertain. We estimated effects of obesity on bone microarchitecture and estimated strength in 36 patients (12 males and 24...... females, age 25-56 years and BMI 33.2-57.6 kg/m(2)) matched with healthy controls (age 25-54 years and BMI 19.5-24.8 kg/m(2)) in regard to gender, menopausal status, age (±6 years) and height (±6 cm) using high resolution peripheral quantitative computed tomography and dual energy X-ray absorptiometry...

  1. Assessing the potential of satellite-based precipitation estimates for flood frequency analysis in ungauged or poorly gauged tributaries of China's Yangtze River basin

    Science.gov (United States)

    Gao, Zhen; Long, Di; Tang, Guoqiang; Zeng, Chao; Huang, Jiesheng; Hong, Yang

    2017-07-01

    Flood frequency analysis (FFA) is critical for water resources engineering projects, particularly the design of hydraulic structures such as dams and reservoirs. However, it is often difficult to implement FFA in ungauged or poorly gauged basins because of the lack of consistent and long-term records of streamflow observations. The objective of this study was to evaluate the utility of satellite-based precipitation estimates for performing FFA in two presumably ungauged tributaries, the Jialing and Tuojiang Rivers, of the upper Yangtze River. Annual peak flow series were simulated using the Coupled Routing and Excess STorage (CREST) hydrologic model. Flood frequency was estimated by fitting the Pearson type III distribution of both observed and modeled streamflow with historic floods. Comparison of satellite-based precipitation products with a ground-based daily precipitation dataset for the period 2002-2014 reveals that 3B42V7 outperformed 3B42RT. The 3B42V7 product also shows consistent reliability in streamflow simulation and FFA (e.g., relative errors -20%-5% in the Jialing River). The results also indicate that complex terrain, drainage area, and reservoir construction are important factors that impact hydrologic model performance. The larger basin (156,736 km2) is more likely to produce satisfactory results than the small basin (19,613 km2) under similar circumstances (e.g., Jialing/Tuojiang calibrated by 3B42V7 for the calibration period: NSCE = 0.71/0.56). Using the same calibrated parameter sets from the entire Jialing River basin, the 3B42V7/3B42RT-driven hydrologic model performs better for two tributaries of the Jialing River (e.g., for the calibration period, NSCE = 0.71/0.60 in the Qujiang River basin and 0.54/0.38 in the Fujiang River basin) than for the upper mainstem of the Jialing River (NSCE = 0.34/0.32), which has more cascaded reservoirs with all these tributaries treated as ungauged basins for model validation. 
Overall, this study underscores
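The core FFA step, fitting a Pearson type III distribution to an annual peak-flow series and reading off design floods for given return periods, can be sketched with SciPy. The data below are hypothetical; the study fits both observed and CREST-simulated peaks:

```python
import numpy as np
from scipy import stats

# Hypothetical annual peak flows (m^3/s).
peaks = np.array([1200.0, 950.0, 1800.0, 2100.0, 1400.0,
                  1650.0, 990.0, 2500.0, 1750.0, 1300.0])

# Fit Pearson type III (SciPy parameterizes it by skew, loc, scale).
skew, loc, scale = stats.pearson3.fit(peaks)

# Design flood for return period T years is the (1 - 1/T) quantile.
for T in (10, 50, 100):
    q = stats.pearson3.ppf(1 - 1 / T, skew, loc=loc, scale=scale)
    print(f"{T}-yr flood: {q:.0f} m^3/s")
```

In practice the fit would use a much longer simulated series (as in the 15-year retrospective runs here) and, as in the paper, incorporate historic floods.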

  2. [Quantitative estimation of vegetation cover and management factor in USLE and RUSLE models by using remote sensing data: a review].

    Science.gov (United States)

    Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie

    2012-06-01

    Soil loss prediction models such as universal soil loss equation (USLE) and its revised universal soil loss equation (RUSLE) are the useful tools for risk assessment of soil erosion and planning of soil conservation at regional scale. To make a rational estimation of vegetation cover and management factor, the most important parameters in USLE or RUSLE, is particularly important for the accurate prediction of soil erosion. The traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly extract the vegetation cover and management factor at macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for the estimation of vegetation cover and management factor over broad geographic areas. This paper summarized the research findings on the quantitative estimation of vegetation cover and management factor by using remote sensing data, and analyzed the advantages and the disadvantages of various methods, aimed to provide reference for the further research and quantitative estimation of vegetation cover and management factor at large scale.

  3. Estimating distributions out of qualitative and (semi)quantitative microbiological contamination data for use in risk assessment.

    Science.gov (United States)

    Busschaert, P; Geeraerd, A H; Uyttendaele, M; Van Impe, J F

    2010-04-15

A framework using maximum likelihood estimation (MLE) is used to fit a probability distribution to a set of qualitative (e.g., absence in 25 g), semi-quantitative (e.g., presence in 25 g and absence in 1 g) and/or quantitative test results (e.g., 10 CFU/g). Uncertainty about the parameters of the variability distribution is characterized through a non-parametric bootstrapping method. The resulting distribution function can be used as an input for a second-order Monte Carlo simulation in quantitative risk assessment. As an illustration, the method is applied to two sets of in silico generated data. It is demonstrated that correct interpretation of data results in an accurate representation of the contamination level distribution. Subsequently, two case studies are analyzed, namely (i) quantitative analyses of Campylobacter spp. in food samples with nondetects, and (ii) combined quantitative, qualitative, semi-quantitative analyses and nondetects of Listeria monocytogenes in smoked fish samples. The first of these case studies is also used to illustrate the influence of the limit of quantification, measurement error, and the number of samples included in the data set. Application of these techniques offers a way for meta-analysis of the many relevant yet diverse data sets that are available in the literature and (inter)national reports of surveillance or baseline surveys, therefore increasing the information input of a risk assessment and, by consequence, the correctness of its outcome.
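The likelihood idea, where exact results contribute a density term while qualitative results contribute censored (cdf) terms, can be sketched for an assumed normal distribution of log10 contamination levels. The data, distribution choice, and names are illustrative, not the paper's:

```python
import numpy as np
from scipy import stats, optimize

# Quantified results in log10 CFU/g (hypothetical data).
exact = np.log10([10.0, 40.0, 120.0])
# "Absence in 25 g" implies a level below 1/25 = 0.04 CFU/g.
detect_limit = np.log10(1.0 / 25.0)
n_absent = 5  # number of qualitative "absence in 25 g" results

def neg_log_lik(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    ll = stats.norm.logpdf(exact, mu, sigma).sum()               # exact: pdf terms
    ll += n_absent * stats.norm.logcdf(detect_limit, mu, sigma)  # nondetects: cdf
    return -ll

res = optimize.minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
```

A semi-quantitative result (presence in 25 g, absence in 1 g) would add an interval term log(cdf(upper) − cdf(lower)) in the same way, and bootstrapping the data set would characterize parameter uncertainty as the paper describes.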

  4. Reproducibility of CSF quantitative culture methods for estimating rate of clearance in cryptococcal meningitis.

    Science.gov (United States)

    Dyal, Jonathan; Akampurira, Andrew; Rhein, Joshua; Morawski, Bozena M; Kiggundu, Reuben; Nabeta, Henry W; Musubire, Abdu K; Bahr, Nathan C; Williams, Darlisha A; Bicanic, Tihana; Larsen, Robert A; Meya, David B; Boulware, David R

    2016-05-01

Quantitative cerebrospinal fluid (CSF) cultures provide a measure of disease severity in cryptococcal meningitis. The fungal clearance rate by quantitative cultures has become a primary endpoint for phase II clinical trials. This study determined the inter-assay accuracy of three different quantitative culture methodologies. Among 91 participants with meningitis symptoms in Kampala, Uganda, during August-November 2013, 305 CSF samples were prospectively collected from patients at multiple time points during treatment. Samples were simultaneously cultured by three methods: (1) St. George's 100 mcl input volume of CSF with five 1:10 serial dilutions; (2) AIDS Clinical Trials Group (ACTG) method using 1000, 100, and 10 mcl input volumes, and two 1:100 dilutions with 100 and 10 mcl input volume per dilution on seven agar plates; and (3) 10 mcl calibrated loop of undiluted and 1:100 diluted CSF (loop). Quantitative culture values did not statistically differ between the St. George and ACTG methods (P = .09) but did for St. George versus the 10 mcl loop (P < .001). Repeated-measures pairwise correlation between any of the methods was high (r ≥ 0.88). For detecting sterility, the ACTG method had the highest negative predictive value of 97% (91% St. George, 60% loop), but the ACTG method had occasional (∼10%) difficulties in quantification due to colony clumping. For CSF clearance rate, the St. George and ACTG methods did not differ overall (mean −0.05 ± 0.07 log10 CFU/ml/day; P = .14) on a group level; however, individual-level clearance varied. The St. George and ACTG quantitative CSF culture methods produced comparable but not identical results. Quantitative cultures can inform treatment management strategies.
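The clearance rate reported above (in log10 CFU/ml/day) is the slope of a linear regression of log10 CFU/ml against day of treatment; a minimal sketch with hypothetical serial cultures from one patient:

```python
import numpy as np

# Hypothetical serial quantitative cultures from one patient.
days = np.array([0.0, 3.0, 7.0, 14.0])     # day of each lumbar puncture
log_cfu = np.array([5.2, 4.4, 3.1, 1.4])   # log10 CFU/ml of CSF

# Clearance rate = least-squares slope; more negative = faster clearance.
slope, intercept = np.polyfit(days, log_cfu, 1)
print(f"clearance rate: {slope:.2f} log10 CFU/ml/day")
```

Comparing such per-patient slopes across culture methods is how the group-level agreement (mean −0.05 ± 0.07 log10 CFU/ml/day) was assessed.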

  5. Simultaneous imaging of aurora on small scale in OI (777.4 nm) and N21P to estimate energy and flux of precipitation

    Directory of Open Access Journals (Sweden)

    N. Ivchenko

    2009-07-01

Simultaneous images of the aurora in three emissions, N21P (673.0 nm), OII (732.0 nm) and OI (777.4 nm), have been analysed; the ratio of atomic oxygen to molecular nitrogen emissions has been used to provide estimates of the changes in energy and flux of precipitation within scale sizes of 100 m, and with temporal resolution of 32 frames per second. The choice of filters for the imagers is discussed, with particular emphasis on the choice of the atomic oxygen line at 777.4 nm as one of the three emissions measured. The optical measurements have been combined with radar measurements and compared with the results of an auroral model, hence showing that the ratio of emission rates OI/N2 can be used to estimate the energy within the smallest auroral structures. In the event chosen, measurements were made from mainland Norway, near Tromsø (69.6° N, 19.2° E). The peak energies of precipitation were between 1–15 keV. In a narrow curling arc, it was found that the arc filaments resulted from energies in excess of 10 keV and fluxes of approximately 7 mW/m2. These filaments, of the order of 100 m in width, were embedded in a region of lower energies (about 5–10 keV) and fluxes of about 3 mW/m2. The modelling results show that the method promises to be most powerful for detecting low-energy precipitation, more prevalent at the higher latitudes of Svalbard where the multispectral imager, known as ASK, is now installed.

  6. On the consideration of scaling properties of extreme rainfall in Madrid (Spain) for developing a generalized intensity-duration-frequency equation and assessing probable maximum precipitation estimates

    Science.gov (United States)

    Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel

    2016-11-01

The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation for the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (km) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
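Both relationships in the abstract have simple numerical forms: under simple scaling, intensity quantiles transfer across durations by a power law, and the statistical PMP follows Hershfield's frequency-factor formula, PMP = mean + km · s, applied to the annual-maximum series. A sketch with hypothetical parameter values (the exponent and km below are placeholders, not the paper's fitted values):

```python
# Hypothetical scaling exponent; the paper identifies one simple
# scaling regime from 25 min to 3 days for Madrid rainfall.
H = -0.6

def idf_intensity(i_ref, d, d_ref=60.0):
    """Intensity quantile at duration d (min) transferred from a
    reference duration d_ref via the simple-scaling power law
    I(d, T) = I(d_ref, T) * (d / d_ref)**H."""
    return i_ref * (d / d_ref) ** H

def pmp_hershfield(mean_amax, std_amax, k_m=15.0):
    """Statistical PMP from the mean and standard deviation of the
    annual-maximum series and an enveloping frequency factor k_m
    (the paper builds an envelope curve of k_m from 258 stations)."""
    return mean_amax + k_m * std_amax
```

With a negative exponent, shorter durations yield higher intensities for the same return period, which is the familiar shape of an IDF curve.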

  7. An operational procedure for precipitable and cloud liquid water estimate in non-raining conditions over sea Study on the assessment of the nonlinear physical inversion algorithm

    CERN Document Server

    Nativi, S; Mazzetti, P

    2004-01-01

    In a previous work, an operational procedure to estimate precipitable and liquid water in non-raining conditions over sea was developed and assessed. The procedure is based on a fast non-linear physical inversion scheme and a forward model; it is valid for most satellite microwave radiometers, and it also estimates effective water profiles. This paper presents two improvements to the procedure: first, a refinement to provide modularity of the software components and portability across different computation system architectures; second, the adoption of the CERN MINUIT minimisation package, which addresses the problem of global minimisation but is computationally more demanding. Together with the increased computational performance that made it possible to impose stricter requirements on the quality of fit, these refinements improved fitting precision and reliability, and allowed the requirements on the initial guesses for the model parameters to be relaxed. The re-analysis of the same data-set considered in the previous pap...

  8. The Global Precipitation Measurement Mission

    Science.gov (United States)

    Jackson, Gail

    2014-05-01

    The Global Precipitation Measurement (GPM) mission's Core satellite, scheduled for launch at the end of February 2014, is well designed to estimate precipitation from 0.2 to 110 mm/hr and to detect falling snow. Knowing where and how much rain and snow falls globally is vital to understanding how weather and climate impact both our environment and Earth's water and energy cycles, including effects on agriculture, fresh water availability, and responses to natural disasters. The design of the GPM Core Observatory is an advancement of the Tropical Rainfall Measuring Mission (TRMM)'s highly successful rain-sensing package [3]. The cornerstone of the GPM mission is the deployment of a Core Observatory in a unique 65° non-Sun-synchronous orbit to serve as a physics observatory and a calibration reference to improve precipitation measurements by a constellation of 8 or more dedicated and operational, U.S. and international passive microwave sensors. The Core Observatory will carry a Ku/Ka-band Dual-frequency Precipitation Radar (DPR) and a multi-channel (10-183 GHz) GPM Microwave Radiometer (GMI). The DPR will provide measurements of 3-D precipitation structures and microphysical properties, which are key to achieving a better understanding of precipitation processes and improving retrieval algorithms for passive microwave radiometers. The combined use of DPR and GMI measurements will place greater constraints on possible solutions to radiometer retrievals to improve the accuracy and consistency of precipitation retrievals from all constellation radiometers. Furthermore, since light rain and falling snow account for a significant fraction of precipitation occurrence in middle and high latitudes, the GPM instruments extend the capabilities of the TRMM sensors to detect falling snow, measure light rain, and provide, for the first time, quantitative estimates of microphysical properties of precipitation particles. The GPM Core Observatory was developed and tested at NASA

  9. Estimating 13.8-GHz Path-Integrated Attenuation from 10.7-GHz Brightness Temperatures for the TRMM Combined PR-TMI Precipitation Algorithm.

    Science.gov (United States)

    Smith, Eric A.; Turk, F. Joseph; Farrar, Michael R.; Mugnai, Alberto; Xiang, Xuwu

    1997-04-01

    This study presents research in support of the design and implementation of a combined radar-radiometer algorithm to be used for precipitation retrieval during the Tropical Rainfall Measuring Mission (TRMM). The combined algorithm approach is expected to overcome various difficulties that arise with a radar-only approach, particularly related to estimates of path-integrated attenuation (PIA) along the TRMM radar beam. A technique is described for estimating PIA at the 13.8-GHz frequency of the TRMM precipitation radar (PR) from 10.7-GHz brightness temperature TB measurements obtained from the TRMM microwave imager. Because the PR measures at an attenuating frequency, an independent estimate of PIA is used to constrain the solution to the radar equation, which incorporates effects of attenuation propagation along a radar beam. Through the use of variational or probabilistic techniques, the independent PIA calculations provide a means to adjust for errors that accumulate in estimates of range-dependent rain rates at progressively increasing range positions from radar reflectivity vectors. The accepted radar approach for obtaining PIA from ocean-viewing radar reflectivity measurements is called the surface reference technique, a scheme based on the difference in ocean surface cross sections between cloud-free and raining radar pixels. This technique has encountered problems, which are discussed and analyzed with the aid of coordinated aircraft radar (Airborne Rain Mapping Radar) and radiometer (Advanced Microwave Precipitation Radiometer) measurements obtained during the west Pacific Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment in 1993. 
The derived relationship expressing 13.8-GHz PIAs as a function of 10.7-GHz TB's is based on statistical fitting of many thousands of radiative transfer (RTE) calculations in which the relevant physical and radiative parameters affecting transmission, absorption, and scattering in a raining column and
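
The two ingredients above, the surface reference technique (SRT) and a statistical TB-to-PIA relation, can be sketched as follows. The numbers are hypothetical stand-ins, and a simple quadratic fit merely stands in for the paper's fit to thousands of radiative transfer calculations:

```python
import numpy as np

def pia_srt(sigma0_clear_db, sigma0_rain_db):
    """Surface reference technique: over ocean, two-way path-integrated
    attenuation (PIA, dB) is the drop in surface backscatter between a
    cloud-free reference pixel and a raining pixel."""
    return sigma0_clear_db - sigma0_rain_db

# Hypothetical training pairs of 10.7-GHz brightness temperature (K)
# and 13.8-GHz PIA (dB).
tb = np.array([150.0, 170.0, 190.0, 210.0, 230.0])
pia = np.array([0.5, 1.2, 2.6, 4.9, 8.1])

# Quadratic least-squares fit standing in for the statistical TB->PIA relation.
pia_from_tb = np.poly1d(np.polyfit(tb, pia, 2))
```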

  10. Validation and measurement uncertainty estimation in food microbiology: differences between quantitative and qualitative methods

    Directory of Open Access Journals (Sweden)

    Vesna Režić Dereani

    2010-09-01

    Full Text Available The aim of this research is to describe quality control procedures, procedures for validation and measurement uncertainty (MU) determination as an important element of quality assurance in a food microbiology laboratory for qualitative and quantitative types of analysis. Accreditation is conducted according to the standard ISO 17025:2007 (General requirements for the competence of testing and calibration laboratories), which guarantees compliance with standard operating procedures and the technical competence of the staff involved in the tests; it has recently been widely introduced in food microbiology laboratories in Croatia. Beyond the introduction of a quality manual and many general documents, some of the most demanding procedures in routine microbiology laboratories are the establishment of measurement uncertainty (MU) procedures and validation experiment designs. Those procedures are not yet standardized even at the international level, and they require practical microbiological knowledge together with statistical competence. Differences between validation experiment designs for quantitative and qualitative food microbiology analysis are discussed in this research, and practical solutions are briefly described. MU for quantitative determinations is a more demanding issue than qualitative MU calculation. MU calculations are based on external proficiency testing data and internal validation data. In this paper, practical schematic descriptions for both procedures are shown.

  11. Toward a Quantitative Estimate of Future Heat Wave Mortality under Global Climate Change

    OpenAIRE

    Peng, Roger D.; Tebaldi, Claudia; McDaniel, Larry; Bobb, Jennifer; Dominici, Francesca; Bell, Michelle D.

    2010-01-01

    Background: Climate change is anticipated to affect human health by changing the distribution of known risk factors. Heat waves have had debilitating effects on human mortality, and global climate models predict an increase in the frequency and severity of heat waves. The extent to which climate change will harm human health through changes in the distribution of heat waves and the sources of uncertainty in estimating these effects have not been studied extensively. Objectives: We estimated t...

  12. Ultrasonic 3-D vector flow method for quantitative in vivo peak velocity and flow rate estimation

    DEFF Research Database (Denmark)

    Holbek, Simon; Ewertsen, Caroline; Bouzari, Hamed;

    2017-01-01

    Current clinical ultrasound systems are limited to showing blood flow movement in either 1-D or 2-D. In this paper, a method for estimating 3-D vector velocities in a plane using the Transverse Oscillation (TO) method, a 32 x 32 element matrix array, and the experimental ultrasound scanner SARUS is presented. The aim of this paper is to estimate precise flow rates and peak velocities derived from 3-D vector flow estimates. The emission sequence provides 3-D vector flow estimates at up to 1.145 frames per second in a plane, and was used to estimate 3-D vector flow in a cross-sectional image plane. The method is validated in two phantom studies, where flow rates are measured in a flow-rig, providing a constant parabolic flow, and in a straight-vessel phantom (ø = 8 mm) connected to a flow pump capable of generating time-varying waveforms. Flow rates are estimated to be 82.1 ± 2.8 L/min in the flow

  13. Quantitative analysis of the impacts of terrestrial environmental factors on precipitation variation over the Beibu Gulf Economic Zone in Coastal Southwest China

    Science.gov (United States)

    Zhao, Yinjun; Deng, Qiyu; Lin, Qing; Cai, Chunting

    2017-03-01

    Taking the Guangxi Beibu Gulf Economic Zone as the study area, this paper utilizes the geographical detector model to quantify the feedback effects from the terrestrial environment on precipitation variation from 1985 to 2010 with a comprehensive consideration of natural factors (forest coverage rate, vegetation type, terrain, terrestrial ecosystem types, land use and land cover change) and social factors (population density, farmland rate, GDP and urbanization rate). First, we found that the precipitation trend rate in the Beibu Gulf Economic Zone is between -47 and 96 mm/10a. Second, forest coverage rate change (FCRC), urbanization rate change (URC), GDP change (GDPC) and population density change (PDC) have a larger contribution to precipitation change through land-surface feedback, which makes them the leading factors. Third, the human element is found to primarily account for the precipitation changes in this region, as humans are the active media linking and enhancing these impact factors. Finally, it can be concluded that the interaction of impact factor pairs has a significant effect compared to the corresponding single factor on precipitation changes. The geographical detector model offers an analytical framework to reveal the terrestrial factors affecting the precipitation change, which gives direction for future work on regional climate modeling and analyses.

  14. A quantitative comparison of precipitation forecasts between the storm-scale numerical weather prediction model and auto-nowcast system in Jiangsu, China

    Science.gov (United States)

    Wang, Gaili; Yang, Ji; Wang, Dan; Liu, Liping

    2016-11-01

    Extrapolation techniques and storm-scale Numerical Weather Prediction (NWP) models are two primary approaches for short-term precipitation forecasts. The primary objective of this study is to verify precipitation forecasts and compare the performances of two nowcasting schemes: the Beijing Auto-Nowcast system (BJ-ANC), based on extrapolation techniques, and a storm-scale NWP model called the Advanced Regional Prediction System (ARPS). The verification and comparison take into account six heavy precipitation events that occurred in the summers of 2014 and 2015 in Jiangsu, China. The forecast performances of the two schemes were evaluated for the next 6 h at 1-h intervals using gridpoint-based measures of critical success index, bias, index of agreement, and root mean square error, and using an object-based verification method called the Structure-Amplitude-Location (SAL) score. Regarding gridpoint-based measures, BJ-ANC outperforms ARPS at first, but its forecast accuracy decreases rapidly with lead time, and it performs worse than ARPS after 4-5 h. Regarding the object-based verification method, most forecasts produced by BJ-ANC focus on the center of the diagram at the 1-h lead time and indicate high-quality forecasts. As the lead time increases, BJ-ANC overestimates precipitation amount and produces widespread precipitation, especially at a 6-h lead time. The ARPS model overestimates precipitation at all lead times, particularly at first.

  15. Estimation of areal precipitation based on rainfall data and X-band radar images in the Venero-Claro Basin (Ávila, Spain)

    Science.gov (United States)

    Guardiola-Albert, Carolina; River-Honegger, Carlos; Yagüe, Carlos; Agut, Robert Monjo i.; Díez-Herrero, Andrés; María Bodoque, José; José Tapiador, Francisco

    2015-04-01

    The aim of this work is to estimate the spatial-temporal rainfall during precipitation events with hydrological response in the Venero-Claro Basin (Ávila, Spain). In this small mountainous basin of 15 km2, flood events of different magnitudes have often been registered. Rainfall estimation is therefore essential to calibrate and validate hydrological models, which in turn improves the objectivity of risk studies and their predictive and preventive capacity. The geostatistical merging method of ordinary kriging of the errors (OKRE) has been applied. This technique has already been used by several authors to merge C-band radar and dense rain gauge networks. Here it is adapted to estimate hourly rainfall accumulations over the area with observations from one of the 5 existing X-band radars in Spain and 7 rain gauges located in the zone. Verification of the results has been performed through cross-validation, comparing the estimation error of the OKRE with the one obtained by adjusting the Marshall-Palmer relation. The analyzed errors are the bias, the Hanssen-Kuiper coefficient and the relative mean root transformed error. Results have an average error of 15%, distinguishing quite well between dry and wet periods.
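
The OKRE merging step can be sketched as ordinary kriging of the gauge-minus-radar errors, followed by adding the interpolated error field back onto the radar estimate. All coordinates, rainfall values, and covariance parameters below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_tgt, range_km=10.0, sill=1.0):
    """Ordinary kriging with an exponential covariance and no nugget."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / range_km)

    n = len(xy_obs)
    # Kriging system: [[C, 1], [1^T, 0]] [w; mu] = [c0; 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(xy_obs, xy_obs)
    A[n, n] = 0.0
    b = np.ones((n + 1, len(xy_tgt)))
    b[:n, :] = cov(xy_obs, xy_tgt)
    w = np.linalg.solve(A, b)[:n, :]
    return w.T @ z_obs

# OKRE sketch: krige the gauge-minus-radar errors at the gauges, then
# correct the radar field at target pixels (all numbers hypothetical).
gauges = np.array([[0.0, 0.0], [5.0, 2.0], [2.0, 7.0], [8.0, 8.0]])
radar_at_gauges = np.array([3.1, 4.0, 2.2, 5.5])  # mm/h, radar estimate
gauge_rain = np.array([3.8, 4.6, 2.0, 6.3])       # mm/h, gauge observation
errors = gauge_rain - radar_at_gauges

pixels = np.array([[1.0, 1.0], [6.0, 6.0]])
radar_at_pixels = np.array([3.3, 5.0])
corrected = radar_at_pixels + ordinary_kriging(gauges, errors, pixels)
```

With no nugget, the kriged error field honours the gauge observations exactly at the gauge locations, which is the property that makes the merged product match the gauges while keeping the radar's spatial detail elsewhere.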

  16. Modeling real-time PCR kinetics: Richards reparametrized equation for quantitative estimation of European hake (Merluccius merluccius).

    Science.gov (United States)

    Sánchez, Ana; Vázquez, José A; Quinteiro, Javier; Sotelo, Carmen G

    2013-04-10

    Real-time PCR is the most sensitive method for detection and precise quantification of specific DNA sequences, but it is not usually applied as a quantitative method in seafood. In general, benchmark techniques, mainly cycle threshold (Ct), are the routine method for quantitative estimations, but they are not the most precise approaches for a standard assay. In the present work, amplification data from European hake (Merluccius merluccius) DNA samples were accurately modeled by three sigmoid reparametrized equations, where the lag phase parameter (λc) from the Richards equation with four parameters was demonstrated to be the perfect substitute for Ct for PCR quantification. The concentrations of primers and probes were subsequently optimized by means of that selected kinetic parameter. Finally, the linear correlation among DNA concentration and λc was also confirmed.
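
A sketch of fitting a four-parameter Richards curve to amplification data, using the Zwietering reparametrization (in which the lag parameter lam appears explicitly) as a stand-in for the paper's exact equation; the amplification data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(c, A, mu, lam, v):
    """Richards curve, Zwietering reparametrization: fluorescence vs cycle
    number c. A = plateau, mu = maximum slope, lam = lag-phase parameter,
    v = shape parameter."""
    return A * (1.0 + v * np.exp(1.0 + v)
                * np.exp(mu / A * (1.0 + v) ** (1.0 + 1.0 / v)
                         * (lam - c))) ** (-1.0 / v)

# Synthetic amplification curve with known parameters and mild noise.
rng = np.random.default_rng(0)
cycles = np.arange(1, 41, dtype=float)
signal = richards(cycles, A=10.0, mu=1.2, lam=18.0, v=0.5)
signal = signal + rng.normal(0.0, 0.02, cycles.size)

# Bounded least-squares fit; p0 is a rough initial guess.
popt, _ = curve_fit(richards, cycles, signal, p0=[8.0, 1.0, 15.0, 1.0],
                    bounds=([0.1, 0.01, 0.0, 0.01],
                            [100.0, 10.0, 40.0, 10.0]))
A_hat, mu_hat, lam_hat, v_hat = popt
```

The fitted lam plays the role the paper assigns to the lag-phase parameter: a curve-derived quantity that can replace the cycle threshold Ct when building the DNA-concentration calibration line.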

  17. Extrapolated withdrawal-interval estimator (EWE) algorithm: a quantitative approach to establishing extralabel withdrawal times.

    Science.gov (United States)

    Martín-Jiménez, Tomás; Baynes, Ronald E; Craigmill, Arthur; Riviere, Jim E

    2002-08-01

    The extralabel use of drugs can be defined as the use of drugs in a manner inconsistent with their FDA-approved labeling. The passage of the Animal Medicinal Drug Use Clarification Act (AMDUCA) in 1994 and its implementation by the FDA Center for Veterinary Medicine in 1996 have allowed food animal veterinarians to use drugs legally in an extralabel manner, as long as an appropriate withdrawal period is established. The present study introduces the Extrapolated Withdrawal-Interval Estimator (EWE) Algorithm, a procedure aimed at predicting extralabel withdrawal intervals (WDIs) based on the label and pharmacokinetic literature data contained in the Food Animal Residue Avoidance Databank (FARAD), and validates it with simulated and experimental data. This is the first attempt at consistently obtaining WDI estimates that encompass a reasonable degree of statistical soundness. Data on the determination of withdrawal times after the extralabel use of the antibiotic oxytetracycline were obtained both with simulated disposition data and from the literature. A withdrawal interval was computed using the EWE Algorithm for an extralabel dose of 25 mg/kg (simulation study) and for a dose of 40 mg/kg (literature data). These estimates were compared with the withdrawal times computed with the simulated data and with the literature data, respectively. The EWE estimate of the WDI for a simulated extralabel dose of 25 mg/kg was 39 days. The withdrawal time (WDT) obtained for this dose in a tissue depletion study was 39 days. The EWE estimate of the WDI for an extralabel intramuscular dose of 40 mg/kg in cattle, based on the kinetic data contained in the FARAD database, was 48 days. The withdrawal time experimentally obtained for similar use of this drug was 49 days.
The EWE Algorithm can obtain WDI estimates that encompass the same degree of statistical soundness as the WDT estimates, provided that the assumptions of the approved dosage regimen hold for the extralabel dosage regimen

  18. Modeling Bone Surface Morphology: A Fully Quantitative Method for Age-at-Death Estimation Using the Pubic Symphysis.

    Science.gov (United States)

    Slice, Dennis E; Algee-Hewitt, Bridget F B

    2015-07-01

    The pubic symphysis is widely used in age estimation for the adult skeleton. Standard practice requires the visual comparison of surface morphology against criteria representing predefined phases and the estimation of case-specific age from an age range associated with the chosen phase. Known problems of method and observer error necessitate alternative tools to quantify age-related change in pubic morphology. This paper presents an objective, fully quantitative method for estimating age-at-death from the skeleton, which exploits a variance-based score of surface complexity computed from vertices obtained from a scanner sampling the pubic symphysis. For laser scans from 41 modern American male skeletons, this method produces results that are significantly associated with known age-at-death (RMSE = 17.15 years). Chronological age is predicted, therefore, equally well, if not better, with this robust, objective, and fully quantitative method than with prevailing phase-aging systems. This method contributes to forensic casework by responding to medico-legal expectations for evidence standards.
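
One simple way to realize a variance-based surface complexity score is the variance of vertex distances from a least-squares plane through the scanned point cloud. This toy sketch is an assumed stand-in for the authors' exact computation, not their published score:

```python
import numpy as np

def surface_complexity(vertices):
    """Variance-based complexity score: variance of vertex distances from
    the best-fit (least-squares) plane through the point cloud."""
    v = np.asarray(vertices, dtype=float)
    centered = v - v.mean(axis=0)
    # The plane normal is the right singular vector with the smallest
    # singular value; residuals are signed distances along that normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    residuals = centered @ vt[-1]
    return residuals.var()

# A flat patch scores near zero; a rougher patch scores higher (toy data).
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 1.0, (500, 2))
flat = np.column_stack([xy, np.zeros(500)])
rough = np.column_stack(
    [xy, 0.2 * np.sin(20 * xy[:, 0]) + rng.normal(0.0, 0.05, 500)])
```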

  19. Improved TLC Bioautographic Assay for Qualitative and Quantitative Estimation of Tyrosinase Inhibitors in Natural Products.

    Science.gov (United States)

    Zhou, Jinge; Tang, Qingjiu; Wu, Tao; Cheng, Zhihong

    2017-03-01

    TLC bioautography for tyrosinase inhibitors has made recent progress; however, an assay with a relatively low consumption of enzyme and quantitative capability would greatly advance the efficacy of related TLC bioautographic assays. An improved TLC bioautographic assay for detecting tyrosinase inhibitors was developed and validated in this study. L-DOPA (better water-solubility than L-tyrosine) was used as the substrate instead of the reported L-tyrosine. The effects of enzyme and substrate concentrations, reaction temperatures and times, and pH values of the reaction system as well as different plate types on the TLC bioautographic assay were optimised. The quantitative analysis was conducted by densitometric scanning of spot areas, and expressed as the relative tyrosinase inhibitory capacity (RTIC) using a positive control (kojic acid) equivalent. The limit of detection (LOD) of this assay was 1.0 ng for kojic acid. This assay has acceptable accuracy (101.73-102.90%), intra- and inter-day, and intra- and inter-plate precisions [relative standard deviation (RSD), less than 7.0%], and ruggedness (RSD, less than 3.5%). The consumption of enzyme (75 U/mL) is relatively low. Two tyrosinase inhibitory compounds including naringenin and 1-O-β-D-glucopyranosyl-4-allylbenzene have been isolated from Rhodiola sacra guided by this TLC bioautographic assay. Our improved assay is a relatively low-cost, sensitive, and quantitative method compared to the reported TLC bioautographic assays. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Statistical estimation of correlated genome associations to a quantitative trait network.

    Directory of Open Access Journals (Sweden)

    Seyoung Kim

    2009-08-01

    Full Text Available Many complex disease syndromes, such as asthma, consist of a large number of highly related, rather than independent, clinical or molecular phenotypes. This raises a new technical challenge in identifying genetic variations associated simultaneously with correlated traits. In this study, we propose a new statistical framework called graph-guided fused lasso (GFlasso) to directly and effectively incorporate the correlation structure of multiple quantitative traits such as clinical metrics and gene expressions in association analysis. Our approach represents correlation information explicitly among the quantitative traits as a quantitative trait network (QTN) and then leverages this network to encode structured regularization functions in a multivariate regression model over the genotypes and traits. The result is that the genetic markers that jointly influence subgroups of highly correlated traits can be detected jointly with high sensitivity and specificity. While most of the traditional methods examined each phenotype independently and combined the results afterwards, our approach analyzes all of the traits jointly in a single statistical framework. This allows our method to borrow information across correlated phenotypes to discover the genetic markers that perturb a subset of the correlated traits synergistically. Using simulated datasets based on the HapMap consortium and an asthma dataset, we compared the performance of our method with other methods based on single-marker analysis and regression-based methods that do not use any of the relational information in the traits. We found that our method showed an increased power in detecting causal variants affecting correlated traits. 
Our results showed that, when correlation patterns among traits in a QTN are considered explicitly and directly during a structured multivariate genome association analysis using our proposed methods, the power of detecting true causal SNPs with possibly pleiotropic

  1. THE QUADRANTS METHOD TO ESTIMATE QUANTITATIVE VARIABLES IN MANAGEMENT PLANS IN THE AMAZON

    Directory of Open Access Journals (Sweden)

    Gabriel da Silva Oliveira

    2015-12-01

    Full Text Available This work aimed to evaluate the accuracy of estimates of abundance, basal area and commercial volume per hectare by the quadrants method applied to an area of 1,000 hectares of rain forest in the Amazon. Samples were simulated by random and systematic processes with different sample sizes, ranging from 100 to 200 sampling points. The amounts estimated from the samples were compared with the parametric values recorded in the census. In the analysis, we considered as the population all trees with diameter at breast height equal to or greater than 40 cm. The quadrants method did not reach the desired level of accuracy for the variables basal area and commercial volume, overestimating the values recorded in the census. However, the accuracy of the estimates of abundance was satisfactory for applying the method in forest inventories for management plans in the Amazon.
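
The quadrants (point-centred quarter) method estimates stem density from nearest-tree distances measured in four quarters around each sampling point. A minimal sketch using the classical Cottam-Curtis density formula, density ≈ 1 / (mean distance)²; the distances below are hypothetical:

```python
import numpy as np

def quadrant_density(distances_m):
    """Point-centred quarter (quadrants) estimator after Cottam & Curtis:
    stems per unit area ~ 1 / (mean point-to-tree distance)^2.
    `distances_m` holds one nearest-tree distance per quarter, in metres."""
    d = np.asarray(distances_m, dtype=float)
    trees_per_m2 = 1.0 / d.mean() ** 2
    return trees_per_m2 * 10_000  # convert to trees per hectare

# Hypothetical distances (m) from 3 sampling points x 4 quarters each.
d = [6.1, 8.4, 5.2, 7.7, 9.0, 6.6, 7.3, 8.8, 5.9, 7.1, 6.4, 8.2]
density_ha = quadrant_density(d)
```

Basal area and volume per hectare would then follow by multiplying this density by the mean basal area and mean volume of the sampled trees, which is where the method's bias for those variables can enter.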

  2. Atom probe analysis of titanium hydride precipitates.

    Science.gov (United States)

    Takahashi, J; Kawakami, K; Otsuka, H; Fujii, H

    2009-04-01

    It is expected that the three-dimensional atom probe (3DAP) will be used as a tool to visualize hydrogen atoms in steel at the atomic scale, due to its high spatial resolution and very low detection limit. In this paper, the first 3DAP analysis of titanium hydride precipitates in metallic titanium is reported in terms of the quantitative detection of hydrogen. FIB fabrication techniques using the lift-out method have enabled the production of needle tips from hydride precipitates, several tens of microns in size, within a titanium matrix. The hydrogen concentration estimated from the 3DAP analysis was slightly smaller than that of the hydride phase predicted from the phase diagram. We discuss the origin of the difference between the experimental and predicted values and the performance of 3DAP for the quantitative detection of hydrogen.

  3. Exploiting an ensemble of regional climate models to provide robust estimates of projected changes in monthly temperature and precipitation probability distribution functions

    Energy Technology Data Exchange (ETDEWEB)

    Tapiador, Francisco J.; Sanchez, Enrique; Romera, Raquel (Inst. of Environmental Sciences, Univ. of Castilla-La Mancha (UCLM), 45071 Toledo (Spain)). e-mail: francisco.tapiador@uclm.es

    2009-07-01

    Regional climate models (RCMs) are dynamical downscaling tools aimed at improving the modelling of local physical processes. Ensembles of RCMs are widely used to improve the coarse-grain estimates of global climate models (GCMs), since the use of several RCMs helps to mitigate uncertainties arising from different dynamical cores and numerical schemes. In this paper, we analyse the differences and similarities in the climate change response for an ensemble of heterogeneous RCMs forced by one GCM (HadAM3H) and one emissions scenario (IPCC's SRES-A2). Unlike previous approaches using the PRUDENCE database, the statistical description of climate characteristics is made through the spatial and temporal aggregation of the RCM outputs into probability distribution functions (PDFs) of monthly values. This procedure is a complementary approach to conventional seasonal analyses. Our results provide new, stronger evidence of expected marked regional differences in Europe under the A2 scenario in terms of precipitation and temperature changes. While we found an overall increase in the mean temperature and extreme values, we also found mixed regional differences for precipitation.

  4. FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.

    Science.gov (United States)

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is a constant claim for better productivity with high quality at low cost. The contribution of this work is the development of a fused smart-sensor, based on FPGA to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier, and a 3-axis accelerometer. Results from experimentation show that the fusion of both parameters makes it possible to obtain three times better accuracy when compared with the accuracy obtained from current and vibration signals, individually used.

  6. Pion Fluctuation in High Energy Collisions - A Chaos-based Quantitative Estimation with Visibility Graph Technique

    CERN Document Server

    Bhaduri, Susmita

    2016-01-01

    We propose a new approach for studying pion fluctuation, for a deeper understanding of the dynamical process involved, from the perspective of an fBm-based complex network analysis method called Visibility Graph Analysis. This chaos-based, rigorous, non-linear technique is applied to study the erratic behavior of multipion production in π⁻-Ag/Br interactions at 350 GeV. This method can offer reliable results with finite data points. The Power of Scale-freeness of Visibility Graph (PSVG) is a measure of fractality, which can be used as a quantitative parameter for the assessment of the state of a chaotic system. The event-wise fluctuation of the multipion production process can be described by this parameter. From the analysis of the PSVG parameter, we can quantitatively confirm that the fractal behavior of the particle production process depends on the target excitation, and that the fractality decreases as the target excitation increases.
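
A minimal visibility-graph sketch: build the natural visibility graph of a series, then estimate the PSVG as the (negated) slope of a log-log fit to the degree distribution. The toy series is illustrative; real PSVG analyses use far longer event-wise samples, so the fit here is only schematic:

```python
import numpy as np

def visibility_graph(series):
    """Natural visibility graph: node i sees node j if the straight line
    between (i, y_i) and (j, y_j) passes above every sample in between."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if all(y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                adj[i].add(j)
                adj[j].add(i)
    return adj

def psvg(series):
    """Power of Scale-freeness of the Visibility Graph: negated slope of a
    log-log fit of the degree distribution P(k) ~ k^-lambda (crude for
    short series)."""
    deg = np.array([len(a) for a in visibility_graph(series)])
    k, cnt = np.unique(deg, return_counts=True)
    p = cnt / cnt.sum()
    slope, _ = np.polyfit(np.log(k), np.log(p), 1)
    return -slope
```

Adjacent samples are always mutually visible, so the graph is connected by construction; the scale-free exponent only becomes meaningful for long, fluctuating series.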

  7. Estimating marginal properties of quantitative real-time PCR data using nonlinear mixed models

    DEFF Research Database (Denmark)

    Gerhard, Daniel; Bremer, Melanie; Ritz, Christian

    2014-01-01

    A unified modeling framework based on a set of nonlinear mixed models is proposed for flexible modeling of gene expression in real-time PCR experiments. Focus is on estimating the marginal or population-based derived parameters: cycle thresholds and ΔΔc(t), but retaining the conditional mixed mod...

  8. Estimation of precipitation rates by measurements of ³⁶Cl in the GRIP ice core with the PSI/ETH tandem accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, G.; Baumgartner, S.; Beer, J. [EAWAG, Duebendorf (Switzerland); Synal, H.A. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Suter, M. [Eidgenoessische Technische Hochschule, Zurich (Switzerland)

    1997-09-01

    Within the European Greenland Ice Core Project (GRIP), ³⁶Cl AMS measurements have been performed on ice core samples from Summit (Greenland, 73°N, 37°W). Most data analysed so far are from the lower part of the ice core. The ³⁶Cl concentration is well correlated with δ¹⁸O, which is considered a proxy for paleotemperatures. Assuming that the deposition rate of radionuclides is independent of δ¹⁸O, ³⁶Cl is used to estimate the relationship between accumulation and δ¹⁸O. The results confirm that the rapid changes of δ¹⁸O, the so-called Dansgaard-Oeschger events, are also reflected in the precipitation rate. (author) 1 fig., 3 refs.

  9. [Quantitative estimation of CaO content in surface rocks using hyperspectral thermal infrared emissivity].

    Science.gov (United States)

    Zhang, Li-Fu; Zhang, Xue-Wen; Huang, Zhao-Qiang; Yang, Hang; Zhang, Fei-Zhou

    2011-11-01

    The objective of the present paper is to study the quantitative relationship between CaO content and thermal infrared emissivity spectra. The surface spectral emissivities of 23 solid rock samples were measured in the field, and the first derivative of the spectral emissivity was also calculated. Multiple linear regression (MLR), principal component regression (PCR) and partial least squares regression (PLSR) models were built and their regression results compared. The results show that there is a good relationship between CaO content and thermal emissivity spectral features; emissivities become lower as CaO content increases in the 10.3-13 μm region; and the first derivative spectra have better predictive ability than the original emissivity spectra.

  10. Evaluation of radar-derived precipitation estimates using runoff simulation : report for the NFR Energy Norway funded project 'Utilisation of weather radar data in atmospheric and hydrological models'

    Energy Technology Data Exchange (ETDEWEB)

    Abdella, Yisak; Engeland, Kolbjoern; Lepioufle, Jean-Marie

    2012-11-01

    This report presents the results from the project 'Utilisation of weather radar data in atmospheric and hydrological models', funded by NFR and Energy Norway. Three precipitation products (radar-derived, interpolated, and a combination of the two) were generated as input for hydrological models. All three products were evaluated by comparing simulated and observed runoff at catchments. In order to expose any bias in the precipitation inputs, no precipitation correction factors were applied. Three criteria were used to measure performance: the Nash-Sutcliffe efficiency, the correlation coefficient, and bias. The results show that simulations with the combined precipitation input give the best performance. The radar-derived precipitation estimates also give reasonable runoff simulations even without region-specific parameters for the Z-R relationship. All three products resulted in an underestimation of runoff, revealing a systematic bias in the measurements (e.g. catch deficit, orographic effects, Z-R relationships) that can be improved. There is considerable potential for using radar-derived precipitation in runoff simulation, especially in catchments without internal precipitation gauges. (Author)

  11. Study on Correlation and Quantitative Error Estimation Method Among the Splitting Shear Wave Identification Methods

    Institute of Scientific and Technical Information of China (English)

    Liu Xiqiang; Zhou Huilan; Li Hong; Gai Dianguang

    2000-01-01

    Based on the propagation characteristics of shear waves in anisotropic layers, the correlation among several splitting shear-wave identification methods has been studied. This paper puts forward a method for estimating splitting shear-wave phases and their reliability, using the assumption that the variances of the noise and of the useful signal data obey a normal distribution. To check the validity of the new method, identification results and error estimates corresponding to the 95% confidence level, obtained by analyzing simulated signals, are given.

  12. A quantitative method for estimating cloud cover over tropical cyclones from satellite data

    OpenAIRE

    BALOGUN, E. E.

    2011-01-01

    A photometric method for quantifying cloud cover over tropical cyclones as observed from satellite photographs is presented. Two gridded photographs of tropical cyclones are analyzed by this method. On each photograph, nine concentric circles are drawn. The observed or reported centre of the cyclones is used as the centre for each set of concentric circles. Photometric estimates of cloud cover are made along the nine concentric circles. The principle of harmonic analysis is applied to the cl...

  13. [Non-parametric Bootstrap estimation on the intraclass correlation coefficient generated from quantitative hierarchical data].

    Science.gov (United States)

    Liang, Rong; Zhou, Shu-dong; Li, Li-xia; Zhang, Jun-guo; Gao, Yan-hui

    2013-09-01

    This paper aims to apply bootstrapping to hierarchical data and to provide a method for estimating confidence intervals (CIs) of the intraclass correlation coefficient (ICC). First, we used mixed-effects models to estimate ICCs from repeated-measurement data and from two-stage sampling data. Then, we used the bootstrap method to estimate the CIs of the related ICCs. Finally, the influence of different bootstrapping strategies on the ICC CIs was compared. The repeated-measurement example shows that the CI from cluster bootstrapping contains the true ICC value, whereas random bootstrapping, which ignores the hierarchical structure of the data, yields an invalid CI. Results from the two-stage example show bias among the cluster-bootstrapped ICC means; the ICC of the original sample is the smallest, but has a wide CI. It is therefore necessary to take the structure of the data into account when hierarchical data are resampled, and bootstrapping at the higher level appears to perform better than at lower levels.
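
    The cluster-bootstrap idea above (resample whole clusters, not individual observations, so the hierarchy stays intact) can be sketched as follows. The one-way ANOVA ICC estimator and the simulated data are illustrative, not the paper's mixed-effects procedure:

```python
import numpy as np

def icc_oneway(data):
    """ICC(1,1) from a (clusters x measurements) array via one-way ANOVA."""
    n, k = data.shape
    grand = data.mean()
    msb = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
    msw = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(1)
n_clusters, k = 30, 3
cluster_effect = rng.normal(scale=1.0, size=(n_clusters, 1))       # between
data = cluster_effect + rng.normal(scale=1.0, size=(n_clusters, k))  # within

# Cluster bootstrap: resample whole clusters with replacement
boot = []
for _ in range(2000):
    idx = rng.integers(0, n_clusters, n_clusters)
    boot.append(icc_oneway(data[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ICC = {icc_oneway(data):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A "random" bootstrap would instead draw the `n_clusters * k` observations individually, destroying the clustering and biasing the interval.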

  14. Quantitative Estimate of the Relation Between Rolling Resistance on Fuel Consumption of Class 8 Tractor Trailers Using Both New and Retreaded Tires (SAE Paper 2014-01-2425)

    Science.gov (United States)

    Road tests of class 8 tractor trailers were conducted by the US Environmental Protection Agency on new and retreaded tires of varying rolling resistance in order to provide estimates of the quantitative relationship between rolling resistance and fuel consumption.

  15. Comparison of Myocardial Perfusion Estimates From Dynamic Contrast-Enhanced Magnetic Resonance Imaging With Four Quantitative Analysis Methods

    Science.gov (United States)

    Pack, Nathan A.; DiBella, Edward V. R.

    2012-01-01

    Dynamic contrast-enhanced MRI has been used to quantify myocardial perfusion in recent years. Published results have varied widely, possibly depending on the method used to analyze the dynamic perfusion data. Here, four quantitative analysis methods (two-compartment modeling, Fermi function modeling, model-independent analysis, and Patlak plot analysis) were implemented and compared for quantifying myocardial perfusion. Dynamic contrast-enhanced MRI data were acquired in 20 human subjects at rest with low-dose (0.019 ± 0.005 mmol/kg) bolus injections of gadolinium. Fourteen of these subjects were also imaged at adenosine stress (0.021 ± 0.005 mmol/kg). Aggregate rest perfusion estimates were not significantly different between all four analysis methods. At stress, perfusion estimates were not significantly different between two-compartment modeling, model-independent analysis, and Patlak plot analysis. Stress estimates from the Fermi model were significantly higher (~20%) than the other three methods. Myocardial perfusion reserve values were not significantly different between all four methods. Model-independent analysis resulted in the lowest model curve-fit errors. When more than just the first pass of data was analyzed, perfusion estimates from two-compartment modeling and model-independent analysis did not change significantly, unlike results from Fermi function modeling. PMID:20577976
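
    Of the four methods compared above, Patlak plot analysis is the simplest to sketch: for unidirectional tracer uptake, plotting tissue-to-blood ratio against the normalized running integral of the arterial input yields a line whose slope is the uptake constant. The input function and uptake constant below are synthetic:

```python
import numpy as np

# Toy Patlak plot analysis. With purely unidirectional uptake,
# Ct(t) = Ki * integral(Cp), so the Patlak slope recovers Ki exactly.
t = np.linspace(0, 60, 200)                  # seconds
cp = t * np.exp(-t / 8.0)                    # gamma-variate-like arterial input
ki_true = 0.01                               # uptake constant (1/s), assumed
int_cp = np.cumsum(cp) * (t[1] - t[0])       # running integral of Cp
ct = ki_true * int_cp                        # tissue curve (no backflux)

mask = cp > 1e-6                             # avoid division by ~0 at t = 0
x = int_cp[mask] / cp[mask]                  # Patlak abscissa
y = ct[mask] / cp[mask]                      # Patlak ordinate
slope, intercept = np.polyfit(x, y, 1)
print(f"estimated Ki = {slope:.4f} per second")
```

Compartment and Fermi-function modeling fit nonlinear curves instead, which is one reason their stress estimates can diverge as reported.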

  16. Quantitative assessment of soil parameter (KD and TC) estimation using DGT measurements and the 2D DIFS model.

    Science.gov (United States)

    Lehto, N J; Sochaczewski, L; Davison, W; Tych, W; Zhang, H

    2008-03-01

    Diffusive gradients in thin films (DGT) is a dynamic, in situ measuring technique that can be used to supply diverse information on concentrations and behaviour of solutes. When deployed in soils and sediments, quantitative interpretation of DGT measurements requires the use of a numerical model. An improved version of the DGT induced fluxes in soils and sediments model (DIFS), working in two dimensions (2D DIFS), was used to investigate the accuracy with which DGT measurements can be used to estimate the distribution coefficient for labile metal (KD) and the response time of the soil to depletion (TC). The 2D DIFS model was used to obtain values of KD and TC for Cd, Zn and Ni in three different soils, which were compared to values determined previously using 1D DIFS for these cases. While the 1D model was shown to provide reasonable estimates of KD, the 2D model refined the estimates of the kinetic parameters. Desorption rate constants were shown to be similar for all three metals and lower than previously thought. Calculation of an error function as KD and TC are systematically varied showed the spread of KD and TC values that fit the experimental data equally well. These automatically generated error maps reflected the quality of the data and provided an appraisal of the accuracy of parameter estimation. They showed that in some cases parameter accuracy could be improved by fitting the model to a sub-set of data.
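
    The error-map procedure described above (systematically vary KD and TC, compute a misfit against the measurements, and map which parameter pairs fit equally well) can be sketched with a toy forward model. The response function below is a made-up stand-in, not the 2D DIFS model:

```python
import numpy as np

# Conceptual error-map sketch: grid-search Kd and Tc against observations.
def toy_response(kd, tc, t):
    # hypothetical uptake curve shaped by Kd (capacity) and Tc (kinetics)
    return kd * (1 - np.exp(-t / tc))

t = np.linspace(0.1, 48, 60)                  # deployment times (h)
rng = np.random.default_rng(2)
obs = toy_response(kd=50.0, tc=6.0, t=t) + rng.normal(scale=0.5, size=t.size)

kd_grid = np.linspace(10, 100, 91)
tc_grid = np.linspace(1, 20, 96)
err = np.array([[np.mean((toy_response(kd, tc, t) - obs) ** 2)
                 for tc in tc_grid] for kd in kd_grid])
i, j = np.unravel_index(err.argmin(), err.shape)
print(f"best fit: Kd = {kd_grid[i]:.0f}, Tc = {tc_grid[j]:.1f} h")
```

Contouring `err` around its minimum shows the spread of (KD, TC) pairs that fit the data equally well, which is exactly what the automatically generated error maps convey.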

  17. NOAA Climate Data Record (CDR) of Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN-CDR), Version 1 Revision 1

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — PERSIANN Precipitation Climate Data Record (PERSIANN-CDR) is a daily quasi-global precipitation product for the period of 1982 to 2011. The data covers from 60...

  18. Estimation of the Accuracy of Method for Quantitative Determination of Volatile Compounds in Alcohol Products

    CERN Document Server

    Charepitsa, S V; Zadreyko, Y V; Sytova, S N

    2016-01-01

    Results of the estimation of the precision for determination volatile compounds in alcohol-containing products by gas chromatography: acetaldehyde, methyl acetate, ethyl acetate, methanol, isopropyl alcohol, propyl alcohol, isobutyl alcohol, butyl alcohol, isoamyl alcohol are presented. To determine the accuracy, measurements were planned in accordance with ISO 5725 and held at the gas chromatograph Crystal-5000. Standard deviation of repeatability, intermediate precision and their limits are derived from obtained experimental data. The uncertainty of the measurements was calculated on the base of an "empirical" method. The obtained values of accuracy indicate that the developed method allows measurement uncertainty extended from 2 to 20% depending on the analyzed compound and measured concentration.

  19. Spectrophotometric Quantitative Estimation and Validation of Nimesulide and Drotaverine Hydrochloride in Tablet Dosage form

    OpenAIRE

    Prasad R. K.; Sharma R

    2010-01-01

    Three simple, sensitive and accurate UV spectrophotometric methods, I; first order derivative spectrophotometric, II; area under curve and III; multi-component method, has been developed for the estimation of drotaverine hydrochloride and nimesulide in tablets dosage form. Beers’ law was obeyed in the concentration range 5-35 µgml-1 and 10-50 µgml-1 for drotaverine (λmax = 230.5 nm) and nimesulide (λmax = 331.5 nm) respectively in methanol. All the three methods allowed rapid analysis of bina...

  20. Quantitative assessment of target dependence of pion fluctuation in hadronic interactions – estimation through erraticity

    Indian Academy of Sciences (India)

    Dipak Ghosh; Argha Deb; Mitali Mondal; Arindam Mondal; Sitram Pal

    2012-12-01

    Event-to-event fluctuation patterns of pions produced by proton and pion beams are studied in terms of the newly defined erraticity measures χ(p, q), χ′_q and μ′_q proposed by Cao and Hwa. The analysis reveals the erratic behaviour of the produced pions, signifying chaotic multiparticle production in high-energy hadron–nucleus interactions (π⁻–AgBr interactions at 350 GeV/c and p–AgBr interactions at 400 GeV/c). However, the chaoticity does not depend on whether the projectile is a proton or a pion. The results are compared with the results of the VENUS-generated data for the above interactions, which suggests that the VENUS event generator is unable to reproduce the event-to-event fluctuations of spatial patterns of final states. A comparative study of p–AgBr interactions and p–p collisions at 400 GeV/c from NA27, with the help of a quantitative parameter for the assessment of pion fluctuation, indicates conclusively that the particle production process is more chaotic for hadron–nucleus interactions than for hadron–hadron interactions.

  1. Noninvasive and quantitative intracranial pressure estimation using ultrasonographic measurement of optic nerve sheath diameter

    Science.gov (United States)

    Wang, Li-juan; Yao, Yan; Feng, Liang-shu; Wang, Yu-zhi; Zheng, Nan-nan; Feng, Jia-chun; Xing, Ying-qi

    2017-01-01

    We aimed to quantitatively assess intracranial pressure (ICP) using optic nerve sheath diameter (ONSD) measurements. We recruited 316 neurology patients in whom ultrasonographic ONSD was measured before lumbar puncture. They were randomly divided into a modeling and a test group at a ratio of 7:3. In the modeling group, we conducted univariate and multivariate analyses to assess associations between ICP and ONSD, age, sex, BMI, mean arterial blood pressure, and diastolic blood pressure. We derived the mathematical function "Xing & Wang" from the modeling group to predict ICP and evaluated the function in the test group. In the modeling group, ICP was strongly correlated with ONSD (r = 0.758, p < 0.001; Durbin-Watson value = 1.94). In the test group, a significant correlation was found between the observed and predicted ICP (r = 0.76, p < 0.001). Bland-Altman analysis yielded a mean difference between measurements of −0.07 ± 41.55 mmH2O. The intraclass correlation coefficient and its 95%CIs for noninvasive ICP assessment using our prediction model was 0.86 (0.79–0.90). Ultrasonographic ONSD measurements provide a potential noninvasive method to quantify ICP that can be conducted at the bedside. PMID:28169341
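
    The modeling-then-test workflow above can be sketched as a univariate linear fit evaluated with correlation and Bland-Altman statistics. All numbers below are simulated, and the real "Xing & Wang" function also involved covariates:

```python
import numpy as np

# Illustrative sketch: fit ONSD -> ICP in a "modeling" split, then check
# agreement in a "test" split with Bland-Altman statistics. Synthetic data.
rng = np.random.default_rng(3)
onsd = rng.normal(4.5, 0.6, 316)                        # mm, assumed range
icp = 80 * onsd - 200 + rng.normal(scale=40, size=316)  # mmH2O, synthetic

n_model = int(316 * 0.7)                                # 7:3 split as in the study
slope, intercept = np.polyfit(onsd[:n_model], icp[:n_model], 1)
pred = slope * onsd[n_model:] + intercept
obs = icp[n_model:]

r = np.corrcoef(obs, pred)[0, 1]
diff = obs - pred                                       # Bland-Altman differences
print(f"r = {r:.2f}, Bland-Altman mean difference = "
      f"{diff.mean():.1f} +/- {diff.std(ddof=1):.1f} mmH2O")
```

A mean difference near zero with limits of agreement (mean ± 1.96 SD) acceptable for clinical use is what supports the bedside application.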

  2. Quantitative Estimation of Temperature Variations in Plantar Angiosomes: A Study Case for Diabetic Foot

    Directory of Open Access Journals (Sweden)

    H. Peregrina-Barreto

    2014-01-01

    Full Text Available Thermography is a useful tool since it provides information that may help in the diagnosis of several diseases in a noninvasive and fast way. In particular, thermography has been applied in the study of the diabetic foot. However, most of these studies report only qualitative information, making it difficult to measure significant parameters such as temperature variations. These variations are important in the analysis of the diabetic foot since they could provide knowledge, for instance, regarding ulceration risks. The early detection of ulceration risk is considered an important research topic in the medical field, as its objective is to avoid major complications that might lead to limb amputation. The absence of symptoms in the early phase of ulceration is the main obstacle to providing a timely diagnosis in subjects with neuropathy. Since the relation between temperature and ulceration risk is well established in the literature, a methodology that obtains quantitative temperature differences in the plantar area of the diabetic foot to detect ulceration risks is proposed in this work. The methodology is based on the angiosome concept and image processing.

  3. IMPROVED RP-HPLC METHOD FOR QUANTITATIVE ESTIMATION OF STEVIOSIDE IN STEVIA REBAUDIANA BERTONI BURM

    Directory of Open Access Journals (Sweden)

    Shankar Katekhaye

    2011-01-01

    Full Text Available An RP-HPLC method with UV array detection was established for the determination of stevioside, an extract of the herbal plant S. rebaudiana. Stevioside was separated using an isocratic solvent system consisting of methanol and 0.1% orthophosphoric acid (v/v) in water (70:30) at a flow rate of 1.0 ml/min and a detection wavelength of 219 nm. The method was validated for linearity, precision, accuracy, limit of detection (LOD), and limit of quantitation (LOQ). The linearity of the proposed method was obtained in the range of 5.0-75 μg/ml with a regression coefficient of 0.9999. Intraday and interday precision studies showed a relative standard deviation of less than 2.5%. The accuracy of the proposed method was determined by a recovery study conducted at 3 different levels; the average recovery was 97-99%. The LOD and LOQ were 0.02 and 0.05 µg/ml, respectively. The content of stevioside obtained in the dried leaves and powder was within the ranges of 6.83-7.91% and 1.7-2.9% w/w, respectively. The proposed method is simple, sensitive and reproducible. It is therefore suitable for routine analysis of stevioside in S. rebaudiana Bertoni.
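
    LOD and LOQ figures like those reported are conventionally obtained from the calibration line as 3.3σ/S and 10σ/S, where S is the slope and σ the residual standard deviation. A minimal sketch with invented calibration data (not the paper's):

```python
import numpy as np

# ICH-style LOD/LOQ from a calibration curve: LOD = 3.3*sigma/S, LOQ = 10*sigma/S.
conc = np.array([5, 15, 25, 35, 50, 75], dtype=float)               # ug/ml
area = 1200.0 * conc + 150 + np.array([40, -60, 25, -30, 55, -20])  # peak area

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
sigma = resid.std(ddof=2)                 # 2 fitted parameters consumed

lod = 3.3 * sigma / slope
loq = 10 * sigma / slope
print(f"LOD = {lod:.3f} ug/ml, LOQ = {loq:.3f} ug/ml")
```

The linearity check is the same fit: a regression coefficient near 0.9999, as reported, means the residual σ (and hence LOD/LOQ) is small relative to the slope.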

  4. Raman spectroscopy of human skin: looking for a quantitative algorithm to reliably estimate human age

    Science.gov (United States)

    Pezzotti, Giuseppe; Boffelli, Marco; Miyamori, Daisuke; Uemura, Takeshi; Marunaka, Yoshinori; Zhu, Wenliang; Ikegaya, Hiroshi

    2015-06-01

    The possibility of examining soft tissues by Raman spectroscopy is challenged in an attempt to probe human age for the changes in biochemical composition of skin that accompany aging. We present a proof-of-concept report for explicating the biophysical links between vibrational characteristics and the specific compositional and chemical changes associated with aging. The actual existence of such links is then phenomenologically proved. In an attempt to foster the basics for a quantitative use of Raman spectroscopy in assessing aging from human skin samples, a precise spectral deconvolution is performed as a function of donors' ages on five cadaveric samples, which emphasizes the physical significance and the morphological modifications of the Raman bands. The outputs suggest the presence of spectral markers for age identification from skin samples. Some of them appeared as authentic "biological clocks" for the apparent exactness with which they are related to age. Our spectroscopic approach yields clear compositional information of protein folding and crystallization of lipid structures, which can lead to a precise identification of age from infants to adults. Once statistically validated, these parameters might be used to link vibrational aspects at the molecular scale for practical forensic purposes.

  5. Quantitative estimation of pulegone in Mentha longifolia growing in Saudi Arabia. Is it safe to use?

    Science.gov (United States)

    Alam, Prawez; Saleh, Mahmoud Fayez; Abdel-Kader, Maged Saad

    2016-03-01

    Our TLC study of the volatile oil isolated from Mentha longifolia showed a major UV active spot with higher Rf value than menthol. Based on the fact that the components of the oil from same plant differ quantitatively due to environmental conditions, the major spot was isolated using different chromatographic techniques and identified by spectroscopic means as pulegone. The presence of pulegone in M. longifolia, a plant widely used in Saudi Arabia, raised a hot debate due to its known toxicity. The Scientific Committee on Food, Health & Consumer Protection Directorate General, European Commission set a limit for the presence of pulegone in foodstuffs and beverages. In this paper we attempted to determine the exact amount of pulegone in different extracts, volatile oil as well as tea flavoured with M. longifolia (Habak) by densitometric HPTLC validated methods using normal phase (Method I) and reverse phase (Method II) TLC plates. The study indicated that the style of use of Habak in Saudi Arabia resulted in much less amount of pulegone than the allowed limit.

  6. Tree Root System Characterization and Volume Estimation by Terrestrial Laser Scanning and Quantitative Structure Modeling

    Directory of Open Access Journals (Sweden)

    Aaron Smith

    2014-12-01

    Full Text Available The accurate characterization of three-dimensional (3D root architecture, volume, and biomass is important for a wide variety of applications in forest ecology and to better understand tree and soil stability. Technological advancements have led to increasingly more digitized and automated procedures, which have been used to more accurately and quickly describe the 3D structure of root systems. Terrestrial laser scanners (TLS have successfully been used to describe aboveground structures of individual trees and stand structure, but have only recently been applied to the 3D characterization of whole root systems. In this study, 13 recently harvested Norway spruce root systems were mechanically pulled from the soil, cleaned, and their volumes were measured by displacement. The root systems were suspended, scanned with TLS from three different angles, and the root surfaces from the co-registered point clouds were modeled with the 3D Quantitative Structure Model to determine root architecture and volume. The modeling procedure facilitated the rapid derivation of root volume, diameters, break point diameters, linear root length, cumulative percentages, and root fraction counts. The modeled root systems underestimated root system volume by 4.4%. The modeling procedure is widely applicable and easily adapted to derive other important topological and volumetric root variables.

  7. Quantitative estimation of temperature variations in plantar angiosomes: a study case for diabetic foot.

    Science.gov (United States)

    Peregrina-Barreto, H; Morales-Hernandez, L A; Rangel-Magdaleno, J J; Avina-Cervantes, J G; Ramirez-Cortes, J M; Morales-Caporal, R

    2014-01-01

    Thermography is a useful tool since it provides information that may help in the diagnosis of several diseases in a noninvasive and fast way. In particular, thermography has been applied in the study of the diabetic foot. However, most of these studies report only qualitative information, making it difficult to measure significant parameters such as temperature variations. These variations are important in the analysis of the diabetic foot since they could provide knowledge, for instance, regarding ulceration risks. The early detection of ulceration risk is considered an important research topic in the medical field, as its objective is to avoid major complications that might lead to limb amputation. The absence of symptoms in the early phase of ulceration is the main obstacle to providing a timely diagnosis in subjects with neuropathy. Since the relation between temperature and ulceration risk is well established in the literature, a methodology that obtains quantitative temperature differences in the plantar area of the diabetic foot to detect ulceration risks is proposed in this work. The methodology is based on the angiosome concept and image processing.

  8. CORAL: quantitative structure-activity relationship models for estimating toxicity of organic compounds in rats.

    Science.gov (United States)

    Toropova, A P; Toropov, A A; Benfenati, E; Gini, G; Leszczynska, D; Leszczynski, J

    2011-09-01

    For six random splits, one-variable models of rat toxicity (minus decimal logarithm of the 50% lethal dose [pLD50], oral exposure) have been calculated with CORAL software (http://www.insilico.eu/coral/). The total number of considered compounds is 689. New additional global attributes of the simplified molecular input line entry system (SMILES) have been examined for improvement of the optimal SMILES-based descriptors. These global SMILES attributes are representing the presence of some chemical elements and different kinds of chemical bonds (double, triple, and stereochemical). The "classic" scheme of building up quantitative structure-property/activity relationships and the balance of correlations (BC) with the ideal slopes were compared. For all six random splits, best prediction takes place if the aforementioned BC along with the global SMILES attributes are included in the modeling process. The average statistical characteristics for the external test set are the following: n = 119 ± 6.4, R(2) = 0.7371 ± 0.013, and root mean square error = 0.360 ± 0.037. Copyright © 2011 Wiley Periodicals, Inc.

  9. Integral quantification accuracy estimation for reporter ion-based quantitative proteomics (iQuARI).

    Science.gov (United States)

    Vaudel, Marc; Burkhart, Julia M; Radau, Sonja; Zahedi, René P; Martens, Lennart; Sickmann, Albert

    2012-10-05

    With the increasing popularity of comparative studies of complex proteomes, reporter ion-based quantification methods such as iTRAQ and TMT have become commonplace in biological studies. Their appeal derives from simple multiplexing and quantification of several samples at reasonable cost. Yet this advantage comes with a known shortcoming: precursors of different species can interfere, reducing quantification accuracy. Recently, two methods that alleviate interference through novel experimental designs were introduced to the community. Before setting up a new workflow, tuning the system, and optimizing identification and quantification rates, one legitimately asks: is it really worth the effort, time and money? The question is not easy to answer, since interference is heavily sample- and system-dependent. Moreover, to date there was no method allowing inline estimation of error rates for reporter quantification. We therefore introduce a method called iQuARI to compute false discovery rates for reporter ion-based quantification experiments as easily as target/decoy FDR for identification. With it, scientists can accurately estimate the amount of interference in their samples on their systems and eventually consider removing shadow signals, a task for which reporter ion quantification might not be the solution of choice.
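
    The target/decoy FDR that the abstract takes as its benchmark works by counting how many decoy matches survive a score threshold relative to target matches. A minimal sketch with simulated score distributions (iQuARI itself extends this idea to quantification, which is not reproduced here):

```python
import numpy as np

# Target/decoy-style FDR estimate with simulated identification scores.
rng = np.random.default_rng(4)
target = rng.normal(2.0, 1.0, 1000)   # target matches: mix of true and false
decoy = rng.normal(0.0, 1.0, 1000)    # decoy matches model the false ones

threshold = 1.5
fdr = (decoy >= threshold).sum() / max((target >= threshold).sum(), 1)
print(f"estimated FDR at score >= {threshold}: {fdr:.3f}")
```

Sweeping the threshold and reporting the score at which the estimated FDR crosses, say, 1% is the usual filtering procedure.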

  10. Age estimation during the blow fly intra-puparial period: a qualitative and quantitative approach using micro-computed tomography.

    Science.gov (United States)

    Martín-Vega, Daniel; Simonsen, Thomas J; Wicklein, Martina; Hall, Martin J R

    2017-05-04

    Minimum post-mortem interval (minPMI) estimates often rely on the use of developmental data from blow flies (Diptera: Calliphoridae), which are generally the first colonisers of cadavers and, therefore, exemplar forensic indicators. Developmental data of the intra-puparial period are of particular importance, as it can account for more than half of the developmental duration of the blow fly life cycle. During this period, the insect undergoes metamorphosis inside the opaque, barrel-shaped puparium, formed by the hardening and darkening of the third instar larval cuticle, which shows virtually no external changes until adult emergence. Regrettably, estimates based on the intra-puparial period are severely limited due to the lack of reliable, non-destructive ageing methods and are frequently based solely on qualitative developmental markers. In this study, we use non-destructive micro-computed tomography (micro-CT) for (i) performing qualitative and quantitative analyses of the morphological changes taking place during the intra-puparial period of two forensically relevant blow fly species, Calliphora vicina and Lucilia sericata, and (ii) developing a novel and reliable method for estimating insect age in forensic practice. We show that micro-CT provides age-diagnostic qualitative characters for most 10% time intervals of the total intra-puparial period, which can be used over a range of temperatures and with a resolution comparable to more invasive and time-consuming traditional imaging techniques. Moreover, micro-CT can be used to yield a quantitative measure of the development of selected organ systems to be used in combination with qualitative markers. Our results confirm micro-CT as an emerging, powerful tool in medico-legal investigations.

  11. Quantitative estimation of lithofacies from seismic data in a tertiary turbidite system in the North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Joerstad, A.K.; Avseth, P.Aa; Mukerji, T.; Mavko, G.; Granli, J.R.

    1998-12-31

    Deep-water clastic systems and associated turbidite reservoirs are often characterized by very complex sand distributions, and reservoir description based on conventional seismic and well-log stratigraphic analysis may be very uncertain in these depositional environments. It has been shown that reservoirs in turbidite systems are produced very inefficiently in conventional development: more than 70% of the mobile oil is commonly left behind because of the heterogeneous nature of these reservoirs. In this study, a turbidite system in the North Sea with five available wells and a 3-D seismic near- and far-offset stack is examined to establish most-likely estimates of facies and pore fluid within the cube. 5 figs.

  12. Estimates of genetic variability and association studies in quantitative plant traits of Eruca spp. landraces

    Directory of Open Access Journals (Sweden)

    Bozokalfa Kadri Mehmet

    2010-01-01

    Full Text Available Despite the increasing economic importance of the rocket plant, limited information is available on genetic variability in agronomic traits among Eruca spp. Heritability and association studies of plant properties are therefore necessary for a successful rocket breeding programme. The objective of this study was to examine phenotypic and genotypic variability, broad-sense heritability, genetic advance, genotypic and phenotypic correlations, and means for agronomic traits of the rocket plant. The phenotypic coefficient of variation values for all traits were higher than the corresponding genotypic values, and broad-sense heritability estimates exceeded 65% for all traits. Phenotypic coefficients of variability (PCV) ranged from 7.60 to 34.34%, and genotypic coefficients of variability (GCV) ranged between 5.58% for petiole thickness and 34.30% for plant weight. The results indicate that plant weight, siliqua width, seeds per siliqua and seed weight could be useful characters for an improved Eruca spp. breeding programme.

  13. Teratogenic potency of valproate analogues evaluated by quantitative estimation of cellular morphology in vitro.

    Science.gov (United States)

    Berezin, V; Kawa, A; Bojic, U; Foley, A; Nau, H; Regan, C; Edvardsen, K; Bock, E

    1996-10-01

    To develop a simple prescreening system for teratogenicity testing, a novel in vitro assay was established using computer-assisted microscopy, allowing automatic delineation of the contours of stained cells and thereby quantitative determination of cellular morphology. The effects of valproic acid (VPA) and analogues with high as well as low teratogenic activities (as previously determined in vivo) were used as probes to study the discriminatory power of the in vitro model. VPA, a teratogenic analogue (+/-)-4-en-VPA, and a non-teratogenic analogue (E)-2-en-VPA, as well as the purified (S)- and (R)-enantiomers of 4-yn-VPA (teratogenic and non-teratogenic, respectively), were tested for their effects on the cellular morphology of cloned mouse fibroblastoid L-cell lines, neuroblastoma N2a cells, and rat glioma BT4Cn cells, and were found to induce varying increases in cellular area. Furthermore, it was demonstrated that, under the chosen conditions, the increase in area correlated statistically significantly with the teratogenic potency of the employed compounds. Setting the cellular area of mouse L-cells under control conditions to 100%, the most pronounced effect was observed for (S)-4-yn-VPA (211%, P < 0.001), followed by VPA (186%, P < 0.001), 4-en-VPA (169%, P < 0.001), and the non-teratogenic 2-en-VPA (137%, P < 0.005) and (R)-4-yn-VPA (105%). This effect was independent of the choice of substrata, since it was observed in L-cells grown on plastic, fibronectin, laminin and Matrigel. However, when VPA-treated cells were exposed to an arginyl-glycyl-aspartate (RGD)-containing peptide to test whether VPA treatment was able to modulate RGD-dependent integrin interactions with components of the extracellular matrix, hardly any effect could be observed, whereas control cells readily detached from the substratum, indicating changed substrate adhesion of the VPA-treated cells. The data thus indicate that measurement of cellular area may serve as a simple in vitro test in the

  14. Quantitative Estimation of the Velocity of Urbanization in China Using Nighttime Luminosity Data

    Directory of Open Access Journals (Sweden)

    Ting Ma

    2016-01-01

    Full Text Available Rapid urbanization with sizeable enhancements of urban population and built-up land in China creates challenging planning and management issues due to the complexity of both the urban development and the socioeconomic drivers of environmental change. Improved understanding of spatio-temporal characteristics of urbanization processes is increasingly important for investigating urban expansion and environmental responses to corresponding socioeconomic and landscape dynamics. In this study, we present an artificial luminosity-derived index of the velocity of urbanization, defined as the ratio of the temporal trend and the spatial gradient of mean annual stable nighttime brightness, to estimate the pace of urbanization and consequent changes in land cover in China for the period 2000–2010. Using the Defense Meteorological Satellite Program-derived time series of nighttime light data and corresponding satellite-based land cover maps, our results show that the geometric mean velocity of urban dispersal at the country level was 0.21 km·yr⁻¹ across 88.58 × 10³ km² of urbanizing areas, in which ~23% of areas originally made of natural and cultivated lands were converted to artificial surfaces between 2000 and 2010. The speed of urbanization varies among urban agglomerations and cities with different development stages and urban forms. In particular, the Yangtze River Delta conurbation shows the fastest (0.39 km·yr⁻¹) and most extensive (16.12 × 10³ km²) urban growth in China over the 10-year period. Moreover, if the current velocity holds, our estimates suggest that an additional 13.29 × 10³ km² of land area will be converted to human-built features, while high-density socioeconomic activities across the current urbanizing regions and urbanized areas will greatly increase from 52.44 × 10³ km² in 2010 to 62.73 × 10³ km² in China's mainland during the next several decades. Our findings may provide potential insights into the pace of urbanization in
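
    The velocity index defined above (per-pixel temporal trend in brightness divided by the spatial gradient of mean brightness, yielding km per year) can be sketched on a synthetic brightness stack; real input would be DMSP annual composites:

```python
import numpy as np

# Velocity of urbanization: trend (DN/yr) / spatial gradient (DN/km) = km/yr.
ny, nx, years = 50, 50, 11
cell_km = 1.0                                         # assumed cell size
yy, xx = np.mgrid[0:ny, 0:nx]
base = 60 - 0.8 * np.hypot(yy - 25, xx - 25)          # bright core, dim edges
stack = np.array([base + 1.5 * t for t in range(years)])  # brightening 1.5 DN/yr

t = np.arange(years)
trend = np.polyfit(t, stack.reshape(years, -1), 1)[0].reshape(ny, nx)
gy, gx = np.gradient(stack.mean(axis=0), cell_km)
gradient = np.hypot(gy, gx)
velocity = trend / np.maximum(gradient, 1e-6)         # km per year
print(f"median velocity = {np.median(velocity):.2f} km/yr")
```

Intuitively, a uniform brightening of 1.5 DN/yr over a brightness field falling off at 0.8 DN/km means the urban "front" advances at 1.5/0.8 ≈ 1.9 km/yr.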

  15. THE EVOLUTION OF SOLAR FLUX FROM 0.1 nm TO 160 μm: QUANTITATIVE ESTIMATES FOR PLANETARY STUDIES

    Energy Technology Data Exchange (ETDEWEB)

    Claire, Mark W. [School of Environmental Sciences, University of East Anglia, Norwich, UK NR4 7TJ (United Kingdom); Sheets, John; Meadows, Victoria S. [Virtual Planetary Laboratory and Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Cohen, Martin [Radio Astronomy Laboratory, University of California, Berkeley, CA 94720-3411 (United States); Ribas, Ignasi [Institut de Ciencies de l' Espai (CSIC-IEEC), Facultat de Ciencies, Torre C5 parell, 2a pl, Campus UAB, E-08193 Bellaterra (Spain); Catling, David C., E-mail: M.Claire@uea.ac.uk [Virtual Planetary Laboratory and Department of Earth and Space Sciences, University of Washington, Box 351310, Seattle, WA 98195 (United States)

    2012-09-20

    Understanding changes in the solar flux over geologic time is vital for understanding the evolution of planetary atmospheres because it affects atmospheric escape and chemistry, as well as climate. We describe a numerical parameterization for wavelength-dependent changes to the non-attenuated solar flux appropriate for most times and places in the solar system. We combine data from the Sun and solar analogs to estimate enhanced UV and X-ray fluxes for the young Sun and use standard solar models to estimate changing visible and infrared fluxes. The parameterization, a series of multipliers relative to the modern top of the atmosphere flux at Earth, is valid from 0.1 nm through the infrared, and from 0.6 Gyr through 6.7 Gyr, and is extended from the solar zero-age main sequence to 8.0 Gyr subject to additional uncertainties. The parameterization is applied to a representative modern day flux, providing quantitative estimates of the wavelength dependence of solar flux for paleodates relevant to the evolution of atmospheres in the solar system (or around other G-type stars). We validate the code by Monte Carlo analysis of uncertainties in stellar age and flux, and with comparisons to the solar proxies κ¹ Cet and EK Dra. The model is applied to the computation of photolysis rates on the Archean Earth.

  16. Quantitative PCR-based genome size estimation of the astigmatid mites Sarcoptes scabiei, Psoroptes ovis and Dermatophagoides pteronyssinus

    Directory of Open Access Journals (Sweden)

    Mounsey Kate E

    2012-01-01

    Full Text Available Abstract Background The lack of genomic data available for mites limits our understanding of their biology. Evolving high-throughput sequencing technologies promise to deliver rapid advances in this area, however, estimates of genome size are initially required to ensure sufficient coverage. Methods Quantitative real-time PCR was used to estimate the genome sizes of the burrowing ectoparasitic mite Sarcoptes scabiei, the non-burrowing ectoparasitic mite Psoroptes ovis, and the free-living house dust mite Dermatophagoides pteronyssinus. Additionally, the chromosome number of S. scabiei was determined by chromosomal spreads of embryonic cells derived from single eggs. Results S. scabiei cells were shown to contain 17 or 18 small chromosomes. The estimated genome sizes of S. scabiei and P. ovis were 96 (± 7) Mb and 86 (± 2) Mb respectively, among the smallest arthropod genomes reported to date. The D. pteronyssinus genome was estimated to be larger than its parasitic counterparts, at 151 Mb in female mites and 218 Mb in male mites. Conclusions These data provide a starting point for understanding the genetic organisation and evolution of these astigmatid mites, informing future sequencing projects. A comparative genomic approach including these three closely related mites is likely to reveal key insights on mite biology, parasitic adaptations and immune evasion.
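The qPCR method above reduces to a mass-per-copy conversion: the copy number of a single-copy gene measured in a known mass of genomic DNA fixes the mass of one genome, which converts to base pairs via Avogadro's number and the mean molar mass of a base pair (~650 g/mol). A minimal sketch with illustrative numbers (not the paper's measurements):

```python
AVOGADRO = 6.022e23      # molecules per mole
MEAN_BP_WEIGHT = 650.0   # average molar mass of a DNA base pair (g/mol)

def genome_size_bp(dna_mass_ng, single_copy_gene_copies):
    """Genome size in base pairs from the qPCR copy number of a
    single-copy gene measured in dna_mass_ng of genomic DNA."""
    mass_g = dna_mass_ng * 1e-9
    # each genome copy weighs (genome_size * MEAN_BP_WEIGHT / AVOGADRO) grams,
    # so genome size = mass per copy / mass per base pair
    mass_per_copy_g = mass_g / single_copy_gene_copies
    return mass_per_copy_g * AVOGADRO / MEAN_BP_WEIGHT

# illustrative: 1 ng of DNA containing ~9.65e3 copies of a single-copy target
size_mb = genome_size_bp(1.0, 9.65e3) / 1e6  # ~96 Mb for these assumed inputs
```

With these assumed inputs the estimate lands near 96 Mb, the order of magnitude reported for S. scabiei; the paper's actual reaction masses and copy numbers are not reproduced here.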

  17. A quantitative framework to estimate the relative importance of environment, spatial variation and patch connectivity in driving community composition.

    Science.gov (United States)

    Monteiro, Viviane F; Paiva, Paulo C; Peres-Neto, Pedro R

    2017-03-01

    Perhaps the most widely used quantitative approach in metacommunity ecology is the estimation of the importance of local environment vs. spatial structuring using the variation partitioning framework. Contrary to metapopulation models, however, current empirical studies of metacommunity structure using variation partitioning assume a space-for-dispersal substitution due to the lack of analytical frameworks that incorporate patch connectivity predictors of dispersal dynamics. Here, a method is presented that allows estimating the relative importance of environment, spatial variation and patch connectivity in driving community composition variation within metacommunities. The proposed approach is illustrated by a study designed to understand the factors driving the structure of a soft-bottom marine polychaete metacommunity. Using a standard variation partitioning scheme (i.e. where only environmental and spatial predictors are used), only about 13% of the variation in metacommunity structure was explained. With the connectivity set of predictors, the total amount of explained variation increased up to 51% of the variation. These results highlight the importance of considering predictors of patch connectivity rather than just spatial predictors. Given that information on connectivity can be estimated by commonly available data on species distributions for a number of taxa, the framework presented here can be readily applied to past studies as well, facilitating a more robust evaluation of the factors contributing to metacommunity structure.
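The partitioning logic can be illustrated with ordinary least squares on a univariate toy response: fit models with each predictor set alone and together, then difference the R² values into pure, shared, and unexplained fractions. This is a minimal sketch with synthetic data; the study itself partitions variation in a full multivariate community matrix (e.g., via redundancy analysis), and the spatial predictor set is omitted here for brevity:

```python
import numpy as np

def r2(X, y):
    """R-squared of an ordinary least-squares fit (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 200
env = rng.normal(size=(n, 1))    # hypothetical environmental predictor
conn = rng.normal(size=(n, 1))   # hypothetical patch-connectivity predictor
y = (0.8 * env + 0.6 * conn).ravel() + rng.normal(scale=0.5, size=n)

r2_env = r2(env, y)
r2_conn = r2(conn, y)
r2_both = r2(np.hstack([env, conn]), y)

pure_env = r2_both - r2_conn           # explained by environment alone
pure_conn = r2_both - r2_env           # explained by connectivity alone
shared = r2_env + r2_conn - r2_both    # jointly explained
unexplained = 1.0 - r2_both
```

The four fractions sum to one by construction, which is the bookkeeping that lets the connectivity set be credited separately from the environmental set.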

  18. Validation of Body Condition Indices and Quantitative Magnetic Resonance in Estimating Body Composition in a Small Lizard

    Science.gov (United States)

    WARNER, DANIEL A.; JOHNSON, MARIA S.; NAGY, TIM R.

    2017-01-01

    Measurements of body condition are typically used to assess an individual’s quality, health, or energetic state. Most indices of body condition are based on linear relationships between body length and mass. Although these indices are simple to obtain, nonlethal, and useful indications of energetic state, their accuracy at predicting constituents of body condition (e.g., fat and lean mass) is often unknown. The objectives of this research were to (1) validate the accuracy of another simple and noninvasive method, quantitative magnetic resonance (QMR), at estimating body composition in a small-bodied lizard, Anolis sagrei, and (2) evaluate the accuracy of two indices of body condition (based on length–mass relationships) at predicting body fat, lean, and water mass. Comparisons of results from QMR scans to those from chemical carcass analysis reveal that QMR measures body fat, lean, and water mass with excellent accuracy in male and female lizards. With minor calibration from regression equations, QMR will be a reliable method of estimating body composition of A. sagrei. Body condition indices were positively related to absolute estimates of each constituent of body composition, but these relationships showed considerable variation around regression lines. In addition, condition indices did not predict fat, lean, or water mass when adjusted for body mass. Thus, our results emphasize the need for caution when interpreting body condition based upon linear measurements of animals. Overall, QMR provides an alternative noninvasive method for accurately measuring fat, lean, and water mass in these small-bodied animals. PMID:28035770
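A common length–mass condition index of the kind evaluated here is the residual index: the residuals of a log(mass)-on-log(length) regression, where positive values indicate an individual heavier than expected for its length. A minimal sketch with hypothetical snout–vent lengths and masses (not the study's data):

```python
import numpy as np

def condition_index(length_mm, mass_g):
    """Residual body condition index: residuals of log(mass) ~ log(length).

    Positive values = heavier than expected for that length."""
    x = np.log(np.asarray(length_mm, float))
    y = np.log(np.asarray(mass_g, float))
    slope, intercept = np.polyfit(x, y, 1)   # ordinary least-squares line
    return y - (slope * x + intercept)

# hypothetical snout-vent lengths (mm) and masses (g) for small lizards
svl = [45, 50, 52, 55, 60, 48, 58]
mass = [2.1, 2.9, 3.0, 3.6, 4.5, 2.4, 4.3]
ci = condition_index(svl, mass)
```

Because the index is a regression residual, it sums to zero across the sample; it ranks individuals relative to the sample's own length–mass trend rather than measuring fat or lean mass directly, which is exactly the limitation the study probes.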

  19. Quantitative Estimation of Risks for Production Unit Based on OSHMS and Process Resilience

    Science.gov (United States)

    Nyambayar, D.; Koshijima, I.; Eguchi, H.

    2017-06-01

    Three principal elements in the production field of the chemical/petrochemical industry are (i) Production Units, (ii) Production Plant Personnel and (iii) Production Support System (a computer system introduced to improve productivity). Each principal element has production process resilience, i.e. a capability to restrain disruptive signals occurring in and out of the production field. For each principal element, risk assessment is indispensable in the production field. In a production facility, an occupational safety and health management system (hereafter referred to as OSHMS) has been introduced to reduce the risk of accidents and troubles that may occur during production. In OSHMS, a risk assessment is specified to reduce potential risk in a production facility such as a factory, and PDCA activities are required for continual improvement of a safe production environment. However, there is no clear guidance on how to adopt the OSHMS standard in the production field. This study introduces a metric to estimate the resilience of the production field by using the resilience generated by the production plant personnel and the result of the risk assessment in the production field. A method for evaluating how OSHMS functions are systematically installed in the production field is also discussed based on the resilience of the three principal elements.

  20. Quantitative estimation of sediment erosion and accretion processes in a micro-tidal coast

    Institute of Scientific and Technical Information of China (English)

    G.Udhaba DORA; V.Sanil KUMAR; P.VINAYARAJ; C.S.PHILIP; G.JOHNSON

    2014-01-01

    Spatio-temporal cross-shore profiles and textural characteristics are key parameters for understanding the dynamics of the inter-tidal sedimentary environment. This study describes short-term dynamics of the inter-tidal sedimentary environment at beaches along a micro-tidal coast. Further, a correlation is estimated between cross-shore morphodynamics and the textural characteristics of surface sediments. The sedimentary environment is examined over a complete annual cycle using monthly collected cross-shore profiles and sediment samples. The Devbag beach (northern side) and Ravindranath Tagore beach (southern side) at the Kali river mouth, Karwar, west coast of India, are characterized by extremely gentle to average slopes and are broadly composed of unimodal sands. The sedimentary environment is significantly composed of textures that are fine to medium sand, well to moderately sorted, fine to coarse skewed, and platykurtic to leptokurtic in nature. Over the annual cycle a reversal pattern is observed between the two adjacent beaches: a slower rate of sediment accretion is observed at Devbag beach, while Ravindranath Tagore beach exhibited erosion. The beach dynamics, along with the propagation of south-west (SW) and south-west-west (SWW) waves towards the coast, exhibit a dominance of northward sediment transport with the existence of a northerly alongshore current. In addition, the study reveals that an eroded beach is not necessarily composed of coarse grains. The poor correlation between morpho-sedimentary characteristics indicates that predicting grain characteristics from beach profiles, and vice versa, is unrealistic.
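Textural parameters of the kind reported here (mean size, sorting, skewness, kurtosis) are conventionally computed from phi-scale percentiles of the cumulative grain-size distribution using the Folk and Ward (1957) graphic measures. A sketch with hypothetical percentile values for a roughly symmetric fine sand:

```python
def folk_ward(p):
    """Folk & Ward (1957) graphic grain-size measures.

    p: dict of phi values at cumulative percentiles 5, 16, 25, 50, 75, 84, 95.
    Returns (mean, sorting, skewness, kurtosis) in phi units."""
    mean = (p[16] + p[50] + p[84]) / 3
    sorting = (p[84] - p[16]) / 4 + (p[95] - p[5]) / 6.6
    skewness = ((p[16] + p[84] - 2 * p[50]) / (2 * (p[84] - p[16]))
                + (p[5] + p[95] - 2 * p[50]) / (2 * (p[95] - p[5])))
    kurtosis = (p[95] - p[5]) / (2.44 * (p[75] - p[25]))
    return mean, sorting, skewness, kurtosis

# hypothetical, roughly symmetric fine-sand distribution (phi units)
phi = {5: 1.5, 16: 1.9, 25: 2.0, 50: 2.3, 75: 2.6, 84: 2.7, 95: 3.1}
mean, sorting, skew, kurt = folk_ward(phi)
```

For these assumed percentiles the sample comes out as fine sand (~2.3 phi), moderately well sorted, and nearly symmetric, matching the textural vocabulary used in the abstract.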

  1. QUANTITATIVE ESTIMATION OF DNA ISOLATED FROM VARIOUS PARTS OF ANNONA SQUAMOSA

    Directory of Open Access Journals (Sweden)

    Soni Himesh

    2011-12-01

    Full Text Available Plants have been one of the important sources of medicines since the beginning of human civilization. There is a growing demand for plant-based medicines, health products, pharmaceuticals, food supplements, cosmetics, etc. Annona squamosa Linn is a multipurpose tree with edible fruits and a source of medicinal and industrial products. Annona squamosa Linn is used as an antioxidant, antidiabetic, hepatoprotective, antitumor and antilice agent, and has cytotoxic and genotoxic activity. It is reported to contain alkaloids, flavonoids, carbohydrates, fixed oils, tannins and phenolics. Genetic variation is essential for the long-term survival of a species and is a critical feature in conservation. For efficient conservation and management, the genetic composition of the species in different geographic locations needs to be assessed. Plants are attracting more attention among contemporary pharmacy scientists because some human diseases resulting from antibiotic resistance have gained worldwide concern. A number of methods are available, and more are being developed, for the isolation of nucleic acids from plants. The different parts of Annona squamosa were studied for their nucleic acid content using spectrophotometric analysis. For measuring the DNA content of the leaves, fruits and stems of Annona squamosa, spectrophotometry offers several advantages: it is non-destructive and allows the sample to be recovered for further analysis or manipulation. Spectrophotometry uses the fact that there is a relationship between the absorption of ultraviolet light by DNA/RNA and its concentration in a sample. This article deals with modern approaches to developing a simple, efficient, reliable and cost-effective method for the isolation, separation and estimation of total genomic DNA from various parts of the same species.
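The spectrophotometric estimation rests on a standard conversion: an absorbance of 1.0 at 260 nm corresponds to roughly 50 µg/mL of double-stranded DNA, and the A260/A280 ratio (~1.8 for pure DNA) flags protein contamination. A minimal sketch with hypothetical readings:

```python
def dna_concentration_ug_per_ml(a260, dilution_factor=1.0):
    """Double-stranded DNA concentration from absorbance at 260 nm.

    Uses the standard conversion: A260 of 1.0 ~ 50 ug/mL dsDNA."""
    return a260 * 50.0 * dilution_factor

def purity_ratio(a260, a280):
    """A260/A280 ratio; ~1.8 indicates protein-free DNA."""
    return a260 / a280

# hypothetical readings for a 10-fold diluted leaf-DNA extract
conc = dna_concentration_ug_per_ml(0.25, dilution_factor=10)  # 125.0 ug/mL
ratio = purity_ratio(0.25, 0.14)
```

A ratio well below ~1.8 would suggest protein carry-over and a need to repeat the purification before downstream use.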

  2. Estimating the persistence of organic contaminants in indirect potable reuse systems using quantitative structure activity relationship (QSAR).

    Science.gov (United States)

    Lim, Seung Joo; Fox, Peter

    2012-09-01

    Predictions from the quantitative structure activity relationship (QSAR) model EPI Suite were modified to estimate the persistence of organic contaminants in indirect potable reuse systems. The modified prediction included the effects of sorption, biodegradation, and oxidation that may occur during sub-surface transport. A retardation factor was used to simulate the mobility of adsorbed compounds during sub-surface transport to a recovery well. A set of compounds with measured persistence properties during sub-surface transport was used to validate the results of the modifications to the predictions of EPI Suite. Predicted and measured values were compared, and the residual sum of squares showed the importance of including oxidation and sorption. Sorption was the most important factor to include in predicting the fates of organic chemicals in the sub-surface environment.
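The retardation factor mentioned above is conventionally written as R = 1 + (ρb/θ)·Kd, where ρb is bulk density, θ is porosity, and the linear sorption coefficient Kd is often estimated as Koc·foc. A sketch with hypothetical aquifer values (the paper's actual parameterization may differ):

```python
def retardation_factor(bulk_density_kg_L, porosity, kd_L_kg):
    """Retardation factor R = 1 + (rho_b / theta) * Kd.

    R = 1 means the compound moves with the water; larger R means
    proportionally slower sub-surface transport."""
    return 1.0 + (bulk_density_kg_L / porosity) * kd_L_kg

def kd_from_koc(koc_L_kg, fraction_organic_carbon):
    """Linear sorption coefficient Kd = Koc * foc."""
    return koc_L_kg * fraction_organic_carbon

# hypothetical aquifer: bulk density 1.6 kg/L, porosity 0.35, foc = 0.1%
kd = kd_from_koc(koc_L_kg=500.0, fraction_organic_carbon=0.001)  # 0.5 L/kg
r = retardation_factor(1.6, 0.35, kd)
```

For these assumed values a compound travels roughly 3.3 times slower than the groundwater itself, which is how sorption lengthens the effective residence time to a recovery well.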

  3. First quantitative bias estimates for tropospheric NO2 columns retrieved from SCIAMACHY, OMI, and GOME-2 using a common standard

    Directory of Open Access Journals (Sweden)

    X. Pan

    2012-06-01

    Full Text Available For the intercomparison of tropospheric nitrogen dioxide NO2 vertical column density (VCD data from three different satellite sensors (SCIAMACHY, OMI, and GOME-2, we use a common standard to quantitatively evaluate the biases for the respective data sets. As the standard, a regression analysis using a single set of collocated ground-based Multi-Axis Differential Optical Absorption Spectroscopy (MAX-DOAS observations at several sites in Japan and China in 2006–2011 is adopted. Examination of various spatial coincidence criteria indicates that the slope of the regression line can be influenced by the spatial distribution of NO2 over the area considered. While the slope varies systematically with the distance between the MAX-DOAS and satellite observation points around Tokyo in Japan, such a systematic dependence is not clearly seen and correlation coefficients are generally higher in comparisons at sites in China. On the basis of these results, we focus mainly on comparisons over China and best estimate the biases in SCIAMACHY, OMI, and GOME-2 data (TM4NO2A and DOMINO version 2 products against the MAX-DOAS observations to be −5±14 %, −10±14 %, and +1±14 %, respectively, which are all small and insignificant. We suggest that these small biases now allow analyses combining these satellite data for air quality studies that are more systematic and quantitative than previously possible.
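A regression-based bias of the kind estimated here can be summarized as the deviation of the satellite-versus-ground regression slope from unity. The sketch below uses a zero-intercept least-squares slope and hypothetical collocated columns; the study's actual regression setup may differ:

```python
import numpy as np

def relative_bias_percent(ground, satellite):
    """Relative bias of satellite columns against a ground reference,
    from a zero-intercept least-squares slope: bias = (slope - 1) * 100%."""
    ground = np.asarray(ground, float)
    satellite = np.asarray(satellite, float)
    slope = (ground @ satellite) / (ground @ ground)
    return (slope - 1.0) * 100.0

# hypothetical collocated tropospheric NO2 VCDs (10^15 molecules cm^-2)
maxdoas = [5.0, 12.0, 20.0, 8.0, 15.0]
sat = [4.6, 11.0, 18.2, 7.4, 13.8]
bias = relative_bias_percent(maxdoas, sat)
```

For these made-up pairs the satellite reads systematically low by several percent, analogous in sign to the small negative biases reported for SCIAMACHY and OMI.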

  4. Quantitative assessment of the microbial risk of leafy greens from farm to consumption: preliminary framework, data, and risk estimates.

    Science.gov (United States)

    Danyluk, Michelle D; Schaffner, Donald W

    2011-05-01

    This project was undertaken to relate what is known about the behavior of Escherichia coli O157:H7 under laboratory conditions and integrate this information with what is known regarding the 2006 E. coli O157:H7 spinach outbreak in the context of a quantitative microbial risk assessment. The risk model explicitly assumes that all contamination arises from exposure in the field. Extracted data, models, and user inputs were entered into an Excel spreadsheet, and the modeling software @RISK was used to perform Monte Carlo simulations. The model predicts that cut leafy greens that are temperature abused will support the growth of E. coli O157:H7, and populations of the organism may increase by as much as 1 log CFU/day under optimal temperature conditions. When the risk model used a starting level of -1 log CFU/g, with 0.1% of incoming servings contaminated, the predicted numbers of cells per serving were within the range of best available estimates of pathogen levels during the outbreak. The model predicts that levels in the field of -1 log CFU/g and 0.1% prevalence could have resulted in an outbreak approximately the size of the 2006 E. coli O157:H7 outbreak. This quantitative microbial risk assessment model represents a preliminary framework that identifies available data and provides initial risk estimates for pathogenic E. coli in leafy greens. Data gaps include retail storage times, correlations between storage time and temperature, determining the importance of lag time in E. coli O157:H7 leafy-green growth models, and validation of the importance of cross-contamination during the washing process.
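The core of such a Monte Carlo model, a contamination prevalence, a field-level starting concentration, and log-linear growth under temperature abuse, can be sketched in a few lines. All parameter values below are illustrative stand-ins, not the model's calibrated inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n_servings = 100_000
serving_g = 85.0                     # hypothetical serving size (g)
prevalence = 0.001                   # 0.1% of incoming servings contaminated
start_log_cfu_g = -1.0               # field contamination level (log CFU/g)
growth_log_per_day = 1.0             # growth rate under temperature abuse
abuse_days = rng.uniform(0, 2, n_servings)  # hypothetical abuse durations

contaminated = rng.random(n_servings) < prevalence
log_cfu_g = start_log_cfu_g + growth_log_per_day * abuse_days
cells_per_serving = np.where(contaminated, 10.0 ** log_cfu_g * serving_g, 0.0)

frac_contaminated = contaminated.mean()
mean_dose = cells_per_serving[contaminated].mean()  # cells per contaminated serving
```

The simulated dose distribution would then feed a dose-response relationship to produce illness-risk estimates; that step, and the correlation between storage time and temperature flagged as a data gap, are omitted here.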

  5. Using Gypsum Hydration Water to Quantitatively Estimate the Intensity of the Terminal Classic Drought in the Maya Lowlands

    Science.gov (United States)

    Gázquez, F.; Evans, N. P.; Bauska, T. K.; Hodell, D. A.

    2016-12-01

    Paleoclimate evidence suggests that drought coincided with the collapse of the lowland Classic Maya civilization between 800 and 1000 AD. However, attempts to quantitatively determine the magnitude of hydrologic change have met with mixed results. Several periods of gypsum deposition have been documented in Lake Chichancanab (Yucatan Peninsula, Mexico) sediment cores and interpreted as representing times of drought. Here we analyzed the triple oxygen (17O/16O, 18O/16O) and hydrogen (2H/1H) isotope ratios of the gypsum hydration water to obtain the δ18O, δD, 17O-excess, and d-excess of the lake water during the drought periods. By comparing these results to measurements made on the modern lake, rain, and ground waters, we are able to better constrain the hydrological changes that occurred in the lake basin during the Terminal Classic Drought (TCD). During the TCD, the δ18O and δD of the lake water increased compared with modern values, whereas the 17O-excess and d-excess decreased. The isotopic composition of lake water (δ17O, δ18O, and δD, and derived d-excess and 17O-excess) is sensitive to changes in atmospheric relative humidity and temperature. We modeled the isotopic data and found the observed changes can be explained by a 10% reduction in relative humidity compared to modern conditions. This reduction in relative humidity was accompanied by a significant increase in evaporation over precipitation. Furthermore, we show that the driest period occurred during the early phase of the TCD (ca 770-870 AD), when the Classic Maya declined. Previous studies based on stalagmite δ18O records suggested that the greatest drought period occurred in the Postclassic Period (1020-1100 AD) and post-dated the collapse. Our findings from Lake Chichancanab suggest that the changes to the hydrological budget during the TCD were greater than those during the early Postclassic Period.
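The derived quantities follow standard definitions: d-excess = δD − 8·δ18O (Dansgaard), and 17O-excess = ln(δ17O + 1) − 0.528·ln(δ18O + 1), reported in per meg with the deltas in fractional units. A sketch with hypothetical lake-water values (evaporation drives both quantities down, as observed for the TCD):

```python
import math

def d_excess(delta_D_permil, delta_18O_permil):
    """Deuterium excess (permil): d = dD - 8 * d18O (Dansgaard definition)."""
    return delta_D_permil - 8.0 * delta_18O_permil

def o17_excess_per_meg(delta_17O_permil, delta_18O_permil):
    """17O-excess in per meg: ln(d17O + 1) - 0.528 * ln(d18O + 1),
    with delta values converted from permil to fractional units."""
    d17 = delta_17O_permil / 1000.0
    d18 = delta_18O_permil / 1000.0
    return (math.log(1 + d17) - 0.528 * math.log(1 + d18)) * 1e6

# hypothetical evaporated lake water
d_ex = d_excess(10.0, 4.0)                # -22.0 permil
o17_ex = o17_excess_per_meg(2.10, 4.0)    # ~-10 per meg
```

Because both excess parameters respond to kinetic fractionation during evaporation, their joint decrease is what allows the relative-humidity reduction to be quantified.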

  6. The Global Precipitation Climatology Project (GPCP) Combined Precipitation Dataset

    Science.gov (United States)

    Huffman, George J.; Adler, Robert F.; Arkin, Philip; Chang, Alfred; Ferraro, Ralph; Gruber, Arnold; Janowiak, John; McNab, Alan; Rudolf, Bruno; Schneider, Udo

    1997-01-01

    The Global Precipitation Climatology Project (GPCP) has released the GPCP Version 1 Combined Precipitation Data Set, a global, monthly precipitation dataset covering the period July 1987 through December 1995. The primary product in the dataset is a merged analysis incorporating precipitation estimates from low-orbit-satellite microwave data, geosynchronous-orbit-satellite infrared data, and rain gauge observations. The dataset also contains the individual input fields, a combination of the microwave and infrared satellite estimates, and error estimates for each field. The data are provided on 2.5 deg x 2.5 deg latitude-longitude global grids. Preliminary analyses show general agreement with prior studies of global precipitation and extend prior studies of El Nino-Southern Oscillation precipitation patterns. At the regional scale there are systematic differences with standard climatologies.

  7. A reliable and accurate portable device for rapid quantitative estimation of iodine content in different types of edible salt

    Directory of Open Access Journals (Sweden)

    Kapil Yadav

    2015-01-01

    Full Text Available Background: Continuous monitoring of salt iodization to ensure the success of the Universal Salt Iodization (USI) program can be significantly strengthened by the use of a simple, safe, and rapid method of salt iodine estimation. This study assessed the validity of a new portable device, iCheck Iodine, developed by BioAnalyt GmbH to estimate the iodine content in salt. Materials and Methods: Validation of the device was conducted in the laboratory of the South Asia regional office of the International Council for Control of Iodine Deficiency Disorders (ICCIDD). The validity of the device was assessed using device-specific indicators, comparison of the iCheck Iodine device with iodometric titration, and comparison between iodine estimation using 1 g and 10 g of salt by iCheck Iodine, using 116 salt samples procured from various small-, medium-, and large-scale salt processors across India. Results: The intra- and interassay imprecision for 10 parts per million (ppm), 30 ppm, and 50 ppm concentrations of iodized salt were 2.8%, 6.1%, and 3.1%, and 2.4%, 2.2%, and 2.1%, respectively. Interoperator imprecision was 6.2%, 6.3%, and 4.6% for salt with iodine concentrations of 10 ppm, 30 ppm, and 50 ppm, respectively. The correlation coefficient between measurements by the two methods was 0.934, and the correlation coefficient between measurements using 1 g and 10 g of iodized salt by the iCheck Iodine device was 0.983. Conclusions: The iCheck Iodine device is reliable and provides a valid method for the quantitative estimation of the iodine content of iodized salt fortified with potassium iodate in the field setting and for different types of salt.
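The reported intra- and interassay imprecision figures are coefficients of variation: the sample standard deviation of replicate readings expressed as a percentage of their mean. A sketch with hypothetical replicate readings:

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation (%) of replicate measurements:
    100 * sample standard deviation / mean."""
    mean = statistics.fmean(replicates)
    return 100.0 * statistics.stdev(replicates) / mean

# hypothetical replicate iodine readings (ppm) for a nominal 30 ppm sample
readings = [29.1, 30.4, 31.2, 28.8, 30.5]
cv = cv_percent(readings)
```

For these made-up replicates the CV lands in the low single digits, the same scale as the 2-6% imprecision reported for the device.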

  8. Quantitative estimates of reaction induced pressures: an example from the Norwegian Caledonides.

    Science.gov (United States)

    Vrijmoed, Johannes C.; Podladchikov, Yuri Y.

    2013-04-01

    Estimating the pressure and temperature of metamorphic rocks is fundamental to the understanding of geodynamics. It is therefore important to determine the mechanisms that were responsible for the pressures and temperatures recorded by metamorphic rocks. Both pressure and temperature increase with depth in the Earth. Whereas temperature can vary due to local heat sources such as magmatic intrusions, percolation of hot fluids or deformation in shear zones, pressure in petrology is generally assumed to vary homogeneously with depth. However, fluid injection into veins, the development of pressure shadows around porphyroblasts, and the fracturing and folding of rocks all involve variations in stress and therefore also in pressure (mean stress). Volume change during phase transformations or mineral reactions has the potential to build pressure if it proceeds faster than the minerals or rocks can deform to accommodate the volume change. This mechanism of pressure generation does not require the rocks to be under differential stress; it may, however, lead to the development of local differential stress. The Western Gneiss Region (WGR) is a basement window within the Norwegian Caledonides. This area is well known for its occurrences of HP to UHP rocks, mainly found as eclogite boudins and lenses and more rarely within felsic gneisses. Present observations document a regional metamorphic gradient increasing towards the NW, and structures in the field can account for the exhumation of the (U)HP rocks from ~2.5 to 3 GPa. Locally, however, mineralogical and geothermobarometric evidence points to metamorphic pressures up to 4 GPa. These locations present an example of local extreme pressure excursions from the regional and mostly coherent metamorphic gradient that are difficult to account for by present-day structural field observations.
Detailed structural, petrological, mineralogical, geochemical and geochronological study at the Svartberget UHP diamond locality have shown the injection

  9. Quantitative estimation of foot-flat and stance phase of gait using foot-worn inertial sensors.

    Science.gov (United States)

    Mariani, Benoit; Rouhani, Hossein; Crevoisier, Xavier; Aminian, Kamiar

    2013-02-01

    Time periods composing the stance phase of gait can be clinically meaningful parameters to reveal differences between normal and pathological gait. This study aimed, first, to describe a novel method for detecting stance and inner-stance temporal events based on foot-worn inertial sensors; second, to extract and validate relevant metrics from those events; and third, to investigate their suitability as clinical outcomes for gait evaluations. Forty-two subjects, including healthy subjects and patients before and after surgical treatment for ankle osteoarthritis, performed 50-m walking trials while wearing foot-worn inertial sensors and pressure insoles as a reference system. Several hypotheses were evaluated to detect heel-strike, toe-strike, heel-off, and toe-off based on kinematic features. Detected events were compared with the reference system on 3193 gait cycles and showed good accuracy and precision. Absolute and relative stance periods, namely loading response, foot-flat, and push-off, were then estimated, validated, and compared statistically between populations. Besides significant differences observed in stance duration, the analysis revealed differing tendencies, notably a shorter foot-flat in healthy subjects. The results indicated which features in inertial sensors' signals should be preferred for detecting temporal events precisely and accurately against a reference standard. The system is suitable for clinical evaluations and provides temporal analysis of gait beyond the common swing/stance decomposition, through a quantitative estimation of inner-stance phases such as foot-flat.
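Given the four detected events, the inner-stance periods follow by simple differencing: loading response spans heel-strike to toe-strike, foot-flat spans toe-strike to heel-off, and push-off spans heel-off to toe-off. A sketch with hypothetical event times:

```python
def stance_phases(heel_strike, toe_strike, heel_off, toe_off):
    """Inner-stance periods (seconds and % of stance) from gait-event times.

    loading response: heel-strike -> toe-strike
    foot-flat:        toe-strike -> heel-off
    push-off:         heel-off   -> toe-off"""
    stance = toe_off - heel_strike
    phases = {
        "loading_response": toe_strike - heel_strike,
        "foot_flat": heel_off - toe_strike,
        "push_off": toe_off - heel_off,
    }
    percents = {k: 100.0 * v / stance for k, v in phases.items()}
    return stance, phases, percents

# hypothetical event times (s) within one gait cycle
stance, phases, pct = stance_phases(0.00, 0.12, 0.45, 0.70)
```

Expressing each period as a percentage of stance is what makes the relative metrics comparable across subjects with different walking speeds.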

  10. An appraisal of precipitation distribution in the high-altitude catchments of the Indus basin.

    Science.gov (United States)

    Dahri, Zakir Hussain; Ludwig, Fulco; Moors, Eddy; Ahmad, Bashir; Khan, Asif; Kabat, Pavel

    2016-04-01

    Scarcity of in-situ observations coupled with high orographic influences has prevented a comprehensive assessment of precipitation distribution in the high-altitude catchments of the Indus basin. Available data are generally fragmented and scattered among different organizations and mostly cover the valleys. Here, we combine most of the available station data with indirect precipitation estimates at the accumulation zones of major glaciers to analyse the altitudinal dependency of precipitation in the high-altitude Indus basin. The available observations signified the importance of orography in each sub-hydrological basin but could not infer an accurate distribution of precipitation with altitude. We used a Kriging with External Drift (KED) interpolation scheme with elevation as a predictor to appraise the spatiotemporal distribution of mean monthly, seasonal and annual precipitation for the period 1998-2012. The KED-based annual precipitation estimates are verified by the corresponding basin-wide observed specific runoffs, which show good agreement. In contrast to earlier studies, our estimates reveal substantially higher precipitation in most of the sub-basins, indicating two distinct rainfall maxima: the first along the southern and lowermost slopes of the Chenab, Jhelum, Indus main and Swat basins, and the second around the north-west corner of the Shyok basin in the central Karakoram. The study demonstrated that the selected gridded precipitation products covering this region are prone to significant errors. In terms of quantitative estimates, ERA-Interim is relatively close to the observations, followed by WFDEI and TRMM, while APHRODITE gives highly underestimated precipitation estimates in the study area. Basin-wide seasonal and annual correction factors introduced for each gridded dataset can be useful for lumped hydrological modelling studies, while the estimated precipitation distribution can serve as a basis for bias correction of any gridded precipitation products for the study area.
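Kriging with External Drift models precipitation as an elevation-dependent trend plus spatially correlated residuals. The sketch below is a heavily simplified stand-in, a linear elevation trend with inverse-distance-weighted residuals rather than a true kriging system, using hypothetical stations:

```python
import numpy as np

def precip_ked_like(xy_obs, elev_obs, p_obs, xy_new, elev_new, power=2.0):
    """Simplified KED-like estimate: a linear precipitation-elevation trend
    fitted to stations, plus inverse-distance-weighted residuals.
    (Real KED solves a kriging system with the drift; this is only a sketch.)"""
    A = np.column_stack([np.ones(len(elev_obs)), elev_obs])
    coef, *_ = np.linalg.lstsq(A, p_obs, rcond=None)
    resid = p_obs - A @ coef
    est = []
    for (x, y), z in zip(xy_new, elev_new):
        d = np.hypot(xy_obs[:, 0] - x, xy_obs[:, 1] - y)
        w = 1.0 / np.maximum(d, 1e-9) ** power
        est.append(coef[0] + coef[1] * z + np.average(resid, weights=w))
    return np.array(est)

# hypothetical stations: coordinates (km), elevation (m), annual precip (mm)
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
elev = np.array([1200.0, 1800.0, 2400.0, 3000.0])
precip = np.array([400.0, 520.0, 660.0, 780.0])
p_hat = precip_ked_like(xy, elev, precip, [(5.0, 5.0)], [2100.0])
```

Even in this toy form the estimate inherits the orographic gradient from the drift term, which is the property that lets KED extrapolate into sparsely gauged high-altitude zones.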

  11. Research project in support of the GPCC: error estimation and development of a new method for the combination of conventionally observed and other precipitation data (especially satellite data) for operational analysis for global precipitation. Final report; Begleitendes FE-Vorhaben zum WZN: Fehlerschaetzung und Implementierung einer Methode zur optimalen Verknuepfung konventioneller und anderer Niederschlagsdaten (insbesondere Satellitendaten) zu operationellen globalen Niederschlagsanalysen. Schlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Rudolf, B.

    1998-04-01

    The Global Precipitation Climatology Centre (GPCC) was established in order to provide climate research with global gridded precipitation datasets derived from observational data. The centre is a German contribution to the World Climate Research Programme and is integrated in its major project Global Energy and Water Cycle Experiment (GEWEX). The goals of the research project described here were (1) improvement of the accuracy of the global precipitation analyses of the GPCC and (2) quantification of the errors of the analysis products. Within the project the following tasks have been carried out: - Development of the databank (procedures for station identification and data access). - Quality control for the input data and for the resulting products. - Improvement of the control methods for monthly precipitation data. - Development of an operational method for estimating the errors of gridded precipitation data that are area-averaged from conventional in-situ observations. - Intercomparison of gridded precipitation datasets resulting from different observation techniques and models. - Development of an operational method for the combination of gridded precipitation datasets resulting from different observation techniques (in collaboration with other contributors to the International Global Precipitation Climatology Project (GPCP)). - Investigation of the distribution of raingauge stations in relation to orography. - Preparation of digital global orography data for application in precipitation analysis. - Examination of methods and results of precipitation analysis based on orography data. Products of the GPCC and more information are available on the Internet at http://www.dwd.de/research/gpcc. (orig.) [German] The Global Precipitation Climatology Centre (GPCC) was created to provide climate research with monthly global precipitation analyses based on observational data. 
    As a German contribution to the

  12. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies.

    Directory of Open Access Journals (Sweden)

    Wenchao Zhang

    2016-05-01

Full Text Available The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the 'missing heritability,' which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned results identical to those of the original prototype R code at each analysis step, but reduced the computational time from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome-wide epistatic genetic effect analysis and enable accommodation of the high computational demand.
The PEPIS is publicly available at http://bioinfo.noble.org/PolyGenic_QTL/.
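As an illustration of what the first sub-pipeline computes, a marker-based additive kinship matrix can be sketched as K = ZZᵀ/m over mean-centred genotype codes. This is a generic textbook formulation with toy genotypes; the exact kinship definitions PEPIS uses, particularly for the epistatic terms, may differ.

```python
# Minimal sketch of a marker-based additive kinship matrix (illustrative;
# not necessarily PEPIS's exact formulation).
def kinship(genotypes):
    """genotypes: n individuals x m markers coded 0/1/2. Returns n x n K = Z Z^T / m."""
    n, m = len(genotypes), len(genotypes[0])
    # centre each marker by its mean allele dosage across individuals
    means = [sum(row[j] for row in genotypes) / n for j in range(m)]
    Z = [[genotypes[i][j] - means[j] for j in range(m)] for i in range(n)]
    K = [[sum(Z[i][k] * Z[j][k] for k in range(m)) / m for j in range(n)]
         for i in range(n)]
    return K

G = [[0, 1, 2], [1, 1, 0], [2, 1, 1]]   # 3 individuals, 3 markers (toy data)
K = kinship(G)
```

In PEPIS this O(n²m) step is the part delegated to C/C++ and spread across cluster nodes, since real populations have thousands of individuals and markers.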

  13. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies.

    Science.gov (United States)

    Zhang, Wenchao; Dai, Xinbin; Wang, Qishan; Xu, Shizhong; Zhao, Patrick X

    2016-05-01

The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the 'missing heritability,' which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned results identical to those of the original prototype R code at each analysis step, but reduced the computational time from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome-wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publicly available at http://bioinfo.noble.org/PolyGenic_QTL/.

  14. Tropical convective systems life cycle characteristics from geostationary satellite and precipitating estimates derived from TRMM and ground weather radar observations for the West African and South American regions

    Science.gov (United States)

    Fiolleau, T.; Roca, R.; Angelis, F. C.; Viltard, N.

    2012-12-01

In the tropics, most of the rainfall comes in the form of individual storm events embedded in the synoptic circulations (e.g., monsoons). Understanding the rainfall and its variability hence requires documenting these highly contributing tropical mesoscale convective systems (MCSs). Our knowledge of the MCS life cycle, from a physical point of view, arises mainly from individual observational campaigns relying heavily on ground radar observations. While these observations enabled the creation of conceptual models of the MCS life cycle, they do not yet provide a statistically significant integrated perspective. To overcome this limitation, a composite technique, which will serve as a Day-1 algorithm for the Megha-Tropiques mission, is considered in this study. This method is based on a collocation in space and time of the level-2 rainfall estimates (BRAIN) derived from the TMI radiometer onboard TRMM with the cloud systems identified by a new MCS tracking algorithm called TOOCAN, based on a 3-dimensional segmentation (image + time) of the geostationary IR imagery. To complete this study, a similar method is also developed, collocating the cloud systems with the precipitating features derived from the ground weather radar deployed during the CHUVA campaign over several Brazilian regions from 2010 up to now. A comparison of the MCS life cycles is then performed for the 2010-2012 summer seasons over the West African and South American regions. Over the whole study region, the results show that the temporal evolution of the cold cloud shield associated with MCSs describes a symmetry between the growth and the decay phases. It is also shown that the parameters of the conceptual model of MCSs are strongly correlated, thereby reducing the problem to a single degree of freedom.
At the system scale, over both land and oceanic regions, rainfall increases at the beginning (the first third) of the life cycle and then smoothly decreases
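The core idea behind a 3-D (image + time) segmentation is that cold-cloud pixels touching in space or in successive time steps belong to one system. The sketch below uses plain 6-connected component labelling on a tiny boolean cube; TOOCAN's actual algorithm on IR brightness temperatures is more elaborate.

```python
# Hedged sketch of 3-D (x, y, t) segmentation by 6-connected labelling.
# This is the generic idea only, not the TOOCAN algorithm itself.
from collections import deque

def label_3d(mask):
    """mask[t][y][x] boolean; returns dict label -> list of (t, y, x) voxels."""
    T, Y, X = len(mask), len(mask[0]), len(mask[0][0])
    seen, systems, label = set(), {}, 0
    for t in range(T):
        for y in range(Y):
            for x in range(X):
                if mask[t][y][x] and (t, y, x) not in seen:
                    label += 1
                    q, systems[label] = deque([(t, y, x)]), []
                    seen.add((t, y, x))
                    while q:                      # breadth-first flood fill
                        ct, cy, cx = q.popleft()
                        systems[label].append((ct, cy, cx))
                        for dt, dy, dx in ((1,0,0),(-1,0,0),(0,1,0),
                                           (0,-1,0),(0,0,1),(0,0,-1)):
                            nt, ny, nx = ct + dt, cy + dy, cx + dx
                            if (0 <= nt < T and 0 <= ny < Y and 0 <= nx < X
                                    and mask[nt][ny][nx] and (nt, ny, nx) not in seen):
                                seen.add((nt, ny, nx))
                                q.append((nt, ny, nx))
    return systems

# Two time steps: one cloud persists (overlaps in time), another appears only at t=1
mask = [[[1, 1, 0],
         [0, 0, 0]],
        [[1, 0, 0],
         [0, 0, 1]]]
systems = label_3d(mask)
```

Because the time axis is segmented together with the image axes, a persisting cloud is one object across its whole life cycle, which is what makes life-cycle composites possible.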

  15. The Fine Spatial Distribution of Mean Precipitation and the Estimation of Total Precipitation in Heihe River Basin%黑河流域气候平均降水的精细化分布及总量计算

    Institute of Scientific and Technical Information of China (English)

    孙佳; 江灏; 王可丽; 雒新萍; 朱庆亮

    2011-01-01

利用黑河流域气象观测站降水资料和DEM资料,分析了气候平均年和月降水量与地理地形参数的关系.结果显示,黑河流域气候平均降水量与测站的海拔、纬度、坡度显著相关,据此建立了降水量与地理地形参数的关系模型;拟合分析表明,年降水量拟合值与实测值的相关系数达0.94,二者在大部分地区分布特征基本一致,拟合值稍大;逐月降水量拟合相对误差在上游和中游都很小.基于降水量与地理地形参数的关系模型,利用高分辨率DEM资料,扩展得到了黑河流域上中游100m×100m精细化分布的气候平均年降水量和各月降水量.结果表明,精细化分布的降水量场能够表现出更多与地形和地势有关的细节,这是只利用气象测站资料的分析结果所不能反映的.在黑河流域气候平均降水量空间精细化分布基础上,按照黑河流域上中游面积5.08×10^4 km^2计算,其气候平均年降水总量约为150.6×10^8 m^3,降水主要集中在5-9月.%Combining the meteorological observation precipitation data and DEM of the Heihe River basin, the relationship between the climatic mean annual and monthly precipitation and the geographical and topographic indexes is analyzed in this paper. Results showed that the climatic mean precipitation is significantly correlated with altitude, latitude and gradient. A relationship model is established between the precipitation and the geographical and topographic indexes. Fitting analysis shows that the regression equations of the annual precipitation and of the monthly precipitation from May to August all pass the 0.001 credibility test, and those for the other monthly precipitation pass the 0.05 credibility test. Also, the correlation coefficient between the fitted and measured annual precipitation values is 0.94; the distribution characteristics of the two are consistent in most regions, but the fitted values are slightly larger. So the fitting results are reasonable on the whole
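The precipitation-terrain model above is essentially an ordinary least-squares regression of precipitation on geographic and topographic predictors. The sketch below regresses precipitation on altitude and latitude only (slope omitted for brevity), with invented numbers; it does not reproduce the paper's coefficients.

```python
# Sketch of a precipitation-terrain regression: OLS of annual precipitation
# on altitude and latitude via the normal equations. Data are invented.
def ols(X, y):
    """Solve (X^T X) b = X^T y by Gaussian elimination; X rows start with 1 (intercept)."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    for i in range(k):                      # forward elimination, partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):          # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# rows: [1, altitude_km, latitude_deg]; y: annual precipitation (mm), invented
X = [[1, 1.5, 38.0], [1, 2.0, 38.5], [1, 2.5, 39.0], [1, 3.0, 39.5], [1, 2.2, 38.2]]
y = [160.0, 200.0, 240.0, 280.0, 206.0]
beta = ols(X, y)
```

Once fitted, such a model can be evaluated on every high-resolution DEM cell, which is exactly how the station-based relationship is extended to the 100 m × 100 m grid.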

  16. Assessing the importance of spatio-temporal RCM resolution when estimating sub-daily extreme precipitation under current and future climate conditions

    DEFF Research Database (Denmark)

    Sunyer Pinya, Maria Antonia; Luchner, J.; Onof, C.

    2017-01-01

    The increase in extreme precipitation is likely to be one of the most significant impacts of climate change in cities due to increased pluvial flood risk. Hence, reliable information on changes in sub-daily extreme precipitation is needed for robust adaptation strategies. This study explores...

  17. A Quantitative Method for Comparing the Brightness of Antibody-dye Reagents and Estimating Antibodies Bound per Cell.

    Science.gov (United States)

    Kantor, Aaron B; Moore, Wayne A; Meehan, Stephen; Parks, David R

    2016-01-01

    We present a quantitative method for comparing the brightness of antibody-dye reagents and estimating antibodies bound per cell. The method is based on complementary binding of test and fill reagents to antibody capture microspheres. Several aliquots of antibody capture beads are stained with varying amounts of the test conjugate. The remaining binding sites on the beads are then filled with a second conjugate containing a different fluorophore. Finally, the fluorescence of the test conjugate compared to the fill conjugate is used to measure the relative brightness of the test conjugate. The fundamental assumption of the test-fill method is that if it takes X molecules of one test antibody to lower the fill signal by Y units, it will take the same X molecules of any other test antibody to give the same effect. We apply a quadratic fit to evaluate the test-fill signal relationship across different amounts of test reagent. If the fit is close to linear, we consider the test reagent to be suitable for quantitative evaluation of antibody binding. To calibrate the antibodies bound per bead, a PE conjugate with 1 PE molecule per antibody is used as a test reagent and the fluorescence scale is calibrated with Quantibrite PE beads. When the fluorescence per antibody molecule has been determined for a particular conjugate, that conjugate can be used for measurement of antibodies bound per cell. This provides comparisons of the brightness of different conjugates when conducted on an instrument whose statistical photoelectron (Spe) scales are known. © 2016 by John Wiley & Sons, Inc.
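The test-fill relationship described above is expected to be close to linear: each added unit of test conjugate lowers the fill signal by a fixed amount. The sketch below fits that line to invented fluorescence values; the actual protocol uses a quadratic fit to check linearity, and the numbers here are not from the paper.

```python
# Numeric sketch of the test-fill idea: fill fluorescence should drop
# linearly as test antibody occupies capture sites. Values are invented.
def linear_fit(x, y):
    """Least-squares slope and intercept for y = slope*x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

test_amounts = [0.0, 1.0, 2.0, 3.0]        # arbitrary units of test conjugate
fill_signal  = [100.0, 80.0, 60.0, 40.0]   # fill fluorescence (invented, linear)
slope, intercept = linear_fit(test_amounts, fill_signal)
# -slope is the fill-signal drop per unit of test conjugate; the method's
# fundamental assumption is that a second test antibody producing the same
# drop binds the same number of molecules
```

A markedly non-linear test-fill curve would disqualify the reagent for quantitative evaluation, which is what the quadratic-fit check in the abstract screens for.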

  18. Improved accuracy of quantitative parameter estimates in dynamic contrast-enhanced CT study with low temporal resolution

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Haider, Masoom A. [Department of Medical Imaging, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5, Canada and Department of Medical Imaging, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Jaffray, David A. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Yeung, Ivan W. T. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Department of Medical Physics, Stronach Regional Cancer Centre, Southlake Regional Health Centre, Newmarket, Ontario L3Y 2P9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2016-01-15

    quantitative histogram parameters of volume transfer constant [standard deviation (SD), 98th percentile, and range], rate constant (SD), blood volume fraction (mean, SD, 98th percentile, and range), and blood flow (mean, SD, median, 98th percentile, and range) for sampling intervals between 10 and 15 s. Conclusions: The proposed method of PCA filtering combined with the AIF estimation technique allows low frequency scanning for DCE-CT study to reduce patient radiation dose. The results indicate that the method is useful in pixel-by-pixel kinetic analysis of DCE-CT data for patients with cervical cancer.

  19. Assessment of small-scale variability of rainfall and multi-satellite precipitation estimates using measurements from a dense rain gauge network in Southeast India

    Science.gov (United States)

    Sunilkumar, K.; Narayana Rao, T.; Satheeshkumar, S.

    2016-05-01

This paper describes the establishment of a dense rain gauge network and small-scale variability in rain events (both in space and time) over a complex hilly terrain in Southeast India. Three years of high-resolution gauge measurements are used to validate 3-hourly rainfall and sub-daily variations of four widely used multi-satellite precipitation estimates (MPEs). The network, established as part of the Megha-Tropiques validation program, consists of 36 rain gauges arranged in a near-square grid area of 50 km × 50 km with an intergauge distance of 6-12 km. Morphological features of rainfall in two principal rainy seasons (southwest monsoon, SWM, and northeast monsoon, NEM) show marked differences. The NEM rainfall exhibits significant spatial variability and most of the rainfall is associated with large-scale/long-lived systems (during wet spells), whereas the contribution from small-scale/short-lived systems is considerable during the SWM. Rain events with longer duration and copious rainfall are seen mostly in the western quadrants (a quadrant is 1/4 of the study region) in the SWM and northern quadrants in the NEM, indicating complex spatial variability within the study region. The diurnal cycle also exhibits large spatial and seasonal variability, with larger diurnal amplitudes at all the gauge locations (except one) during the SWM and smaller and insignificant diurnal amplitudes at many gauge locations during the NEM. On average, the diurnal amplitudes are a factor of 2 larger in the SWM than in the NEM. The 24 h harmonic explains about 70 % of total variance in the SWM and only ~30 % in the NEM. During the SWM, the rainfall peak is observed between 20:00 and 02:00 IST (Indian Standard Time) and is attributed to the propagating systems from the west coast during active monsoon spells. Correlograms with different temporal integrations of rainfall data (1, 3, 12, 24 h) show an increase in the spatial correlation with temporal integration, but the
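The rise of spatial correlation with temporal integration has a simple mechanism: gauge-local fluctuations average out over longer accumulation windows while the shared rain signal survives. The toy series below are constructed (not measured data) so that the local noise cancels exactly over 3-hour blocks.

```python
# Sketch of why inter-gauge correlation grows with temporal integration.
# Synthetic series, not network data: shared signal plus gauge-local noise
# that cancels over 3-h blocks.
def corr(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def aggregate(series, hours):
    """Non-overlapping sums over accumulation windows of the given length."""
    return [sum(series[i:i + hours]) for i in range(0, len(series), hours)]

signal  = [2, 2, 2, 5, 5, 5, 9, 9, 9, 4, 4, 4]   # shared rain signal (mm/h)
noise_a = [1, -1, 0] * 4                          # gauge-local fluctuations
noise_b = [-1, 0, 1] * 4                          # (cancel over 3-h blocks)
gauge_a = [s + n for s, n in zip(signal, noise_a)]
gauge_b = [s + n for s, n in zip(signal, noise_b)]
r1 = corr(gauge_a, gauge_b)                                  # hourly correlation
r3 = corr(aggregate(gauge_a, 3), aggregate(gauge_b, 3))      # 3-hourly correlation
```

Here r3 exceeds r1, mirroring the correlogram behaviour reported for the 1, 3, 12 and 24 h integrations.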

  20. Tree-ring-based estimates of long-term seasonal precipitation in the Souris River Region of Saskatchewan, North Dakota and Manitoba

    Science.gov (United States)

    Ryberg, Karen R.; Vecchia, Skip V.; Akyüz, F. Adnan; Lin, Wei

    2016-01-01

    Historically unprecedented flooding occurred in the Souris River Basin of Saskatchewan, North Dakota and Manitoba in 2011, during a longer term period of wet conditions in the basin. In order to develop a model of future flows, there is a need to evaluate effects of past multidecadal climate variability and/or possible climate change on precipitation. In this study, tree-ring chronologies and historical precipitation data in a four-degree buffer around the Souris River Basin were analyzed to develop regression models that can be used for predicting long-term variations of precipitation. To focus on longer term variability, 12-year moving average precipitation was modeled in five subregions (determined through cluster analysis of measures of precipitation) of the study area over three seasons (November–February, March–June and July–October). The models used multiresolution decomposition (an additive decomposition based on powers of two using a discrete wavelet transform) of tree-ring chronologies from Canada and the US and seasonal 12-year moving average precipitation based on Adjusted and Homogenized Canadian Climate Data and US Historical Climatology Network data. Results show that precipitation varies on long-term (multidecadal) time scales of 16, 32 and 64 years. Past extended pluvial and drought events, which can vary greatly with season and subregion, were highlighted by the models. Results suggest that the recent wet period may be a part of natural variability on a very long time scale.
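Modelling 12-year moving-average precipitation, as done above, suppresses year-to-year noise so that multidecadal variability stands out. The sketch below is a plain trailing moving average over invented annual totals; it is not the study's smoother, which also involves wavelet-based multiresolution decomposition of the tree-ring chronologies.

```python
# Sketch of a 12-year moving average over annual precipitation totals.
# Data are invented; the study's preprocessing is more involved.
def moving_average(x, window):
    """Trailing moving averages over every full window of the series."""
    return [sum(x[i:i + window]) / window for i in range(len(x) - window + 1)]

annual = [400, 420, 380, 500, 450, 430, 410, 470, 490, 460, 440, 480, 520, 455]
smooth = moving_average(annual, 12)   # one value per complete 12-year window
```

Smoothing at this scale is what allows the regression against tree-ring predictors to target the 16-, 32- and 64-year variability the authors report.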

  1. Towards a quantitative, measurement-based estimate of the uncertainty in photon mass attenuation coefficients at radiation therapy energies.

    Science.gov (United States)

    Ali, E S M; Spencer, B; McEwen, M R; Rogers, D W O

    2015-02-21

In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy, i.e. 100 keV (orthovoltage) to 25 MeV, using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ∼0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative 'envelope of uncertainty' of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol. 44 R1-22).
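If the "optimum scaling factor" is taken in the usual least-squares sense, minimizing the sum of squared discrepancies between measured values m_i and scaled calculated values s·c_i gives the closed form s = Σm_i c_i / Σc_i². That interpretation and the numbers below are illustrative assumptions, not the paper's actual fitting procedure or data.

```python
# Hedged sketch: least-squares optimal scaling factor s minimizing
# sum_i (m_i - s * c_i)^2, so s = sum(m*c) / sum(c*c). Values invented.
def optimal_scale(measured, calculated):
    return (sum(m * c for m, c in zip(measured, calculated))
            / sum(c * c for c in calculated))

measured   = [1.02, 0.99, 1.01, 1.00]   # e.g. measured-to-calculated ratios
calculated = [1.00, 1.00, 1.00, 1.00]
s = optimal_scale(measured, calculated)
# the deviation of s from 1 is a measure of the cross-section scaling that
# best reconciles experiment and calculation
```

In the paper, the spread of such factors across independent datasets is what feeds the 0.5% (68% confidence) uncertainty estimate.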

  2. Quantitative microbial risk assessment combined with hydrodynamic modelling to estimate the public health risk associated with bathing after rainfall events.

    Science.gov (United States)

    Eregno, Fasil Ejigu; Tryland, Ingun; Tjomsland, Torulv; Myrmel, Mette; Robertson, Lucy; Heistad, Arve

    2016-04-01

This study investigated the public health risk from exposure to infectious microorganisms at Sandvika recreational beaches, Norway, by combining hydrodynamic modelling with Quantitative Microbial Risk Assessment (QMRA) and dose-response relationships. Meteorological and hydrological data were collected to produce a calibrated hydrodynamic model using Escherichia coli as an indicator of faecal contamination. Based on average concentrations of reference pathogens (norovirus, Campylobacter, Salmonella, Giardia and Cryptosporidium) relative to E. coli in Norwegian sewage from previous studies, the hydrodynamic model was used for simulating the concentrations of pathogens at the local beaches during and after a heavy rainfall event, using three different decay rates. The simulated concentrations were used as input for QMRA and the public health risk was estimated as probability of infection from a single exposure of bathers during the three consecutive days after the rainfall event. The level of risk on the first day after the rainfall event was acceptable for the bacterial and parasitic reference pathogens, but high for the viral reference pathogen at all beaches, and severe at Kalvøya-small and Kalvøya-big beaches, supporting the advice of avoiding swimming in the day(s) after heavy rainfall. The study demonstrates the potential of combining discharge-based hydrodynamic modelling with QMRA in the context of bathing water as a tool to evaluate public health risk and support beach management decisions. Copyright © 2016 Elsevier B.V. All rights reserved.
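A single-exposure QMRA step typically converts a simulated pathogen concentration into an ingested dose and then into an infection probability through a dose-response model. The exponential model below is one standard choice; the parameter r and the ingested volume are invented for illustration and are not the study's values.

```python
# Hedged sketch of one QMRA step: exponential dose-response
# P(infection) = 1 - exp(-r * dose). Parameters are illustrative only.
import math

def p_infection(conc_per_L, volume_L, r):
    """Probability of infection from one bathing exposure."""
    dose = conc_per_L * volume_L          # organisms ingested in one event
    return 1.0 - math.exp(-r * dose)

# e.g. 10 organisms/L in the water, 50 mL swallowed, r = 0.2 (all invented)
p = p_infection(conc_per_L=10.0, volume_L=0.05, r=0.2)
```

Running this over the hydrodynamic model's simulated concentrations for each day after the rainfall event yields the day-by-day risk levels compared against acceptability thresholds in the abstract.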

  3. A quantitative estimate on the heat transfer in cylindrical fuel rods to account for flux depression inside fuel

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Mario A.B. da; Narain, Rajendra; Vasconcelos, Wagner E. de, E-mail: narain@ufpe.b, E-mail: wagner@ufpe.b [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Centro de Tecnologia e Geociencias. Dept. de Energia Nuclear

    2011-07-01

In a nuclear reactor, the amount of power generation is limited by thermal rather than by nuclear considerations. The reactor core must be operated at a power level such that the temperatures of the fuel and cladding anywhere in the core do not exceed safe limits, so as to prevent fuel element damage. Heat transfer from fuel pins can be calculated analytically by assuming a flat power density in the fuel pin. In actual practice, the neutron flux distribution inside fuel pins results in a smaller effective distance for the heat to be transported to the coolant. This inherent phenomenon gives rise to a heat-transfer benefit in fuel pin temperatures. In this research, a quantitative estimate of heat transfer from cylindrical fuel rods is obtained by considering a non-uniform neutron flux, which leads to a flux depression factor. This, in turn, shifts the temperature profile inside the fuel pin. A theoretical relationship combining the flux depression factor and a ratio of temperature gradients for uniform and non-uniform fluxes is derived, and a computational program, based on energy balance, is developed to validate the considered approximation. (author)
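The flat-power baseline the abstract starts from is the classical conduction result for uniform volumetric heat generation q''' in a cylindrical pellet of radius R and conductivity k: T(r) − T_surface = q'''(R² − r²)/(4k). Flux depression shifts generation toward the rim and reduces this centreline rise. The numbers below are illustrative, not the paper's.

```python
# Sketch of the flat-power-density baseline: temperature rise above the
# pellet surface for uniform volumetric heat generation. Values invented.
def delta_t_uniform(q3, R, k, r=0.0):
    """T(r) - T_surface = q''' * (R^2 - r^2) / (4k) for uniform generation."""
    return q3 * (R ** 2 - r ** 2) / (4.0 * k)

# q''' = 3e8 W/m^3, pellet radius 5 mm, UO2-like conductivity 3 W/(m.K)
dT_centre = delta_t_uniform(q3=3.0e8, R=0.005, k=3.0)
```

The paper's flux depression factor effectively rescales this centreline temperature rise, which is the "heat-transfer benefit" the abstract refers to.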

  4. Noise estimation in infrared image sequences: a tool for the quantitative evaluation of the effectiveness of registration algorithms.

    Science.gov (United States)

    Agostini, Valentina; Delsanto, Silvia; Knaflitz, Marco; Molinari, Filippo

    2008-07-01

Dynamic infrared imaging has been proposed in the literature as an adjunctive technique to mammography in breast cancer diagnosis. It is based on the acquisition of hundreds of consecutive thermal images at a frame rate ranging from 50 to 200 frames/s, followed by harmonic analysis of the temperature time series at each image pixel. However, the temperature fluctuation due to blood perfusion, which is the signal of interest, is small compared to the signal fluctuation due to subject movements. Hence, before extracting the time series describing temperature fluctuations, it is fundamental to realign the thermal images to attenuate motion artifacts. In this paper, we describe a method for the quantitative evaluation of any kind of feature-based registration algorithm on thermal image sequences, provided that an estimate of the local velocities of reference points on the skin is available. As an example of the evaluation of a registration algorithm, we report the SNR improvement obtained by applying a nonrigid piecewise linear algorithm.

  5. Avaliação de estimativas de campos de precipitação para modelagem hidrológica distribuída Assessment of estimated precipitation fields for distributed hydrologic modeling

    Directory of Open Access Journals (Sweden)

    Adriano Rolim da Paz

    2011-03-01

Full Text Available É crescente a disponibilidade e utilização de campos de chuva estimados por sensoriamento remoto ou calculados por modelos de circulação da atmosfera, os quais são freqüentemente utilizados como entrada para modelos hidrológicos distribuídos. A distribuição espacial dos campos de chuva estimados é altamente relevante e deve ser avaliada frente aos campos de chuva observados. Este artigo propõe um método de comparação espaço-temporal entre campos de chuva observados e estimados baseado na comparação pixel a pixel e na construção de tabelas de contingência. Duas abordagens são utilizadas: (i) a análise integrada no espaço gera índices de performance que retratam a qualidade do campo de chuva estimada em reproduzir a ocorrência de chuva observada ao longo do tempo; (ii) a análise integrada no tempo produz mapas dos índices de performance que resumem a destreza das estimativas de ocorrência de chuva em cada pixel. Como exemplo de aplicação, é analisada a chuva estimada na climatologia do modelo global de circulação da atmosfera CPTEC/COLA sobre a bacia do Rio Grande. Utilizando-se cinco índices de performance, o método proposto permitiu identificar variações sazonais e padrões espaciais na performance das estimativas de chuva em relação a campos de chuva derivados de observações em pluviômetros. There is an increasing availability and application of precipitation fields estimated by remote sensing or calculated by atmospheric circulation models, which are frequently used as input for distributed hydrological models. The spatial distribution of the estimated precipitation fields is extremely important and must be verified against observed precipitation fields. This paper proposes a method for spatiotemporal comparison between observed and estimated precipitation fields based on a pixel-by-pixel comparison and on contingency tables. Two distinct approaches are carried out: (i) the spatially integrated analysis
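A pixel-by-pixel contingency table classifies each observed/estimated pair as hit, miss, false alarm or correct negative, and performance indices are ratios of these counts. The sketch below computes two common indices, probability of detection (POD) and false alarm ratio (FAR), over invented data; the paper uses five indices that are not named in this abstract.

```python
# Sketch of a pixel-wise contingency-table verification. POD and FAR are
# standard indices chosen for illustration; data are invented.
def contingency(obs, est, threshold=0.0):
    """Classify each (observed, estimated) pair and return (POD, FAR)."""
    hits = misses = false_alarms = correct_neg = 0
    for o, e in zip(obs, est):
        rained_obs, rained_est = o > threshold, e > threshold
        if rained_obs and rained_est:
            hits += 1
        elif rained_obs:
            misses += 1
        elif rained_est:
            false_alarms += 1
        else:
            correct_neg += 1
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return pod, far

observed  = [0.0, 1.2, 3.4, 0.0, 2.0, 0.0]   # rain at one pixel over time (mm)
estimated = [0.0, 0.8, 0.0, 1.1, 2.5, 0.0]
pod, far = contingency(observed, estimated)
```

Evaluating such indices over time at a fixed pixel gives the maps of the paper's second approach; pooling pixels at a fixed time gives the first.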

  6. Quantitative estimation of landslide risk from rapid debris slides on natural slopes in the Nilgiri hills, India

    Directory of Open Access Journals (Sweden)

    P. Jaiswal

    2011-06-01

    Full Text Available A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms.

An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures.

For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide susceptible zones, obtained using a logistic regression model, and (4) distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents.

    Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructures, buildings, and people both in initiation and run-out areas. Risks were calculated by considering the minimum, average, and maximum landslide volumes in
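The Gumbel step used for estimating future landslide numbers has a standard closed form: with fitted location mu and scale beta, the T-year event magnitude is x_T = mu − beta·ln(−ln(1 − 1/T)). The formula is the ordinary Gumbel quantile; the parameters below are invented, not the study's fitted values.

```python
# Sketch of the Gumbel return-period quantile used for hazard assessment.
# mu and beta are invented; the study fits them to landslide-count records.
import math

def gumbel_quantile(mu, beta, T):
    """Magnitude of the T-year event: x_T = mu - beta * ln(-ln(1 - 1/T))."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

x10 = gumbel_quantile(mu=10.0, beta=4.0, T=10.0)   # 10-yr landslide count
x50 = gumbel_quantile(mu=10.0, beta=4.0, T=50.0)   # 50-yr landslide count
```

The monotone growth of x_T with T is what lets the authors attach landslide numbers to different return periods before distributing them over the susceptibility zones.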

  7. Quantitative estimation of landslide risk from rapid debris slides on natural slopes in the Nilgiri hills, India

    Science.gov (United States)

    Jaiswal, P.; van Westen, C. J.; Jetten, V.

    2011-06-01

    A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms. An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures. For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide susceptible zones, obtained using a logistic regression model, and (4) distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents. Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructures, buildings, and people both in initiation and run-out areas. Risks were calculated by considering the minimum, average, and maximum landslide volumes in each magnitude class and the

  8. NASA's Global Precipitation Measurement (GPM) Mission for Science and Society

    Science.gov (United States)

    Jackson, Gail

    2016-04-01

    Water is fundamental to life on Earth. Knowing where and how much rain and snow falls globally is vital to understanding how weather and climate impact both our environment and Earth's water and energy cycles, including effects on agriculture, fresh water availability, and responses to natural disasters. The Global Precipitation Measurement (GPM) Mission, launched February 27, 2014, is an international satellite mission to unify and advance precipitation measurements from a constellation of research and operational sensors to provide "next-generation" precipitation products. The joint NASA-JAXA GPM Core Observatory serves as the cornerstone and anchor to unite the constellation radiometers. The GPM Core Observatory carries a Ku/Ka-band Dual-frequency Precipitation Radar (DPR) and a multi-channel (10-183 GHz) GPM Microwave Radiometer (GMI). Furthermore, since light rain and falling snow account for a significant fraction of precipitation occurrence in middle and high latitudes, the GPM instruments extend the capabilities of the TRMM sensors to detect falling snow, measure light rain, and provide, for the first time, quantitative estimates of microphysical properties of precipitation particles. As a science mission with integrated application goals, GPM is designed to (1) advance precipitation measurement capability from space through combined use of active and passive microwave sensors, (2) advance the knowledge of the global water/energy cycle and freshwater availability through better description of the space-time variability of global precipitation, and (3) improve weather, climate, and hydrological prediction capabilities through more accurate and frequent measurements of instantaneous precipitation rates and time-integrated rainfall accumulation. Since launch, the instruments have been collecting outstanding precipitation data. New scientific insights resulting from GPM data, an overview of the GPM mission concept and science activities in the United States

  9. Quantitative testing of the methodology for genome size estimation in plants using flow cytometry: a case study of the Primulina genus

    OpenAIRE

    Wang, Jing; Liu, Juan; Kang, Ming

    2015-01-01

    Flow cytometry (FCM) is a commonly used method for estimating genome size in many organisms. The use of FCM in plants is influenced by endogenous fluorescence inhibitors and may cause an inaccurate estimation of genome size; thus, falsifying the relationship between genome size and phenotypic traits/ecological performance. Quantitative optimization of FCM methodology minimizes such errors, yet there are few studies detailing this methodology. We selected the genus Primulina, one of the most r...
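The core calculation behind FCM genome size estimation is a simple fluorescence ratio against an internal standard of known DNA content. A minimal sketch (the fluorescence readings and the standard's 2C value below are hypothetical, not taken from this study):

```python
def genome_size_pg(sample_fluorescence, standard_fluorescence, standard_size_pg):
    """Estimate 2C genome size from relative fluorescence peaks.

    FCM assumes fluorescence is proportional to DNA content, so the
    sample's genome size is the standard's known size scaled by the ratio
    of mean peak fluorescences. Endogenous fluorescence inhibitors break
    this proportionality, which is why the methodology needs quantitative
    optimization.
    """
    return standard_size_pg * sample_fluorescence / standard_fluorescence

# Hypothetical example: internal standard with a known 2C value of 5.43 pg
# and measured peak means of 120 (sample) and 652 (standard).
size = genome_size_pg(120.0, 652.0, 5.43)
print(round(size, 3))  # → 0.999 (pg)
```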

  11. Estimation of coronary artery stenosis by low-dose adenosine stress real-time myocardial contrast echocardiography: a quantitative study

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xiao; ZHI Guang; XU Yong; WANG Jing; YAN Guo-hui

    2012-01-01

    Background Coronary microcirculation reserve is an important field in the research of coronary artery disease, but it is difficult to identify clinically. Currently it is widely accepted that myocardial contrast echocardiography (MCE) is a safe, inexpensive method with comparatively high image resolution. The present study used quantitative low-dose adenosine stress real-time (RT)-MCE to estimate myocardial perfusion and coronary stenosis. Methods Forty-nine left ventricular (LV) segments from 14 unselected patients were divided into three groups according to coronary angiography or CT angiography results: group 1 (n=20, 41%) without significant stenosis (<70%), group 2 (n=12, 24%) with successful percutaneous coronary intervention (PCI), and group 3 (n=17, 35%) with significant stenosis (>70%). RT-MCE was performed in these patients with low-dose adenosine stress and continuous infusion of SonoVue. The replenishment curves were drawn according to the contrast density measured at the end-diastolic frame of every cardiac cycle by ACQ software. Results Forty-nine LV segments with satisfactory image quality were selected for quantitative contrast echo analysis. The replenishment curves were analyzed at baseline and after stress. Perfusion of group 3 did not decrease significantly at baseline, showed no improvement during adenosine stress, and was significantly different from groups 1 and 2 (P<0.05). The A·β and β increased more significantly in group 1 than in groups 2 and 3 (P<0.05). In receiver operating characteristic (ROC) curve analysis, A·β under adenosine stress <1.74 dB/s had a sensitivity and specificity of 71% for the diagnosis of coronary artery stenosis; a reduced adenosine-induced rise (percentage of A·β <81%) had a sensitivity of 83% and specificity of 79% for the diagnosis of low reserve; and β <54% had a sensitivity of 86% and specificity of 79%. Conclusions Rest perfusion of severely stenosed arteries may be normal
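The replenishment curves referred to above are conventionally modeled as y(t) = A·(1 − e^(−βt)), where A reflects microvascular blood volume, β the refill rate, and A·β serves as the flow index compared across groups. A minimal fitting sketch with synthetic, hypothetical values (not data from the study):

```python
import math

def replenishment(t, A, beta):
    """Contrast replenishment after a destructive pulse: A*(1 - exp(-beta*t))."""
    return A * (1.0 - math.exp(-beta * t))

def fit_beta(times, intensities, A):
    """Estimate beta by linearizing -ln(1 - y/A) = beta * t (least squares
    through the origin); assumes the plateau A is known, e.g. from the
    late-phase intensity."""
    num = sum(t * -math.log(1.0 - y / A) for t, y in zip(times, intensities))
    den = sum(t * t for t in times)
    return num / den

# Synthetic end-diastolic intensities (hypothetical: A = 12 dB, beta = 0.8 /s).
times = [1.0, 2.0, 3.0, 4.0, 5.0]
obs = [replenishment(t, 12.0, 0.8) for t in times]

A = 12.0                   # plateau taken from the late-phase intensity
beta = fit_beta(times, obs, A)
print(round(A * beta, 2))  # → 9.6, the A·β flow index
```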

  12. A first calibration of nonmarine ostracod species for the quantitative estimation of Pleistocene climate change in southern Africa

    Science.gov (United States)

    Horne, D. J.; Martens, K.

    2009-04-01

    Although qualitative statements have been made about general climatic conditions in southern Africa during the Pleistocene, there are few quantifiable palaeoclimatic data based on field evidence, especially regarding whether the area was wetter or drier during the Last Glacial Maximum. Such information is critical in validating models of climate change, both in spatial and temporal dimensions. As an essential preliminary step towards palaeoclimate reconstructions using fossil ostracods from cored lake sediment sequences, we have calibrated a training set of living ostracod species' distributions against a modern climate dataset and other available environmental data. The modern ostracod dataset is based on the collections in the Royal Belgian Institute of Natural Sciences in Brussels, which constitutes the most diverse and comprehensive collection of southern African nonmarine ostracods available anywhere in the world. To date, c. 150 nominal species have been described from southern Africa (Martens, 2001) out of c. 450 species in the total Afrotropical area (Martens et al., 2008). Here we discuss the potential value and limitations of the training set for the estimation of past climatic parameters including air temperature (July and January means, maxima and minima, Mean Annual Air Temperature), precipitation, water conductivity and pH. The next step will be to apply the Mutual Ostracod Temperature Range method (Horne, 2007; Horne & Mezquita, 2008) to the palaeoclimatic analysis of fossil ostracod assemblages from sequences recording the Last Glacial Maximum in southern Africa. Ultimately this work will contribute to the development of a glacier-climate modelling project based on evidence of former niche glaciation of the Drakensberg Escarpment. Horne, D. J. 2007. A Mutual Temperature Range method for Quaternary palaeoclimatic analysis using European nonmarine Ostracoda. Quaternary Science Reviews, 26, 1398-1415. Horne, D. J. & Mezquita, F. 2008. Palaeoclimatic

  14. Quantitative in vivo CT arthrography of the human osteoarthritic knee to estimate cartilage sulphated glycosaminoglycan content: correlation with ex-vivo reference standards

    NARCIS (Netherlands)

    Tiel, J. van; Siebelt, M.; Reijman, M.; Bos, P.K.; Waarsing, J.H.; Zuurmond, A.M.; Nasserinejad, K.; Osch, G.J.V.M. van; Verhaar, J.A.N.; Krestin, G.P.; Weinans, H.; Oei, E.H.G.

    2016-01-01

    Objective. Recently, computed tomography arthrography (CTa) was introduced as a quantitative imaging biomarker to estimate cartilage sulphated glycosaminoglycan (sGAG) content in human cadaveric knees. Our aim was to assess the correlation between in vivo CTa in human osteoarthritis (OA) knees and ex

  15. Quantitative measurement of speech sound distortions with the aid of minimum variance spectral estimation method for dentistry use.

    Science.gov (United States)

    Bereteu, L; Drăgănescu, G E; Stănescu, D; Sinescu, C

    2011-12-01

    In this paper, we seek an adequate quantitative method, based on minimum variance spectral analysis, to reflect the dependence of speech quality on the correct positioning of dental prostheses. We also seek quantitative parameters that reflect the correct position of dental prostheses in a sensitive manner.

  16. Bio-precipitation of uranium by two bacterial isolates recovered from extreme environments as estimated by potentiometric titration, TEM and X-ray absorption spectroscopic analyses

    Energy Technology Data Exchange (ETDEWEB)

    Merroun, Mohamed L., E-mail: merroun@ugr.es [Institute of Radiochemistry, Helmholtz Centre Dresden-Rossendorf, Dresden (Germany); Departamento de Microbiologia, Universidad de Granada, Campus Fuentenueva s/n 18071, Granada (Spain); Nedelkova, Marta [Institute of Radiochemistry, Helmholtz Centre Dresden-Rossendorf, Dresden (Germany); Ojeda, Jesus J. [Cell-Mineral Interface Research Programme, Kroto Research Institute, University of Sheffield, Broad Lane, Sheffield S3 7HQ (United Kingdom); Experimental Techniques Centre, Brunel University, Uxbridge, Middlesex UB8 3PH (United Kingdom); Reitz, Thomas [Institute of Radiochemistry, Helmholtz Centre Dresden-Rossendorf, Dresden (Germany); Fernandez, Margarita Lopez; Arias, Jose M. [Departamento de Microbiologia, Universidad de Granada, Campus Fuentenueva s/n 18071, Granada (Spain); Romero-Gonzalez, Maria [Cell-Mineral Interface Research Programme, Kroto Research Institute, University of Sheffield, Broad Lane, Sheffield S3 7HQ (United Kingdom); Selenska-Pobell, Sonja [Institute of Radiochemistry, Helmholtz Centre Dresden-Rossendorf, Dresden (Germany)

    2011-12-15

    Highlights: ► Precipitation of uranium as U phosphates by natural bacterial isolates. ► The uranium biomineralization involves the activity of acidic phosphatase. ► Uranium bioremediation could be achieved via the biomineralization of U(VI) in phosphate minerals. - Abstract: This work describes the mechanisms of uranium biomineralization under acidic conditions by Bacillus sphaericus JG-7B and Sphingomonas sp. S15-S1, both recovered from extreme environments. The U-bacterial interaction experiments were performed at low pH values (2.0-4.5), where the uranium aqueous speciation is dominated by highly mobile uranyl ions. X-ray absorption spectroscopy (XAS) showed that the cells of the studied strains precipitated uranium at pH 3.0 and 4.5 as a uranium phosphate mineral phase belonging to the meta-autunite group. Transmission electron microscopy (TEM) analyses showed strain-specific localization of the uranium precipitates. In the case of B. sphaericus JG-7B, the U(VI) precipitate was bound to the cell wall, whereas for Sphingomonas sp. S15-S1 the U(VI) precipitates were observed both on the cell surface and intracellularly. The observed U(VI) biomineralization was associated with the activity of indigenous acid phosphatase detected at these pH values in the absence of an organic phosphate substrate. Biomineralization of uranium was not observed at pH 2.0, where U(VI) formed complexes with organophosphate ligands from the cells. This study increases the number of bacterial strains that have been demonstrated to precipitate uranium phosphates under acidic conditions via the activity of acid phosphatase.

  17. Shrinkage Estimation Method for Mapping Multiple Quantitative Trait Loci

    Institute of Scientific and Technical Information of China (English)

    章元明

    2006-01-01

    In this article, shrinkage estimation methods for multiple-marker analysis and for mapping multiple quantitative trait loci (QTL) are reviewed. For multiple-marker analysis, Xu (Genetics, 2003, 163:789-801) developed a Bayesian shrinkage estimation (BSE) method. The key to the success of this method is to allow each marker effect its own variance parameter, which in turn has its own prior distribution, so that the variance can be estimated from the data. Under this hierarchical model, a large number of marker effects can be estimated simultaneously, even though most of them may be negligible. Under an epistatic genetic model, however, the running time is very long. To overcome this problem, the author incorporated the idea described above into maximum likelihood estimation, yielding a penalized maximum likelihood method. A simulation study showed that this method can handle linear genetic models in which the number of variables is roughly ten times the sample size. For QTL mapping, Bayesian shrinkage estimation methods based on fixed and variable intervals are described in detail. The fixed-interval method can handle marker data of moderate density, whereas the variable-interval method can analyze high-density marker data and even epistatic genetic models. For detecting epistasis, both the penalized maximum likelihood method and the variable-interval Bayesian shrinkage method are available. It should be noted that shrinkage estimation methods will also be valuable in future studies of eQTL and QTN mapping and of gene interaction networks.
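The per-effect shrinkage idea reviewed in this record can be sketched in a few lines: give every marker effect its own variance so that negligible effects are shrunk hard toward zero while real effects are barely penalized. The following is an illustrative EM-like coordinate-descent scheme on simulated data, not the published Gibbs sampler or penalized likelihood algorithm:

```python
import numpy as np

def shrinkage_marker_effects(X, y, n_iter=50, tau=1e-4):
    """Effect-specific shrinkage sketch: marker j's ridge penalty is
    sigma2_e / sigma2_j, with sigma2_j re-estimated from b_j itself, so
    small effects get an ever-larger penalty and collapse toward zero."""
    n, p = X.shape
    b = np.zeros(p)
    sigma2 = np.ones(p)              # one variance parameter per effect
    sigma2_e = float(np.var(y))      # residual variance
    for _ in range(n_iter):
        for j in range(p):
            xj = X[:, j]
            r = y - X @ b + xj * b[j]                      # partial residual
            b[j] = xj @ r / (xj @ xj + sigma2_e / (sigma2[j] + tau))
            sigma2[j] = b[j] ** 2                          # crude variance update
        sigma2_e = float(np.mean((y - X @ b) ** 2))
    return b

# Simulated data: 50 markers, only markers 0 and 3 carry real effects.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.05 * rng.standard_normal(200)
b = shrinkage_marker_effects(X, y)
print(np.round(b[[0, 3]], 2))        # the two large effects survive shrinkage
```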

  18. Skill assessment of precipitation nowcasting in Mediterranean Heavy Precipitation Events

    Science.gov (United States)

    Bech, Joan; Berenguer, Marc

    2013-04-01

    Very short-term precipitation forecasting (i.e. nowcasting) systems may provide valuable support in the weather surveillance process, as they allow automated early warnings to be issued for heavy precipitation events (HPEs), as reviewed recently by Pierce et al. (2012). The need for warnings is essential in densely populated regions with small catchments, such as those typically found in Mediterranean coastal areas prone to flash floods. Several HPEs that occurred in NE Spain are analyzed using a nowcasting system based on the extrapolation of rainfall fields observed with weather radar, following a Lagrangian approach developed and tested successfully in previous studies (Berenguer et al. 2005, 2011). Radar-based nowcasts, with lead times up to 3 h, are verified here against quality-controlled weather radar quantitative precipitation estimates and also against a dense network of raingauges. The basic questions studied are the dependence of forecast quality on lead time and rainfall amount in several high-impact HPEs, such as the 7 September 2005 Llobregat Delta river tornado outbreak (Bech et al. 2007) or the 2 November 2008 supercell tornadic thunderstorms (Bech et al. 2011); both cases had intense rainfall rates (30' amounts exceeding 38.2 and 12.3 mm, respectively) and daily values above 100 mm. Verification scores indicated that forecasts of 30' precipitation amounts provided useful guidance for lead times up to 60' for moderate intensities (up to 1 mm in 30') and up to 2.5 h for lower rates (above 0.1 mm). On the other hand, correlations between radar estimates and forecasts exceeded Eulerian persistence of precipitation estimates for lead times of 1.5 h for moderate intensities (up to 0.8 mm/h). We complete the analysis with a discussion of the reliability of the threshold-to-lead-time dependence based on the event-to-event variability found. This work has been done in the framework of the ProFEWS project (CGL2010-15892).
References Bech J, N Pineda, T Rigo, M Aran, J Amaro, M

  19. Towards a Quantitative Use of Satellite Remote Sensing in Crop Growth Models for Large Scale Agricultural Production Estimate (Invited)

    Science.gov (United States)

    Defourny, P.

    2013-12-01

    Vegetation variables such as the Green Area Index (GAI), fAPAR, and fcover, usually retrieved from MODIS, MERIS, or SPOT-Vegetation, describe the quality of green vegetation development. The GLOBAM (Belgium) and EU FP-7 MOCCCASIN (Russia) projects improved the standard products and were demonstrated at large scale. The GAI retrieved from MODIS time series using a purity index criterion successfully depicted the inter-annual variability. Furthermore, the quantitative assimilation of these GAI time series into a crop growth model improved the yield estimate over years. These results showed that GAI assimilation works best at the district or provincial level. In the context of the GEO Ag., the Joint Experiment of Crop Assessment and Monitoring (JECAM) was designed to enable the global agricultural monitoring community to compare such methods and results over a variety of regional cropping systems. For a network of test sites around the world, satellite and field measurements are currently collected and will be made available for collaborative effort. This experiment should facilitate international standards for data products and reporting, eventually supporting the development of a global system of systems for agricultural crop assessment and monitoring.

  20. Estimation of Precipitable Water Vapor Based on Interpolated Pressure Data

    Institute of Scientific and Technical Information of China (English)

    刘立龙; 姚朝龙; 熊思; 黄良珂

    2013-01-01

    Meteorological data are obtained by pressure interpolation for the estimation of GPS precipitable water vapor (PWV) when meteorological parameters are lacking at GPS sites. A pressure interpolation formula based on segmented height difference is derived by analyzing, with the standard atmosphere (SA) model, the relationship between the interpolated pressure at the GPS sites and the pressure at nearby radiosonde (RS) stations, and the height difference between the IGS stations and the nearby radiosonde stations. The new pressure interpolation formula has the same accuracy as the SA model, and it is simpler when the height difference is less than 100 m. GPS PWV is derived from the new pressure interpolation formula, the Saastamoinen zenith hydrostatic delay (ZHD) model, and a local weighted mean temperature of the atmosphere (Tm). Zenith tropospheric delays from four IGS stations (BJFS, KUNM, LHAZ, and TWTF) were converted to PWV and compared with radiosonde-derived PWV (RS PWV). The results show that the RMS error between GPS PWV obtained from the segmented-height-difference pressure interpolation formula and RS PWV is 1-3 mm under different height-difference conditions, indicating that the new pressure interpolation model can be used to retrieve GPS PWV at stations without meteorological data.
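The conversion chain described in this record (surface pressure → Saastamoinen ZHD → zenith wet delay → PWV via the weighted mean temperature Tm) can be sketched as follows. The constants are common literature values that vary slightly between sources, and all inputs are illustrative, not from the study:

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_deg, height_m):
    """Zenith hydrostatic delay (m) from surface pressure via the
    Saastamoinen model; the denominator corrects for the variation of
    gravity with latitude and station height."""
    return 0.0022768 * pressure_hpa / (
        1.0
        - 0.00266 * math.cos(2.0 * math.radians(lat_deg))
        - 0.00028 * (height_m / 1000.0)
    )

def pwv_mm(ztd_m, zhd_m, tm_kelvin):
    """Convert zenith wet delay (ZTD - ZHD) to precipitable water vapour.

    Pi = 1e8 / (rho_w * Rv * (k3/Tm + k2')); the 1e8 folds the 1e-6
    refractivity scaling and the hPa -> Pa conversion together.
    """
    rho_w = 1000.0   # density of liquid water, kg/m^3
    rv = 461.5       # specific gas constant of water vapour, J/(kg K)
    k3 = 3.776e5     # refractivity constant, K^2/hPa
    k2p = 16.48      # refractivity constant, K/hPa
    pi = 1.0e8 / (rho_w * rv * (k3 / tm_kelvin + k2p))
    return pi * (ztd_m - zhd_m) * 1000.0   # metres of water -> mm

# Illustrative inputs: ZTD = 2.40 m, P = 1000 hPa, latitude 30 deg,
# station height 50 m, weighted mean temperature Tm = 275 K.
zhd = saastamoinen_zhd(1000.0, 30.0, 50.0)
pwv = pwv_mm(2.40, zhd, 275.0)
print(round(zhd, 3), round(pwv, 1))
```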

  1. Next-Generation Satellite Precipitation Products for Understanding Global and Regional Water Variability

    Science.gov (United States)

    Hou, Arthur Y.

    2011-01-01

    A major challenge in understanding the space-time variability of continental water fluxes is the lack of accurate precipitation estimates over complex terrains. While satellite precipitation observations can be used to complement ground-based data to obtain improved estimates, space-based and ground-based estimates come with their own sets of uncertainties, which must be understood and characterized. Quantitative estimation of uncertainties in these products also provides a necessary foundation for merging satellite and ground-based precipitation measurements within a rigorous statistical framework. Global Precipitation Measurement (GPM) is an international satellite mission that will provide next-generation global precipitation data products for research and applications. It consists of a constellation of microwave sensors provided by NASA, JAXA, CNES, ISRO, EUMETSAT, DOD, NOAA, NPP, and JPSS. At the heart of the mission is the GPM Core Observatory provided by NASA and JAXA to be launched in 2013. The GPM Core, which will carry the first space-borne dual-frequency radar and a state-of-the-art multi-frequency radiometer, is designed to set new reference standards for precipitation measurements from space, which can then be used to unify and refine precipitation retrievals from all constellation sensors. The next-generation constellation-based satellite precipitation estimates will be characterized by intercalibrated radiometric measurements and physical-based retrievals using a common observation-derived hydrometeor database. 
For pre-launch algorithm development and post-launch product evaluation, NASA supports an extensive ground validation (GV) program in cooperation with domestic and international partners to improve (1) physics of remote-sensing algorithms through a series of focused field campaigns, (2) characterization of uncertainties in satellite and ground-based precipitation products over selected GV testbeds, and (3) modeling of atmospheric processes and

  2. Mean Annual Precipitation in West-Central Nevada using the Precipitation-Zone Method

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set contains 1971-2000 mean annual precipitation estimates for west-central Nevada. This is a raster data set developed using the precipitation-zone...

  3. Estimation of the measurement uncertainty in quantitative determination of ketamine and norketamine in urine using a one-point calibration method.

    Science.gov (United States)

    Ma, Yi-Chun; Wang, Che-Wei; Hung, Sih-Hua; Chang, Yan-Zin; Liu, Chia-Reiy; Her, Guor-Rong

    2012-09-01

    An approach was proposed for the estimation of measurement uncertainty for analytical methods based on one-point calibration. The proposed approach is similar to the popular multiple-point calibration approach. However, the standard deviation of calibration was estimated externally. The approach was applied to the estimation of measurement uncertainty for the quantitative determination of ketamine (K) and norketamine (NK) at a 100 ng/mL threshold concentration in urine. In addition to uncertainty due to calibration, sample analysis was the other major source of uncertainty. To include the variation due to matrix effect and temporal effect in sample analysis, different blank urines were spiked with K and NK and analyzed at equal time intervals within and between batches. The expanded uncertainties (k = 2) were estimated to be 10 and 8 ng/mL for K and NK, respectively.
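The uncertainty budget sketched above (a calibration component plus within- and between-batch sample-analysis components) combines in quadrature, with the expanded uncertainty obtained by applying a coverage factor k. A generic illustration with hypothetical component values, not the paper's actual budget:

```python
import math

def expanded_uncertainty(components, k=2.0):
    """Combine independent standard-uncertainty components in quadrature
    (root sum of squares) and apply coverage factor k; k = 2 gives
    approximately 95% coverage for a normal distribution."""
    u_combined = math.sqrt(sum(u * u for u in components))
    return k * u_combined

# Hypothetical components for ketamine at the 100 ng/mL threshold:
# calibration (3.0), within-batch (2.5), and between-batch (3.2) ng/mL.
U = expanded_uncertainty([3.0, 2.5, 3.2])
print(round(U, 1))  # → 10.1 (ng/mL)
```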

  4. Application of (13)C ramp CPMAS NMR with phase-adjusted spinning sidebands (PASS) for the quantitative estimation of carbon functional groups in natural organic matter.

    Science.gov (United States)

    Ikeya, Kosuke; Watanabe, Akira

    2016-01-01

    The composition of carbon (C) functional groups in natural organic matter (NOM), such as dissolved organic matter, soil organic matter, and humic substances, is frequently estimated using solid-state (13)C NMR techniques. A problem associated with quantitative analysis using general cross polarization/magic angle spinning (CPMAS) spectra is the appearance of spinning side bands (SSBs) split from the original center peaks of sp(2)-hybridized C species (i.e., aromatic and carbonyl C). Ramp CP/phase-adjusted side band suppressing (PASS) is a pulse sequence that integrates SSBs separately and quantitatively recovers them into their inherent center peaks. In the present study, the applicability of ramp CP/PASS to NOM analysis was compared with direct polarization (DPMAS), another quantitative method but one that requires a long operation time, and/or with the ramp CP/total suppression of side bands (ramp CP/TOSS) technique, a popular but non-quantitative method for deleting SSBs. The test materials were six soil humic acid samples with various known degrees of aromaticity and two fulvic acids. There were no significant differences in the relative abundance of alkyl C, O-alkyl C, and aromatic C between the ramp CP/PASS and DPMAS methods, while the signal intensities corresponding to aromatic C in the ramp CP/TOSS spectra were consistently less than the values obtained in the ramp CP/PASS spectra. These results indicate that ramp CP/PASS can be used to accurately estimate the C composition of NOM samples.

  5. Estimation methods and monitoring network issues in the quantitative estimation of land-based COD and TN loads entering the sea: a case study in Qingdao City, China.

    Science.gov (United States)

    Su, Ying; Wang, Xiulin; Li, Keqiang; Liang, Shengkang; Qian, Guodong; Jin, Hong; Dai, Aiquan

    2014-09-01

    At present, the monitoring network of China cannot provide sufficient data to estimate the land-based pollutant loads that enter the sea, and estimation methods are used imprecisely. In this study, the selection of monitoring stations, monitoring frequency, and pollutant load estimation methods was studied in Qingdao City, a typical coastal city in China, taken as an example. Land-based pollutant loads from Qingdao were estimated, and load distribution, density, and composition were analyzed to identify the key pollution source regions (SRs) that need to be monitored and controlled. Results show that the administrative land area of Qingdao can be divided into 25 sea-sink source regions (SSRs). A total of 14 more rivers and 62 industrial enterprises should be monitored to determine the comprehensive pollutant loads of the city. Furthermore, the monitoring frequency of rivers should not be less than three times per year; a frequency of five or more times is preferable. Pollutant load estimates obtained with different estimation methods vary substantially: estimates from ratio-based methods were 10 and 22 % higher than those from monitoring-based methods for chemical oxygen demand (COD) and total nitrogen (TN), respectively. Non-point sources contributed the majority of the pollutant loads, at about 70 % of the total COD and 60 % of the total TN.

  6. Comparison of quantitative k-edge empirical estimators using an energy-resolved photon-counting detector

    Science.gov (United States)

    Zimmerman, Kevin C.; Gilat Schmidt, Taly

    2016-03-01

    Using an energy-resolving photon-counting detector, the amount of k-edge material in the x-ray path can be estimated using a process known as material decomposition. However, non-ideal effects within the detector make it difficult to perform this decomposition accurately. This work evaluated the k-edge material decomposition accuracy of two empirical estimators: a neural network estimator and a linearized maximum likelihood estimator with error look-up tables (the A-table method), evaluated through simulations and experiments. Each estimator was trained on system-specific calibration data rather than on explicit modeling of non-ideal detector effects or the x-ray source spectrum. Projections through a step-wedge calibration phantom consisting of different path lengths through PMMA, aluminum, and a k-edge material were used to train the estimators. The estimators were then tested by decomposing data acquired through different path lengths of the basis materials. The two estimators had similar performance in the chest phantom simulations with gadolinium, estimating four of the five gadolinium densities with less than 2 mg/mL bias. The neural network estimates demonstrated lower bias but higher variance than the A-table estimates in the iodine contrast agent simulations. The neural network had an experimental variance lower than the CRLB, indicating that it is a biased estimator. In the experimental study, the k-edge material contribution was estimated with less than 14% bias by the neural network estimator and less than 41% bias by the A-table method.

  7. Variation of δ18O and δD in precipitation and stream waters across the Kashmir Himalaya (India) to distinguish and estimate the seasonal sources of stream flow

    Science.gov (United States)

    Jeelani, Gh.; Saravana Kumar, U.; Kumar, Bhishm

    2013-02-01

    The spatial and temporal distributions of δ18O and δD in precipitation and stream waters were used to distinguish the sources and components of stream flow and to estimate their residence times in snow-dominated mountainous catchments of the Kashmir Himalaya. Marked spatial and seasonal variability of the stable isotopes of oxygen and hydrogen was observed in precipitation, with δ18O and δD varying from -12.98‰ to -0.58‰ and from -74.5‰ to -11.1‰, respectively, during the period from November 2007 to January 2009. The seasonal changes in the stable isotopes of precipitation, with 18O and 2H depleted in January/March/May and enriched in July/September/November at each site, are attributed to seasonal changes in ambient temperature, precipitation, moisture source, and air-mass trajectory. A mean altitude effect of -0.23‰ and -1.2‰ per 100 m change in elevation for δ18O and δD, respectively, was observed based on amount-weighted mean precipitation isotopic composition data. Unlike precipitation, the streams showed less isotopic variability, with δ18O and δD ranging from -11.56‰ to -6.26‰ and from -65.4‰ to -36.4‰, respectively; depleted values were observed in the headwaters of the streams/tributaries and enriched values at lower elevations of the watersheds. The LMWL established for the Kashmir Himalaya, based on amount-weighted monthly samples, is δD = 7.59 (±0.32) × δ18O + 11.79 (±2.07) (r² = 0.96); its slope and intercept are lower than those of the GMWL, and it is very close to the LMWL for the western Himalayas. The seasonal regression lines, with lower slopes and intercepts except in winter, suggest the effect of evaporation. The results suggest that winter precipitation (snow) dominates the annual stream flow, with an average snowmelt contribution of about 29% in early spring, 66% in late spring, and 61% in summer, while the baseflow contribution is about 40% in autumn. The mean stream
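The snowmelt and baseflow fractions quoted above come from end-member mixing: with two sources, a single conservative tracer such as δ18O gives the mixing fraction directly from a mass balance. A minimal sketch with hypothetical δ18O values (the study's actual end members are not reproduced here):

```python
def snowmelt_fraction(delta_stream, delta_snow, delta_rain):
    """Two-component isotopic hydrograph separation:
    f_snow = (d_stream - d_rain) / (d_snow - d_rain),
    from the tracer mass balance d_stream = f*d_snow + (1-f)*d_rain."""
    return (delta_stream - delta_rain) / (delta_snow - delta_rain)

# Hypothetical delta-18O values (permil): depleted snowmelt end member,
# enriched rain end member, and a measured stream sample.
f = snowmelt_fraction(-9.5, -12.0, -6.0)
print(round(f, 2))  # → 0.58, i.e. ~58% of stream flow from snowmelt
```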

  8. Validation of Satellite Precipitation (trmm 3B43) in Ecuadorian Coastal Plains, Andean Highlands and Amazonian Rainforest

    Science.gov (United States)

    Ballari, D.; Castro, E.; Campozano, L.

    2016-06-01

    Precipitation monitoring is of utmost importance for water resource management. However, in regions of complex terrain such as Ecuador, the high spatio-temporal variability of precipitation and the scarcity of rain gauges make it difficult to obtain accurate precipitation estimates. Remotely sensed precipitation estimates, such as the Multi-satellite Precipitation Analysis TRMM, can cope with this problem after a validation process, which must be representative in space and time. In this work we validate monthly estimates from TRMM 3B43 satellite precipitation (0.25° x 0.25° resolution) using ground data from 14 rain gauges in Ecuador. The stations are located in the 3 most differentiated regions of the country: the Pacific coastal plains, the Andean highlands, and the Amazon rainforest. Time series of imagery and rain gauges between 1998 and 2010 were compared using statistical error metrics such as bias, root mean square error, and Pearson correlation, and with detection indexes such as probability of detection, equitable threat score, false alarm rate, and frequency bias index. The results showed that precipitation seasonality is well represented and that TRMM 3B43 acceptably estimates monthly precipitation in the three regions of the country. According to both the statistical error metrics and the detection indexes, the coastal and Amazon regions are estimated quantitatively better than the Andean highlands. Additionally, it was found that estimates are better for light precipitation rates. The present validation of TRMM 3B43 provides important results to support further studies on calibration and bias correction of precipitation in ungauged watershed basins.
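The continuous and categorical scores used in validations like the one above are straightforward to compute from paired gauge/satellite series. A minimal sketch with hypothetical monthly totals (probability of detection and false alarm rate shown; the equitable threat score and frequency bias index follow from the same contingency counts):

```python
import math

def validation_metrics(gauge, sat, threshold=0.1):
    """Bias, RMSE, and contingency-table scores (POD, FAR) for satellite
    precipitation validated against rain gauges; `threshold` defines a
    rain/no-rain event in the same units as the series."""
    n = len(gauge)
    bias = sum(s - g for g, s in zip(gauge, sat)) / n
    rmse = math.sqrt(sum((s - g) ** 2 for g, s in zip(gauge, sat)) / n)
    hits = sum(1 for g, s in zip(gauge, sat) if g >= threshold and s >= threshold)
    misses = sum(1 for g, s in zip(gauge, sat) if g >= threshold and s < threshold)
    false_alarms = sum(1 for g, s in zip(gauge, sat) if g < threshold and s >= threshold)
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return {"bias": bias, "rmse": rmse, "pod": pod, "far": far}

# Hypothetical monthly totals (mm) at one station: gauge vs satellite pixel.
gauge = [120.0, 45.0, 0.0, 220.0, 15.0]
sat = [110.0, 60.0, 5.0, 200.0, 12.0]
m = validation_metrics(gauge, sat, threshold=10.0)
print(round(m["rmse"], 1), round(m["pod"], 2))  # → 12.3 1.0
```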

  9. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    Science.gov (United States)

    Curtis, Tyler E; Roeder, Ryan K

    2017-07-06

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
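In its simplest linearized form, the supervised decomposition described above amounts to solving y = M x for the basis-material path lengths x, where M is the calibrated basis matrix (energy bins × materials); with a flat prior, a MAP estimate reduces to least squares. A toy sketch with hypothetical attenuation values (not a calibrated matrix from the study), exploiting the jump in gadolinium attenuation across its k-edge:

```python
import numpy as np

# Hypothetical basis matrix: effective attenuation (1/cm) of water,
# calcium, and gadolinium in three energy bins (rows).
M = np.array([
    [0.25, 1.20, 2.10],
    [0.20, 0.70, 1.60],   # bin just below the Gd k-edge
    [0.18, 0.55, 2.40],   # bin just above the k-edge: Gd attenuation jumps
])

x_true = np.array([4.0, 0.5, 0.05])   # cm of water, Ca, Gd in the x-ray path
y = M @ x_true                        # ideal log-normalized measurements

# Flat-prior MAP estimate = least squares via the pseudo-inverse.
x_hat = np.linalg.lstsq(M, y, rcond=None)[0]
print(np.round(x_hat, 3))             # recovers x_true in this noiseless case
```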

  10. Quantitative coronary angiography in the estimation of the functional significance of coronary stenosis: correlations with dobutamine-atropine stress test

    NARCIS (Netherlands)

    J.M.P. Baptista da Silva (José); M. Arnese (Mariarosaria); J.R.T.C. Roelandt (Jos); P.M. Fioretti (Paolo); D.T.J. Keane (David); J. Escaned (Javier); C. di Mario (Carlo); P.W.J.C. Serruys (Patrick); H. Boersma (Eric)

    1994-01-01

    OBJECTIVES. The purpose of this study was to determine the predictive value of quantitative coronary angiography in the assessment of the functional significance of coronary stenosis as judged from the development of left ventricular wall motion abnormalities during dobutamine-atropine

  11. Quantitative estimation of the influence of external vibrations on the measurement error of a coriolis mass-flow meter

    NARCIS (Netherlands)

    van de Ridder, L.; Hakvoort, W.B.J.; van Dijk, J.; Lötters, J.C.; de Boer, A.; Dimitrovova, Z.; de Almeida, J.R.

    2013-01-01

    In this paper, the quantitative influence of external vibrations on the measurement value of a Coriolis Mass-Flow Meter for low flows is investigated, with the eventual goal of reducing the influence of vibrations. Model results are compared with experimental results to improve the knowledge on how ext

  12. Quantitative approaches for health risk assessment of environmental pollutants : estimation of differences in sensitivity, relative potencies, and margins of exposure

    OpenAIRE

    Kalantari, Fereshteh

    2012-01-01

    Historically, quantitative health risk assessment of chemical substances is based on deterministic approaches. For a more realistic and informative health risk assessment, however, the variability and uncertainty inherent in measurements should be taken into consideration. The variability and uncertainty may be assessed by applying probabilistic methods when performing dose-response assessment, exposure assessment and risk characterization. The benchmark dose (BMD) method has b...

  14. Water-soluble primary amine compounds in rural continental precipitation

    Science.gov (United States)

    Gorzelska, Krystyna; Galloway, James N.; Watterson, Karen; Keene, William C.

    Procedures for collecting, storing and analysing precipitation samples for organic nitrogen studies were developed. These procedures preserve the chemical integrity of the species of interest and allow for up to 3 months' storage and quantitative determination of water-soluble primary amine compounds, with an overall error of less than 30% at the 2 nM detection limit. This methodology was applied to study amino compounds in precipitation samples collected over a period of one year in central Virginia. Nitrogen concentrations of 13 amino acids and 3 aliphatic amines were summed to calculate the total amine nitrogen (TAN). The concentration of TAN ranged from below our detection level to 6658 nM, possibly reflecting a seasonal variation in the source strength of the atmospheric amines. Overall, the most commonly occurring amino compounds were methyl amine, ethyl amine, glutamic acid, glycine and serine. On average, the highest overall contribution to the TAN came from arginine, asparagine, glutamine, methyl amine, serine and alanine. However, the large qualitative and quantitative variations observed among samples warrant caution in interpreting and applying the averaged values. TAN in Charlottesville precipitation contributed from less than 1% to ca. 10% of the ammonium nitrogen level. However, our estimates show that amino compounds may contribute significantly to the reduced-nitrogen budget of precipitation in remote regions.

  15. Uncertainty in runoff based on Global Climate Model precipitation and temperature data – Part 2: Estimation and uncertainty of annual runoff and reservoir yield

    Directory of Open Access Journals (Sweden)

    M. C. Peel

    2014-05-01

    Two key sources of uncertainty in projections of future runoff for climate change impact assessments are uncertainty between Global Climate Models (GCMs) and within a GCM. Within-GCM uncertainty is the variability in GCM output that occurs when running a scenario multiple times, each run having slightly different, but equally plausible, initial conditions. The limited number of runs available for each GCM and scenario combination within the Coupled Model Intercomparison Project phase 3 (CMIP3) and phase 5 (CMIP5) datasets limits the assessment of within-GCM uncertainty. In this second of two companion papers, the primary aim is to approximate within-GCM uncertainty of monthly precipitation and temperature projections and assess its impact on modelled runoff for climate change impact assessments. A secondary aim is to assess the impact of between-GCM uncertainty on modelled runoff. Here we approximate within-GCM uncertainty by developing non-stationary stochastic replicates of GCM monthly precipitation and temperature data. These replicates are input to an off-line hydrologic model to assess the impact of within-GCM uncertainty on projected annual runoff and reservoir yield. To date, within-GCM uncertainty has received little attention in the hydrologic climate change impact literature, and this analysis provides an approximation of the uncertainty in projected runoff and reservoir yield due to within- and between-GCM uncertainty of precipitation and temperature projections. In the companion paper, McMahon et al. (2014) sought to reduce between-GCM uncertainty by removing poorly performing GCMs, resulting in a selection of five better-performing GCMs from CMIP3 for use in this paper. Here we present within- and between-GCM uncertainty results for mean annual precipitation (MAP), temperature (MAT) and runoff (MAR), the standard deviation of annual precipitation (SDP) and runoff (SDR), and reservoir yield for five CMIP3 GCMs at 17 worldwide catchments.
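The replicate idea can be illustrated with a toy sketch. The paper builds non-stationary stochastic models of GCM monthly series; the sketch below only captures the general mechanism, using a much cruder scheme (resampling each calendar month's values across years, with invented data) to show how a set of replicates yields a spread in mean annual precipitation (MAP).

```python
# Toy illustration (not the paper's method): approximate "within-GCM"
# spread by generating stochastic replicates of a monthly precipitation
# series and examining the spread of mean annual precipitation (MAP).
# Here each calendar month is simply resampled with replacement across
# years; all values are invented.

import random

random.seed(42)

years, months = 30, 12
# Invented monthly precipitation "GCM output" (mm), rows = years.
base = [[50 + 30 * ((m * 7 + y * 3) % 10) / 10 for m in range(months)]
        for y in range(years)]

def replicate(series):
    """One stochastic replicate: resample each calendar month across years."""
    cols = [[series[y][m] for y in range(years)] for m in range(months)]
    return [[random.choice(cols[m]) for m in range(months)]
            for _ in range(years)]

def map_mm(series):
    """Mean annual precipitation of a monthly series (mm/yr)."""
    return sum(sum(year) for year in series) / years

maps = [map_mm(replicate(base)) for _ in range(200)]
spread = max(maps) - min(maps)
print(f"MAP across replicates: {min(maps):.1f}-{max(maps):.1f} mm/yr "
      f"(spread {spread:.1f})")
```

In the paper, each replicate is instead run through an off-line hydrologic model, so the spread is propagated to annual runoff and reservoir yield rather than read off the precipitation directly.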

  16. A model for estimating rains' area, using the dependence of the time correlation of sites' monthly precipitation totals on the distance between sites

    Science.gov (United States)

    Walanus, Adam; Cebulska, Marta; Twardosz, Robert

    2016-05-01

    Based on monthly precipitation series of 132 years' length from 16 sites in the Polish Carpathian Mountains, a relatively precise scatterplot of the correlation coefficient between sites versus the distance between sites is obtained. "Rains" of Gaussian shape, in the spatial sense, are a good model, producing a scatterplot that closely resembles the observed one. The essential parameter of the model is the area covered by the modelled rains, which turns out to be of the order of 30-50 km, though about half that value in the N-S direction.
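The scatterplot underlying the model can be sketched as follows. This is invented data standing in for the 16 Carpathian series: the Pearson correlation of monthly totals is computed for every pair of sites and paired with the inter-site distance, giving one (distance, correlation) point per pair; the decay of correlation with distance comes from the real data, not from this toy.

```python
# Sketch of the scatterplot construction (invented data): for every pair
# of sites, compute the Pearson correlation of their monthly precipitation
# totals and pair it with the distance between the sites.

import math
import random

random.seed(0)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical site coordinates (km) and monthly totals: a shared regional
# signal plus independent local noise, so all pairs correlate positively.
sites = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(6)]
n_months = 240
regional = [max(0.0, random.gauss(80, 25)) for _ in range(n_months)]
series = [[r + random.gauss(0, 15) for r in regional] for _ in sites]

pairs = []  # (distance_km, correlation) -- the scatterplot's points
for i in range(len(sites)):
    for j in range(i + 1, len(sites)):
        pairs.append((math.dist(sites[i], sites[j]),
                      pearson(series[i], series[j])))

for d, r in sorted(pairs)[:3]:
    print(f"distance {d:6.1f} km  r = {r:.2f}")
```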

  17. Quantitative estimation of bioclimatic parameters from presence/absence vegetation data in North America by the modern analog technique

    Science.gov (United States)

    Thompson, R.S.; Anderson, K.H.; Bartlein, P.J.

    2008-01-01

    The method of modern analogs is widely used to obtain estimates of past climatic conditions from paleobiological assemblages, but despite its frequent use, the method involves so-far untested assumptions. We applied four analog approaches to a continental-scale set of bioclimatic and plant-distribution presence/absence data for North America to assess how well the method works under near-optimal modern conditions. For each point on the grid, we calculated the similarity between its vegetation assemblage and those of all other points on the grid (excluding nearby points). The climate of the points with the most similar vegetation was used to estimate the climate at the target grid point. Estimates based on the use of the Jaccard similarity coefficient had smaller errors than those based on the use of a new similarity coefficient, although the latter may be more robust because it does not assume that the "fossil" assemblage is complete. The results of these analyses indicate that presence/absence vegetation assemblages provide a valid basis for estimating bioclimates on the continental scale. However, the accuracy of the estimates is strongly tied to the number of species in the target assemblage, and the analog method is necessarily constrained to produce estimates that fall within the range of observed values. We applied the four modern analog approaches and the mutual overlap (or "mutual climatic range") method to estimate
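The core analog step with the Jaccard coefficient can be sketched in a few lines. This miniature example (invented taxa and grid cells, not the North American dataset) shows the mechanism: the climate of the grid cell whose presence/absence assemblage is most similar to the target's is taken as the estimate, with an option to exclude nearby cells as in the study.

```python
# Minimal sketch of the modern-analog step with the Jaccard coefficient
# (invented miniature data): the climate of the most similar grid cell
# is used as the estimate for the target assemblage.

def jaccard(a, b):
    """Jaccard similarity of two presence sets (taxa present at a cell)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical grid cells: taxa present and mean annual temperature (deg C).
grid = {
    "cell_A": ({"picea", "abies", "betula"}, 2.5),
    "cell_B": ({"quercus", "acer", "pinus"}, 12.0),
    "cell_C": ({"picea", "larix", "salix", "betula"}, 0.5),
}

def analog_estimate(target_taxa, grid, exclude=()):
    """Return the best-matching cell (outside `exclude`) and its climate."""
    best = max((c for c in grid if c not in exclude),
               key=lambda c: jaccard(target_taxa, grid[c][0]))
    return best, grid[best][1]

# A target assemblage most like the boreal cells:
cell, t_est = analog_estimate({"picea", "betula"}, grid)
print(cell, t_est)  # -> cell_A 2.5
```

Note how the estimate can never fall outside the climates present in the grid — the constraint to the range of observed values that the abstract points out.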