WorldWideScience

Sample records for quantitative precipitation estimates

  1. Radar-Derived Quantitative Precipitation Estimation Based on Precipitation Classification

    Directory of Open Access Journals (Sweden)

    Lili Yang

    2016-01-01

    A method for improving radar-derived quantitative precipitation estimation is proposed. Tropical vertical profiles of reflectivity (VPRs) are first determined from multiple VPRs. Upon identifying a tropical VPR, the event can be further classified as either tropical-stratiform or tropical-convective rainfall by a fuzzy logic (FL) algorithm. Based on the precipitation-type fields, the reflectivity values are converted into rainfall rates using a Z-R relationship. To evaluate the performance of this rainfall classification scheme, three experiments were conducted using three months of data and two study cases. In Experiment I, the Weather Surveillance Radar-1988 Doppler (WSR-88D) default Z-R relationship was applied. In Experiment II, the precipitation regime was separated into convective and stratiform rainfall using the FL algorithm, and the corresponding Z-R relationships were used. In Experiment III, the precipitation regime was separated into convective, stratiform, and tropical rainfall, and the corresponding Z-R relationships were applied. The results show that the rainfall rates obtained from all three experiments match closely with the gauge observations, although Experiment I underestimated rainfall and Experiment II only partly resolved this underestimation. Experiment III significantly reduced the underestimation and generated the most accurate radar estimates of rain rate among the three experiments.
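    The conversion step described above can be sketched in a few lines of Python. The coefficient pairs below are commonly cited textbook values (the WSR-88D convective default, Marshall-Palmer for stratiform, and a tropical relation), not necessarily the exact coefficients used in this study.

```python
# Sketch: precipitation-type-dependent Z-R conversion.
# Coefficients are typical literature values (Z = a * R**b), not the study's own.
ZR_COEFFS = {
    "convective": (300.0, 1.4),   # WSR-88D default
    "stratiform": (200.0, 1.6),   # Marshall-Palmer
    "tropical":   (250.0, 1.2),   # a commonly used tropical relation
}

def rain_rate(dbz, ptype):
    """Convert reflectivity (dBZ) to rain rate (mm/h) via R = (Z/a)**(1/b)."""
    a, b = ZR_COEFFS[ptype]
    z_linear = 10.0 ** (dbz / 10.0)      # dBZ -> linear Z (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)
```

    At the same reflectivity, the tropical relation yields a higher rain rate than the convective default, which is exactly the underestimation the tropical classification is meant to correct.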

  2. A Test for Improvement of High-Resolution Quantitative Precipitation Estimation for Localized Heavy Precipitation Events

    Science.gov (United States)

    Lee, Jung-Hoon; Roh, Joon-Woo; Park, Jeong-Gyun

    2017-04-01

    Accurate estimation of precipitation is one of the most difficult and significant tasks in weather diagnosis and forecasting. On the Korean Peninsula, heavy precipitation is caused by various physical mechanisms, including shortwave troughs, quasi-stationary moisture convergence zones among varying air masses, and direct/indirect effects of tropical cyclones. In addition, various geographical and topographical elements make the temporal and spatial distribution of precipitation very complicated. In particular, localized heavy rainfall events in South Korea generally arise from mesoscale convective systems embedded in these synoptic-scale disturbances. Even with weather radar data of high temporal and spatial resolution, accurate estimation of rain rate from radar reflectivity is difficult. The Z-R relationship (Marshall and Palmer, 1948) has been the representative approach. In addition, several methods such as support vector machines (SVM), neural networks, fuzzy logic, and kriging have been utilized to improve the accuracy of rain rate estimates. These methods produce different quantitative precipitation estimates (QPE), and their accuracy varies across heavy precipitation cases. In this study, in order to improve the accuracy of QPE for localized heavy precipitation, an ensemble method combining Z-R relationships and various calibration techniques was tested. This QPE ensemble method is based on the concept of exploiting the respective advantages of the precipitation calibration methods. The ensemble members were produced from combinations of different Z-R coefficients and calibration methods.
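    The ensemble idea can be sketched as follows: each member applies one Z-R coefficient pair together with one calibration adjustment, and the member estimates are combined. The coefficient pairs and calibration factors below are illustrative placeholders, not values from the study.

```python
import itertools
import numpy as np

def zr_rain(dbz, a, b):
    """R = (Z/a)**(1/b), with Z converted from dBZ to linear units."""
    return (10.0 ** (dbz / 10.0) / a) ** (1.0 / b)

# Hypothetical member definitions: Z-R coefficient pairs x calibration factors.
ZR_PAIRS = [(200.0, 1.6), (300.0, 1.4)]
CAL_FACTORS = [1.0, 1.1]   # e.g., gauge-based multiplicative adjustments

def ensemble_qpe(dbz):
    """Mean of all member estimates; each member is one (Z-R, calibration) combo."""
    members = [zr_rain(dbz, a, b) * c
               for (a, b), c in itertools.product(ZR_PAIRS, CAL_FACTORS)]
    return np.mean(members, axis=0)
```

    The ensemble mean always lies between the extreme member estimates, which is the intended hedge against picking a single poorly suited Z-R relation for a given event.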

  3. Improving Satellite Quantitative Precipitation Estimation Using GOES-Retrieved Cloud Optical Depth

    Energy Technology Data Exchange (ETDEWEB)

    Stenz, Ronald; Dong, Xiquan; Xi, Baike; Feng, Zhe; Kuligowski, Robert J.

    2016-02-01

    To address significant gaps in ground-based radar coverage and rain gauge networks in the U.S., geostationary satellite quantitative precipitation estimates (QPEs) such as the Self-Calibrating Multivariate Precipitation Retrievals (SCaMPR) can be used to fill in both the spatial and temporal gaps of ground-based measurements. Additionally, with the launch of GOES-R, the temporal resolution of satellite QPEs may be comparable to that of Weather Surveillance Radar-1988 Doppler (WSR-88D) volume scans, as GOES images will be available every five minutes. However, while satellite QPEs have strengths in spatial coverage and temporal resolution, they face limitations, particularly during convective events. Deep Convective Systems (DCSs) have large cloud shields with similar brightness temperatures (BTs) over nearly the entire system, but widely varying precipitation rates beneath these clouds. Geostationary satellite QPEs relying on the indirect relationship between BTs and precipitation rates often suffer from large errors because anvil regions (little/no precipitation) cannot be distinguished from rain cores (heavy precipitation) using only BTs. However, a combination of BTs and optical depth (τ) has been found to reduce overestimates of precipitation in anvil regions (Stenz et al. 2014). A new rain mask algorithm incorporating both τ and BTs has been developed, and its application to the existing SCaMPR algorithm was evaluated. The performance of the modified SCaMPR was evaluated using traditional skill scores and a more detailed analysis of performance in individual DCS components by utilizing the Feng et al. (2012) classification algorithm. SCaMPR estimates with the new rain mask applied benefited from significantly reduced overestimates of precipitation in anvil regions and overall improvements in skill scores.
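    The logic of such a rain mask can be sketched simply: a cold cloud top alone flags the whole anvil shield, so an optical-depth test is added to isolate the optically thick rain cores. The threshold values below are illustrative assumptions, not those of the SCaMPR algorithm.

```python
import numpy as np

# Hypothetical thresholds: low BT alone would flag the entire anvil,
# so an optically thick cloud (high tau) is also required before assigning rain.
BT_MAX = 235.0   # K: colder than this suggests deep convection
TAU_MIN = 20.0   # optical depth: thick enough to be a rain core, not thin anvil

def rain_mask(bt, tau):
    """True only where both the BT and optical-depth tests indicate rain."""
    bt, tau = np.asarray(bt), np.asarray(tau)
    return (bt < BT_MAX) & (tau > TAU_MIN)
```

    A cold but optically thin pixel (anvil) is rejected, which is how the combined test reduces the anvil overestimates described above.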

  4. Radar-derived quantitative precipitation estimation in complex terrain over the eastern Tibetan Plateau

    Science.gov (United States)

    Gou, Yabin; Ma, Yingzhao; Chen, Haonan; Wen, Yixin

    2018-05-01

    Quantitative precipitation estimation (QPE) is one of the important applications of weather radars. However, in complex terrain such as the Tibetan Plateau, it is a challenging task to obtain an optimal Z-R relation due to the complex spatial and temporal variability of precipitation microphysics. This paper develops two radar QPE schemes, based respectively on Reflectivity Threshold (RT) and Storm Cell Identification and Tracking (SCIT) algorithms, using observations from 11 Doppler weather radars and 3264 rain gauges over the eastern Tibetan Plateau (ETP). These two QPE methodologies are evaluated extensively using four precipitation events characterized by different meteorological features. Precipitation characteristics of independent storm cells associated with these four events, as well as the storm-scale differences, are investigated using short-term vertical profile of reflectivity (VPR) clusters. Evaluation results show that the SCIT-based rainfall approach performs better than the simple RT-based method for all precipitation events in terms of score comparison using validation gauge measurements as references. It is also found that the SCIT-based approach can effectively mitigate the local error of radar QPE and represent the precipitation spatiotemporal variability better than the RT-based scheme.

  5. The Relative Performance of High Resolution Quantitative Precipitation Estimates in the Russian River Basin

    Science.gov (United States)

    Bytheway, J. L.; Biswas, S.; Cifelli, R.; Hughes, M.

    2017-12-01

    The Russian River carves a 110-mile path through Mendocino and Sonoma counties in western California, providing water for thousands of residents and acres of agriculture, as well as a home for several species of endangered fish. The Russian River basin receives almost all of its precipitation during the October-through-March wet season, and the systems bringing this precipitation are often impacted by atmospheric river events as well as by the complex topography of the region. This study will examine the performance of several high-resolution (hourly) quantitative precipitation estimates and forecasts over the 2015-2016 and 2016-2017 wet seasons. Comparisons of event-total as well as hourly rainfall will be performed using 1) rain gauges operated by the National Oceanic and Atmospheric Administration (NOAA) Physical Sciences Division (PSD), 2) products from the Multi-Radar/Multi-Sensor (MRMS) QPE dataset, and 3) quantitative precipitation forecasts from the High Resolution Rapid Refresh (HRRR) model at 1, 3, 6, and 12 hour lead times. Further attention will be given to cases or locations representing large disparities between the estimates.
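    Product-versus-gauge comparisons of this kind typically reduce to a few standard statistics. A minimal sketch of those metrics (generic, not the study's own scoring code):

```python
import numpy as np

def qpe_scores(estimate, gauge):
    """Basic comparison statistics between a QPE product and gauge accumulations."""
    e, g = np.asarray(estimate, float), np.asarray(gauge, float)
    bias = np.mean(e - g)                     # mean error (mm)
    rmse = np.sqrt(np.mean((e - g) ** 2))     # root-mean-square error (mm)
    corr = np.corrcoef(e, g)[0, 1]            # linear correlation coefficient
    return {"bias": bias, "rmse": rmse, "corr": corr}
```

    The same function can be applied per event, per gauge site, or per lead time to expose where the products diverge most.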

  6. Parameter estimation using the genetic algorithm and its impact on quantitative precipitation forecast

    Directory of Open Access Journals (Sweden)

    Y. H. Lee

    2006-12-01

    In this study, optimal parameter estimation is performed for both physical and computational parameters in a mesoscale meteorological model, and the impacts on quantitative precipitation forecasting (QPF) are assessed for a heavy rainfall case that occurred over the Korean Peninsula in June 2005. Experiments are carried out using the PSU/NCAR MM5 model and the genetic algorithm (GA) for two parameters: the reduction rate of the convective available potential energy in the Kain-Fritsch (KF) scheme for cumulus parameterization, and the Asselin filter parameter for numerical stability. The fitness function is defined based on a QPF skill score. It turns out that each optimized parameter significantly improves the QPF skill. Such improvement is maximized when the two optimized parameters are used simultaneously. Our results indicate that optimization of computational parameters as well as physical parameters, and their adequate application, are essential in improving model performance.
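    A minimal real-coded GA of the kind described (a scalar parameter tuned to maximize a fitness function standing in for a QPF skill score) might look like the sketch below; the operators and settings are generic choices, not those of the MM5/GA system used in the study.

```python
import random

def genetic_optimize(fitness, lo, hi, pop_size=20, generations=30, seed=0):
    """Minimal real-coded GA: truncation selection, blend crossover, mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)   # maximize fitness
        parents = scored[: pop_size // 2]                 # keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)                         # blend crossover
            child += rng.gauss(0.0, 0.05 * (hi - lo))     # Gaussian mutation
            children.append(min(hi, max(lo, child)))      # clip to bounds
        pop = children
    return max(pop, key=fitness)

# Toy fitness standing in for a QPF skill score, peaked at parameter = 0.3.
best = genetic_optimize(lambda p: -(p - 0.3) ** 2, lo=0.0, hi=1.0)
```

    In the study's setting, each fitness evaluation would instead require a full model run scored against observed precipitation, which is why GA population and generation counts must stay small.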

  7. Improving Radar Quantitative Precipitation Estimation over Complex Terrain in the San Francisco Bay Area

    Science.gov (United States)

    Cifelli, R.; Chen, H.; Chandrasekar, V.

    2017-12-01

    A recent study by the State of California's Department of Water Resources has emphasized that the San Francisco Bay Area is at risk of catastrophic flooding. Therefore, accurate quantitative precipitation estimation (QPE) and forecasting (QPF) are critical for protecting life and property in this region. Compared to rain gauges and meteorological satellites, ground-based radar has shown great advantages for high-resolution precipitation observations in both space and time. In addition, polarization diversity shows great potential for characterizing precipitation microphysics through identification of different hydrometeor types and their size and shape information. Currently, all the radars comprising the U.S. National Weather Service (NWS) Weather Surveillance Radar-1988 Doppler (WSR-88D) network are operating in dual-polarization mode. Enhancement of QPE is one of the main considerations of the dual-polarization upgrade. The San Francisco Bay Area is covered by two S-band WSR-88D radars, namely KMUX and KDAX. However, in complex terrain like the Bay Area, it is still challenging to obtain an optimal rainfall algorithm for a given set of dual-polarization measurements. In addition, the accuracy of rain rate estimates is contingent on additional factors such as bright band contamination, vertical profile of reflectivity (VPR) correction, and partial beam blockage. This presentation aims to improve radar QPE for the Bay Area using advanced dual-polarization rainfall methodologies. The benefit brought by the dual-polarization upgrade of the operational radar network is assessed. In addition, a pilot study of gap-filling X-band radar performance is conducted in support of regional QPE system development. This paper also presents a detailed comparison between the dual-polarization radar-derived rainfall products and various operational products, including NSSL's Multi-Radar/Multi-Sensor (MRMS) system. Quantitative evaluation of the various rainfall products is achieved.
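    As an illustration of the R(Zh, ZDR) family of dual-polarization rainfall estimators referred to above, the sketch below uses a common power-law form; the coefficients are illustrative values of the kind found in the general polarimetric literature, not the algorithms evaluated in this presentation.

```python
def rain_rate_zzdr(z_h, zdr_db, a=0.0067, b=0.93, c=-0.343):
    """Sketch of an R(Zh, ZDR) power-law estimator: R = a * Zh**b * 10**(c*ZDR).

    z_h    : horizontal reflectivity in linear units (mm^6 m^-3)
    zdr_db : differential reflectivity in dB
    a, b, c: illustrative coefficients only (not from this study)
    """
    return a * (z_h ** b) * 10.0 ** (c * zdr_db)
```

    The negative ZDR exponent encodes the microphysics: at fixed Zh, larger ZDR implies larger (fewer) drops and hence a lower rain rate, which a Z-only estimator cannot distinguish.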

  8. Quantitative Precipitation Estimation over Ocean Using Bayesian Approach from Microwave Observations during the Typhoon Season

    Directory of Open Access Journals (Sweden)

    Jen-Chi Hu

    2009-01-01

    We have developed a new Bayesian approach to retrieve oceanic rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), with an emphasis on typhoon cases in the West Pacific. Retrieved rain rates are validated with measurements from rain gauges located on Japanese islands. To demonstrate improvement, retrievals are also compared with those from the TRMM Precipitation Radar (PR), the Goddard Profiling Algorithm (GPROF), and a multi-channel linear regression statistical method (MLRS). We found that, qualitatively, all methods retrieved similar horizontal distributions in terms of the locations of the eyes and rain bands of typhoons. Quantitatively, our new Bayesian retrievals have the best linearity and the smallest root mean square (RMS) error against rain gauge data for 16 typhoon overpasses in 2004. The correlation coefficient and RMS error of our retrievals are 0.95 and ~2 mm hr-1, respectively. In particular, at heavy rain rates, our Bayesian retrievals outperform those retrieved from GPROF and MLRS. Overall, the new Bayesian approach accurately retrieves surface rain rate for typhoon cases. Accurate rain rate estimates from this method can be assimilated into models to improve forecasts and prevent potential damage in Taiwan during typhoon seasons.
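    The core of a Bayesian rain retrieval is a posterior-weighted average over a database of candidate rain rates, each weighted by how well its simulated brightness temperatures match the observation. A minimal sketch, assuming a Gaussian likelihood and a toy database (both assumptions for illustration, not the paper's retrieval database):

```python
import numpy as np

def bayes_rain(obs_tb, db_rain, db_tb, sigma=2.0):
    """Posterior-mean rain rate: weight each database entry by a Gaussian
    likelihood of its simulated brightness temperatures given the observation."""
    obs_tb = np.atleast_1d(obs_tb)
    db_tb = np.atleast_2d(db_tb)               # one row of channels per entry
    log_w = -0.5 * np.sum(((db_tb - obs_tb) / sigma) ** 2, axis=1)
    w = np.exp(log_w - log_w.max())            # subtract max for stability
    return np.sum(w * np.asarray(db_rain)) / np.sum(w)
```

    With a real database the weighting naturally blends all entries consistent with the observation, which is what gives the approach its good behavior at heavy rain rates.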

  9. Quantitative precipitation estimation in complex orography using quasi-vertical profiles of dual polarization radar variables

    Science.gov (United States)

    Montopoli, Mario; Roberto, Nicoletta; Adirosi, Elisa; Gorgucci, Eugenio; Baldini, Luca

    2017-04-01

    Weather radars are nowadays a unique tool for quantitatively estimating rain precipitation near the surface. This is an important task for many applications: for example, feeding hydrological models, mitigating the impact of severe storms at the ground by using radar information in modern warning tools, and aiding validation studies of satellite-based rain products. With respect to the latter application, several ground validation studies of the Global Precipitation Measurement (GPM) products have recently highlighted the importance of accurate QPE from ground-based weather radars. To date, many works have analyzed the performance of various QPE algorithms using actual and synthetic experiments, possibly trained by measurements of particle size distributions and electromagnetic models. Most of these studies support the use of dual polarization variables, not only to ensure a good level of radar data quality but also as direct input to the rain estimation equations. Among others, one of the most important limiting factors in radar QPE accuracy is the vertical variability of the particle size distribution, which affects, at different levels, all the acquired radar variables as well as the rain rates. This is particularly impactful in mountainous areas, where the altitude of the radar sampling is likely several hundred meters above the surface. In this work, we analyze the impact of vertical profile variations of rain precipitation on several dual polarization radar QPE algorithms when they are tested in a complex orography scenario. So far, in weather radar studies, more emphasis has been given to extrapolation strategies that make use of the signature of the vertical profiles in terms of radar co-polar reflectivity. This may limit the use of the radar vertical profiles when dual polarization QPE algorithms are considered, because in that case all the radar variables used in the rain estimation process should be consistently extrapolated to the surface.

  10. Identification and uncertainty estimation of vertical reflectivity profiles using a Lagrangian approach to support quantitative precipitation measurements by weather radar

    Science.gov (United States)

    Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R.

    2013-09-01

    This paper presents a novel approach to estimating the vertical profile of reflectivity (VPR) from volumetric weather radar data, using both a traditional Eulerian and a newly proposed Lagrangian implementation. For the latter, the recently developed Rotational Carpenter Square Cluster Algorithm (RoCaSCA) is used to delineate precipitation regions at different reflectivity levels. A piecewise linear VPR is estimated for precipitation classified as stratiform or as neither stratiform nor convective. As a second aspect of this paper, a novel approach is presented that accounts for the impact of VPR uncertainty on the estimated radar rainfall variability. Results show that implementation of the VPR identification and correction procedure has a positive impact on quantitative precipitation estimates from radar. Unfortunately, visibility problems severely limit the impact of the Lagrangian implementation beyond distances of 100 km. However, by combining this procedure with the global Eulerian VPR estimation procedure for a given rainfall type (stratiform and neither stratiform nor convective), the quality of the quantitative precipitation estimates increases up to a distance of 150 km. Analysis of the impact of VPR uncertainty shows that this aspect accounts for a large fraction of the differences between weather radar rainfall estimates and rain gauge measurements.
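    A piecewise linear VPR correction of the kind described can be sketched as follows: the profile is stored as (height, dBZ offset relative to the surface) nodes, the offset is interpolated at the radar sampling height, and then removed from the measured reflectivity. The node values and sign convention are illustrative assumptions, not the RoCaSCA implementation.

```python
def vpr_correct(dbz_measured, height_km, vpr):
    """Project a reflectivity measured aloft down to the surface using a
    piecewise-linear VPR given as (height_km, dBZ offset vs. surface) nodes."""
    heights = [h for h, _ in vpr]
    offsets = [o for _, o in vpr]
    # Linearly interpolate the profile offset at the sampling height.
    for (h0, o0), (h1, o1) in zip(vpr, vpr[1:]):
        if h0 <= height_km <= h1:
            frac = (height_km - h0) / (h1 - h0)
            offset = o0 + frac * (o1 - o0)
            break
    else:
        # Outside the profile: hold the nearest end value.
        offset = offsets[-1] if height_km > heights[-1] else offsets[0]
    return dbz_measured - offset   # remove the profile offset -> surface dBZ
```

    For a profile that decreases 4 dB between the surface and 2 km, a measurement at 1 km is adjusted upward by 2 dB, which is the sense of correction needed when the beam overshoots low-level growth.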

  11. Investigation of Weather Radar Quantitative Precipitation Estimation Methodologies in Complex Orography

    Directory of Open Access Journals (Sweden)

    Mario Montopoli

    2017-02-01

    Near-surface quantitative precipitation estimation (QPE) from weather radar measurements is an important task for feeding hydrological models, limiting the impact of severe rain events at the ground, as well as aiding validation studies of satellite-based rain products. To date, several works have analyzed the performance of various QPE algorithms using actual and synthetic experiments, possibly trained by measurements of particle size distributions and electromagnetic models. Most of these studies support the use of dual polarization radar variables not only to ensure a good level of data quality but also as a direct input to rain estimation equations. One of the most important limiting factors in radar QPE accuracy is the vertical variability of the particle size distribution, which affects all the acquired radar variables as well as the estimated rain rates at different levels. This is particularly impactful in mountainous areas, where the sampled altitudes are likely several hundred meters above the surface. In this work, we analyze the impact of vertical profile variations of rain precipitation on several dual polarization radar QPE algorithms when they are tested in a complex orography scenario. So far, in weather radar studies, more emphasis has been given to the extrapolation strategies that use the signature of the vertical profiles in terms of radar co-polar reflectivity. This may limit the use of the radar vertical profiles when dual polarization QPE algorithms are considered. In that case, all the radar variables used in the rain estimation process should be consistently extrapolated to the surface to try to maintain the correlations among them. To avoid facing such complexity, especially with a view to operational implementation, we propose looking at the features of the vertical profile of rain (VPR), i.e., after performing the rain estimation. This procedure allows characterization of a single variable (i.e., rain when dealing with

  12. Skill Assessment of a Hybrid Technique to Estimate Quantitative Precipitation Forecasts for Galicia (NW Spain)

    Science.gov (United States)

    Lage, A.; Taboada, J. J.

    Precipitation is the most obvious of the weather elements in its effects on normal life. Numerical weather prediction (NWP) is generally used to produce quantitative precipitation forecasts (QPF) beyond the 1-3 h time frame. These models often fail to predict small-scale variations of rain because of spin-up problems and their coarse spatial and temporal resolution (Antolik, 2000). Moreover, there are some uncertainties about the behaviour of NWP models in extreme situations (de Bruijn and Brandsma, 2000). Hybrid techniques, combining the benefits of NWP and statistical approaches in a flexible way, are very useful for achieving a good QPF. In this work, a new QPF technique for Galicia (NW Spain) is presented. This region has rainy days on more than 50% of the year, with quantities that may cause floods, with human and economic damage. The technique is composed of an NWP model (ARPS) and a statistical downscaling process based on an automated classification scheme of atmospheric circulation patterns for the Iberian Peninsula (Ribalaygua and Boren, 1995). Results show that QPF for Galicia is improved using this hybrid technique. [1] Antolik, M.S., 2000: "An overview of the National Weather Service's centralized statistical quantitative precipitation forecasts". Journal of Hydrology, 239, pp. 306-337. [2] de Bruijn, E.I.F. and T. Brandsma: "Rainfall prediction for a flooding event in Ireland caused by the remnants of Hurricane Charley". Journal of Hydrology, 239, pp. 148-161. [3] Ribalaygua, J. and R. Boren: "Clasificación de patrones espaciales de precipitación diaria sobre la España Peninsular" ["Classification of spatial patterns of daily precipitation over peninsular Spain"]. Informes N 3 y 4 del Servicio de Análisis e Investigación del Clima. Instituto Nacional de Meteorología, Madrid, 53 pp.

  13. Quantitative precipitation estimation based on high-resolution numerical weather prediction and data assimilation with WRF – a performance test

    Directory of Open Access Journals (Sweden)

    Hans-Stefan Bauer

    2015-04-01

    Quantitative precipitation estimation and forecasting (QPE and QPF) are among the most challenging tasks in atmospheric sciences. In this work, QPE based on numerical modelling and data assimilation is investigated. Key components are the Weather Research and Forecasting (WRF) model in combination with its 3D variational assimilation scheme, applied on the convection-permitting scale with sophisticated model physics over central Europe. The system is operated in a 1-hour rapid update cycle and processes a large set of in situ observations, data from French radar systems, the European GPS network and satellite sensors. Additionally, a free forecast driven by the ECMWF operational analysis is included as a reference run representing current operational precipitation forecasting. The verification is done both qualitatively and quantitatively by comparisons of reflectivity, accumulated precipitation fields and derived verification scores for a complex synoptic situation that developed on 26 and 27 September 2012. The investigation shows that even the downscaling from ECMWF represents the synoptic situation reasonably well. However, significant improvements are seen in the results of the WRF QPE setup, especially when the French radar data are assimilated. The frontal structure is more defined and the timing of the frontal movement is improved compared with observations. Even mesoscale band-like precipitation structures on the rear side of the cold front are reproduced, as seen by radar. The improvement in performance is also confirmed by a quantitative comparison of the 24-hourly accumulated precipitation over Germany. The mean correlation of the model simulations with observations improved from 0.2 in the downscaling experiment and 0.29 in the assimilation experiment without radar data to 0.56 in the WRF QPE experiment including the assimilation of French radar data.

  14. Processing of next generation weather radar-multisensor precipitation estimates and quantitative precipitation forecast data for the DuPage County streamflow simulation system

    Science.gov (United States)

    Bera, Maitreyee; Ortel, Terry W.

    2018-01-12

    The U.S. Geological Survey, in cooperation with DuPage County Stormwater Management Department, is testing a near real-time streamflow simulation system that assists in the management and operation of reservoirs and other flood-control structures in the Salt Creek and West Branch DuPage River drainage basins in DuPage County, Illinois. As part of this effort, the U.S. Geological Survey maintains a database of hourly meteorological and hydrologic data for use in this near real-time streamflow simulation system. Among these data are next generation weather radar-multisensor precipitation estimates and quantitative precipitation forecast data, which are retrieved from the North Central River Forecasting Center of the National Weather Service. The DuPage County streamflow simulation system uses these quantitative precipitation forecast data to create streamflow predictions for the two simulated drainage basins. This report discusses in detail how these data are processed for inclusion in the Watershed Data Management files used in the streamflow simulation system for the Salt Creek and West Branch DuPage River drainage basins.

  15. Disdrometer-based C-Band Radar Quantitative Precipitation Estimation (QPE) in a highly complex terrain region in tropical Colombia.

    Science.gov (United States)

    Sepúlveda, J.; Hoyos Ortiz, C. D.

    2017-12-01

    An adequate quantification of precipitation over land is critical for many societal applications, including agriculture, hydroelectricity generation, water supply, and risk management associated with extreme events. Rain gauges, the traditional method for precipitation estimation, and an excellent one for estimating the volume of liquid water during a particular precipitation event, do not fully capture the high spatial variability of the phenomenon, which is a requirement for almost all practical applications. On the other hand, the weather radar, an active remote sensor, provides a proxy for rainfall with fine spatial resolution and adequate temporal sampling; however, it does not measure surface precipitation directly. In order to fully exploit the capabilities of the weather radar, it is necessary to develop quantitative precipitation estimation (QPE) techniques combining radar information with in-situ measurements. Different QPE methodologies are explored and adapted to local observations in a highly complex terrain region in tropical Colombia using a C-band radar and a relatively dense network of rain gauges and disdrometers. One important result is that the expressions reported in the literature for extratropical locations are not representative of the conditions found in the tropical region studied. In addition to reproducing the state-of-the-art techniques, a new multi-stage methodology based on radar-derived variables and disdrometer data is proposed in order to achieve the best QPE possible. The main motivation for this new methodology is that most traditional QPE methods do not directly take into account the different uncertainty sources involved in the process. The main advantage of the multi-stage model over traditional models is that it allows assessing and quantifying the uncertainty in the surface rain rate estimation. The sub-hourly rainfall estimates produced by the multi-stage methodology are realistic.

  16. Merging Radar Quantitative Precipitation Estimates (QPEs) from the High-resolution NEXRAD Reanalysis over CONUS with Rain-gauge Observations

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Nickl, E.; Seo, D. J.; Kim, B.; Zhang, J.; Qi, Y.

    2015-12-01

    The processing of radar-only precipitation via the reanalysis of the National Mosaic and Multi-Sensor QPE (NMQ/Q2) system, based on the WSR-88D Next-Generation Radar (NEXRAD) network over the Continental United States (CONUS), is completed for the period covering 2002 to 2011. While this constitutes a unique opportunity to study precipitation processes at higher resolution than conventionally possible (1 km, 5 min), the long-term radar-only product needs to be merged with in-situ information in order to be suitable for hydrological, meteorological and climatological applications. The radar-gauge merging is performed using rain gauge information at daily (Global Historical Climatology Network-Daily: GHCN-D), hourly (Hydrometeorological Automated Data System: HADS), and 5-min (Automated Surface Observing Systems: ASOS; Climate Reference Network: CRN) resolution. The challenges related to incorporating networks of differing resolution and quality to generate long-term, large-scale gridded estimates of precipitation are enormous. In that perspective, we are implementing techniques for merging the rain gauge datasets and the radar-only estimates, such as Inverse Distance Weighting (IDW), Simple Kriging (SK), Ordinary Kriging (OK), and Conditional Bias-Penalized Kriging (CBPK). An evaluation of the different radar-gauge merging techniques is presented, and we provide an estimate of uncertainty for the gridded estimates. In addition, comparisons with a suite of lower-resolution QPEs derived from ground-based radar measurements (Stage IV) are provided in order to give a detailed picture of the improvements and remaining challenges.
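    Of the merging techniques listed, Inverse Distance Weighting is the simplest. A minimal sketch of one common IDW formulation, a multiplicative gauge/radar bias adjustment spread over the grid (function and argument names are hypothetical):

```python
import numpy as np

def idw_bias_adjust(radar, xy_grid, gauges, power=2.0):
    """Adjust a radar field using inverse-distance-weighted gauge/radar ratios.

    radar   : radar estimates at the grid points
    xy_grid : (x, y) coordinates of the grid points
    gauges  : list of (x, y, gauge_value, radar_value_at_gauge) tuples
    """
    ratios = np.array([g / r for _, _, g, r in gauges])
    pts = np.array([(x, y) for x, y, _, _ in gauges])
    adj = np.empty(len(xy_grid))
    for i, p in enumerate(np.asarray(xy_grid, float)):
        d = np.linalg.norm(pts - p, axis=1)
        if np.any(d < 1e-9):                   # grid point coincides with a gauge
            adj[i] = ratios[np.argmin(d)]
        else:
            w = 1.0 / d ** power               # closer gauges dominate
            adj[i] = np.sum(w * ratios) / np.sum(w)
    return np.asarray(radar) * adj
```

    The kriging variants in the study replace these ad hoc weights with ones derived from a variogram, which is what allows them to also report an estimation variance for the gridded product.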

  17. Evaluation of two "integrated" polarimetric Quantitative Precipitation Estimation (QPE) algorithms at C-band

    Science.gov (United States)

    Tabary, Pierre; Boumahmoud, Abdel-Amin; Andrieu, Hervé; Thompson, Robert J.; Illingworth, Anthony J.; Le Bouar, Erwan; Testud, Jacques

    2011-08-01

    Two so-called "integrated" polarimetric rate estimation techniques, ZPHI (Testud et al., 2000) and ZZDR (Illingworth and Thompson, 2005), are evaluated using 12 episodes from the year 2005 observed by the French C-band operational Trappes radar, located near Paris. The term "integrated" means that the concentration parameter of the drop size distribution is assumed to be constant over some area and the algorithms retrieve it using the polarimetric variables in that area. The evaluation is carried out in ideal conditions (no partial beam blocking, no ground-clutter contamination, no bright band contamination, a posteriori calibration of the radar variables ZH and ZDR) using hourly rain gauges located at distances less than 60 km from the radar. Also included in the comparison, for the sake of benchmarking, is a conventional Z = 282R^1.66 estimator, with and without attenuation correction and with and without adjustment by rain gauges as currently done operationally at Météo France. Under those ideal conditions, the two polarimetric algorithms, which rely solely on radar data, appear to perform as well if not better, depending on the measurement conditions (attenuation, rain rates, …), than the conventional algorithms, even when the latter take rain gauges into account through the adjustment scheme. ZZDR with attenuation correction is the best estimator for hourly rain gauge accumulations lower than 5 mm h-1 and ZPHI is the best one above that threshold. A perturbation analysis was conducted to assess the sensitivity of the various estimators to biases on ZH and ZDR, taking into account the typical accuracy and stability that can reasonably be achieved with modern operational radars (1 dB on ZH and 0.2 dB on ZDR). A +1 dB positive bias on ZH (radar too hot) results in a +14% overestimation of the rain rate with the conventional estimator used in this study (Z = 282R^1.66), a -19% underestimation with ZPHI and a +23
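    The quoted sensitivity of the conventional estimator follows directly from the power law: since R = (Z/282)^(1/1.66), a +1 dB bias multiplies linear Z by 10^0.1 and hence R by 10^(0.1/1.66). The arithmetic:

```python
# Sensitivity of R = (Z/282)**(1/1.66) to a +1 dB bias in ZH.
# +1 dB multiplies linear Z by 10**0.1, so R is multiplied by 10**(0.1/1.66).
factor = 10.0 ** (0.1 / 1.66)
overestimate_pct = (factor - 1.0) * 100.0   # ~15%, consistent with the quoted +14%
```

    The same one-line propagation applies to any fixed-coefficient Z-R estimator, which is why reflectivity calibration dominates the error budget of conventional QPE.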

  18. An operational weather radar-based Quantitative Precipitation Estimation and its application in catchment water resources modeling

    DEFF Research Database (Denmark)

    He, Xin; Vejen, Flemming; Stisen, Simon

    2011-01-01

    of precipitation compared with rain-gauge-based methods, thus providing the basis for better water resources assessments. The radar QPE algorithm called ARNE is a distance-dependent areal estimation method that merges radar data with ground surface observations. The method was applied to the Skjern River catchment...... in western Denmark where alternative precipitation estimates were also used as input to an integrated hydrologic model. The hydrologic responses from the model were analyzed by comparing radar- and ground-based precipitation input scenarios. Results showed that radar QPE products are able to generate...... reliable simulations of stream flow and water balance. The potential of using radar-based precipitation was found to be especially high at a smaller scale, where the impact of spatial resolution was evident from the stream discharge results. Also, groundwater recharge was shown to be sensitive...

  19. Mesoscale and Local Scale Evaluations of Quantitative Precipitation Estimates by Weather Radar Products during a Heavy Rainfall Event

    Directory of Open Access Journals (Sweden)

    Basile Pauthier

    2016-01-01

    Full Text Available A 24-hour heavy rainfall event occurred in northeastern France from November 3 to 4, 2014. The accuracy of the quantitative precipitation estimation (QPE) by the PANTHERE and ANTILOPE radar-based gridded products during this particular event is examined at both mesoscale and local scale, in comparison with two reference rain-gauge networks. Mesoscale accuracy was assessed for the total rainfall accumulated during the 24-hour event, using the Météo France operational rain-gauge network. Local-scale accuracy was assessed for both total event rainfall and hourly rainfall accumulations, using the recently developed HydraVitis high-resolution rain-gauge network. The evaluation shows that (1) PANTHERE radar-based QPE underestimates rainfall fields at mesoscale and local scale; (2) both PANTHERE and ANTILOPE successfully reproduced the spatial variability of rainfall at local scale; (3) PANTHERE underestimates can be significantly improved at local scale by merging these data with rain gauge data interpolation (i.e., ANTILOPE). This study provides a preliminary evaluation of radar-based QPE at local scale, suggesting that merged products are invaluable for applications at very high resolution. The results obtained underline the importance of using high-density rain-gauge networks to obtain information at high spatial and temporal resolution, for better understanding of local rainfall variation and to calibrate remotely sensed rainfall products.

  20. Precipitation evidences on X-Band Synthetic Aperture Radar imagery: an approach for quantitative detection and estimation

    Science.gov (United States)

    Mori, Saverio; Marzano, Frank S.; Montopoli, Mario; Pulvirenti, Luca; Pierdicca, Nazzareno

    2017-04-01

    al. 2014 and Mori et al. 2012); ancillary data, such as local incidence angle and land cover, are used. This stage is necessary to tune the precipitation-map stage and to avoid severe misinterpretations in the precipitation-map routines. The second stage consists of estimating the local cloud attenuation. Finally, the precipitation map is estimated using the retrieval algorithm developed by Marzano et al. (2011), applied only to pixels where rain is known to be present. Within the FP7 project EartH2Observe we have applied this methodology to 14 study cases, acquired within the TSX and CSK missions over Italy and the United States. This choice allows analysing both hurricane-like intense events and continental mid-latitude precipitation, with the possibility to verify and validate the proposed methodology through the available weather radar networks. Moreover, it allows analysing to some extent the contribution of orography and the quality of ancillary data (i.e., land cover). In this work we discuss the results obtained so far in terms of improved rain cell localization and precipitation quantification.

  1. Comparison Of Quantitative Precipitation Estimates Derived From Rain Gauge And Radar Derived Algorithms For Operational Flash Flood Support.

    Science.gov (United States)

    Streubel, D. P.; Kodama, K.

    2014-12-01

    To provide continuous flash flood situational awareness and to better differentiate the severity of ongoing individual precipitation events, the National Weather Service Research Distributed Hydrologic Model (RDHM) is being implemented over Hawaii and Alaska. In the implementation of RDHM, three gridded precipitation analyses are used as forcing. The first analysis is a radar-only precipitation estimate derived from WSR-88D digital hybrid reflectivity and a Z-R relationship, aggregated onto an hourly ¼ HRAP grid. The second analysis is derived from a rain gauge network and interpolated onto an hourly ¼ HRAP grid using PRISM climatology. The third analysis is derived from a rain gauge network where rain gauges are assigned static pre-determined weights to derive a uniform mean areal precipitation that is applied over a catchment on a ¼ HRAP grid. To assess the effect of the different QPE analyses on the accuracy of RDHM simulations and to potentially identify a preferred analysis for operational use, each QPE was used to force RDHM to simulate stream flow for 20 USGS peak flow events. The evaluation of the RDHM simulations focused on peak flow magnitude, peak flow timing, and event volume accuracy, these being most relevant for operational use. Results showed that RDHM simulations based on the observed rain gauge amounts were more accurate in simulating peak flow magnitude and event volume relative to the radar-derived analysis. However, this result was not consistent across all 20 events, nor was it consistent for a few of the rainfall events where an annual peak flow was recorded at more than one USGS gage. This indicates that a more robust QPE forcing incorporating uncertainty derived from the three analyses may provide a better input for simulating extreme peak flow events.
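The evaluation criteria named above (peak flow magnitude, peak flow timing, event volume) can be sketched as simple event metrics on paired simulated and observed hydrographs; the function below is illustrative only and not part of RDHM:

```python
import numpy as np

def peak_flow_metrics(sim, obs, dt_hours=1.0):
    """Percent error in peak magnitude, timing offset of the peak in
    hours, and percent error in event volume (illustrative sketch)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    peak_err_pct = 100.0 * (sim.max() - obs.max()) / obs.max()
    timing_err_h = (int(np.argmax(sim)) - int(np.argmax(obs))) * dt_hours
    volume_err_pct = 100.0 * (sim.sum() - obs.sum()) / obs.sum()
    return peak_err_pct, timing_err_h, volume_err_pct
```
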

  2. Combining weather radar nowcasts and numerical weather prediction models to estimate short-term quantitative precipitation and uncertainty

    DEFF Research Database (Denmark)

    Jensen, David Getreuer

    The topic of this Ph.D. thesis is short-term forecasting of precipitation for lead times of up to 6 hours, known as nowcasting. The focus is on improving the precision of deterministic nowcasts and on the assimilation of radar extrapolation model (REM) data into the Danish Meteorological Institute's (DMI) HIRLAM numerical weather

  3. Quantitative estimation of orographic precipitation over the Himalayas by using TRMM/PR and a dense network of rain gauges

    Science.gov (United States)

    Yatagai, A.

    2009-04-01

    Precipitation Radar (PR) data acquired by the Tropical Rainfall Measuring Mission (TRMM) over 10 years of observation were used to show the monthly rainfall patterns over the Himalayas. To validate and adjust these patterns, we used a dense network of rain gauges measuring daily precipitation over Nepal, Bangladesh, Bhutan, Pakistan, India, Myanmar, and China. We then compared TRMM/PR and rain gauge data in 0.05-degree grid cells (an approximately 5.5-km mesh). Compared with the rain gauge (RG) observations, the PR systematically underestimated precipitation by 28-38% in summer (July-September). Significant correlation between TRMM/PR and RG data was found for all months, but the correlation is relatively low in winter. The relationship was investigated for different elevation zones, and the PR was found to underestimate the RG data in most zones, except for certain zones in February (250-1000 m), March (0-1000 m), and April (0-1500 m). The monthly PR climatology was adjusted on the basis of monthly regressions between the two datasets and depicted.
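The monthly regression adjustment described above can be sketched as an ordinary least-squares fit of gauge values on PR values for one month's grid-cell pairs; the function name and the numbers below are hypothetical, chosen only to mimic a ~30% PR underestimate:

```python
import numpy as np

def monthly_adjustment(pr_vals, rg_vals):
    """Fit RG = slope * PR + intercept for one month (sketch)."""
    slope, intercept = np.polyfit(pr_vals, rg_vals, 1)
    return slope, intercept

pr = np.array([10.0, 50.0, 100.0, 200.0])  # hypothetical monthly PR means
rg = pr / 0.7                              # gauges ~43% higher (PR low by 30%)
slope, intercept = monthly_adjustment(pr, rg)
adjusted = slope * pr + intercept          # PR climatology scaled toward gauges
```
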

  4. Long-Term Quantitative Precipitation Estimates (QPE) at High Spatial and Temporal Resolution over CONUS: Bias-Adjustment of the Radar-Only National Mosaic and Multi-sensor QPE (NMQ/Q2) Precipitation Reanalysis (2001-2012)

    Science.gov (United States)

    Prat, Olivier; Nelson, Brian; Stevens, Scott; Seo, Dong-Jun; Kim, Beomgeun

    2015-04-01

    The processing of radar-only precipitation via the reanalysis of the National Mosaic and Multi-Sensor QPE (NMQ/Q2), based on the WSR-88D Next-Generation Radar (NEXRAD) network over the Continental United States (CONUS), is complete for the period 2001 to 2012. This important milestone constitutes a unique opportunity to study precipitation processes at 1-km spatial and 5-min temporal resolution. However, in order to be suitable for hydrological, meteorological and climatological applications, the radar-only product needs to be bias-adjusted and merged with in-situ rain gauge information. Several in-situ datasets are available to assess the biases of the radar-only product and to adjust for those biases to provide a multi-sensor QPE. The rain gauge networks that are used, such as the Global Historical Climatology Network-Daily (GHCN-D), the Hydrometeorological Automated Data System (HADS), the Automated Surface Observing Systems (ASOS), and the Climate Reference Network (CRN), have different spatial densities and temporal resolutions. The challenges related to incorporating non-homogeneous networks over a vast area and for a long-term record are enormous. Among the challenges we are facing are the difficulties of incorporating surface measurements of differing resolution and quality to adjust gridded estimates of precipitation. Another challenge is the choice of adjustment technique. The objective of this work is threefold. First, we investigate how the different in-situ networks can impact the precipitation estimates as a function of spatial density, sensor type, and temporal resolution. Second, we assess conditional and unconditional biases of the radar-only QPE at various time scales (daily, hourly, 5-min) using in-situ precipitation observations. Finally, after assessing the bias and applying reduction or elimination techniques, we use a unique in-situ dataset merging the different RG networks (CRN, ASOS, HADS, GHCN-D) to

  5. Radar-based quantitative precipitation estimation for the identification of debris flow occurrence over earthquake-affected regions in Sichuan, China

    Science.gov (United States)

    Shi, Zhao; Wei, Fangqiang; Chandrasekar, Venkatachalam

    2018-03-01

    Both the Ms 8.0 Wenchuan earthquake on 12 May 2008 and the Ms 7.0 Lushan earthquake on 20 April 2013 occurred in the province of Sichuan, China. In the earthquake-affected mountainous area, a large amount of loose material caused a high occurrence of debris flow during the rainy season. In order to evaluate the rainfall intensity-duration (I-D) threshold of debris flow in the earthquake-affected area, and to fill the observational gaps caused by the relatively scarce and low-altitude deployment of rain gauges in this area, raw data from two S-band China New Generation Doppler Weather Radars (CINRAD) were captured for six rainfall events that triggered 519 debris flows between 2012 and 2014. Due to the challenges of radar quantitative precipitation estimation (QPE) over mountainous areas, a series of improvement measures are considered: a hybrid scan mode, a vertical profile of reflectivity (VPR) correction, a mosaic of reflectivity, a merged rainfall-reflectivity (R - Z) relationship for convective and stratiform rainfall, and rainfall bias adjustment with a Kalman filter (KF). For validating rainfall accumulation over complex terrain, the study areas are divided into two kinds of regions by a height threshold of 1.5 km above the ground. Three kinds of radar rainfall estimates are compared with rain gauge measurements. It is observed that the normalized mean bias (NMB) is decreased by 39 % and the fitted linear ratio between radar and rain gauge observations reaches 0.98. Furthermore, the radar-based I-D threshold derived by the frequentist method is I = 10.1D-0.52 and is underestimated by uncorrected raw radar data. In order to verify the impacts on observations due to spatial variation, I-D thresholds are identified from the nearest rain gauge observations and radar observations at the rain gauge locations.
It is found that both kinds of observations have similar I-D thresholds and likewise underestimate I-D thresholds due to undershooting at the core of convective
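The derived power law I = 10.1D-0.52 (intensity in mm h-1, duration in hours, exponent -0.52) can be applied directly to classify whether a rainfall episode exceeds the debris-flow triggering threshold; a minimal sketch using the reported coefficients:

```python
def exceeds_id_threshold(intensity_mm_h, duration_h, a=10.1, b=0.52):
    """True if mean intensity exceeds the I-D threshold a * D**(-b)
    (coefficients from the radar-based frequentist fit above)."""
    return intensity_mm_h > a * duration_h ** (-b)
```

For example, 12 mm h-1 sustained for 1 h exceeds the threshold (10.1 mm h-1), while 2 mm h-1 over 10 h does not (threshold ≈ 3.05 mm h-1).
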

  6. Estimating Tropical Cyclone Precipitation from Station Observations

    Institute of Scientific and Technical Information of China (English)

    REN Fumin; WANG Yongmei; WANG Xiaoling; LI Weijing

    2007-01-01

    In this paper, an objective technique for estimating tropical cyclone (TC) precipitation from station observations is proposed. Based on a comparison between the Original Objective Method (OOM) and the Expert Subjective Method (ESM), the Objective Synoptic Analysis Technique (OSAT) for partitioning TC precipitation was developed by analyzing the western North Pacific (WNP) TC historical tracks and the daily precipitation datasets. As an objective version of the ESM, OSAT overcomes the main problems of OOM by changing two fixed parameters in OOM, the thresholds for the distance of the absolute TC precipitation (D0) and the TC size (D1), into variable parameters. Case verification for OSAT was also carried out using CMORPH (Climate Prediction Center MORPHing technique) daily precipitation measurements, NOAA's combined satellite precipitation measurement system. This indicates that OSAT is capable of distinguishing simultaneous TC precipitation rain belts from those associated with different TCs or with middle-latitude weather systems.

  7. Improving multisensor estimation of heavy-to-extreme precipitation via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Kim, Beomgeun; Seo, Dong-Jun; Noh, Seong Jin; Prat, Olivier P.; Nelson, Brian R.

    2018-01-01

    A new technique for merging radar precipitation estimates and rain gauge data is developed and evaluated to improve multisensor quantitative precipitation estimation (QPE), in particular, of heavy-to-extreme precipitation. Unlike the conventional cokriging methods which are susceptible to conditional bias (CB), the proposed technique, referred to herein as conditional bias-penalized cokriging (CBPCK), explicitly minimizes Type-II CB for improved quantitative estimation of heavy-to-extreme precipitation. CBPCK is a bivariate version of extended conditional bias-penalized kriging (ECBPK) developed for gauge-only analysis. To evaluate CBPCK, cross validation and visual examination are carried out using multi-year hourly radar and gauge data in the North Central Texas region in which CBPCK is compared with the variant of the ordinary cokriging (OCK) algorithm used operationally in the National Weather Service Multisensor Precipitation Estimator. The results show that CBPCK significantly reduces Type-II CB for estimation of heavy-to-extreme precipitation, and that the margin of improvement over OCK is larger in areas of higher fractional coverage (FC) of precipitation. When FC > 0.9 and hourly gauge precipitation is > 60 mm, the reduction in root mean squared error (RMSE) by CBPCK over radar-only (RO) is about 12 mm while the reduction in RMSE by OCK over RO is about 7 mm. CBPCK may be used in real-time analysis or in reanalysis of multisensor precipitation for which accurate estimation of heavy-to-extreme precipitation is of particular importance.
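Type-II conditional bias is the bias of the estimate conditional on the true (observed) value; large negative values at high observed amounts indicate the underestimation of heavy precipitation that CBPCK is designed to penalize. A simple binned diagnostic (illustrative only, not the CBPCK estimator itself) can be sketched as:

```python
import numpy as np

def type2_conditional_bias(obs, est, bins):
    """Mean (estimate - observation) within bins of the observed
    value; one value per bin, NaN where a bin is empty (sketch)."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    idx = np.digitize(obs, bins)
    return np.array([(est[idx == k] - obs[idx == k]).mean()
                     if np.any(idx == k) else np.nan
                     for k in range(1, len(bins))])
```
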

  8. Short-range quantitative precipitation forecasting using Deep Learning approaches

    Science.gov (United States)

    Akbari Asanjan, A.; Yang, T.; Gao, X.; Hsu, K. L.; Sorooshian, S.

    2017-12-01

    Predicting short-range quantitative precipitation is very important for flood forecasting, early flood warning and other hydrometeorological purposes. This study aims to improve precipitation forecasting skill using a recently developed and advanced machine learning technique named Long Short-Term Memory (LSTM). The proposed LSTM learns the changing patterns of clouds from Cloud-Top Brightness Temperature (CTBT) images, retrieved from the infrared channel of the Geostationary Operational Environmental Satellite (GOES), using a sophisticated and effective learning method. After learning the dynamics of clouds, the LSTM model predicts the upcoming rainy CTBT events. The proposed model is then merged with a precipitation estimation algorithm termed Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) to provide precipitation forecasts. The results of the LSTM merged with PERSIANN are compared to those of an Elman-type Recurrent Neural Network (RNN) merged with PERSIANN and of the Final Analysis of the Global Forecast System model over the states of Oklahoma, Florida and Oregon. The performance of each model is investigated during 3 storm events, each located over one of the study regions. The results indicate that the merged LSTM forecasts outperform the numerical and statistical baselines in terms of Probability of Detection (POD), False Alarm Ratio (FAR), Critical Success Index (CSI), RMSE and correlation coefficient, especially in convective systems. The proposed method shows superior capabilities in short-term forecasting over the compared methods.
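The categorical scores used in the verification above (POD, FAR, CSI) all follow from a 2x2 contingency table of threshold exceedances; a minimal sketch with the standard definitions (the rain/no-rain threshold value is arbitrary here):

```python
def categorical_scores(forecast, observed, threshold=0.1):
    """POD = hits/(hits+misses), FAR = false_alarms/(hits+false_alarms),
    CSI = hits/(hits+misses+false_alarms), from exceedance pairs."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        fy, oy = f >= threshold, o >= threshold
        hits += fy and oy
        misses += (not fy) and oy
        false_alarms += fy and (not oy)
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else float("nan")
    return pod, far, csi
```
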

  9. Incorporating Satellite Precipitation Estimates into a Radar-Gauge Multi-Sensor Precipitation Estimation Algorithm

    Directory of Open Access Journals (Sweden)

    Yuxiang He

    2018-01-01

    Full Text Available This paper presents a new and enhanced fusion module for the Multi-Sensor Precipitation Estimator (MPE) that objectively blends real-time satellite quantitative precipitation estimates (SQPE) with radar and gauge estimates. This module consists of a preprocessor that mitigates systematic bias in SQPE, and a two-way blending routine that statistically fuses adjusted SQPE with radar estimates. The preprocessor not only corrects systematic bias in SQPE, but also improves the spatial distribution of precipitation based on SQPE and makes it closely resemble that of radar-based observations. The module uses a more sophisticated radar-satellite merging technique to blend preprocessed datasets, and provides a better overall QPE product. The performance of the new satellite-radar-gauge blending module is assessed using independent rain gauge data over a five-year period between 2003 and 2007. The assessment evaluates the accuracy of the newly developed satellite-radar-gauge (SRG) blended products versus that of radar-gauge products (which represent the MPE algorithm currently used in National Weather Service (NWS) operations) over two regions: (I) inside effective radar coverage and (II) immediately outside radar coverage. The outcomes of the evaluation indicate that (a) ingest of SQPE over areas within effective radar coverage improves the quality of QPE by mitigating the errors in radar estimates in region I; and (b) blending of radar, gauge, and satellite estimates over region II reduces errors relative to bias-corrected SQPE. In addition, the new module alleviates the discontinuities along the boundaries of effective radar coverage otherwise seen when SQPE is used directly to fill the areas outside of effective radar coverage.

  10. Steps toward a CONUS-wide reanalysis with archived NEXRAD data using National Mosaic and Multisensor Quantitative Precipitation Estimation (NMQ/Q2) algorithms

    Science.gov (United States)

    Stevens, S. E.; Nelson, B. R.; Langston, C.; Qi, Y.

    2012-12-01

    The National Mosaic and Multisensor QPE (NMQ/Q2) software suite, developed at NOAA's National Severe Storms Laboratory (NSSL) in Norman, OK, addresses a large deficiency in the resolution of currently archived precipitation datasets. Current standards, both radar- and satellite-based, provide for nationwide precipitation data with a spatial resolution of up to 4-5 km, with a temporal resolution as fine as one hour. Efforts are ongoing to process archived NEXRAD data for the period of record (1996 - present), producing a continuous dataset providing precipitation data at a spatial resolution of 1 km, on a timescale of only five minutes. In addition, radar-derived precipitation data are adjusted hourly using a wide variety of automated gauge networks spanning the United States. Applications for such a product range widely, from emergency management and flash flood guidance, to hydrological studies and drought monitoring. Results are presented from a subset of the NEXRAD dataset, providing basic statistics on the distribution of rainrates, relative frequency of precipitation types, and several other variables which demonstrate the variety of output provided by the software. Precipitation data from select case studies are also presented to highlight the increased resolution provided by this reanalysis and the possibilities that arise from the availability of data on such fine scales. A previously completed pilot project and steps toward a nationwide implementation are presented along with proposed strategies for managing and processing such a large dataset. Reprocessing efforts span several institutions in both North Carolina and Oklahoma, and data/software coordination are key in producing a homogeneous record of precipitation to be archived alongside NOAA's other Climate Data Records. Methods are presented for utilizing supercomputing capability in expediting processing, to allow for the iterative nature of a reanalysis effort.

  11. Satellite precipitation estimation over the Tibetan Plateau

    Science.gov (United States)

    Porcu, F.; Gjoka, U.

    2012-04-01

    Precipitation characteristics over the Tibetan Plateau are very little known, given the scarcity of reliable and widely distributed ground observations; thus the satellite approach is a valuable choice for large-scale precipitation analysis and hydrological cycle studies. However, the satellite perspective suffers various shortcomings at the different wavelengths used in atmospheric remote sensing. In the microwave spectrum, the high soil emissivity often masks or hides the atmospheric signal upwelling from light-to-moderate precipitation layers, while low and relatively thin precipitating clouds are not well detected in the visible-infrared because of their low contrast with the cold and bright (if snow-covered) background. In this work an IR-based, statistical rainfall estimation technique is trained and applied over the Tibetan Plateau hydrological basin to retrieve precipitation intensity at different spatial and temporal scales. The technique is based on a simple artificial neural network scheme trained with two supervised training sets assembled for the monsoon season and for the rest of the year. For the monsoon season (June to September), ground radar precipitation data for a few case studies are used to build the training set: four days in summer 2009 are considered. For the rest of the year, the CloudSat-CPR derived snowfall rate has been used as the reference precipitation data, following the Kulie and Bennartz (2009) algorithm. METEOSAT-7 infrared channel radiances (at 6.7 and 11 micrometers) and derived local variability features (such as local standard deviation and local average) are used as input, and the actual rain rate is obtained as output for each satellite slot, every 30 minutes on the satellite grid.
The satellite rainrate maps for three years (2008-2010) are computed and compared with available global precipitation products (such as C-MORPH and TMPA products) and with other techniques applied to the Plateau area: similarities and differences are
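The local variability features mentioned above (local average and local standard deviation of the IR radiance field) can be computed with a sliding window; a plain-NumPy sketch in which the window size and edge handling are our assumptions:

```python
import numpy as np

def local_features(ir, win=3):
    """Local mean and local standard deviation of a 2-D IR radiance
    field over a win x win neighbourhood (edge-replicated padding)."""
    pad = win // 2
    padded = np.pad(np.asarray(ir, float), pad, mode="edge")
    rows, cols = np.asarray(ir).shape
    stack = np.stack([padded[i:i + rows, j:j + cols]
                      for i in range(win) for j in range(win)])
    return stack.mean(axis=0), stack.std(axis=0)
```
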

  12. Contribution of long-term accounting for raindrop size distribution variations on quantitative precipitation estimation by weather radar: Disdrometers vs parameter optimization

    Science.gov (United States)

    Hazenberg, P.; Uijlenhoet, R.; Leijnse, H.

    2015-12-01

    Volumetric weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources, which can be subdivided into two main groups: 1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, vertical profile of reflectivity, attenuation, etc.), and 2) errors related to the conversion of the observed reflectivity (Z) values into rainfall intensity (R) and specific attenuation (k). Until the recent wide-scale implementation of dual-polarimetric radar, this second group of errors received relatively little attention, focusing predominantly on precipitation type-dependent Z-R and Z-k relations. The current work accounts for the impact of variations of the drop size distribution (DSD) on the radar QPE performance. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed within The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product as compared to applying climatological Z-R and Z-k relations. However, overall precipitation intensities are still underestimated. This underestimation is expected to result from unaccounted errors (e.g. transmitter calibration, erroneous identification of precipitation as clutter, overshooting and small-scale variability). In case the DSD parameters are optimized, the performance of the radar is further improved, resulting in the best performance of the radar QPE product. However
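The link between DSD parameters and the Z-R (and Z-k) coefficients can be illustrated by computing both Z and R from the same normalized gamma DSD. The sketch below integrates the DSD numerically; the Atlas-type fall-speed law v(D) = 3.78 D^0.67 m/s is an assumption of this sketch and is not specified in the record:

```python
import numpy as np
from math import gamma, pi

def z_and_r_from_gamma_dsd(nw, mu, d0):
    """Reflectivity Z (mm^6 m^-3) and rain rate R (mm h^-1) from a
    normalized gamma DSD,
      N(D) = Nw * f(mu) * (D/D0)**mu * exp(-(3.67 + mu) * D / D0),
    integrated by a simple rectangle rule (D in mm, N in m^-3 mm^-1)."""
    d = np.linspace(0.01, 8.0, 4000)  # drop diameter grid, mm
    dd = d[1] - d[0]
    f_mu = (6.0 / 3.67**4) * (3.67 + mu)**(mu + 4) / gamma(mu + 4)
    n_d = nw * f_mu * (d / d0)**mu * np.exp(-(3.67 + mu) * d / d0)
    z = np.sum(n_d * d**6) * dd                                      # 6th moment
    r = 6.0 * pi * 1e-4 * np.sum(n_d * d**3 * 3.78 * d**0.67) * dd   # flux of water
    return z, r
```

Varying mu and D0 while holding rain rate fixed changes the effective Z-R prefactor, which is the dependence the parameter-optimization approach above exploits.
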

  13. Augmenting Satellite Precipitation Estimation with Lightning Information

    Energy Technology Data Exchange (ETDEWEB)

    Mahrooghy, Majid [Mississippi State University (MSU); Anantharaj, Valentine G [ORNL; Younan, Nicolas H. [Mississippi State University (MSU); Petersen, Walter A. [NASA Marshall Space Flight Center, Huntsville, AL; Hsu, Kuo-Lin [University of California, Irvine; Behrangi, Ali [Jet Propulsion Laboratory, Pasadena, CA; Aanstoos, James [Mississippi State University (MSU)

    2013-01-01

    We have used lightning information to augment the Precipitation Estimation from Remotely Sensed Imagery using an Artificial Neural Network - Cloud Classification System (PERSIANN-CCS). Co-located lightning data are used to segregate cloud patches, segmented from GOES-12 infrared data, into either electrified (EL) or non-electrified (NEL) patches. A set of features is extracted separately for the EL and NEL cloud patches. The features for the EL cloud patches include new features based on the lightning information. The cloud patches are classified and clustered using self-organizing maps (SOM). Then brightness temperature and rain rate (T-R) relationships are derived for the different clusters. Rain rates are estimated for the cloud patches based on their representative T-R relationship. The Equitable Threat Score (ETS) for daily precipitation estimates is improved by almost 12% for the winter season. In the summer, no significant improvements in ETS are noted.
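The ETS quoted above is the threat score adjusted for hits expected by chance; with standard contingency-table counts it can be sketched as:

```python
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS = (hits - hits_random) / (hits + misses + false_alarms - hits_random),
    where hits_random is the number of hits expected by chance."""
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom else float("nan")
```

A perfect forecast gives ETS = 1; a random forecast gives ETS = 0, so a 12% improvement in ETS reflects skill beyond chance agreement.
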

  14. Validating quantitative precipitation forecast for the Flood ...

    Indian Academy of Sciences (India)

    In order to issue an accurate warning for flood, a better or appropriate quantitative forecasting of precipitation is required. In view of this, the present study intends to validate the quantitative precipitation forecast (QPF) issued during the southwest monsoon season for six river catchments (basins) under the flood meteorological ...

  15. Improving precipitation estimates over the western United States using GOES-R precipitation data

    Science.gov (United States)

    Karbalaee, N.; Kirstetter, P. E.; Gourley, J. J.

    2017-12-01

    Satellite remote sensing data with fine spatial and temporal resolution are widely used for precipitation estimation in applications such as hydrological modeling, storm prediction, and flash flood monitoring. The Geostationary Operational Environmental Satellite-R series (GOES-R) is the next generation of environmental satellites, providing hydrologic, atmospheric, and climatic information every 30 seconds over the western hemisphere. The high resolution and low latency of GOES-R observations are essential for the monitoring and prediction of floods, specifically in the western United States where the vantage point of space can complement the degraded weather radar coverage of the NEXRAD network. The GOES-R rainfall rate algorithm will yield deterministic quantitative precipitation estimates (QPE). Accounting for inherent uncertainties will further advance the GOES-R QPEs, since with quantifiable error bars the rainfall estimates can be more readily fused with ground radar products. On the ground, the high-resolution NEXRAD-based precipitation estimation from the Multi-Radar/Multi-Sensor (MRMS) system, which is now operational in the National Weather Service (NWS), is challenged by a lack of suitable coverage of operational weather radars over complex terrain. Distributions of QPE uncertainties associated with the GOES-R deterministic retrievals are derived and analyzed using MRMS over regions with good radar coverage. They will be merged with MRMS-based probabilistic QPEs developed to advance multisensor QPE integration. This research aims at improving precipitation estimation over CONUS by combining observations from GOES-R and MRMS to provide consistent, accurate and fine-resolution precipitation rates with uncertainties.

  16. Bayesian quantitative precipitation forecasts in terms of quantiles

    Science.gov (United States)

    Bentzien, Sabrina; Friederichs, Petra

    2014-05-01

    Ensemble prediction systems (EPS) for numerical weather prediction on the mesoscale are developed in particular to obtain probabilistic guidance for high-impact weather. An EPS issues not only a deterministic future state of the atmosphere but a sample of possible future states. Ensemble postprocessing then translates such a sample of forecasts into probabilistic measures. This study focuses on probabilistic quantitative precipitation forecasts in terms of quantiles. Quantiles are particularly suitable to describe precipitation at various locations, since no assumption is required on the distribution of precipitation. The focus is on prediction during high-impact events, related to the Volkswagen Stiftung funded project WEX-MOP (Mesoscale Weather Extremes - Theory, Spatial Modeling and Prediction). Quantile forecasts are derived from the raw ensemble and via quantile regression. Neighborhood methods and time-lagging are effective tools to inexpensively increase the ensemble spread, which results in more reliable forecasts, especially for extreme precipitation events. Since an EPS provides a large number of potentially informative predictors, a variable selection is required in order to obtain a stable statistical model. A Bayesian formulation of quantile regression allows for inference about the selection of predictive covariates through the use of appropriate prior distributions. Moreover, the implementation of an additional process layer for the regression parameters accounts for spatial variations of the parameters. Bayesian quantile regression and its spatially adaptive extension are illustrated for the German-focused mesoscale weather prediction ensemble COSMO-DE-EPS, which has run (pre)operationally since December 2010 at the German Meteorological Service (DWD). Objective out-of-sample verification uses the quantile score (QS), a weighted absolute error between quantile forecasts and observations. The QS is a proper scoring function and can be decomposed into
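The quantile score mentioned at the end is the pinball (check) loss averaged over forecast cases: errors are weighted by tau when the observation exceeds the forecast quantile and by (1 - tau) otherwise, which makes it proper for the tau-quantile. A minimal sketch:

```python
def quantile_score(quantile_forecasts, observations, tau):
    """Mean pinball loss of tau-quantile forecasts: for u = y - q,
    the contribution is tau*u if u >= 0, else (tau - 1)*u."""
    total = 0.0
    for q, y in zip(quantile_forecasts, observations):
        u = y - q
        total += (tau - (u < 0)) * u
    return total / len(observations)
```
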

  17. Connecting Satellite-Based Precipitation Estimates to Users

    Science.gov (United States)

    Huffman, George J.; Bolvin, David T.; Nelkin, Eric

    2018-01-01

    Beginning in 1997, the Merged Precipitation Group at NASA Goddard has distributed gridded global precipitation products built by combining satellite and surface gauge data. This started with the Global Precipitation Climatology Project (GPCP), then the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA), and recently the Integrated Multi-satellitE Retrievals for the Global Precipitation Measurement (GPM) mission (IMERG). This 20+-year (and on-going) activity has yielded an important set of insights and lessons learned for making state-of-the-art precipitation data accessible to the diverse communities of users. Merged-data products critically depend on the input sensors and the retrieval algorithms providing accurate, reliable estimates, but it is also important to provide ancillary information that helps users determine suitability for their application. We typically provide fields of estimated random error, and recently reintroduced the quality index concept at user request. Also at user request we have added a (diagnostic) field of estimated precipitation phase. Over time, increasingly more ancillary fields have been introduced for intermediate products that give expert users insight into the detailed performance of the combination algorithm, such as individual merged microwave and microwave-calibrated infrared estimates, the contributing microwave sensor types, and the relative influence of the infrared estimate.

  18. Opportunities and challenges for evaluating precipitation estimates during GPM mission

    Energy Technology Data Exchange (ETDEWEB)

    Amitai, E. [George Mason Univ. and NASA Goddard Space Flight Center, Greenbelt, MD (United States); NASA Goddard Space Flight Center, Greenbelt, MD (United States); Llort, X.; Sempere-Torres, D. [GRAHI/Univ. Politecnica de Catalunya, Barcelona (Spain)

    2006-10-15

    Data assimilation in conjunction with numerical weather prediction and a variety of hydrologic applications now depend on satellite observations of precipitation. However, providing values of precipitation is not sufficient unless they are accompanied by the associated uncertainty estimates. The main approach to quantifying satellite precipitation uncertainties generally requires establishment of reliable uncertainty estimates for the ground validation rainfall products. This paper discusses several of the relevant validation concepts evolving from the Tropical Rainfall Measuring Mission (TRMM) era to the Global Precipitation Measurement (GPM) mission era in the context of determining and reducing uncertainties of ground- and space-based radar rainfall estimates. From comparisons of probability distribution functions of rain rates derived from TRMM precipitation radar and co-located ground-based radar data - using the new NASA TRMM radar rainfall products (version 6) - this paper provides (1) a brief review of the importance of comparing pdfs of rain rate for statistical and physical verification of space-borne radar estimates of precipitation; (2) a brief review of how well the ground validation estimates compare to the TRMM radar retrieved estimates; and (3) a discussion of opportunities and challenges to determine and reduce the uncertainties in space-based and ground-based radar estimates of rain rate distributions. (orig.)
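
    A hedged sketch of the kind of pdf comparison described: occurrence and rain-volume-weighted pdfs of rain rate over log-spaced bins. The binning and the function name are illustrative choices, not the study's:

```python
import numpy as np

def rain_rate_pdfs(rates_mm_h, edges):
    """Occurrence pdf and rain-volume-weighted pdf of rain rate over
    the given bin edges; each integrates to 1 over the binned range.
    The volume-weighted pdf shows which rates carry the rain amount."""
    rates = np.asarray(rates_mm_h, dtype=float)
    widths = np.diff(edges)
    occ, _ = np.histogram(rates, bins=edges)
    vol, _ = np.histogram(rates, bins=edges, weights=rates)
    occ_pdf = occ / (occ.sum() * widths)
    vol_pdf = vol / (vol.sum() * widths)
    return occ_pdf, vol_pdf

# Log-spaced bins emphasize light rates without losing heavy ones.
edges = np.logspace(-1, 2, 16)   # 0.1 to 100 mm/h
```

    In a TRMM-vs-ground-radar comparison, the two pdfs would be computed from each sensor's co-located rain-rate samples and overlaid per bin.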

  19. Comparing NEXRAD Operational Precipitation Estimates and Raingage Observations of Intense Precipitation in the Missouri River Basin.

    Science.gov (United States)

    Young, C. B.

    2002-05-01

    Accurate observation of precipitation is critical to the study and modeling of land surface hydrologic processes. NEXRAD radar-based precipitation estimates are increasingly used in field experiments, hydrologic modeling, and water and energy budget studies due to their high spatial and temporal resolution, national coverage, and perceived accuracy. Extensive development and testing of NEXRAD precipitation algorithms have been carried out in the Southern Plains. Previous studies (Young et al. 2000, Young et al. 1999, Smith et al. 1996) indicate that NEXRAD operational products tend to underestimate precipitation at light rain rates. This study investigates the performance of NEXRAD precipitation estimates of high-intensity rainfall, focusing on flood-producing storms in the Missouri River Basin. NEXRAD estimates for these storms are compared with data from multiple raingage networks, including NWS recording and non-recording gages and ALERT raingage data for the Kansas City metropolitan area. Analyses include comparisons of gage and radar data at a wide range of temporal and spatial scales. Particular attention is paid to the October 4th, 1998, storm that produced severe flooding in Kansas City. NOTE: The phrase `NEXRAD operational products' in this abstract includes precipitation estimates generated using the Stage III and P1 algorithms. Both of these products estimate hourly accumulations on the (approximately) 4 km HRAP grid.

  20. Wavelet-based verification of the quantitative precipitation forecast

    Science.gov (United States)

    Yano, Jun-Ichi; Jakubiak, Bogumil

    2016-06-01

    This paper explores the use of wavelets for spatial verification of quantitative precipitation forecasts (QPF), and especially the capacity of wavelets to provide both localization and scale information. Two 24-h forecast experiments using the two versions of the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) on 22 August 2010 over Poland are used to illustrate the method. Strong spatial localization and the associated intermittency of the precipitation field make verification of QPF difficult using standard statistical methods. The wavelet becomes an attractive alternative, because it is specifically designed to extract spatially localized features. The wavelet modes are characterized by two indices, for scale and localization. Thus, these indices can simply be employed for characterizing the performance of QPF in scale and localization without any further elaboration or tunable parameters. Furthermore, spatially localized features can be extracted in wavelet space in a relatively straightforward manner with only a weak dependence on a threshold. Such a feature may be considered an advantage of the wavelet-based method over more conventional "object"-oriented verification methods, as the latter tend to exhibit strong threshold sensitivities. The present paper also points out limits of the so-called "scale separation" methods based on wavelets. Our study demonstrates how these wavelet-based QPF verifications can be performed straightforwardly. Possibilities for further development of the wavelet-based methods, especially toward the goal of identifying a weak physical process contributing to forecast error, are also pointed out.
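
    The scale-and-localization indexing that makes wavelets attractive here can be illustrated with a 1-D orthonormal Haar transform. The paper works with 2-D precipitation fields, so this is only a sketch of the bookkeeping, and the "rain band" signal is invented:

```python
import numpy as np

def haar_details(signal):
    """Orthonormal 1-D Haar transform of a length-2^n signal.
    Returns detail coefficients per scale (fine to coarse), each
    array indexed by position, plus the final coarse (mean) term.
    Orthonormality preserves total energy across scales."""
    x = np.asarray(signal, dtype=float)
    details = []
    while x.size > 1:
        even, odd = x[0::2], x[1::2]
        details.append((even - odd) / np.sqrt(2.0))  # localized differences
        x = (even + odd) / np.sqrt(2.0)              # coarser approximation
    return details, float(x[0])

def energy_per_scale(details):
    """Contribution of each scale to the total squared amplitude."""
    return [float(np.sum(d * d)) for d in details]

field = np.array([0.0, 0.0, 4.0, 9.0, 1.0, 0.0, 0.0, 0.0])  # a localized 'rain band'
details, coarse = haar_details(field)
```

    Each coefficient carries exactly the two indices the paper exploits: its scale (which detail array it sits in) and its location (its position in that array), so forecast error can be attributed per scale and per place.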

  1. A quantitative comparison of lightning-induced electron precipitation and VLF signal perturbations

    Science.gov (United States)

    Peter, W. B.; Inan, U. S.

    2007-12-01

    VLF signal perturbations recorded on the Holographic Array for Ionospheric/Lightning Research (HAIL) are quantitatively related to a comprehensive model of lightning-induced electron precipitation (LEP) events. The model consists of three major components: a test-particle model of gyroresonant whistler-induced electron precipitation, a Monte Carlo simulation of energy deposition into the ionosphere, and a model of VLF subionospheric signal propagation. For the two representative LEP events studied, the model calculates peak VLF amplitude perturbations within a factor of three of those observed, well within the expected variability of radiation belt flux levels. The phase response of the observed VLF signal to precipitation varied dramatically over the course of the two nights and this variability in phase response is not properly reproduced by the model. The model calculates a peak in the precipitation that is poleward displaced ~6° from the causative lightning flash, consistent with observations. The modeled precipitated energy flux (E > 45 keV) peaks at ~1 × 10⁻² (ergs s⁻¹ cm⁻²), resulting in a peak loss of ~0.001% from a single flux tube at L ~ 2.2, consistent with previous satellite measurements of LEP events. The precipitation calculated by the model is highly dependent on the near-loss-cone trapped radiation belt flux levels assumed, and hence our main objective is not to compare the model calculations and the VLF signal observations on an absolute basis but is rather to develop metrics with which we can characterize the VLF signal perturbations recorded on HAIL in terms of the associated precipitation flux. Metrics quantifying the ionospheric density enhancement (N_ILDE) and the electron precipitation (Γ) along a VLF signal path are strongly correlated with the VLF signal perturbations calculated by the model. A conversion ratio Ψ, relating VLF signal amplitude perturbations (ΔA) to the time-integrated precipitation (100-300 keV) along the VLF path (

  2. Quantitative measurement of lightning-induced electron precipitation using VLF remote sensing

    Science.gov (United States)

    Peter, William Bolton

    This dissertation examines the detection of lightning-induced energetic electron precipitation via subionospheric Very Low Frequency (VLF) remote sensing. The primary measurement tool used is a distributed set of VLF observing sites, the Holographic Array for Ionospheric/Lightning Research (HAIL), located along the eastern side of the Rocky Mountains in the Central United States. Measurements of the VLF signal perturbations indicate that 90% of the precipitation occurs over a region ~8 degrees in latitudinal extent, with the peak of the precipitation poleward displaced ~7 degrees from the causative discharge. A comparison of the VLF signal perturbations recorded on the HAIL array with a comprehensive model of LEP events allows for the quantitative measurement of electron precipitation and ionospheric density enhancement with unprecedented quantitative detail. The model consists of three major components: a test-particle model of gyroresonant whistler-induced electron precipitation; a Monte Carlo simulation of energy deposition into the ionosphere; and a model of VLF subionospheric signal propagation. For the two representative LEP events studied, the model calculates peak VLF amplitude and phase perturbations within a factor of three of those observed, well within the expected variability of radiation belt flux levels. The modeled precipitated energy flux (E > 45 keV) peaks at ~1 × 10⁻² (ergs s⁻¹ cm⁻²), resulting in a peak loss of ~0.001% from a single flux tube at L ~ 2.2, consistent with previous satellite measurements of LEP events. Metrics quantifying the ionospheric density enhancement (N_ILDE) and the electron precipitation (Γ) are strongly correlated with the VLF signal perturbations calculated by the model. A conversion ratio Ψ relates VLF signal amplitude perturbations (ΔA) to the time-integrated precipitation (100-300 keV) along the VLF path (Ψ = Γ/ΔA). The total precipitation (100-300 keV) induced by one of the representative LEP

  3. Estimation of precipitable water from surface dew point temperature

    International Nuclear Information System (INIS)

    Abdel Wahab, M.; Sharif, T.A.

    1991-09-01

    The Reitan (1963) regression equation, of the form ln w = a + b·T_d, has been examined and tested for estimating precipitable water content from surface dew point temperature at different locations. The study confirms that the slope b remains constant at 0.0681 per deg C, while the intercept a changes rapidly with latitude. The use of a variable intercept can improve the estimated result by 2%. (author). 6 refs, 4 figs, 3 tabs
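
    In code the relation is a one-liner. The intercept value below is a hypothetical placeholder, since the study's point is that only the slope b = 0.0681 is constant while a varies with latitude:

```python
import math

def precipitable_water_mm(dew_point_c, a, b=0.0681):
    """Reitan (1963)-type regression ln w = a + b*Td: precipitable
    water w (mm) from surface dew point Td (deg C). The slope b is
    the study's constant 0.0681 per deg C; the latitude-dependent
    intercept a must be supplied by the user."""
    return math.exp(a + b * dew_point_c)

# a = 0.11 is a hypothetical mid-latitude intercept, for illustration only.
w = precipitable_water_mm(15.0, a=0.11)
```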

  4. Assessment of satellite-based precipitation estimates over Paraguay

    Science.gov (United States)

    Oreggioni Weiberlen, Fiorella; Báez Benítez, Julián

    2018-04-01

    Satellite-based precipitation estimates represent a potential alternative source of input data in a plethora of meteorological and hydrological applications, especially in regions characterized by a low density of rain gauge stations. Paraguay provides a good example of a case where the use of satellite-based precipitation could be advantageous. This study aims to evaluate version 7 of the Tropical Rainfall Measuring Mission Multi-satellite Precipitation Analysis (TMPA V7; 3B42 V7) and version 1.0 of the purely satellite-based product of the Climate Prediction Center Morphing Technique (CMORPH RAW) through their comparison with daily in situ precipitation measurements from 1998 to 2012 over Paraguay. The statistical assessment is conducted with several commonly used indexes. Specifically, to evaluate the accuracy of daily precipitation amounts, mean error (ME), root mean square error (RMSE), BIAS, and coefficient of determination (R²) are used, and to analyze the capability to correctly detect different precipitation intensities, false alarm ratio (FAR), frequency bias index (FBI), and probability of detection (POD) are applied to various rainfall rates (0, 0.1, 0.5, 1, 2, 5, 10, 20, 40, 60, and 80 mm/day). Results indicate that TMPA V7 performs better than CMORPH RAW over Paraguay. TMPA V7 has higher accuracy in the estimation of daily rainfall volumes and greater precision in the detection of wet days (> 0 mm/day). However, both satellite products show a lower ability to appropriately detect high-intensity precipitation events.
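
    A sketch of the listed indexes under the usual contingency-table definitions. The sample arrays are invented, and note that BIAS is taken here as the ratio of totals, one of several conventions:

```python
import numpy as np

def verification_stats(sat, gauge, thr):
    """Continuous accuracy measures and categorical detection scores
    for satellite vs. gauge daily precipitation at threshold thr
    (mm/day), using standard contingency-table counts."""
    sat, gauge = np.asarray(sat, dtype=float), np.asarray(gauge, dtype=float)
    hits = int(np.sum((sat > thr) & (gauge > thr)))
    false_alarms = int(np.sum((sat > thr) & (gauge <= thr)))
    misses = int(np.sum((sat <= thr) & (gauge > thr)))
    return {
        "ME": float(np.mean(sat - gauge)),
        "RMSE": float(np.sqrt(np.mean((sat - gauge) ** 2))),
        "BIAS": float(sat.sum() / gauge.sum()),        # ratio of totals
        "POD": hits / (hits + misses),                 # probability of detection
        "FAR": false_alarms / (hits + false_alarms),   # false alarm ratio
        "FBI": (hits + false_alarms) / (hits + misses) # frequency bias index
    }

stats = verification_stats([1.0, 0.0, 5.0, 2.0], [0.0, 0.0, 4.0, 3.0], thr=0.1)
```

    Repeating the call over the list of thresholds (0, 0.1, 0.5, ... mm/day) reproduces the intensity-dependent detection analysis described above.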

  5. Quantitative analysis of precipitation over Fukushima to understand the wet deposition process in March 2011

    Science.gov (United States)

    Yatagai, A.; Onda, Y.; Watanabe, A.

    2012-04-01

    The Great East Japan Earthquake caused a severe accident at the Fukushima-Daiichi nuclear power plant (NPP), leading to the emission of large amounts of radioactive pollutants into the environment. The transport and diffusion of these radioactive pollutants in the atmosphere caused a disaster for residents in and around Fukushima. Studies have sought to understand the transport, diffusion, and deposition processes, and to understand the movement of radioactive pollutants through the soil, vegetation, rivers, and groundwater. However, a detailed simulation and understanding of the distribution of radioactive compounds depend on a simulation of precipitation and on information about the timing of the emission of these radioactive pollutants from the NPP. Past studies of nuclear releases have demonstrated the importance of wet deposition in distributing pollutants. Hence, this study examined the quantitative precipitation pattern in March 2011 using rain-gauge observations and X-band radar data from Fukushima University. We used the rain-gauge data of 1) the AMeDAS network of the Japan Meteorological Agency (1273 stations in Japan), 2) the Water Information System (47 stations in Fukushima prefecture), and 3) the Environmental Information Network of NTT Docomo (30 stations in Fukushima) to construct 0.05-degree mesh data using the same method used to create the APHRODITE daily gridded precipitation data (Yatagai et al., 2009). Since some AMeDAS data for the coastal region were lost due to the earthquake, the complementary networks of 2) and 3) yielded better precipitation estimates. The data show that snowfall occurred from the night of Mar 15 into the morning of Mar 16 throughout Fukushima prefecture. This had an important effect on the radioactive contamination pattern in Fukushima prefecture. The precipitation pattern itself does not show a one-to-one correspondence with the contamination pattern. While the pollutants transported northeast of the

  6. Pareto-Optimal Estimates of California Precipitation Change

    Science.gov (United States)

    Langenbrunner, Baird; Neelin, J. David

    2017-12-01

    In seeking constraints on global climate model projections under global warming, one commonly finds that different subsets of models perform well under different objective functions, and these trade-offs are difficult to weigh. Here a multiobjective approach is applied to a large set of subensembles generated from the Coupled Model Intercomparison Project phase 5 (CMIP5) ensemble. We use observations and reanalyses to constrain tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and California precipitation. An evolutionary algorithm identifies the set of Pareto-optimal subensembles across these three measures, and these subensembles are used to constrain end-of-century California wet season precipitation change. This methodology narrows the range of projections throughout California, increasing confidence in estimates of positive mean precipitation change. Finally, we show how this technique complements and generalizes emergent constraint approaches for restricting uncertainty in end-of-century projections within multimodel ensembles using multiple criteria for observational constraints.

  7. Pareto-optimal estimates that constrain mean California precipitation change

    Science.gov (United States)

    Langenbrunner, B.; Neelin, J. D.

    2017-12-01

    Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Coupled Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
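
    The Pareto-optimal filtering at the core of this approach reduces to non-dominated sorting over the objective values; a minimal sketch with invented cost rows standing in for subensemble errors:

```python
import numpy as np

def pareto_mask(costs):
    """Boolean mask of Pareto-optimal rows of `costs` (lower is better
    in every column). A row is dropped if some other row is at least
    as good in all objectives and strictly better in at least one."""
    costs = np.asarray(costs, dtype=float)
    mask = np.ones(costs.shape[0], dtype=bool)
    for i in range(costs.shape[0]):
        dominated = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        if np.any(dominated):
            mask[i] = False
    return mask

# Rows: candidate subensembles; columns: hypothetical errors in the
# three measures (SST, zonal wind, California precipitation).
errors = np.array([[1.0, 2.0, 1.5],
                   [2.0, 1.0, 1.5],
                   [2.0, 2.0, 2.0],
                   [0.5, 3.0, 1.0]])
front = pareto_mask(errors)
```

    In practice the evolutionary algorithm is used because the number of possible subensembles is combinatorially large; the dominance test itself is what defines the front.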

  8. Estimation of the characteristic energy of electron precipitation

    Directory of Open Access Journals (Sweden)

    C. F. del Pozo

    2002-09-01

    Data from simultaneous observations (on 13 February 1996, 9 November 1998, and 12 February 1999) with the IRIS, DASI and EISCAT systems are employed in the study of the energy distribution of the electron precipitation during substorm activity. The estimation of the characteristic energy of the electron precipitation over the common field of view of IRIS and DASI is discussed. In particular, we look closely at the physical basis of the correspondence between the characteristic energy, the flux-averaged energy, as defined below, and the logarithm of the ratio of the green-light intensity to the square of absorption. This study expands and corrects results presented in the paper by Kosch et al. (2001). It is noticed, moreover, that acceleration associated with diffusion processes in the magnetosphere long before precipitation may be controlling the shape of the energy spectrum. We propose and test a "mixed" distribution for the energy-flux spectrum, exponential at the lower energies and Maxwellian or modified power-law at the higher energies, with a threshold energy separating these two regimes. The energy-flux spectrum at Tromsø, in the 1–320 keV range, is derived from EISCAT electron density profiles in the 70–140 km altitude range and is applied in the "calibration" of the optical intensity and absorption distributions, in order to extrapolate the flux and characteristic energy maps. Key words. Ionosphere (auroral ionosphere; particle precipitation; particle acceleration)

  10. Uncertainty Estimation using Bootstrapped Kriging Predictions for Precipitation Isoscapes

    Science.gov (United States)

    Ma, C.; Bowen, G. J.; Vander Zanden, H.; Wunder, M.

    2017-12-01

    Isoscapes are spatial models representing the distribution of stable isotope values across landscapes. Isoscapes of hydrogen and oxygen in precipitation are now widely used in a diversity of fields, including geology, biology, hydrology, and atmospheric science. To generate isoscapes, geostatistical methods are typically applied to extend predictions from limited data measurements. Kriging is a popular method in isoscape modeling, but quantifying the uncertainty associated with the resulting isoscapes is challenging. Applications that use precipitation isoscapes to determine sample origin require estimation of uncertainty. Here we present a simple bootstrap method (SBM) to estimate the mean and uncertainty of the kriged isoscape and compare these results with a generalized bootstrap method (GBM) applied in previous studies. We used hydrogen isotopic data from IsoMAP to explore these two approaches for estimating uncertainty. We conducted 10 simulations for each bootstrap method and found that SBM yielded successful kriging predictions in more of the simulations (9/10) than GBM (4/10). The SBM prediction was closer to the original prediction generated without bootstrapping and had less variance than the GBM prediction. SBM was also tested on IsoMAP datasets with different numbers of observation sites. We determined that predictions from datasets with fewer than 40 observation sites using SBM were more variable than the original prediction. The approaches we used for estimating uncertainty will be compiled in an R package that is under development. We expect that these robust estimates of precipitation isoscape uncertainty can be applied in diagnosing the origin of samples ranging from various types of water to migratory animals, food products, and humans.
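
    A minimal sketch of the simple-bootstrap idea: resample stations with replacement, re-interpolate, and summarize the spread of the predictions. Inverse-distance weighting stands in for the kriging predictor to keep the example self-contained, and the station data are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """Inverse-distance interpolation (a lightweight stand-in here
    for the kriging predictor used in the study)."""
    d = np.linalg.norm(xy_obs - xy_new, axis=1)
    if np.any(d == 0.0):
        return float(z_obs[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * z_obs) / np.sum(w))

def bootstrap_prediction(xy_obs, z_obs, xy_new, n_boot=500):
    """Resample observation sites with replacement, re-interpolate at
    xy_new, and return the mean and std of the prediction distribution."""
    n = len(z_obs)
    preds = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        preds[b] = idw(xy_obs[idx], z_obs[idx], xy_new)
    return preds.mean(), preds.std(ddof=1)

xy = rng.uniform(0.0, 10.0, size=(25, 2))   # invented station coordinates
d2h = rng.normal(-60.0, 8.0, size=25)       # invented delta-2H values (permil)
mean_pred, sigma = bootstrap_prediction(xy, d2h, np.array([5.0, 5.0]))
```

    The std of the bootstrap predictions plays the role of the isoscape uncertainty surface when the calculation is repeated over a prediction grid.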

  11. The estimation of probable maximum precipitation: the case of Catalonia.

    Science.gov (United States)

    Casas, M Carmen; Rodríguez, Raül; Nieto, Raquel; Redaño, Angel

    2008-12-01

    A brief overview of the different techniques used to estimate the probable maximum precipitation (PMP) is presented. As a particular case, the 1-day PMP over Catalonia has been calculated and mapped with a high spatial resolution. For this purpose, the annual maximum daily rainfall series from 145 pluviometric stations of the Instituto Nacional de Meteorología (Spanish Weather Service) in Catalonia have been analyzed. In order to obtain values of PMP, an enveloping frequency factor curve based on the actual rainfall data of stations in the region has been developed. This enveloping curve has been used to estimate 1-day PMP values of all the 145 stations. Applying the Cressman method, the spatial analysis of these values has been achieved. Monthly precipitation climatological data, obtained from the application of Geographic Information Systems techniques, have been used as the initial field for the analysis. The 1-day PMP at 1 km² spatial resolution over Catalonia has been objectively determined, varying from 200 to 550 mm. Structures with wavelength longer than approximately 35 km can be identified and, despite their general concordance, the obtained 1-day PMP spatial distribution shows remarkable differences compared to the annual mean precipitation arrangement over Catalonia.
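
    The enveloping frequency-factor construction can be sketched as follows (a Hershfield-style statistical PMP; the rainfall series are invented, and the envelope is taken here as simply the largest station factor rather than a fitted curve):

```python
import numpy as np

def frequency_factor(annual_max):
    """How many standard deviations a station's record 1-day maximum
    lies above the mean of the remaining series (Hershfield-type k)."""
    x = np.sort(np.asarray(annual_max, dtype=float))
    rest = x[:-1]
    return float((x[-1] - rest.mean()) / rest.std(ddof=1))

def pmp_1day(annual_max, k_env):
    """1-day PMP = mean + enveloping frequency factor * std of the
    station's annual-maximum daily rainfall series (mm)."""
    x = np.asarray(annual_max, dtype=float)
    return float(x.mean() + k_env * x.std(ddof=1))

# Envelope over all stations, then apply it back at each station.
stations = [[40.0, 55.0, 62.0, 80.0, 120.0],
            [30.0, 45.0, 50.0, 52.0, 60.0]]
k_env = max(frequency_factor(s) for s in stations)
pmp = [pmp_1day(s, k_env) for s in stations]
```

    Because the envelope is taken over the whole region, each station's PMP exceeds its own observed record, which is the intended conservative behavior.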

  12. Quantitative Estimation for the Effectiveness of Automation

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun

    2012-01-01

    In advanced main control rooms (MCRs), various automation systems are applied to enhance human performance and reduce human error in industrial fields. It is expected that automation provides greater efficiency, lower workload, and fewer human errors. However, these promises are not always fulfilled. As new types of events related to the application of imperfect and complex automation have occurred, it is necessary to analyze the effects of automation systems on the performance of human operators. Therefore, we suggest a quantitative estimation method to analyze the effectiveness of automation systems according to the Level of Automation (LOA) classification, which has been developed over the past 30 years. The estimation of the effectiveness of automation will be achieved by calculating the failure probability of human performance related to the cognitive activities

  13. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model to simulate the runoff from a small catchment in Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate...
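
    A minimal GLUE sketch under stated assumptions: a toy one-parameter runoff model, a Nash-Sutcliffe-style likelihood truncated at zero, and an arbitrary behavioural threshold of 0.5 (none of these choices are from the study):

```python
import numpy as np

def glue(simulate, observed, param_sets, threshold):
    """Run the model for many parameter sets, keep the 'behavioural'
    sets whose likelihood exceeds the threshold, and return the
    likelihood-weighted mean prediction plus the behavioural count."""
    observed = np.asarray(observed, dtype=float)
    denom = np.sum((observed - observed.mean()) ** 2)
    liks, sims = [], []
    for theta in param_sets:
        sim = np.asarray(simulate(theta), dtype=float)
        lik = max(0.0, 1.0 - np.sum((sim - observed) ** 2) / denom)  # NSE, floored at 0
        if lik > threshold:
            liks.append(lik)
            sims.append(sim)
    liks = np.array(liks)
    weights = liks / liks.sum()
    return np.asarray(sims).T @ weights, len(liks)

rain = np.linspace(1.0, 5.0, 5)      # toy forcing
observed = 2.0 * rain                # 'observed' runoff from a toy linear model
pred, n_behavioural = glue(lambda c: c * rain, observed, np.linspace(0.5, 3.5, 31), 0.5)
```

    Prediction quantiles over the behavioural set (rather than the weighted mean alone) give the GLUE uncertainty bounds usually reported.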

  14. Improving Frozen Precipitation Density Estimation in Land Surface Modeling

    Science.gov (United States)

    Sparrow, K.; Fall, G. M.

    2017-12-01

    The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A current common practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Global Historical Climatology Network - Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125-degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. The difference in
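
    The per-station SLR screening described above can be sketched as follows; the bounds, the exact-10 exclusion, and the sample values are illustrative simplifications of the Baxter-style criteria, not the study's exact QC:

```python
import numpy as np

def station_mean_slr(snowfall_cm, swe_mm, slr_min=2.0, slr_max=50.0):
    """Mean snow-to-liquid ratio for one station. Keeps events where
    both snowfall and its water equivalent are positive, drops SLR
    outliers outside [slr_min, slr_max], and drops exact 10.0 reports
    (which often flag a default 10:1 conversion, not a measurement)."""
    snow_mm = np.asarray(snowfall_cm, dtype=float) * 10.0
    swe = np.asarray(swe_mm, dtype=float)
    ok = (snow_mm > 0.0) & (swe > 0.0)
    slr = snow_mm[ok] / swe[ok]
    slr = slr[(slr >= slr_min) & (slr <= slr_max) & (slr != 10.0)]
    return float(slr.mean()) if slr.size else float("nan")

# Invented events: 12.5 kept, exact 10.0 dropped, 100 dropped, 15 kept.
mean_slr = station_mean_slr([10.0, 5.0, 100.0, 3.0], [8.0, 5.0, 10.0, 2.0])
```

    The station means produced this way are what get detrended against elevation and kriged into the monthly grids.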

  15. GPS Estimates of Integrated Precipitable Water Aid Weather Forecasters

    Science.gov (United States)

    Moore, Angelyn W.; Gutman, Seth I.; Holub, Kirk; Bock, Yehuda; Danielson, David; Laber, Jayme; Small, Ivory

    2013-01-01

    Global Positioning System (GPS) meteorology provides enhanced density, low-latency (30-min resolution), integrated precipitable water (IPW) estimates to NOAA NWS (National Oceanic and Atmospheric Administration National Weather Service) Weather Forecast Offices (WFOs) to provide improved model and satellite data verification capability and more accurate forecasts of extreme weather such as flooding. An early activity of this project was to increase the number of stations contributing to the NOAA Earth System Research Laboratory (ESRL) GPS meteorology observing network in Southern California by about 27 stations. Following this, the Los Angeles/Oxnard and San Diego WFOs began using the enhanced GPS-based IPW measurements provided by ESRL in the 2012 and 2013 monsoon seasons. Forecasters found GPS IPW to be an effective tool in evaluating model performance, and in monitoring monsoon development between weather model runs for improved flood forecasting. GPS stations are multi-purpose, and routine processing for position solutions also yields estimates of tropospheric zenith delays, which can be converted into mm-accuracy PWV (precipitable water vapor) using in situ pressure and temperature measurements, the basis for GPS meteorology. NOAA ESRL has implemented this concept with a nationwide distribution of more than 300 "GPSMet" stations providing IPW estimates at sub-hourly resolution currently used in operational weather models in the U.S.
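
    The delay-to-water conversion that underlies GPS meteorology follows the standard Bevis-style formulation; the refractivity constants and the surface-temperature proxy for the column mean vapor temperature below are the commonly quoted values, used here only for illustration:

```python
def pwv_from_zwd_mm(zwd_mm, surface_temp_k):
    """Precipitable water vapor (mm) from GPS zenith wet delay (mm).
    PWV = Pi * ZWD with Pi ~ 0.15, where Pi depends on the weighted
    mean temperature of the vapor column, approximated here from the
    surface temperature (Bevis-style Tm = 70.2 + 0.72 Ts)."""
    tm = 70.2 + 0.72 * surface_temp_k   # mean vapor temperature (K)
    rho_w = 1000.0                      # liquid water density, kg/m^3
    r_v = 461.5                         # water vapor gas constant, J/(kg K)
    k3 = 3.739e3                        # refractivity constant, K^2/Pa
    k2_prime = 0.221                    # refractivity constant, K/Pa
    pi_factor = 1.0e6 / (rho_w * r_v * (k3 / tm + k2_prime))
    return pi_factor * zwd_mm

pwv = pwv_from_zwd_mm(100.0, 288.15)   # roughly 15-16 mm for a 100 mm wet delay
```

    The zenith total delay from routine GPS processing is first reduced by the hydrostatic delay (from surface pressure) to obtain the wet delay used here.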

  16. Antecedent precipitation index determined from CST estimates of rainfall

    Science.gov (United States)

    Martin, David W.

    1992-01-01

    This paper deals with an experimental calculation of a satellite-based antecedent precipitation index (API). The index is derived from daily rain images produced from infrared images using an improved version of GSFC's Convective/Stratiform Technique (CST). API is a measure of soil moisture, based on the notion that the amount of moisture in the soil at a given time is related to precipitation at earlier times. Four different CST programs, as well as the Geostationary Operational Environmental Satellite (GOES) Precipitation Index developed by Arkin in 1979, are compared to experimental results for the Mississippi Valley during the month of July. Rain images are shown for the best CST code and the ARK program. Comparisons are made of the accuracy and detail of the results for the two codes. This project demonstrates the feasibility of running the CST on a synoptic scale. The Mississippi Valley case is well suited for testing the feasibility of monitoring soil moisture by means of CST. Preliminary comparisons of CST and ARK indicate significant differences in estimates of rain amount and distribution.
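
    The API recursion behind the index is simple; the decay constant k and the initial value below are illustrative choices, not values from the study:

```python
def antecedent_precipitation_index(daily_rain_mm, k=0.9, api_init=0.0):
    """API_t = k * API_{t-1} + P_t: soil moisture is proxied by past
    rainfall decayed geometrically, plus the current day's rain."""
    api = api_init
    out = []
    for p in daily_rain_mm:
        api = k * api + p
        out.append(api)
    return out

series = antecedent_precipitation_index([10.0, 0.0, 5.0], k=0.5)
```

    Feeding the recursion with daily satellite rain estimates (CST or GPI) instead of gauge totals is exactly what makes the index "satellite-based."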

  17. A quantitative phase field model for hydride precipitation in zirconium alloys: Part I. Development of quantitative free energy functional

    International Nuclear Information System (INIS)

    Shi, San-Qiang; Xiao, Zhihua

    2015-01-01

    A temperature-dependent, quantitative free energy functional was developed for modeling hydride precipitation in zirconium alloys within a phase field scheme. The model takes into account crystallographic variants of hydrides, the interfacial energy between hydride and matrix, the interfacial energy between hydrides, elastoplastic hydride precipitation, and interaction with externally applied stress. The model is fully quantitative in real time and real length scale, and simulation results were compared with the limited experimental data available in the literature, with reasonable agreement. The work calls for experimental and/or theoretical investigation of some of the key material properties that are not yet available in the literature
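
    Schematically, a free energy functional of the kind described (hydrogen concentration c, hydride variant order parameters η_p, gradient terms, and an elastic contribution coupling to applied stress) takes the generic phase-field form below; this is a generic sketch of such a functional, not the paper's calibrated expression:

```latex
F = \int_V \Big[ f_{\mathrm{chem}}(c, \eta_1, \dots, \eta_n, T)
  + \frac{\kappa_c}{2}\,\lvert \nabla c \rvert^2
  + \sum_{p=1}^{n} \frac{\kappa_\eta}{2}\,\lvert \nabla \eta_p \rvert^2
  + e_{\mathrm{el}}(c, \eta_p, \sigma^{\mathrm{appl}}) \Big]\, \mathrm{d}V
```

    Making such a functional quantitative, as the paper does, means pinning f_chem to thermodynamic data, the gradient coefficients to measured interfacial energies, and e_el to the variants' transformation strains.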

  18. Global Precipitation Measurement (GPM) Core Observatory Falling Snow Estimates

    Science.gov (United States)

    Skofronick Jackson, G.; Kulie, M.; Milani, L.; Munchak, S. J.; Wood, N.; Levizzani, V.

    2017-12-01

    Retrievals of falling snow from space represent an important data set for understanding and linking the Earth's atmospheric, hydrological, and energy cycles. Estimates of falling snow must be captured to obtain the true global precipitation water cycle, snowfall accumulations are required for hydrological studies, and without knowledge of the frozen particles in clouds one cannot adequately understand the energy and radiation budgets. This work focuses on comparing the first stable falling snow retrieval products (released May 2017) for the Global Precipitation Measurement (GPM) Core Observatory (GPM-CO), which was launched February 2014, and carries both an active dual frequency (Ku- and Ka-band) precipitation radar (DPR) and a passive microwave radiometer (GPM Microwave Imager-GMI). Five separate GPM-CO falling snow retrieval algorithm products are analyzed including those from DPR Matched (Ka+Ku) Scan, DPR Normal Scan (Ku), DPR High Sensitivity Scan (Ka), combined DPR+GMI, and GMI. While satellite-based remote sensing provides global coverage of falling snow events, the science is relatively new, the different on-orbit instruments don't capture all snow rates equally, and retrieval algorithms differ. Thus a detailed comparison among the GPM-CO products elucidates advantages and disadvantages of the retrievals. GPM and CloudSat global snowfall evaluation exercises are natural investigative pathways to explore, but caution must be undertaken when analyzing these datasets for comparative purposes. This work includes outlining the challenges associated with comparing GPM-CO to CloudSat satellite snow estimates due to the different sampling, algorithms, and instrument capabilities. We will highlight some factors and assumptions that can be altered or statistically normalized and applied in an effort to make comparisons between GPM and CloudSat global satellite falling snow products as equitable as possible.

  19. Recent Progress on the Second Generation CMORPH: LEO-IR Based Precipitation Estimates and Cloud Motion Vector

    Science.gov (United States)

    Xie, Pingping; Joyce, Robert; Wu, Shaorong

    2015-04-01

    As reported at the EGU General Assembly of 2014, a prototype system was developed for the second generation CMORPH to produce global analyses of 30-min precipitation on a 0.05° lat/lon grid over the entire globe from pole to pole through integration of information from satellite observations as well as numerical model simulations. The second generation CMORPH is built upon the Kalman Filter based CMORPH algorithm of Joyce and Xie (2011). Inputs to the system include rainfall and snowfall rate retrievals from passive microwave (PMW) measurements aboard all available low earth orbit (LEO) satellites, precipitation estimates derived from infrared (IR) observations of geostationary (GEO) as well as LEO platforms, and precipitation simulations from numerical global models. Key to the success of the 2nd generation CMORPH, among other elements, are the development of a LEO-IR based precipitation estimate to fill in the polar gaps and objectively analyzed cloud motion vectors to capture cloud movements of various spatial scales over the entire globe. In this presentation, we report our recent work on the refinement of these two important algorithm components. The prototype algorithm for the LEO IR precipitation estimation is refined to achieve improved quantitative accuracy and consistency with PMW retrievals. AVHRR IR TBB data from all LEO satellites are first remapped to a 0.05° lat/lon grid over the entire globe at a 30-min interval. Temporally and spatially co-located data pairs of the LEO TBB and inter-calibrated combined satellite PMW retrievals (MWCOMB) are then collected to construct PDF tables. Precipitation at a grid box is derived from the TBB through matching the PDF tables for the TBB and the MWCOMB. This procedure is implemented separately for each season, latitude band, and underlying surface type to account for variations in the cloud-precipitation relationship. Meanwhile, a sub-system is developed to construct analyzed fields of
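
    The PDF-matching calibration described above can be sketched as quantile matching between co-located IR brightness temperatures and PMW rain rates: a TBB is assigned the rain rate at the same cumulative probability, with cold cloud tops mapped to the heavy-rain tail. The sample values and the simple rank-based quantile below are illustrative assumptions, not CMORPH code:

```python
import bisect

def pdf_match(tbb, tbb_sample, rate_sample):
    """Map an IR brightness temperature (TBB, K) to the rain rate (mm/h)
    at the matching quantile of co-located PMW retrievals. Cold cloud
    tops (low TBB) are assigned the heavy end of the rate distribution."""
    tbb_sorted = sorted(tbb_sample)
    n = len(tbb_sorted)
    i = min(bisect.bisect_left(tbb_sorted, tbb), n - 1)
    q = i / (n - 1) if n > 1 else 0.0           # quantile of this TBB
    rates = sorted(rate_sample)
    j = round((1.0 - q) * (len(rates) - 1))     # invert: cold -> heavy rain
    return rates[j]

# Invented co-located sample: TBB (K) and PMW rain rates (mm/h)
tbb_obs = [200.0, 210.0, 220.0, 230.0, 240.0]
pmw_rates = [0.0, 0.5, 1.2, 3.0, 8.0]
print(pdf_match(205.0, tbb_obs, pmw_rates))  # -> 3.0
```

    In the real system such tables are stratified by season, latitude band, and surface type, as the abstract notes.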

  20. Combining Radar and Daily Precipitation Data to Estimate Meaningful Sub-daily Precipitation Extremes

    Science.gov (United States)

    Pegram, G. G. S.; Bardossy, A.

    2016-12-01

    Short-duration extreme rainfalls are important for design. The purpose of this presentation is not to improve the day-by-day estimation of precipitation, but to obtain reasonable statistics for the sub-daily extremes at gauge locations. We are interested specifically in daily and sub-daily extreme values of precipitation at gauge locations. We do not employ the common procedure of using the time series of a control station to determine the missing data values at a target; we are interested in individual rare events, not sequences. The idea is to use radar to disaggregate daily totals into sub-daily amounts. In South Africa, an S-band radar operated relatively continuously at Bethlehem from 1998 to 2003, whose scan at 1.5 km above ground [CAPPI] overlapped a dense (10 km spacing) set of 45 pluviometers recording over the same 6-year period. Using this valuable set of data, and because we are only interested in rare extremes, small to medium values of rainfall depth were neglected, leaving the 12 ranked daily maxima in each set per year, whose sum typically comprised about 50% of each annual rainfall total. The method presented here uses radar to disaggregate daily gauge totals into sub-daily intervals down to 15 minutes in order to extract the maxima of sub-hourly through daily rainfall at each of 37 selected radar pixels [1 km square in plan], each containing one of the 45 pluviometers not masked out by the radar footprint. The pluviometer data were aggregated to daily totals, to act as if they were daily read gauges; their only other task was to help in the cross-validation exercise. The extrema were obtained as quantiles by ordering the 12 daily maxima of each interval per year. The unusual and novel goal was not to reproduce the precipitation matched in space and time, but to obtain frequency distributions of the gauge and radar extremes by matching their ranks, which we found to be stable and meaningful in cross-validation tests. We provide and
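
    The disaggregation idea above can be sketched as scaling the radar pixel's sub-daily temporal pattern to match the gauge's daily total. This is a hypothetical 6-hourly example; the study worked with intervals down to 15 minutes:

```python
def disaggregate_daily_total(daily_gauge_mm, radar_subdaily):
    """Split a daily gauge total into sub-daily amounts using the
    temporal pattern seen by the radar pixel over the gauge."""
    radar_sum = sum(radar_subdaily)
    if radar_sum == 0.0:
        # No radar echo: spread the total uniformly as a fallback
        n = len(radar_subdaily)
        return [daily_gauge_mm / n] * n
    return [daily_gauge_mm * r / radar_sum for r in radar_subdaily]

# Radar saw 40 mm over four 6-h blocks; the gauge recorded 30 mm that day
print(disaggregate_daily_total(30.0, [0.0, 10.0, 25.0, 5.0]))
# -> [0.0, 7.5, 18.75, 3.75]
```

    The sub-daily maxima extracted from such disaggregated series are then ranked per year to build the frequency distributions of extremes described in the abstract.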

  1. Downsizing a long-term precipitation network: Using a quantitative approach to inform difficult decisions

    Science.gov (United States)

    Mark B. Green; John L. Campbell; Ruth D. Yanai; Scott W. Bailey; Amey S. Bailey; Nicholas Grant; Ian Halm; Eric P. Kelsey; Lindsey E. Rustad

    2018-01-01

    The design of a precipitation monitoring network must balance the demand for accurate estimates with the resources needed to build and maintain the network. If there are changes in the objectives of the monitoring or the availability of resources, network designs should be adjusted. At the Hubbard Brook Experimental Forest in New Hampshire, USA, precipitation has been...

  2. Similarities and Improvements of GPM Dual-Frequency Precipitation Radar (DPR) upon TRMM Precipitation Radar (PR) in Global Precipitation Rate Estimation, Type Classification and Vertical Profiling

    Directory of Open Access Journals (Sweden)

    Jinyu Gao

    2017-11-01

    Spaceborne precipitation radars are powerful tools used to acquire adequate and high-quality precipitation estimates with high spatial resolution for a variety of applications in hydrological research. The Global Precipitation Measurement (GPM) mission, which deployed the first spaceborne Ka- and Ku-band dual-frequency radar (DPR), was launched in February 2014 as the upgraded successor of the Tropical Rainfall Measuring Mission (TRMM). This study matches the swath data of TRMM PR and GPM DPR Level 2 products during their overlapping periods at the global scale to investigate their similarities and DPR's improvements over TRMM PR in precipitation amount estimation and type classification. Results show that PR and DPR agree very well with each other in the global distribution of precipitation, while DPR improves the detectability of precipitation events significantly, particularly for light precipitation. The occurrences of total precipitation and of light precipitation (rain rates < 1 mm/h) detected by GPM DPR are ~1.7 and ~2.53 times greater, respectively, than those of PR. With regard to type classification, the dual-frequency (Ka/Ku) and single-frequency (Ku) methods performed similarly. The results are consistent in both the inner swath (the central 25 beams) and the outer swath (beams 1–12 and 38–49) of DPR. GPM DPR improves precipitation type classification remarkably, reducing the misclassification of clouds and noise signals as precipitation type "other" from 10.14% for TRMM PR to 0.5%. Generally, GPM DPR exhibits the same type division for around 82.89% (71.02%) of stratiform (convective) precipitation events recognized by TRMM PR. With regard to the freezing level height and bright band (BB) height, both radars correspond with each other very well, contributing to the consistency in stratiform precipitation classification. Both heights show clear latitudinal dependence. Results in this study shall contribute to future development of spaceborne

  3. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    Science.gov (United States)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions of the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, their questionable applicability in regions affected by orographic effects, their lack of consistent methods, and their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement, via a generic audit and one in-depth site-specific review, as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm-representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially

  4. 3800 Years of Quantitative Precipitation Reconstruction from the Northwest Yucatan Peninsula

    Science.gov (United States)

    Carrillo-Bastos, Alicia; Islebe, Gerald A.; Torrescano-Valle, Nuria

    2013-01-01

    Precipitation over the last 3800 years has been reconstructed using modern pollen calibration and precipitation data. A transfer function was then performed via the linear method of partial least squares. By calculating precipitation anomalies, it is estimated that precipitation deficits were greater than surpluses, reaching 21% and <9%, respectively. The period from 50 BC to 800 AD was the driest of the record. The drought related to the abandonment of the Maya Preclassic period featured a 21% reduction in precipitation, while the drought of the Maya collapse (800 to 860 AD) featured a reduction of 18%. The Medieval Climatic Anomaly was a period of positive phases (3.8–7.6%). The Little Ice Age was a period of climatic variability, with reductions in precipitation but without deficits. PMID:24391940

  5. Evaluation of precipitation estimates over CONUS derived from satellite, radar, and rain gauge datasets (2002-2012)

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.

    2014-10-01

    We use a suite of quantitative precipitation estimates (QPEs) derived from satellite, radar, and surface observations to derive precipitation characteristics over CONUS for the period 2002-2012. This comparison effort includes satellite multi-sensor datasets (bias-adjusted TMPA 3B42, near-real-time 3B42RT), radar estimates (NCEP Stage IV), and rain gauge observations. Remotely sensed precipitation datasets are compared with surface observations from the Global Historical Climatology Network (GHCN-Daily) and from PRISM (Parameter-elevation Regressions on Independent Slopes Model). The comparisons are performed at the annual, seasonal, and daily scales over the River Forecast Centers (RFCs) for CONUS. Annual average rain rates show satisfying agreement with GHCN-D for all products over CONUS (±6%). However, differences at the RFC scale are larger, in particular for near-real-time 3B42RT precipitation estimates (-33 to +49%). At annual and seasonal scales, the bias-adjusted 3B42 showed substantial improvement when compared to its near-real-time counterpart 3B42RT. However, large biases remained for 3B42 over the western US for higher average accumulations (≥5 mm day⁻¹) with respect to GHCN-D surface observations. At the daily scale, 3B42RT performed poorly in capturing extreme daily precipitation (>4 in day⁻¹) over the Northwest. Furthermore, the conditional and contingency analyses conducted illustrate the challenge of retrieving extreme precipitation from remote sensing estimates.
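
    Contingency analyses of the kind mentioned above are typically built from hit/miss/false-alarm counts at a rain/no-rain threshold. A minimal sketch, where the 0.1 mm/day threshold and the paired values are invented for illustration:

```python
def contingency_scores(sat, gauge, threshold=0.1):
    """Probability of detection (POD) and false alarm ratio (FAR)
    for paired daily satellite and gauge values (mm/day)."""
    hits = misses = false_alarms = 0
    for s, g in zip(sat, gauge):
        sat_rain, gauge_rain = s >= threshold, g >= threshold
        if sat_rain and gauge_rain:
            hits += 1
        elif gauge_rain:
            misses += 1
        elif sat_rain:
            false_alarms += 1
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return pod, far

# Four invented days of paired satellite and gauge totals (mm/day)
pod, far = contingency_scores([5.0, 0.0, 2.0, 0.3], [4.0, 1.0, 0.0, 0.2])
```

    Raising the threshold to an extreme value (e.g. 4 in day⁻¹) turns the same machinery into the extreme-event detection test discussed in the abstract.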

  6. Unrecorded Alcohol Consumption: Quantitative Methods of Estimation

    OpenAIRE

    Razvodovsky, Y. E.

    2010-01-01

    Keywords: unrecorded alcohol; methods of estimation. In this paper we focus on methods for estimating the level of unrecorded alcohol consumption. Present methods allow only an approximate estimation of the unrecorded alcohol consumption level. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of these estimation methods.

  7. Global Precipitation Measurement. Report 7; Bridging from TRMM to GPM to 3-Hourly Precipitation Estimates

    Science.gov (United States)

    Shepherd, J. Marshall; Smith, Eric A.; Adams, W. James (Editor)

    2002-01-01

    Historically, multi-decadal measurements of precipitation from surface-based rain gauges have been available over continents. However oceans remained largely unobserved prior to the beginning of the satellite era. Only after the launch of the first Defense Meteorological Satellite Program (DMSP) satellite in 1987 carrying a well-calibrated and multi-frequency passive microwave radiometer called Special Sensor Microwave/Imager (SSM/I) have systematic and accurate precipitation measurements over oceans become available on a regular basis; see Smith et al. (1994, 1998). Recognizing that satellite-based data are a foremost tool for measuring precipitation, NASA initiated a new research program to measure precipitation from space under its Mission to Planet Earth program in the 1990s. As a result, the Tropical Rainfall Measuring Mission (TRMM), a collaborative mission between NASA and NASDA, was launched in 1997 to measure tropical and subtropical rain. See Simpson et al. (1996) and Kummerow et al. (2000). Motivated by the success of TRMM, and recognizing the need for more comprehensive global precipitation measurements, NASA and NASDA have now planned a new mission, i.e., the Global Precipitation Measurement (GPM) mission. The primary goal of GPM is to extend TRMM's rainfall time series while making substantial improvements in precipitation observations, specifically in terms of measurement accuracy, sampling frequency, Earth coverage, and spatial resolution. This report addresses four fundamental questions related to the transition from current to future global precipitation observations as denoted by the TRMM and GPM eras, respectively.

  8. Downsizing a long-term precipitation network: Using a quantitative approach to inform difficult decisions.

    Science.gov (United States)

    Green, Mark B; Campbell, John L; Yanai, Ruth D; Bailey, Scott W; Bailey, Amey S; Grant, Nicholas; Halm, Ian; Kelsey, Eric P; Rustad, Lindsey E

    2018-01-01

    The design of a precipitation monitoring network must balance the demand for accurate estimates with the resources needed to build and maintain the network. If there are changes in the objectives of the monitoring or the availability of resources, network designs should be adjusted. At the Hubbard Brook Experimental Forest in New Hampshire, USA, precipitation has been monitored with a network established in 1955 that has grown to 23 gauges distributed across nine small catchments. This high sampling intensity allowed us to simulate reduced sampling schemes and thereby evaluate the effect of decommissioning gauges on the quality of precipitation estimates. We considered all possible scenarios of sampling intensity for the catchments on the south-facing slope (2047 combinations) and the north-facing slope (4095 combinations), from the current scenario with 11 or 12 gauges to only 1 gauge remaining. Gauge scenarios differed by as much as 6.0% from the best estimate (based on all the gauges), depending on the catchment, but 95% of the scenarios gave estimates within 2% of the long-term average annual precipitation. The insensitivity of precipitation estimates and the catchment fluxes that depend on them under many reduced monitoring scenarios allowed us to base our reduction decision on other factors such as technician safety, the time required for monitoring, and co-location with other hydrometeorological measurements (snow, air temperature). At Hubbard Brook, precipitation gauges could be reduced from 23 to 10 with a change of <2% in the long-term precipitation estimates. The decision-making approach illustrated in this case study is applicable to the redesign of monitoring networks when reduction of effort seems warranted.
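
    The scenario evaluation described above can be sketched by enumerating every non-empty subset of gauges and comparing each subset's mean against the all-gauge best estimate; with 11 gauges this yields the 2^11 - 1 = 2047 combinations mentioned. Five invented gauge means are used here to keep the output small:

```python
from itertools import combinations

def scenario_deviations(gauge_means):
    """Percent deviation of every non-empty gauge subset's mean
    from the best estimate that uses all gauges."""
    best = sum(gauge_means) / len(gauge_means)
    deviations = {}
    for r in range(1, len(gauge_means) + 1):
        for subset in combinations(range(len(gauge_means)), r):
            est = sum(gauge_means[i] for i in subset) / r
            deviations[subset] = 100.0 * (est - best) / best
    return deviations

# Five hypothetical gauge means (mm/yr); 2^5 - 1 = 31 scenarios
devs = scenario_deviations([1400.0, 1450.0, 1380.0, 1500.0, 1420.0])
print(len(devs))  # 31
```

    Ranking the scenarios by absolute deviation is what lets other factors (safety, monitoring time, co-location) break ties among the many near-equivalent reduced networks.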

  9. Enhancement of regional wet deposition estimates based on modeled precipitation inputs

    Science.gov (United States)

    James A. Lynch; Jeffery W. Grimm; Edward S. Corbett

    1996-01-01

    Application of a variety of two-dimensional interpolation algorithms to precipitation chemistry data gathered at scattered monitoring sites, for the purpose of estimating precipitation-borne ionic inputs for specific points or regions, has failed to produce accurate estimates. The accuracy of these estimates is particularly poor in areas of high topographic relief....

  10. Comparison of direct and precipitation methods for the estimation of ...

    African Journals Online (AJOL)

    Background: There is an increase in the use of direct assays for analysis of high- and low-density lipoprotein cholesterol by clinical laboratories, despite differences in performance characteristics compared with conventional precipitation methods. Calculation of low-density lipoprotein cholesterol in precipitation methods is based on total ...

  11. Evaluating the MSG satellite Multi-Sensor Precipitation Estimate for extreme rainfall monitoring over northern Tunisia

    Directory of Open Access Journals (Sweden)

    Saoussen Dhib

    2017-06-01

    Knowledge and evaluation of extreme precipitation is important for water resources and flood risk management, soil and land degradation, and other environmental issues. Due to the high potential threat to local infrastructure, such as buildings, roads and power supplies, heavy precipitation can have an important social and economic impact on society. At present, satellite-derived precipitation estimates are becoming more readily available. This paper aims to investigate the potential use of the Meteosat Second Generation (MSG) Multi-Sensor Precipitation Estimate (MPE) for extreme rainfall assessment in Tunisia. The MSG MPE data combine microwave rain rate estimations with SEVIRI thermal infrared channel data, using an EUMETSAT production chain in near-real-time mode. The MPE data can therefore be used in a nowcasting mode, and are potentially useful for extreme weather early warning and monitoring. Daily precipitation observed across an in situ gauge network in the north of Tunisia during the period 2007–2009 was used for validation of the MPE extreme event data. As a first test of the MSG MPE product's performance, very light to moderate rainfall classes occurring between January and October 2007 were evaluated. Extreme rainfall events were then selected using a threshold criterion for large rainfall depth (>50 mm/day) occurring at least at one ground station. Spatial interpolation methods were applied to generate rainfall maps for the drier summer season (from May to October) and the wet winter season (from November to April). Interpolated gauge rainfall maps were then compared to MSG MPE data available from the EUMETSAT UMARF archive or from the GEONETCast direct dissemination system. The summation of the MPE data at 5 and/or 15 min time intervals over a 24 h period provided a basis for comparison. The MSG MPE product was not very effective in the detection of very light and light rain events. Better results were obtained for the slightly

  12. Precipitation Estimation Using Combined Radar/Radiometer Measurements Within the GPM Framework

    Science.gov (United States)

    Hou, Arthur

    2012-01-01

    The Global Precipitation Measurement (GPM) Mission is an international satellite mission specifically designed to unify and advance precipitation measurements from a constellation of research and operational microwave sensors. The GPM mission centers upon the deployment of a Core Observatory in a 65° non-Sun-synchronous orbit to serve as a physics observatory and a transfer standard for intersatellite calibration of constellation radiometers. The GPM Core Observatory will carry a Ku/Ka-band Dual-frequency Precipitation Radar (DPR) and a conical-scanning multi-channel (10-183 GHz) GPM Microwave Imager (GMI). The DPR will be the first dual-frequency radar in space to provide not only measurements of 3-D precipitation structures but also quantitative information on the microphysical properties of precipitating particles needed for improving precipitation retrievals from microwave sensors. The DPR and GMI measurements will together provide a database that relates vertical hydrometeor profiles to multi-frequency microwave radiances over a variety of environmental conditions across the globe. This combined database will be used as a common transfer standard for improving the accuracy and consistency of precipitation retrievals from all constellation radiometers. For global coverage, GPM relies on existing satellite programs and new mission opportunities from a consortium of partners through bilateral agreements with either NASA or JAXA. Each constellation member may have its unique scientific or operational objectives but contributes microwave observations to GPM for the generation and dissemination of unified global precipitation data products. In addition to the DPR and GMI on the Core Observatory, the baseline GPM constellation consists of the following sensors: (1) Special Sensor Microwave Imager/Sounder (SSMIS) instruments on the U.S. Defense Meteorological Satellite Program (DMSP) satellites, (2) the Advanced Microwave Scanning Radiometer-2 (AMSR-2) on the GCOM-W1

  13. Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling

    Directory of Open Access Journals (Sweden)

    Hyojin Lee

    2015-01-01

    Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares the estimation of missing precipitation data through K-nearest neighbor (KNN) regression to the five different kernel estimations, and their performance in simulating streamflow using the Soil and Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher-quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
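
    A kernel estimate of a missing value, as in the study above, is a distance-weighted (Nadaraya-Watson style) average over neighboring stations. A minimal sketch using the Epanechnikov kernel; the distances, values, and bandwidth are invented for illustration:

```python
def epanechnikov(u):
    """Epanechnikov kernel: K(u) = 0.75 * (1 - u^2) for |u| <= 1, else 0."""
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def kernel_estimate(distances, values, bandwidth):
    """Kernel-weighted average of neighboring stations' precipitation,
    weighting nearby stations more heavily and ignoring stations
    beyond the bandwidth."""
    weights = [epanechnikov(d / bandwidth) for d in distances]
    total = sum(weights)
    if total == 0.0:
        raise ValueError("no station within the bandwidth")
    return sum(w * v for w, v in zip(weights, values)) / total

# Stations 5, 10 and 30 km away; the 30 km station lies outside
# the 20 km bandwidth, so it contributes nothing
est = kernel_estimate([5.0, 10.0, 30.0], [12.0, 8.0, 50.0], bandwidth=20.0)
```

    Swapping `epanechnikov` for the Quartic, Triweight, Tricube, or Cosine kernels changes only the weight profile, which is exactly the comparison the study performs.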

  14. A spatial approach to the modelling and estimation of areal precipitation

    Energy Technology Data Exchange (ETDEWEB)

    Skaugen, T

    1996-12-31

    In hydroelectric power technology it is important to be able to calculate the mean precipitation falling over an area. This doctoral thesis studies how the morphology of rainfall, described by spatial statistical parameters, can be used to improve interpolation and estimation procedures. It attempts to formulate a theory which includes the relations between the size of the catchment and the size of the precipitation events in the modelling of areal precipitation. The problem of estimating and modelling areal precipitation can be formulated as the problem of estimating an inhomogeneously distributed flux of a certain spatial extent measured at points in a randomly placed domain. The information contained in the differing morphology of precipitation types is used to improve estimation procedures for areal precipitation, by interpolation (kriging) or by constructing areal reduction factors. A new approach to precipitation modelling is introduced in which analysis of the spatial coverage of precipitation at different intensities plays a key role in the formulation of a stochastic model for extreme areal precipitation and in deriving the probability density function of areal precipitation. 127 refs., 30 figs., 13 tabs.

  15. Evaluating Satellite Products for Precipitation Estimation in Mountain Regions: A Case Study for Nepal

    Directory of Open Access Journals (Sweden)

    Tarendra Lakhankar

    2013-08-01

    Precipitation in mountain regions is often highly variable and poorly observed, limiting abilities to manage water resource challenges. Here, we evaluate remote sensing and ground-station-based gridded precipitation products over Nepal against weather station precipitation observations on a monthly timescale. We find that the Tropical Rainfall Measuring Mission (TRMM) 3B-43 precipitation product exhibits little mean bias and reasonable skill in estimating precipitation over Nepal. Compared to station observations, the TRMM precipitation product showed an overall Nash-Sutcliffe efficiency of 0.49, which is similar to the skill of the gridded station-based product Asian Precipitation-Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE). The other satellite precipitation products considered (Global Satellite Mapping of Precipitation (GSMaP), the Climate Prediction Center Morphing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS)) were less skillful, as judged by Nash-Sutcliffe efficiency, and, on average, substantially underestimated precipitation compared to station observations, despite their, in some cases, higher nominal spatial resolution compared to TRMM. None of the products fully captured the dependence of mean precipitation on elevation seen in the station observations. Overall, the TRMM product is promising for use in water resources applications.
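
    The Nash-Sutcliffe efficiency used above compares the squared errors of a product against the variance of the observations: NSE = 1 means a perfect match, NSE = 0 means no better than the observed mean. A minimal sketch with invented monthly values:

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Hypothetical monthly precipitation (mm): gauge vs. satellite product
obs = [100.0, 300.0, 250.0, 80.0]
sat = [110.0, 270.0, 260.0, 60.0]
print(round(nash_sutcliffe(obs, sat), 3))  # -> 0.958
```

    A systematic underestimate, as reported for several of the products, drives the numerator up and the NSE down even when the temporal pattern is right.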

  16. Smile line assessment comparing quantitative measurement and visual estimation

    NARCIS (Netherlands)

    Geld, P. Van der; Oosterveld, P.; Schols, J.; Kuijpers-Jagtman, A.M.

    2011-01-01

    INTRODUCTION: Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation

  17. Precipitation Estimation Using L-Band and C-Band Soil Moisture Retrievals

    Science.gov (United States)

    Koster, Randal D.; Brocca, Luca; Crow, Wade T.; Burgin, Mariko S.; De Lannoy, Gabrielle J. M.

    2016-01-01

    An established methodology for estimating precipitation amounts from satellite-based soil moisture retrievals is applied to L-band products from the Soil Moisture Active Passive (SMAP) and Soil Moisture and Ocean Salinity (SMOS) satellite missions and to a C-band product from the Advanced Scatterometer (ASCAT) mission. The precipitation estimates so obtained are evaluated against in situ (gauge-based) precipitation observations from across the globe. The precipitation estimation skill achieved using the L-band SMAP and SMOS data sets is higher than that obtained with the C-band product, as might be expected given that L-band is sensitive to a thicker layer of soil and thereby provides more information on the response of soil moisture to precipitation. The square of the correlation coefficient between the SMAP-based precipitation estimates and the observations (for aggregations to approximately 100 km and 5 days) is on average about 0.6 in areas of high rain gauge density. Satellite missions specifically designed to monitor soil moisture thus do provide significant information on precipitation variability, information that could contribute to efforts in global precipitation estimation.
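
    The bottom-up methodology referenced above inverts the soil water balance, attributing positive soil moisture increments to rainfall. A heavily simplified sketch: the effective capacity parameter is an invented value, and the drainage/evaporation loss terms of the full method are ignored here:

```python
def rain_from_soil_moisture(sm_series, z_mm=60.0):
    """Estimate rainfall from a relative soil moisture series (0-1)
    by inverting a simplified water balance: P ~ Z * dS between
    retrievals when the soil is wetting; drying steps yield zero.
    z_mm is an assumed effective soil water capacity in mm."""
    rain = [0.0]
    for prev, cur in zip(sm_series, sm_series[1:]):
        ds = cur - prev
        rain.append(z_mm * ds if ds > 0 else 0.0)
    return rain

# Relative soil moisture retrievals bracketing a wetting event
sm = [0.30, 0.30, 0.45, 0.50, 0.48]
print([round(x, 2) for x in rain_from_soil_moisture(sm)])
# -> [0.0, 0.0, 9.0, 3.0, 0.0]
```

    The deeper sensing layer of L-band retrievals makes the dS signal less noisy, which is one way to read the skill advantage over C-band reported above.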

  18. Estimating mountain basin-mean precipitation from streamflow using Bayesian inference

    Science.gov (United States)

    Henn, Brian; Clark, Martyn P.; Kavetski, Dmitri; Lundquist, Jessica D.

    2015-10-01

    Estimating basin-mean precipitation in complex terrain is difficult due to uncertainty in the topographical representativeness of precipitation gauges relative to the basin. To address this issue, we use Bayesian methodology coupled with a multimodel framework to infer basin-mean precipitation from streamflow observations, and we apply this approach to snow-dominated basins in the Sierra Nevada of California. Using streamflow observations, forcing data from lower-elevation stations, the Bayesian Total Error Analysis (BATEA) methodology and the Framework for Understanding Structural Errors (FUSE), we infer basin-mean precipitation, and compare it to basin-mean precipitation estimated using topographically informed interpolation from gauges (PRISM, the Parameter-elevation Regression on Independent Slopes Model). The BATEA-inferred spatial patterns of precipitation show agreement with PRISM in terms of the rank of basins from wet to dry but differ in absolute values. In some of the basins, these differences may reflect biases in PRISM, because some implied PRISM runoff ratios may be inconsistent with the regional climate. We also infer annual time series of basin precipitation using a two-step calibration approach. Assessment of the precision and robustness of the BATEA approach suggests that uncertainty in the BATEA-inferred precipitation is primarily related to uncertainties in hydrologic model structure. Despite these limitations, time series of inferred annual precipitation under different model and parameter assumptions are strongly correlated with one another, suggesting that this approach is capable of resolving year-to-year variability in basin-mean precipitation.

  19. Impact of time displaced precipitation estimates for on-line updated models

    DEFF Research Database (Denmark)

    Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen

    2012-01-01

    When an online runoff model is updated from system measurements the requirements to the precipitation estimates change. Using rain gauge data as precipitation input there will be a displacement between the time where the rain intensity hits the gauge and the time where the rain hits the actual...

  20. Rain cell-based identification of the vertical profile of reflectivity as observed by weather radar and its use for precipitation uncertainty estimation

    Science.gov (United States)

    Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Uijlenhoet, R.

    2012-04-01

The wide scale implementation of weather radar systems over the last couple of decades has increased our understanding of spatio-temporal precipitation dynamics. However, the quantitative estimation of precipitation by these devices is affected by many sources of error. A very dominant source of error results from vertical variations in the hydrometeor size distribution, known as the vertical profile of reflectivity (VPR). Since the height of the measurement as well as the beam volume increases with distance from the radar, for stratiform precipitation this results in a serious underestimation (overestimation) of the surface reflectivity while sampling within the snow (bright band) region. This research presents a precipitation cell-based implementation to correct volumetric weather radar measurements for VPR effects. Using the properties of a flipping carpenter square, a contour-based identification technique was developed, which is able to identify and track precipitation cells in real time, distinguishing between convective, stratiform and undefined precipitation. For the latter two types of systems, for each individual cell, a physically plausible vertical profile of reflectivity is estimated using a Monte Carlo optimization method. Since it can be expected that the VPR will vary within a given precipitation cell, a method was developed to take the uncertainty of the VPR estimate into account. As a result, we are able to estimate the amount of precipitation uncertainty as observed by weather radar due to VPR for a given precipitation type and storm cell. We demonstrate the possibilities of this technique for a number of winter precipitation systems observed within the Belgian Ardennes. For these systems, in general, the precipitation uncertainty estimate due to vertical reflectivity profile variations varies between 10% and 40%.
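The essence of a VPR correction, projecting a reflectivity measured aloft down to the surface with a normalized profile, can be sketched as follows. The profile shape and numbers are invented for illustration and are not the paper's estimated profiles.

```python
import math

def correct_for_vpr(z_measured_dbz, beam_height_km, vpr):
    """Project a reflectivity measured aloft to surface level.

    vpr maps height (km) to the normalized profile value (surface = 1.0);
    the correction adds the dB difference between surface and beam height.
    """
    heights = sorted(vpr)
    # clamp the beam height into the profile's range, then interpolate
    h = min(max(beam_height_km, heights[0]), heights[-1])
    for h0, h1 in zip(heights, heights[1:]):
        if h0 <= h <= h1:
            frac = (h - h0) / (h1 - h0) if h1 > h0 else 0.0
            p = vpr[h0] + frac * (vpr[h1] - vpr[h0])
            break
    return z_measured_dbz + 10.0 * math.log10(vpr[heights[0]] / p)

# stratiform-like profile: bright band near 2 km, weaker snow echoes above
profile = {0.0: 1.0, 1.5: 1.0, 2.0: 2.5, 2.5: 0.6, 4.0: 0.2}
print(correct_for_vpr(30.0, 3.0, profile))  # sampling snow: corrected upward
print(correct_for_vpr(30.0, 2.0, profile))  # sampling bright band: downward
```

This reproduces the sign of the biases described above: sampling in snow underestimates the surface value, sampling in the bright band overestimates it.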

  1. Quantitative atom probe analysis of nanostructure containing clusters and precipitates with multiple length scales

    International Nuclear Information System (INIS)

    Marceau, R.K.W.; Stephenson, L.T.; Hutchinson, C.R.; Ringer, S.P.

    2011-01-01

A model Al-3Cu-(0.05 Sn) (wt%) alloy containing a bimodal distribution of relatively shear-resistant θ' precipitates and shearable GP zones is considered in this study. It has recently been shown that the addition of the GP zones to such microstructures can lead to significant increases in strength without a decrease in the uniform elongation. In this study, atom probe tomography (APT) has been used to quantitatively characterise the evolution of the GP zones and the solute distribution in the bimodal microstructure as a function of applied plastic strain. Recent nuclear magnetic resonance (NMR) analysis has clearly shown strain-induced dissolution of the GP zones, which is supported by the current APT data with additional spatial information. There is significant repartitioning of Cu from the GP zones into the solid solution during deformation. A new approach for cluster finding in APT data has been used to quantitatively characterise the evolution of the sizes and shapes of the Cu-containing features in the solid solution as a function of applied strain. -- Research highlights: → A new approach for cluster finding in atom probe tomography (APT) data has been used to quantitatively characterise the evolution of the sizes and shapes of the Cu-containing features with multiple length scales. → In this study, a model Al-3Cu-(0.05 Sn) (wt%) alloy containing a bimodal distribution of relatively shear-resistant θ' precipitates and shearable GP zones is considered. → APT has been used to quantitatively characterise the evolution of the GP zones and the solute distribution in the bimodal microstructure as a function of applied plastic strain. → It is clearly shown that there is strain-induced dissolution of the GP zones with significant repartitioning of Cu from the GP zones into the solid solution during deformation.

  2. Smile line assessment comparing quantitative measurement and visual estimation.

    Science.gov (United States)

    Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie

    2011-02-01

    Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  3. Rainfall estimation in SWAT: An alternative method to simulate orographic precipitation

    Science.gov (United States)

    Galván, L.; Olías, M.; Izquierdo, T.; Cerón, J. C.; Fernández de Villarán, R.

    2014-02-01

The input of water from precipitation is one of the most important aspects of a hydrologic model because it controls the basin's water budget. The model should reproduce the amount and distribution of rainfall in the basin, both spatially and temporally. SWAT (Soil and Water Assessment Tool) is one of the most widely used hydrologic models. In this paper the rainfall estimation in SWAT is revised, focusing on the treatment of orographic precipitation. SWAT was applied to the Odiel river basin (SW Spain), with an area of 2300 km2. Results show that SWAT does not realistically reflect the spatial distribution of rainfall in the basin. In relation to orographic precipitation, SWAT estimates the daily precipitation in elevation bands by adding a constant amount to the recorded precipitation in the rain gauge, which depends on the increase in precipitation with altitude and the difference between the mean elevation of each band and the elevation of the recording gauge. This does not reflect rainfall in the subbasin because the increase in precipitation with altitude is not actually constant but depends on the amount of rainfall. An alternative methodology to represent the temporal distribution of orographic precipitation is proposed. After simulation, the deviation of runoff volume using the SWAT elevation bands was appreciably higher than that obtained with the proposed methodology.
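The fixed-increment elevation-band adjustment that the authors criticize can be written in a few lines. This is a sketch of the SWAT-style formula (lapse rate in mm H2O per km per year, spread over the average number of wet days); the parameter values are illustrative, not calibrated ones.

```python
def band_precip(p_gauge_mm, band_elev_m, gauge_elev_m,
                plaps_mm_per_km=100.0, wet_days_per_year=100):
    """SWAT-style orographic adjustment: on days with rainfall, add a fixed
    amount proportional to the elevation difference between the band's mean
    elevation and the recording gauge, independent of the rainfall amount."""
    if p_gauge_mm <= 0.0:          # no adjustment on dry days
        return 0.0
    delta_km = (band_elev_m - gauge_elev_m) / 1000.0
    adjusted = p_gauge_mm + delta_km * plaps_mm_per_km / wet_days_per_year
    return max(adjusted, 0.0)

print(band_precip(10.0, 1500.0, 500.0))  # 10 + 1.0*100/100 = 11.0 mm
```

Note that the added increment is the same for a 1 mm day and a 100 mm day, which is exactly the behaviour the proposed alternative (scaling with the rainfall amount) is meant to fix.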

  4. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). 
Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
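The bias-error procedure described above, keeping products within ±50% of GPCP and taking their standard deviation, can be sketched for a single grid cell. In the paper the ±50% screening is applied on a zonal-mean basis; the per-cell version and the product values below are simplifications for illustration.

```python
def bias_error(gpcp, others, tol=0.5):
    """Estimated bias error: std-dev of the products within +/-tol of the
    GPCP value (GPCP itself included). Returns (sigma, sigma/mean)."""
    kept = [gpcp] + [v for v in others if abs(v - gpcp) <= tol * gpcp]
    mean = sum(kept) / len(kept)
    var = sum((v - mean) ** 2 for v in kept) / len(kept)
    sigma = var ** 0.5
    return sigma, sigma / mean

# mm/day from GPCP and three hypothetical input products; 12.0 is screened out
sigma, rel = bias_error(gpcp=5.0, others=[4.0, 6.0, 12.0])
print(round(sigma, 3), round(rel, 3))
```

The relative value sigma/mean corresponds to the s/m quantity quoted in the abstract (e.g. the ~20% estimates over the eastern Pacific).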

  5. Quantitative characterization and comparison of precipitate and grain shape in Nickel-base superalloys using moment invariants

    Science.gov (United States)

    Callahan, Patrick Gregory

A fundamental objective of materials science and engineering is to understand the structure-property-processing-performance relationship. We need to know the true 3-D microstructure of a material to understand certain geometric properties of a material, and thus fulfill this objective. Focused ion beam (FIB) serial sectioning allows us to find the true 3-D microstructure of Ni-base superalloys. Once the true 3-D microstructure is obtained, an accurate quantitative description and characterization of precipitate and/or grain shapes is needed to understand the microstructure and describe it in an unbiased way. In this thesis, second order moment invariants, the shape quotient Q, a convexity measure relating the volume of an object to the volume of its convex hull, V/Vconv, and Gaussian curvature have been used to compare an experimentally observed polycrystalline IN100 microstructure to three synthetic microstructures. The three synthetic microstructures used different shape classes to produce starting grain shapes. The three shape classes are ellipsoids, superellipsoids, and the shapes generated when truncating a cube with an octahedron. The microstructures are compared using a distance measure, the Hellinger distance. The Hellinger distance is used to compare distributions of shape descriptors for the grains in each microstructure. The synthetic microstructure that has the smallest Hellinger distance, and so best matched the experimentally observed microstructure is the microstructure that used superellipsoids as a starting grain shape. While it has the smallest Hellinger distance, and is approaching realistic grain morphologies, the superellipsoidal microstructure is still not realistic. Second order moment invariants, Q, and V/Vconv have also been used to characterize the γ' precipitate shapes from four experimental Ru-containing Ni-base superalloys with differences in alloying additions. The superalloys are designated UM-F9, UM-F18, UM-F19, and UM-F22. The
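The Hellinger distance used here to compare shape-descriptor distributions has a simple closed form for discrete (binned) distributions, shown below for reference.

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions on the same
    bins: sqrt(0.5 * sum((sqrt(p_i) - sqrt(q_i))^2)).
    Ranges from 0 (identical) to 1 (disjoint support)."""
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

print(hellinger([0.2, 0.3, 0.5], [0.2, 0.3, 0.5]))  # identical -> 0.0
print(hellinger([1.0, 0.0], [0.0, 1.0]))            # disjoint  -> 1.0
```

In the thesis the inputs would be normalized histograms of a shape descriptor (e.g. Q or V/Vconv) over all grains of each microstructure; the synthetic microstructure with the smallest distance is the best match.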

  6. THE QUANTITATIVE COMPONENT’S DIAGNOSIS OF THE ATMOSPHERIC PRECIPITATION CONDITION IN BAIA MARE URBAN AREA

    Directory of Open Access Journals (Sweden)

    S. ZAHARIA

    2012-12-01

Full Text Available Atmospheric precipitation, an essential meteorological element in defining the climatic potential of a region, exerts through its general and local particularities a defining influence on the evolution of the other climatic parameters, conditioning the structure of the overall geographic landscape. Its quantitative parameters shape the regional natural setting and the differentiation of water, soil, vegetation and fauna resources, while also influencing most aspects of human activity through their impact on agriculture, transportation, construction, tourism, etc. In particular, through the evolution of the related climatic parameters (precipitation type, quantity, duration, frequency, intensity, and their spatial and temporal fluctuations), pluviometric extremes represent the most energetic manifestation of hydroclimatic hazards/risks, inducing unfavourable or even damaging conditions for human activities. Precipitation surpluses can trigger or reactivate intense erosion processes and landslides and, not least, floods. Just as dangerous are deficient amounts of precipitation, or their absence over longer periods, which lead to droughts and aridity phenomena; associated with sharp anthropic pressure on the environment, these favour the expansion of desertification and its attendant negative effects. In this context, this paper performs a diagnosis of the atmospheric precipitation regime in the Baia Mare urban area through its quantitative component under multiannual conditions (1971-2007), underlining, through the analysis and interpretation of the climatic data, the main characteristics that define it. The data bank from the Baia Mare station of the National Meteorological Administration network, representative of the chosen study area, was used. Baia

  7. Long-Term Precipitation Analysis and Estimation of Precipitation Concentration Index Using Three Support Vector Machine Methods

    Directory of Open Access Journals (Sweden)

    Milan Gocic

    2016-01-01

Full Text Available The monthly precipitation data from 29 stations in Serbia during the period of 1946–2012 were considered. Precipitation trends were calculated using the linear regression method. Three CLINO periods (1961–1990, 1971–2000, and 1981–2010) in three subregions were analysed. The CLINO 1981–2010 period had a significant increasing trend. The spatial pattern of the precipitation concentration index (PCI) was presented. For the purpose of PCI prediction, three Support Vector Machine (SVM) models, namely, SVM coupled with the discrete wavelet transform (SVM-Wavelet), SVM coupled with the firefly algorithm (SVM-FFA), and SVM using the radial basis function (SVM-RBF), were developed and used. The estimation and prediction results of these models were compared with each other using three statistical indicators, that is, root mean square error, coefficient of determination, and coefficient of efficiency. The experimental results showed that an improvement in predictive accuracy and capability of generalization can be achieved by the SVM-Wavelet approach. Moreover, the results indicated the proposed SVM-Wavelet model can adequately predict the PCI.
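The precipitation concentration index that the SVM models predict is itself a simple statistic of the 12 monthly totals (the formulation commonly attributed to Oliver); a uniform year gives the minimum value of about 8.3.

```python
def pci(monthly_mm):
    """Precipitation concentration index: 100 * sum(p_i^2) / (sum p_i)^2
    over the 12 monthly totals. ~8.3 for perfectly uniform monthly rainfall,
    approaching 100 when all rain falls in a single month."""
    assert len(monthly_mm) == 12
    total = sum(monthly_mm)
    return 100.0 * sum(p * p for p in monthly_mm) / (total * total)

print(round(pci([50.0] * 12), 2))           # uniform year -> 8.33
print(round(pci([600.0] + [0.0] * 11), 2))  # fully concentrated -> 100.0
```

Values below about 10 are usually read as uniform regimes, 11-20 as seasonal, and above 20 as strongly concentrated.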

  8. A new methodology for pixel-quantitative precipitation nowcasting using a pyramid Lucas Kanade optical flow approach

    Science.gov (United States)

    Liu, Yu; Xi, Du-Gang; Li, Zhao-Liang; Hong, Yang

    2015-10-01

    Short-term high-resolution Quantitative Precipitation Nowcasting (QPN) has important implications for navigation, flood forecasting, and other hydrological and meteorological concerns. This study proposes a new algorithm called Pixel-based QPN using the Pyramid Lucas-Kanade Optical Flow method (PPLK), which comprises three steps: employing a Pyramid Lucas-Kanade Optical Flow method (PLKOF) to estimate precipitation advection, projecting rainy clouds by considering the advection and evolution pixel by pixel, and interpolating QPN imagery based on the space-time continuum of cloud patches. The PPLK methodology was evaluated with 2338 images from the geostationary meteorological satellite Fengyun-2F (FY-2F) of China and compared with two other advection-based methods, i.e., the maximum correlation method and the Horn-Schunck Optical Flow scheme. The data sample covered all intensive observations since the launch of FY-2F, despite covering a total of only approximately 10 days. The results show that the PPLK performed better than the algorithms used for comparison, demonstrating less time expenditure, more effective cloud tracking, and improved QPN accuracy.
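The Lucas-Kanade step at the heart of the PPLK advection estimate can be illustrated with a minimal single-level (no pyramid) least-squares solver on synthetic data; the paper's method adds an image pyramid to handle large displacements, which is omitted here.

```python
import numpy as np

def lucas_kanade_shift(f0, f1):
    """Single-level Lucas-Kanade: solve I_x*u + I_y*v = -I_t by least
    squares over the whole window to estimate one (u, v) advection vector
    in pixels per frame."""
    ix = np.gradient(f0, axis=1)   # spatial gradients of the first frame
    iy = np.gradient(f0, axis=0)
    it = f1 - f0                   # temporal difference
    A = np.stack([ix.ravel(), iy.ravel()], axis=1)
    uv, *_ = np.linalg.lstsq(A, -it.ravel(), rcond=None)
    return uv

# synthetic "cloud" blob advected one pixel to the right between frames
x, y = np.meshgrid(np.arange(32), np.arange(32))
blob = lambda cx: np.exp(-((x - cx) ** 2 + (y - 16) ** 2) / 20.0)
u, v = lucas_kanade_shift(blob(15.0), blob(16.0))
print(round(float(u), 1), round(float(v), 1))  # close to (1.0, 0.0)
```

A nowcast then advects the current field along (u, v), which is the "projecting rainy clouds by considering the advection" step described above.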

  9. Quantitative estimation of diacetylmorphine by preparative TLC and UV spectroscopy

    International Nuclear Information System (INIS)

    Khan, L.; Siddiqui, M.T.; Ahmad, N.; Shafi, N.

    2001-01-01

A simple and efficient method for the quantitative estimation of diacetylmorphine in narcotic products has been described. Comparative TLC of narcotic specimens with standards showed the presence of morphine, monoacetylmorphine, diacetylmorphine, papaverine and noscapine. Resolution of the mixtures was achieved by preparative TLC. Bands corresponding to diacetylmorphine were scraped and eluted, the UV absorption of the extracts was measured, and the contents were quantified. (author)
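The final quantitation step follows the Beer-Lambert law (absorbance proportional to concentration); a calibration-curve version can be sketched as below. The absorptivity implied by the standards is invented, not a value from the paper.

```python
def concentration_from_absorbance(a_sample, standards):
    """Quantify an analyte from UV absorbance via a linear calibration
    forced through the origin (Beer-Lambert: A = k*c).
    standards: list of (concentration, absorbance) pairs."""
    # least-squares slope through the origin: k = sum(c*A) / sum(c^2)
    k = (sum(c * a for c, a in standards) /
         sum(c * c for c, _ in standards))
    return a_sample / k

# hypothetical diacetylmorphine standards: (conc mg/L, absorbance)
stds = [(5.0, 0.11), (10.0, 0.20), (20.0, 0.41)]
print(round(concentration_from_absorbance(0.30, stds), 1))  # -> 14.7
```

In practice the calibration would be run at the analyte's absorption maximum and checked for linearity over the working range.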

  10. Quantitative estimation of pollution in groundwater and surface ...

    African Journals Online (AJOL)

    Quantitative estimation of pollution in groundwater and surface water in Benin City and environs. ... Ethiopian Journal of Environmental Studies and Management ... Physico-chemical parameters were compared with regulatory standards from Federal Ministry of Environment for drinking water and they all fell within ...

  11. REAL - Ensemble radar precipitation estimation for hydrology in a mountainous region

    OpenAIRE

    Germann, Urs; Berenguer Ferrer, Marc; Sempere Torres, Daniel; Zappa, Massimiliano

    2009-01-01

    An elegant solution to characterise the residual errors in radar precipitation estimates is to generate an ensemble of precipitation fields. The paper proposes a radar ensemble generator designed for usage in the Alps using LU decomposition (REAL), and presents first results from a real-time implementation coupling the radar ensemble with a semi-distributed rainfall–runoff model for flash flood modelling in a steep Alpine catchment. Each member of the radar ensemble is a possible realisati...
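The core of such an ensemble generator, perturbing the radar field with spatially correlated noise whose covariance matches the estimated radar error covariance via a triangular decomposition, can be sketched with a toy covariance (REAL uses a calibrated, much larger one).

```python
import numpy as np

def ensemble_members(field_db, cov, n_members, seed=0):
    """Perturb a flattened radar field (in dB) with correlated noise:
    eps = L @ z, where cov = L @ L.T (Cholesky, a triangular factorization
    as in LU) and z is i.i.d. standard normal."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)
    z = rng.standard_normal((len(field_db), n_members))
    return field_db[:, None] + L @ z

# toy 3-pixel field, error variance 2 dB^2, correlation decaying with distance
cov = 2.0 * np.array([[1.0, 0.6, 0.36],
                      [0.6, 1.0, 0.6],
                      [0.36, 0.6, 1.0]])
members = ensemble_members(np.array([30.0, 32.0, 28.0]), cov, 5000)
print(members.shape)  # (3, 5000)
```

Each column is one equally likely precipitation field; feeding every member through the rainfall-runoff model yields the ensemble of discharge forecasts described in the paper.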

  12. Operational 0-3 h probabilistic quantitative precipitation forecasts: Recent performance and potential enhancements

    Science.gov (United States)

    Sokol, Z.; Kitzmiller, D.; Pešice, P.; Guan, S.

    2009-05-01

    The NOAA National Weather Service has maintained an automated, centralized 0-3 h prediction system for probabilistic quantitative precipitation forecasts since 2001. This advective-statistical system (ADSTAT) produces probabilities that rainfall will exceed multiple threshold values up to 50 mm at some location within a 40-km grid box. Operational characteristics and development methods for the system are described. Although development data were stratified by season and time of day, ADSTAT utilizes only a single set of nation-wide equations that relate predictor variables derived from radar reflectivity, lightning, satellite infrared temperatures, and numerical prediction model output to rainfall occurrence. A verification study documented herein showed that the operational ADSTAT reliably models regional variations in the relative frequency of heavy rain events. This was true even in the western United States, where no regional-scale, gridded hourly precipitation data were available during the development period in the 1990s. An effort was recently launched to improve the quality of ADSTAT forecasts by regionalizing the prediction equations and to adapt the model for application in the Czech Republic. We have experimented with incorporating various levels of regional specificity in the probability equations. The geographic localization study showed that in the warm season, regional climate differences and variations in the diurnal temperature cycle have a marked effect on the predictor-predictand relationships, and thus regionalization would lead to better statistical reliability in the forecasts.
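A probability-of-exceedance equation of the kind ADSTAT applies can be illustrated with a generic logistic form. The predictors follow the abstract (radar reflectivity, lightning, satellite IR), but the coefficients and functional details below are entirely invented; the operational equations are not given in this abstract.

```python
import math

def prob_exceed(reflectivity_dbz, lightning_rate, ir_temp_k,
                w=(-4.0, 0.08, 0.15, -0.01), t_ref=240.0):
    """Hypothetical logistic model for P(rainfall > threshold) in a grid box
    from radar, lightning, and satellite-IR predictors. Illustrative only."""
    s = (w[0] + w[1] * reflectivity_dbz + w[2] * lightning_rate
         + w[3] * (ir_temp_k - t_ref))
    return 1.0 / (1.0 + math.exp(-s))

print(round(prob_exceed(45.0, 3.0, 210.0), 2))  # strong convective signature
print(round(prob_exceed(15.0, 0.0, 270.0), 2))  # weak echo, warm cloud tops
```

Regionalizing the system, as discussed above, amounts to fitting separate coefficient sets w per region or season rather than one nationwide set.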

  13. GPM Precipitation Estimates over the Walnut Gulch Experimental Watershed/LTAR site in Southeastern Arizona

    Science.gov (United States)

    Goodrich, D. C.; Tan, J.; Petersen, W. A.; Unkrich, C. C.; Demaria, E. M.; Hazenberg, P.; Lakshmi, V.

    2017-12-01

Precipitation profiles from the GPM Core Observatory Dual-frequency Precipitation Radar (DPR) form part of the a priori database used in the GPM Goddard Profiling (GPROF) algorithm's passive microwave radiometer retrievals of rainfall. The GPROF retrievals are in turn used as high quality precipitation estimates in gridded products such as IMERG. Due to the variability in and high surface emissivity of land surfaces, GPROF performs precipitation retrievals as a function of surface classes. As such, different surface types may possess different error characteristics, especially over arid regions where high quality ground measurements are often lacking. Importantly, the emissive properties of land also result in GPROF rainfall estimates being driven primarily by the higher frequency radiometer channels (e.g., > 89 GHz) where precipitation signals are most sensitive to coupling between the ice-phase and rainfall production. In this study, we evaluate the rainfall estimates from the Ku channel of the DPR as well as GPROF estimates from various passive microwave sensors. Our evaluation is conducted at the level of individual satellite pixels (5 to 15 km in diameter), against a dense network of weighing rain gauges (90 gauges in 150 km2) in the USDA-ARS Walnut Gulch Experimental Watershed and Long-Term Agroecosystem Research (LTAR) site in southeastern Arizona. The multiple gauges in each satellite pixel and precise accumulations about the overpass time allow a spatially and temporally representative comparison between the satellite estimates and the ground reference. Over Walnut Gulch, both the Ku and GPROF estimates are challenged to delineate between rain and no-rain. Probabilities of detection are relatively high, but false alarm ratios are also high. The rain intensities possess a negative bias across nearly all sensors. It is likely that storm types, arid conditions and the highly variable precipitation regime present a challenge to both rainfall retrieval algorithms. An array of
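The detection statistics quoted above come from a standard 2x2 contingency table; for reference, with a made-up sample of (satellite, gauge) pairs:

```python
def detection_scores(pairs, thresh=0.1):
    """POD and FAR from (satellite, gauge) rain-rate pairs (mm/h).
    hits: both >= thresh; misses: gauge only; false alarms: satellite only."""
    hits = sum(1 for s, g in pairs if s >= thresh and g >= thresh)
    misses = sum(1 for s, g in pairs if s < thresh and g >= thresh)
    false_al = sum(1 for s, g in pairs if s >= thresh and g < thresh)
    pod = hits / (hits + misses) if hits + misses else float('nan')
    far = false_al / (hits + false_al) if hits + false_al else float('nan')
    return pod, far

# toy overpass sample: decent detection but frequent false alarms
sample = [(1.2, 0.8), (0.5, 0.0), (0.0, 0.3), (2.0, 1.5),
          (0.4, 0.0), (0.9, 0.6), (0.3, 0.0)]
pod, far = detection_scores(sample)
print(round(pod, 2), round(far, 2))  # 0.75 0.5
```

A "high POD but high FAR" outcome like this one mirrors the behaviour reported for the Ku and GPROF estimates over Walnut Gulch.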

  14. Surface Runoff Estimation Using SMOS Observations, Rain-gauge Measurements and Satellite Precipitation Estimations. Comparison with Model Predictions

    Science.gov (United States)

    Garcia Leal, Julio A.; Lopez-Baeza, Ernesto; Khodayar, Samiro; Estrela, Teodoro; Fidalgo, Arancha; Gabaldo, Onofre; Kuligowski, Robert; Herrera, Eddy

Surface runoff is defined as the amount of water that originates from precipitation, does not infiltrate due to soil saturation, and therefore flows over the surface. A good estimation of runoff is useful for the design of drainage systems, structures for flood control, and soil utilisation. For runoff estimation there exist different methods such as (i) the rational method, (ii) the isochrone method, (iii) the triangular hydrograph, (iv) the non-dimensional SCS hydrograph, (v) the Temez hydrograph, (vi) the kinematic wave model, represented by the dynamic and kinematic equations for a uniform precipitation regime, and (vii) the SCS-CN (Soil Conservation Service Curve Number) model. This work presents a way of estimating precipitation runoff through the SCS-CN model, using SMOS (Soil Moisture and Ocean Salinity) mission soil moisture observations and rain-gauge measurements, as well as satellite precipitation estimations. The area of application is the Jucar River Basin Authority area, where one of the objectives is to develop the SCS-CN model in a spatially distributed way. The results were compared to simulations performed with the 7-km COSMO-CLM (COnsortium for Small-scale MOdelling, COSMO model in CLimate Mode) model. The use of SMOS soil moisture as input to the COSMO-CLM model will certainly improve model simulations.
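Of the methods listed, the SCS-CN model used in this work has a compact closed form; a sketch with the standard initial-abstraction assumption Ia = 0.2S follows (the curve numbers below are illustrative, and in the study CN would be conditioned on SMOS soil moisture).

```python
def scs_cn_runoff(p_mm, cn):
    """SCS Curve Number direct runoff (mm) for a storm depth p_mm.
    S is the potential maximum retention (metric form); the standard
    initial abstraction Ia = 0.2*S is assumed."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0                     # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# higher CN (wetter / more impervious conditions) -> more runoff
print(round(scs_cn_runoff(50.0, 85), 1))  # -> 19.6 mm
print(round(scs_cn_runoff(50.0, 60), 1))  # -> 1.4 mm
```

Making the model "spatially distributed" amounts to evaluating this per grid cell with a CN map adjusted for the antecedent soil moisture state.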

  15. Estimation of the impact of climate change-induced extreme precipitation events on floods

    Science.gov (United States)

    Hlavčová, Kamila; Lapin, Milan; Valent, Peter; Szolgay, Ján; Kohnová, Silvia; Rončák, Peter

    2015-09-01

    In order to estimate possible changes in the flood regime in the mountainous regions of Slovakia, a simple physically-based concept for climate change-induced changes in extreme 5-day precipitation totals is proposed in the paper. It utilizes regionally downscaled scenarios of the long-term monthly means of the air temperature, specific air humidity and precipitation projected for Central Slovakia by two regional (RCM) and two global circulation models (GCM). A simplified physically-based model for the calculation of short-term precipitation totals over the course of changing air temperatures, which is used to drive a conceptual rainfall-runoff model, was proposed. In the paper a case study of this approach in the upper Hron river basin in Central Slovakia is presented. From the 1981-2010 period, 20 events of the basin's most extreme average of 5-day precipitation totals were selected. Only events with continual precipitation during 5 days were considered. These 5-day precipitation totals were modified according to the RCM and GCM-based scenarios for the future time horizons of 2025, 2050 and 2075. For modelling runoff under changed 5-day precipitation totals, a conceptual rainfall-runoff model developed at the Slovak University of Technology was used. Changes in extreme mean daily discharges due to climate change were compared with the original flood events and discussed.

  16. Development of Deep Learning Based Data Fusion Approach for Accurate Rainfall Estimation Using Ground Radar and Satellite Precipitation Products

    Science.gov (United States)

    Chen, H.; Chandra, C. V.; Tan, H.; Cifelli, R.; Xie, P.

    2016-12-01

Rainfall estimation based on onboard satellite measurements has been an important topic in satellite meteorology for decades. A number of precipitation products at multiple time and space scales have been developed based upon satellite observations. For example, NOAA Climate Prediction Center has developed a morphing technique (i.e., CMORPH) to produce global precipitation products by combining existing space based rainfall estimates. The CMORPH products are essentially derived based on geostationary satellite IR brightness temperature information and retrievals from passive microwave measurements (Joyce et al. 2004). Although the space-based precipitation products provide an excellent tool for regional and global hydrologic and climate studies as well as improved situational awareness for operational forecasts, their accuracy is limited by sampling limitations, particularly for extreme events such as very light and/or heavy rain. On the other hand, ground-based radar is a more mature science for quantitative precipitation estimation (QPE), especially after the implementation of the dual-polarization technique and further enhanced by urban scale radar networks. Therefore, ground radars are often critical for providing local scale rainfall estimation and a "heads-up" for operational forecasters to issue watches and warnings as well as validation of various space measurements and products. The CASA DFW QPE system, which is based on dual-polarization X-band CASA radars and a local S-band WSR-88DP radar, has demonstrated its excellent performance during several years of operation in a variety of precipitation regimes. The real-time CASA DFW QPE products are used extensively for localized hydrometeorological applications such as urban flash flood forecasting. In this paper, a neural network based data fusion mechanism is introduced to improve the satellite-based CMORPH precipitation product by taking into account the ground radar measurements.
A deep learning system is

  17. Effects of the Forecasting Methods, Precipitation Character, and Satellite Resolution on the Predictability of Short-Term Quantitative Precipitation Nowcasting (QPN) from a Geostationary Satellite.

    Directory of Open Access Journals (Sweden)

    Yu Liu

Full Text Available The prediction of short-term quantitative precipitation nowcasting (QPN) from consecutive geostationary satellite images has important implications for hydro-meteorological modeling and forecasting. However, systematic analysis of the predictability of QPN is limited. The objective of this study is to evaluate the effects of the forecasting model, precipitation character, and satellite resolution on the predictability of QPN using images of the Chinese geostationary meteorological satellite Fengyun-2F (FY-2F), which covered all intensive observations since its launch, despite totalling only approximately 10 days. In the first step, three methods were compared to evaluate the performance of the QPN methods: a pixel-based QPN using the maximum correlation method (PMC); the Horn-Schunck optical-flow scheme (PHS); and the Pyramid Lucas-Kanade Optical Flow method (PPLK), which is newly proposed here. Subsequently, the effect of the precipitation systems was examined using 2338 images from 8 precipitation periods. Then, the resolution dependence was demonstrated by analyzing the QPN at six spatial resolutions (from 0.1° to 0.6°). The results show that the PPLK improves the predictability of QPN, performing better than the comparison methods. The predictability of the QPN is significantly determined by the precipitation system, and a coarse spatial resolution of the satellite reduces the predictability of QPN.

  18. Effects of the Forecasting Methods, Precipitation Character, and Satellite Resolution on the Predictability of Short-Term Quantitative Precipitation Nowcasting (QPN) from a Geostationary Satellite.

    Science.gov (United States)

    Liu, Yu; Xi, Du-Gang; Li, Zhao-Liang; Ji, Wei

    2015-01-01

The prediction of short-term quantitative precipitation nowcasting (QPN) from consecutive geostationary satellite images has important implications for hydro-meteorological modeling and forecasting. However, systematic analysis of the predictability of QPN is limited. The objective of this study is to evaluate the effects of the forecasting model, precipitation character, and satellite resolution on the predictability of QPN using images of the Chinese geostationary meteorological satellite Fengyun-2F (FY-2F), which covered all intensive observations since its launch, despite totalling only approximately 10 days. In the first step, three methods were compared to evaluate the performance of the QPN methods: a pixel-based QPN using the maximum correlation method (PMC); the Horn-Schunck optical-flow scheme (PHS); and the Pyramid Lucas-Kanade Optical Flow method (PPLK), which is newly proposed here. Subsequently, the effect of the precipitation systems was examined using 2338 images from 8 precipitation periods. Then, the resolution dependence was demonstrated by analyzing the QPN at six spatial resolutions (from 0.1° to 0.6°). The results show that the PPLK improves the predictability of QPN, performing better than the comparison methods. The predictability of the QPN is significantly determined by the precipitation system, and a coarse spatial resolution of the satellite reduces the predictability of QPN.

  19. Evaluating the applicability of four recent satellite–gauge combined precipitation estimates for extreme precipitation and streamflow predictions over the upper Yellow river basin in China

    Science.gov (United States)

    This study aimed to statistically and hydrologically assess the performance of four latest and widely used satellite–gauge combined precipitation estimates (SGPEs), namely CRT, BLD, 3B42CDR, and 3B42 for the extreme precipitation and streamflow scenarios over the upper Yellow river basin (UYRB) in ch...

  20. Stochastic evaluation of tsunami inundation and quantitative estimating tsunami risk

    International Nuclear Information System (INIS)

    Fukutani, Yo; Anawat, Suppasri; Abe, Yoshi; Imamura, Fumihiko

    2014-01-01

    We performed a stochastic evaluation of tsunami inundation using the results of a stochastic tsunami hazard assessment at the Soma port in the Tohoku coastal area. Eleven fault zones along the Japan trench were selected as earthquake faults generating tsunamis. The results show that the estimated inundation area for a return period of about 1200 years agrees well with that of the 2011 Tohoku earthquake. In addition, we quantitatively evaluated tsunami risk for four building types (reinforced concrete, steel, brick, and wood) at the Soma port by combining the results of the inundation assessment and a tsunami fragility assessment. The quantitative risk estimates properly reflect the vulnerability of the buildings: the wood building has high risk and the reinforced concrete building has low risk. (author)
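Combining an inundation (hazard) assessment with building fragility, as the study describes, can be sketched as follows. The lognormal fragility form and every parameter value below are illustrative assumptions, not figures from the study:

```python
import math

def fragility(depth, median, beta):
    """Lognormal fragility curve: probability of damage given inundation
    depth (m). `median` is the depth at 50% damage probability and `beta`
    the lognormal standard deviation; both are illustrative here."""
    if depth <= 0:
        return 0.0
    z = (math.log(depth) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# illustrative hazard: (inundation depth in m, annual probability of that bin)
hazard = [(0.5, 1 / 200), (2.0, 1 / 1000), (5.0, 1 / 1200)]

# illustrative fragility medians: wood is fragile, reinforced concrete robust
buildings = {"wood": 1.0, "brick": 2.0, "steel": 3.0, "RC": 5.0}

# annual damage probability = sum over hazard bins of p(hazard) * p(damage|hazard)
risk = {name: sum(p * fragility(d, med, beta=0.75) for d, p in hazard)
        for name, med in buildings.items()}
```

With these assumed curves the wood building comes out riskiest and the reinforced concrete (RC) building least risky, matching the qualitative ordering reported in the abstract.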

  1. Systematical estimation of GPM-based global satellite mapping of precipitation products over China

    Science.gov (United States)

    Zhao, Haigen; Yang, Bogang; Yang, Shengtian; Huang, Yingchun; Dong, Guotao; Bai, Juan; Wang, Zhiwei

    2018-03-01

    As the Global Precipitation Measurement (GPM) Core Observatory satellite continues its mission, new version 6 products for Global Satellite Mapping of Precipitation (GSMaP) have been released. However, few studies have systematically evaluated the GSMaP products over mainland China. This study quantitatively evaluated three GPM-based GSMaP version 6 precipitation products for China and eight subregions referring to the Chinese daily Precipitation Analysis Product (CPAP). The GSMaP products included near-real-time (GSMaP_NRT), microwave-infrared reanalyzed (GSMaP_MVK), and gauge-adjusted (GSMaP_Gau) data. Additionally, the gauge-adjusted Integrated Multi-Satellite Retrievals for Global Precipitation Measurement Mission (IMERG_Gau) was also assessed and compared with GSMaP_Gau. The analyses of the selected daily products were carried out at spatiotemporal resolutions of 1/4° for the period of March 2014 to December 2015 in consideration of the resolution of CPAP and the consistency of the coverage periods of the satellite products. The results indicated that GSMaP_MVK and GSMaP_NRT performed comparably and underdetected light rainfall events, as reflected in the Pearson linear correlation coefficient (CC), fractional standard error (FSE), and root-mean-square error (RMSE) metrics during the summer. Compared with GSMaP_NRT and GSMaP_MVK, GSMaP_Gau possessed significantly improved metrics over mainland China and the eight subregions and performed better in terms of CC, RMSE, and FSE but underestimated precipitation to a greater degree than IMERG_Gau. As a quantitative assessment of the GPM-era GSMaP products, these validation results will supply helpful references for both end users and algorithm developers. However, the study findings need to be confirmed over a longer future study period when the longer-period IMERG retrospectively-processed data are available.
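The validation metrics named above (CC, RMSE, FSE) can be computed as in this sketch. Defining FSE as RMSE normalized by the mean gauge value is one common convention and an assumption here; the study may use a slightly different normalization:

```python
import numpy as np

def validation_metrics(sat, gauge):
    """Pearson correlation (CC), root-mean-square error (RMSE), and
    fractional standard error (FSE, taken here as RMSE / gauge mean)
    between satellite precipitation estimates and gauge references."""
    sat = np.asarray(sat, dtype=float)
    gauge = np.asarray(gauge, dtype=float)
    cc = np.corrcoef(sat, gauge)[0, 1]
    rmse = np.sqrt(np.mean((sat - gauge) ** 2))
    fse = rmse / gauge.mean()
    return cc, rmse, fse

# toy daily values (mm/day): satellite estimates vs. gauge observations
cc, rmse, fse = validation_metrics([1.0, 2.0, 3.5], [1.2, 2.1, 3.0])
```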

  2. Evaluation of quantitative precipitation forecasts by TIGGE ensembles for south China during the presummer rainy season

    Science.gov (United States)

    Huang, Ling; Luo, Yali

    2017-08-01

    Based on The Observing System Research and Predictability Experiment Interactive Grand Global Ensemble (TIGGE) data set, this study evaluates the ability of global ensemble prediction systems (EPSs) from the European Centre for Medium-Range Weather Forecasts (ECMWF), U.S. National Centers for Environmental Prediction, Japan Meteorological Agency (JMA), Korean Meteorological Administration, and China Meteorological Administration (CMA) to predict presummer rainy season (April-June) precipitation in south China. Evaluation of 5 day forecasts in three seasons (2013-2015) demonstrates the higher skill of probability matching forecasts compared to simple ensemble mean forecasts and shows that the deterministic forecast is a close second. The EPSs overestimate light-to-heavy rainfall (0.1 to 30 mm/12 h) and underestimate heavier rainfall (>30 mm/12 h), with JMA being the worst. By analyzing the synoptic situations predicted by the identified more skillful (ECMWF) and less skillful (JMA and CMA) EPSs and the ensemble sensitivity for four representative cases of torrential rainfall, the transport of warm-moist air into south China by the low-level southwesterly flow, upstream of the torrential rainfall regions, is found to be a key synoptic factor that controls the quantitative precipitation forecast. The results also suggest that prediction of locally produced torrential rainfall is more challenging than prediction of more extensively distributed torrential rainfall. A slight improvement in the performance is obtained by shortening the forecast lead time from 30-36 h to 18-24 h to 6-12 h for the cases with large-scale forcing, but not for the locally produced cases.
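The probability matching evaluated above reassigns the ranks of the ensemble-mean field to quantiles of the pooled member distribution, restoring the heavy-rain amplitudes that plain averaging smooths out. A minimal sketch of the probability-matched mean:

```python
import numpy as np

def probability_matched_mean(ensemble):
    """Probability-matched mean: keep the spatial pattern (ranks) of the
    ensemble mean, but replace its values with evenly sampled quantiles
    of the pooled distribution of all members.
    `ensemble` has shape (n_members, n_points)."""
    mean_field = ensemble.mean(axis=0)
    n = mean_field.size
    pooled = np.sort(ensemble.ravel())
    # sample n values evenly from the pooled distribution
    samples = pooled[np.linspace(0, pooled.size - 1, n).astype(int)]
    pmm = np.empty(n)
    pmm[np.argsort(mean_field)] = samples  # assign by rank of the mean field
    return pmm

# two members over three grid points; simple averaging caps the peak at 7
ens = np.array([[0.0, 1.0, 5.0],
                [0.0, 2.0, 9.0]])
pmm = probability_matched_mean(ens)
```

Here the ensemble-mean peak of 7 mm is restored to the pooled maximum of 9 mm at the same location, which is why probability matching tends to verify better for heavy rainfall than the simple ensemble mean.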

  3. Estimating Reservoir Inflow Using RADAR Forecasted Precipitation and Adaptive Neuro Fuzzy Inference System

    Science.gov (United States)

    Yi, J.; Choi, C.

    2014-12-01

    Rainfall observation and forecasting using remote sensing such as RADAR (Radio Detection and Ranging) and satellite images are widely used to delineate the increased damage caused by rapid weather changes like regional storms and flash floods. The flood runoff was calculated using an adaptive neuro-fuzzy inference system, a data-driven model, with MAPLE (McGill Algorithm for Precipitation Nowcasting by Lagrangian Extrapolation) forecasted precipitation data as the input variables. The result of the flood estimation method using the neuro-fuzzy technique and RADAR-forecasted precipitation data was evaluated by comparing it with the actual data. The adaptive neuro-fuzzy method was applied to the Chungju Reservoir basin in Korea. Six rainfall events during the flood seasons in 2010 and 2011 were used as the input data. The reservoir inflow estimation results were compared according to the rainfall data used as training, checking, and testing data in the model setup process. The results of the 15 models with different combinations of the input variables were compared and analyzed. Using a relatively larger clustering radius and the biggest flood on record as training data produced better flood estimates in this study. The model using the MAPLE forecasted precipitation data showed better results for inflow estimation in the Chungju Reservoir.

  4. Radar rainfall estimation of stratiform winter precipitation in the Belgian Ardennes

    NARCIS (Netherlands)

    Hazenberg, P.; Leijnse, H.; Uijlenhoet, R.

    2011-01-01

    Radars are known for their ability to obtain a wealth of information about spatial storm field characteristics. Unfortunately, rainfall estimates obtained by this instrument are known to be affected by multiple sources of error. Especially for stratiform precipitation systems, the quality of radar

  5. Improving real-time estimation of heavy-to-extreme precipitation using rain gauge data via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Seo, Dong-Jun; Siddique, Ridwan; Zhang, Yu; Kim, Dongsoo

    2014-11-01

    A new technique for gauge-only precipitation analysis for improved estimation of heavy-to-extreme precipitation is described and evaluated. The technique is based on a novel extension of classical optimal linear estimation theory in which, in addition to error variance, Type-II conditional bias (CB) is explicitly minimized. When cast in the form of well-known kriging, the methodology yields a new kriging estimator, referred to as CB-penalized kriging (CBPK). CBPK, however, tends to yield negative estimates in areas of no or light precipitation. To address this, an extension of CBPK, referred to herein as extended conditional bias penalized kriging (ECBPK), has been developed which combines the CBPK estimate with a trivial estimate of zero precipitation. To evaluate ECBPK, we carried out real-world and synthetic experiments in which ECBPK and the gauge-only precipitation analysis procedure used in the NWS's Multisensor Precipitation Estimator (MPE) were compared for estimation of point precipitation and mean areal precipitation (MAP), respectively. The results indicate that ECBPK improves hourly gauge-only estimation of heavy-to-extreme precipitation significantly. The improvement is particularly large for estimation of MAP for a range of combinations of basin size and rain gauge network density. This paper describes the technique, summarizes the results and shares ideas for future research.

  6. Estimation of precipitable water at different locations using surface dew-point

    Science.gov (United States)

    Abdel Wahab, M.; Sharif, T. A.

    1995-09-01

    The Reitan (1963) regression equation of the form ln w = a + bT_d has been examined and tested to estimate precipitable water vapor content from the surface dew-point temperature at different locations. The results of this study indicate that the slope b of the above equation has a constant value of 0.0681, while the intercept a changes rapidly with latitude. The use of the variable-intercept technique can improve the estimated result by about 2%.
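Inverting the regression gives precipitable water directly from the dew point. The sketch below uses the slope b = 0.0681 reported above; the intercept value is purely illustrative, since the study finds a to vary with latitude:

```python
import math

B = 0.0681  # slope reported in the study, per deg C

def precipitable_water(dewpoint_c, a):
    """Precipitable water w from surface dew-point temperature Td (deg C),
    via ln w = a + b*Td. The intercept `a` is latitude-dependent; the
    value used in the example call below is illustrative only."""
    return math.exp(a + B * dewpoint_c)

w = precipitable_water(dewpoint_c=15.0, a=0.26)  # illustrative intercept
```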

  7. Evaluation of spatial and spatiotemporal estimation methods in simulation of precipitation variability patterns

    Science.gov (United States)

    Bayat, Bardia; Zahraie, Banafsheh; Taghavi, Farahnaz; Nasseri, Mohsen

    2013-08-01

    Identification of spatial and spatiotemporal precipitation variations plays an important role in different hydrological applications such as missing data estimation. In this paper, the results of Bayesian maximum entropy (BME) and ordinary kriging (OK) are compared for modeling spatial and spatiotemporal variations of annual precipitation with and without incorporating elevation variations. The study area of this research is Namak Lake watershed located in the central part of Iran with an area of approximately 90,000 km2. The BME and OK methods have been used to model the spatial and spatiotemporal variations of precipitation in this watershed, and their performances have been evaluated using cross-validation statistics. The results of the case study have shown the superiority of BME over OK in both spatial and spatiotemporal modes. The results have shown that BME estimates are less biased and more accurate than OK. The improvements in the BME estimates are mostly related to incorporating hard and soft data in the estimation process, which resulted in more detailed and reliable results. Estimation error variance for BME results is less than OK estimations in the study area in both spatial and spatiotemporal modes.

  8. Hydrological Storage Length Scales Represented by Remote Sensing Estimates of Soil Moisture and Precipitation

    Science.gov (United States)

    Akbar, Ruzbeh; Short Gianotti, Daniel; McColl, Kaighin A.; Haghighi, Erfan; Salvucci, Guido D.; Entekhabi, Dara

    2018-03-01

    The soil water content profile is often well correlated with the soil moisture state near the surface. They share mutual information such that analysis of surface-only soil moisture is, at times and in conjunction with precipitation information, reflective of deeper soil fluxes and dynamics. This study examines the characteristic length scale, or effective depth Δz, of a simple active hydrological control volume. The volume is described only by precipitation inputs and soil water dynamics evident in surface-only soil moisture observations. To proceed, first an observation-based technique is presented to estimate the soil moisture loss function based on analysis of soil moisture dry-downs and its successive negative increments. Then, the length scale Δz is obtained via an optimization process wherein the root-mean-squared (RMS) differences between surface soil moisture observations and its predictions based on water balance are minimized. The process is entirely observation-driven. The surface soil moisture estimates are obtained from the NASA Soil Moisture Active Passive (SMAP) mission and precipitation from the gauge-corrected Climate Prediction Center daily global precipitation product. The length scale Δz exhibits a clear east-west gradient across the contiguous United States (CONUS), such that large Δz depths (>200 mm) are estimated in wetter regions with larger mean precipitation. The median Δz across CONUS is 135 mm. The spatial variance of Δz is predominantly explained and influenced by precipitation characteristics. Soil properties, especially texture in the form of sand fraction, as well as the mean soil moisture state have a lesser influence on the length scale.
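The optimization described above, choosing the effective depth Δz that minimizes the RMS difference between observed surface soil moisture and a water-balance prediction, can be sketched with synthetic data. A simple linear loss function and a grid search stand in for the study's observation-based loss estimation and optimization:

```python
import numpy as np

def simulate(theta0, precip, loss_rate, dz):
    """Explicit water balance d(theta)/dt = (P - L(theta)) / dz, with a
    linear stand-in loss L(theta) = loss_rate * theta and soil moisture
    clipped to [0, 1]. Units are illustrative (mm/day for P and L)."""
    theta = np.empty(precip.size + 1)
    theta[0] = theta0
    for t in range(precip.size):
        theta[t + 1] = np.clip(
            theta[t] + (precip[t] - loss_rate * theta[t]) / dz, 0.0, 1.0)
    return theta

rng = np.random.default_rng(0)
precip = rng.exponential(2.0, 200) * (rng.random(200) < 0.3)  # mm/day
true_dz = 135.0  # mm, the CONUS median reported in the abstract
obs = simulate(0.25, precip, loss_rate=8.0, dz=true_dz)  # synthetic "SMAP"

# grid search over candidate depths for the best RMS fit to the observations
grid = np.arange(50.0, 300.0, 5.0)
rms = [np.sqrt(np.mean((simulate(0.25, precip, 8.0, dz) - obs) ** 2))
       for dz in grid]
best_dz = grid[int(np.argmin(rms))]
```

Because the synthetic observations were generated with Δz = 135 mm, the grid search recovers that depth exactly; with real SMAP data the minimum is found numerically from the observed dry-downs.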

  9. Local likelihood estimation of complex tail dependence structures in high dimensions, applied to US precipitation extremes

    KAUST Repository

    Camilo, Daniela Castro

    2017-10-02

    In order to model the complex non-stationary dependence structure of precipitation extremes over the entire contiguous U.S., we propose a flexible local approach based on factor copula models. Our sub-asymptotic spatial modeling framework yields non-trivial tail dependence structures, with a weakening dependence strength as events become more extreme, a feature commonly observed with precipitation data but not accounted for in classical asymptotic extreme-value models. To estimate the local extremal behavior, we fit the proposed model in small regional neighborhoods to high threshold exceedances, under the assumption of local stationarity. This allows us to gain in flexibility, while making inference for such a large and complex dataset feasible. Adopting a local censored likelihood approach, inference is made on a fine spatial grid, and local estimation is performed taking advantage of distributed computing resources and of the embarrassingly parallel nature of this estimation procedure. The local model is efficiently fitted at all grid points, and uncertainty is measured using a block bootstrap procedure. An extensive simulation study shows that our approach is able to adequately capture complex, non-stationary dependencies, while our study of U.S. winter precipitation data reveals interesting differences in local tail structures over space, which has important implications on regional risk assessment of extreme precipitation events. A comparison between past and current data suggests that extremes in certain areas might be slightly wider in extent nowadays than during the first half of the twentieth century.

  10. Local likelihood estimation of complex tail dependence structures in high dimensions, applied to US precipitation extremes

    KAUST Repository

    Camilo, Daniela Castro; Huser, Raphaël

    2017-01-01

    In order to model the complex non-stationary dependence structure of precipitation extremes over the entire contiguous U.S., we propose a flexible local approach based on factor copula models. Our sub-asymptotic spatial modeling framework yields non-trivial tail dependence structures, with a weakening dependence strength as events become more extreme, a feature commonly observed with precipitation data but not accounted for in classical asymptotic extreme-value models. To estimate the local extremal behavior, we fit the proposed model in small regional neighborhoods to high threshold exceedances, under the assumption of local stationarity. This allows us to gain in flexibility, while making inference for such a large and complex dataset feasible. Adopting a local censored likelihood approach, inference is made on a fine spatial grid, and local estimation is performed taking advantage of distributed computing resources and of the embarrassingly parallel nature of this estimation procedure. The local model is efficiently fitted at all grid points, and uncertainty is measured using a block bootstrap procedure. An extensive simulation study shows that our approach is able to adequately capture complex, non-stationary dependencies, while our study of U.S. winter precipitation data reveals interesting differences in local tail structures over space, which has important implications on regional risk assessment of extreme precipitation events. A comparison between past and current data suggests that extremes in certain areas might be slightly wider in extent nowadays than during the first half of the twentieth century.

  11. Development of Radar-Satellite Blended QPF (Quantitative Precipitation Forecast) Technique for heavy rainfall

    Science.gov (United States)

    Jang, Sangmin; Yoon, Sunkwon; Rhee, Jinyoung; Park, Kyungwon

    2016-04-01

    Due to recent extreme weather and climate change, the frequency and size of localized heavy rainfall events are increasing, which may bring various hazards including sediment-related disasters, flooding, and inundation. To prevent and mitigate damage from such disasters, very-short-range forecasting and nowcasting of precipitation amounts are very important. Weather radar data are very useful for monitoring and forecasting because weather radar has high spatial and temporal resolution. Generally, extrapolation based on the motion vector is the best method of precipitation forecasting using radar rainfall data for a time frame within a few hours of the present. However, there is a need for improvement because radar rainfall is less accurate than rain gauges at the surface. To improve the radar rainfall estimates and to take advantage of COMS (Communication, Ocean and Meteorological Satellite) data, a technique to blend the different data types for very-short-range forecasting purposes was developed in the present study. The motion vector of precipitation systems is estimated from 1.5 km CAPPI (Constant Altitude Plan Position Indicator) reflectivity by a pattern-matching method, which indicates the systems' direction and speed of movement, and the blended radar-COMS rain field is used as initial data. Since the original horizontal resolution of COMS is 4 km while that of radar is about 1 km, a spatial downscaling technique is used to downscale the COMS data from 4 km to 1 km pixels in order to match the radar data. The accuracy of the rainfall forecast data was verified using AWS (Automatic Weather System) observations for an extreme rainfall event that occurred in the southern part of the Korean Peninsula on 25 August 2014. The results of this study will be used as input data for an urban stream real-time flood early warning system and a landslide prediction model. Acknowledgement This research was supported by a grant (13SCIPS04) from the Smart Civil Infrastructure Research Program funded by
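The pattern-matching motion-vector estimation described above can be sketched as an exhaustive search over candidate shifts, assuming a single uniform displacement between two successive reflectivity fields; operational implementations work block-wise and at sub-pixel precision:

```python
import numpy as np

def motion_vector(prev, curr, max_shift=3):
    """Estimate the (dy, dx) displacement in pixels per frame by finding
    the shift of the earlier field that best matches the current field
    (minimum mean squared difference)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(prev, (dy, dx), axis=(0, 1))
            err = np.mean((shifted - curr) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# toy reflectivity fields: one cell moving 1 pixel south, 2 pixels east
a = np.zeros((8, 8)); a[3, 2] = 10.0
b = np.roll(a, (1, 2), axis=(0, 1))
dy, dx = motion_vector(a, b)
```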

  12. Precipitation areal-reduction factor estimation using an annual-maxima centered approach

    Science.gov (United States)

    Asquith, W.H.; Famiglietti, J.S.

    2000-01-01

    The adjustment of precipitation depth of a point storm to an effective (mean) depth over a watershed is important for characterizing rainfall-runoff relations and for cost-effective designs of hydraulic structures when design storms are considered. A design storm is the precipitation point depth having a specified duration and frequency (recurrence interval). Effective depths are often computed by multiplying point depths by areal-reduction factors (ARF). ARF range from 0 to 1, vary according to storm characteristics such as recurrence interval, and are a function of watershed characteristics such as watershed size, shape, and geographic location. This paper presents a new approach for estimating ARF and includes applications for the 1-day design storm in Austin, Dallas, and Houston, Texas. The approach, termed 'annual-maxima centered,' specifically considers the distribution of concurrent precipitation surrounding an annual-precipitation maxima, which is a feature not seen in other approaches. The approach does not require the prior spatial averaging of precipitation, explicit determination of spatial correlation coefficients, nor explicit definition of a representative area of a particular storm in the analysis. The annual-maxima centered approach was designed to exploit the wide availability of dense precipitation gauge data in many regions of the world. The approach produces ARF that decrease more rapidly than those from TP-29. Furthermore, the ARF from the approach decay rapidly with increasing recurrence interval of the annual-precipitation maxima. (C) 2000 Elsevier Science B.V.
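The annual-maxima centered idea, relating the concurrent depths around an annual point maximum to that maximum, can be illustrated in a heavily simplified form. Equal gauge weighting, the absence of any frequency analysis, and the synthetic depths below are all assumptions of this sketch:

```python
import numpy as np

def annual_maxima_centered_arf(depths):
    """depths: array of shape (n_years, n_gauges) holding precipitation on
    the day each year's maximum occurred, with column 0 being the gauge
    that recorded the annual maximum. The ARF is the mean concurrent
    areal depth divided by the point maximum, averaged over years.
    This is an illustrative simplification of the approach."""
    point_max = depths[:, 0]
    areal_mean = depths.mean(axis=1)
    return float(np.mean(areal_mean / point_max))

# two years, three gauges: concurrent depths fall off away from the maximum
depths = np.array([[100.0, 60.0, 50.0],
                   [ 80.0, 56.0, 40.0]])
arf = annual_maxima_centered_arf(depths)
```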

  13. Quantitative Estimation of Transmitted and Reflected Lamb Waves at Discontinuity

    International Nuclear Information System (INIS)

    Lim, Hyung Jin; Sohn, Hoon

    2010-01-01

    For the application of Lamb waves to structural health monitoring (SHM), understanding their physical characteristics and the interaction between Lamb waves and defects in the host structure is an important issue. In this study, reflected, transmitted, and mode-converted Lamb waves at a discontinuity of a plate structure were simulated, and the amplitude ratios were calculated theoretically using the modal decomposition method. The predicted results were verified by comparison with finite element method (FEM) and experimental results simulating attached PZTs. The results show that the theoretical prediction is close to the FEM and experimental verification. Moreover, a quantitative estimation method is suggested using the amplitude ratio of Lamb waves at a discontinuity.

  14. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.

    Science.gov (United States)

    Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H

    2017-08-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicines research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. Here we report, in a perspective article, a summary of the presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  15. Operational Estimation of Accumulated Precipitation using Satellite Observation, by Eumetsat Satellite Application facility in Support to Hydrology (H-SAF Consortium).

    Science.gov (United States)

    di Diodato, A.; de Leonibus, L.; Zauli, F.; Biron, D.; Melfi, D.

    2009-04-01

    compared against climatic thresholds obtained, basically, from the project "Climate Atlas of Europe" led by Meteo France within the ECSN (European Climate Support Network) project of EUMETNET. To reduce the bias errors introduced by the satellite estimates, rain gauge data are used for an intercalibration with the satellite estimates, using information obtained through the GTS network. Precipitation increments are estimated at each observation location from the observation and the interpolated background field. A field of the increments is produced by a standard kriging method. The final precipitation analysis is obtained as the sum of the increments and the precipitation estimate at each grid point. It is also recognized that the major error sources in retrieving 15-minute instantaneous precipitation from cloud-top temperature come from high (cold) non-precipitating clouds and from using the same regression coefficients for both warm clouds (stratus) and cold clouds (convective). As that error is intrinsic to the blending technique applied, we are going to improve performance by making use of cloud-type-specific retrievals. To apply such a scheme to the products, we discriminate between convective and stratiform clouds and then retrieve precipitation in parallel for the two cloud classes; the two outputs are merged again into one product, resolving doubly retrieved pixels by keeping the convective retrieval. The basic tool for this is the computation of two different lookup tables associating precipitation with a brightness temperature for the two kinds of cloudiness. The cloud discrimination will be done with the NWC-SAF product named "cloud type" for the stratiform clouds and with an application, running operationally at the Italian Met Service, named NEFODINA for automatic detection of convective phenomena. Results of studies to improve the accumulated precipitation as well are presented.
The studies exploit the potential of using other sources of information like quantitative precipitation

  16. An "Ensemble Approach" to Modernizing Extreme Precipitation Estimation for Dam Safety Decision-Making

    Science.gov (United States)

    Cifelli, R.; Mahoney, K. M.; Webb, R. S.; McCormick, B.

    2017-12-01

    To ensure structural and operational safety of dams and other water management infrastructure, water resources managers and engineers require information about the potential for heavy precipitation. The methods and data used to estimate extreme rainfall amounts for managing risk are based on 40-year-old science and in need of improvement. The need to evaluate new approaches based on the best science available has led the states of Colorado and New Mexico to engage a body of scientists and engineers in an innovative "ensemble approach" to updating extreme precipitation estimates. NOAA is at the forefront of one of three technical approaches that make up the "ensemble study"; the three approaches are conducted concurrently and in collaboration with each other. One approach is the conventional deterministic, "storm-based" method, another is a risk-based regional precipitation frequency estimation tool, and the third is an experimental approach utilizing NOAA's state-of-the-art High Resolution Rapid Refresh (HRRR) physically-based dynamical weather prediction model. The goal of the overall project is to use the individual strengths of these different methods to define an updated and broadly acceptable state of the practice for evaluation and design of dam spillways. This talk will highlight the NOAA research and NOAA's role in the overarching goal to better understand and characterize extreme precipitation estimation uncertainty. The research led by NOAA explores a novel high-resolution dataset and post-processing techniques using a super-ensemble of hourly forecasts from the HRRR model. We also investigate how this rich dataset may be combined with statistical methods to optimally cast the data in probabilistic frameworks. NOAA expertise in the physical processes that drive extreme precipitation is also employed to develop careful testing and improved understanding of the limitations of older estimation methods and assumptions.
The process of decision making in the

  17. Radar rainfall estimation of stratiform winter precipitation in the Belgian Ardennes

    Science.gov (United States)

    Hazenberg, P.; Leijnse, H.; Uijlenhoet, R.

    2011-02-01

    Radars are known for their ability to obtain a wealth of information about spatial storm field characteristics. Unfortunately, rainfall estimates obtained by this instrument are known to be affected by multiple sources of error. Especially for stratiform precipitation systems, the quality of radar rainfall estimates starts to decrease at relatively close ranges. In the current study, the hydrological potential of weather radar is analyzed during a winter half-year for the hilly region of the Belgian Ardennes. A correction algorithm is proposed which corrects the radar data for errors related to attenuation, ground clutter, anomalous propagation, the vertical profile of reflectivity (VPR), and advection. No final bias correction with respect to rain gauge data was implemented because such an adjustment would not add to a better understanding of the quality of the radar data. The impact of the different corrections is assessed using rainfall information sampled by 42 hourly rain gauges. The largest improvement in the quality of the radar data is obtained by correcting for ground clutter. The impact of VPR correction and advection depends on the spatial variability and velocity of the precipitation system. Overall during the winter period, the radar underestimates the amount of precipitation as compared to the rain gauges. Remaining differences between both instruments can be attributed to spatial and temporal variability in the type of precipitation, which has not been taken into account.

  18. Kriging and local polynomial methods for blending satellite-derived and gauge precipitation estimates to support hydrologic early warning systems

    Science.gov (United States)

    Verdin, Andrew; Funk, Christopher C.; Rajagopalan, Balaji; Kleiber, William

    2016-01-01

    Robust estimates of precipitation in space and time are important for efficient natural resource management and for mitigating natural hazards. This is particularly true in regions with developing infrastructure and regions that are frequently exposed to extreme events. Gauge observations of rainfall are sparse but capture the precipitation process with high fidelity. Due to its high resolution and complete spatial coverage, satellite-derived rainfall data are an attractive alternative in data-sparse regions and are often used to support hydrometeorological early warning systems. Satellite-derived precipitation data, however, tend to underrepresent extreme precipitation events. Thus, it is often desirable to blend spatially extensive satellite-derived rainfall estimates with high-fidelity rain gauge observations to obtain more accurate precipitation estimates. In this research, we use two different methods, namely, ordinary kriging and k-nearest neighbor local polynomials, to blend rain gauge observations with the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates in data-sparse Central America and Colombia. The utility of these methods in producing blended precipitation estimates at pentadal (five-day) and monthly time scales is demonstrated. We find that these blending methods significantly improve the satellite-derived estimates and are competitive in their ability to capture extreme precipitation.
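One common way to blend the two data sources is to krige the gauge-minus-satellite residuals and add the kriged correction back to the satellite field. A minimal ordinary-kriging sketch, with an assumed (unfitted) exponential covariance rather than a variogram estimated from data:

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_new, range_=50.0, sill=1.0):
    """Ordinary kriging with an exponential covariance, used here to
    interpolate gauge-minus-satellite residuals; adding the kriged
    residual to the satellite field gives the blended estimate.
    Covariance parameters are illustrative, not fitted."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-d / range_)

    n = len(xy_obs)
    # kriging system with a Lagrange multiplier enforcing unbiasedness
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(xy_obs, xy_obs)
    K[n, n] = 0.0
    k = np.ones((n + 1, len(xy_new)))
    k[:n, :] = cov(xy_obs, xy_new)
    w = np.linalg.solve(K, k)
    return w[:n, :].T @ z_obs

# gauge residuals (gauge minus satellite, mm) at three station locations (km)
xy_obs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
resid = np.array([2.0, -1.0, 0.5])
correction = ordinary_kriging(xy_obs, resid,
                              np.array([[0.0, 0.0], [5.0, 5.0]]))
```

Because no nugget is included, the kriged correction reproduces each gauge residual exactly at its own location, which is the behavior usually wanted when forcing the blend through the gauges.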

  19. Quantitative measurement for the microstructural parameters of nano-precipitates in Al-Mg-Si-Cu alloys

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kai [School of Metallurgy and Environment, Central South University, Changsha 410083 (China); Electron Microscopy for Materials Science (EMAT), University of Antwerp, Antwerp B-2020 (Belgium); State Key Laboratory of Powder Metallurgy, Central South University, Changsha 410083 (China); Idrissi, Hosni [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Antwerp B-2020 (Belgium); Institute of Mechanics, Materials and Civil Engineering (iMMC), Université catholique de Louvain, Place Sainte Barbe 2, B-1348 Louvain-la-Neuve (Belgium); Sha, Gang [Gleiter Institute of Nano-science, Nanjing University of Science and Technology, Nanjing 210094 (China); Song, Min, E-mail: msong@csu.edu.cn [State Key Laboratory of Powder Metallurgy, Central South University, Changsha 410083 (China); Lu, Jiangbo [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Antwerp B-2020 (Belgium); Electronic Materials Research Laboratory, Key Laboratory of the Ministry of Education and International Center for Dielectric Research, Xi'an Jiaotong University, Xi'an 710049 (China); Shi, Hui [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Antwerp B-2020 (Belgium); ArcelorMittal Global R&D Gent, Pres. J.F. Kennedylaan 3 Zelzate, Ghent B-9060 (Belgium); Wang, Wanlin [School of Metallurgy and Environment, Central South University, Changsha 410083 (China); Ringer, Simon P. [Australian Institute for Nanoscale Science and Technology, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Du, Yong [State Key Laboratory of Powder Metallurgy, Central South University, Changsha 410083 (China); Schryvers, Dominique [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Antwerp B-2020 (Belgium)

    2016-08-15

Size, number density and volume fraction of nano-precipitates are important microstructural parameters controlling the strengthening of materials. In this work a widely accessible, convenient and moderately time-efficient method with acceptable accuracy and precision is provided for measuring the volume fraction of nano-precipitates in crystalline materials. The method is based on the traditional but highly accurate technique of measuring foil thickness via convergent beam electron diffraction. A new equation is proposed, and verified with the aid of 3-dimensional atom probe (3DAP) analysis, to compensate for the additional error resulting from the barely distinguishable contrast of short, incomplete precipitates cut by the foil surface. The method can be performed on a regular foil specimen with a modern LaB{sub 6} or field-emission-gun transmission electron microscope. Precisions of around ± 16% have been obtained for the volume fractions of needle-like β″/C and Q precipitates in an aged Al-Mg-Si-Cu alloy. The measured number density differs from that obtained directly by 3DAP analysis by only 4.5%, and the estimated precision for number density measurement is about ± 11%. The limitations of the method are also discussed. - Highlights: •A facile method for measuring volume fraction of nano-precipitates based on CBED •An equation to compensate for small invisible precipitates, with 3DAP verification •Precisions around ± 16% for volume fraction and ± 11% for number density.

  20. Quantitative estimation of Nipah virus replication kinetics in vitro

    Directory of Open Access Journals (Sweden)

    Hassan Sharifah

    2006-06-01

Full Text Available Abstract Background Nipah virus is a zoonotic virus isolated from an outbreak in Malaysia in 1998. The virus causes infections in humans, pigs, and several other domestic animals. It has also been isolated from fruit bats. The pathogenesis of Nipah virus infection is still not well described. In the present study, Nipah virus replication kinetics were estimated from infection of African green monkey kidney cells (Vero) using the one-step SYBR® Green I-based quantitative real-time reverse transcriptase-polymerase chain reaction (qRT-PCR) assay. Results The qRT-PCR had a dynamic range of at least seven orders of magnitude and could detect Nipah virus from as little as one PFU/μL. Following initiation of infection, it was estimated that Nipah virus RNA doubled every ~40 minutes and attained a peak intracellular virus RNA level of ~8.4 log PFU/μL at about 32 hours post-infection (PI). Significant extracellular Nipah virus RNA release occurred only after 8 hours PI, and the level peaked at ~7.9 log PFU/μL at 64 hours PI. The estimated rate of Nipah virus RNA release into the cell culture medium was ~0.07 log PFU/μL per hour, and less than 10% of the released Nipah virus RNA was infectious. Conclusion The SYBR® Green I-based qRT-PCR assay enabled quantitative assessment of Nipah virus RNA synthesis in Vero cells. The low rate of Nipah virus extracellular RNA release and low infectious virus yield, together with extensive syncytial formation during the infection, support a cell-to-cell spread mechanism for Nipah virus infection.
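The ~40-minute doubling time quoted above follows from fitting exponential growth to successive qRT-PCR abundance measurements. A minimal sketch of that calculation, using illustrative values rather than the study's data:

```python
import math

def doubling_time_minutes(n1: float, n2: float, dt_minutes: float) -> float:
    """Doubling time from two RNA abundance measurements taken
    dt_minutes apart, assuming exponential growth in between."""
    growth_rate = math.log(n2 / n1) / dt_minutes  # per-minute rate
    return math.log(2.0) / growth_rate

# Illustrative: an 8-fold increase over 120 minutes implies ~40 min doubling
print(doubling_time_minutes(1.0, 8.0, 120.0))
```

In practice a regression over many time points is used rather than a single pair, but the per-pair formula above is the underlying relation.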

  1. The assessment of Global Precipitation Measurement estimates over the Indian subcontinent

    Science.gov (United States)

    Murali Krishna, U. V.; Das, Subrata Kumar; Deshpande, Sachin M.; Doiphode, S. L.; Pandithurai, G.

    2017-08-01

Accurate and real-time precipitation estimation is a challenging task for current and future spaceborne measurements and is essential to understanding the global hydrological cycle. Recently, the Global Precipitation Measurement (GPM) satellites were launched as a next-generation rainfall mission for observing global precipitation characteristics. The purpose of the GPM is to enhance the spatiotemporal resolution of global precipitation. The main objective of the present study is to assess the rainfall products of the GPM, especially the Integrated Multi-satellitE Retrievals for GPM (IMERG) data, by comparison with ground-based observations. Multitemporal-scale evaluations of rainfall at subdaily, diurnal, monthly, and seasonal scales were performed over the Indian subcontinent. The comparison shows that IMERG performed better than the Tropical Rainfall Measuring Mission (TRMM)-3B42, although both rainfall products underestimated the observed rainfall compared to the ground-based measurements. The analyses also reveal that the TRMM-3B42 and IMERG data sets are able to represent the large-scale monsoon rainfall spatial features but have region-specific biases. IMERG shows significant improvement in low rainfall estimates compared to TRMM-3B42 for selected regions. In the spatial distribution, IMERG shows higher rain rates compared to TRMM-3B42, due to its enhanced spatial and temporal resolutions. In addition, the characteristics of the raindrop size distribution (DSD) obtained from the GPM mission dual-frequency precipitation radar are assessed over a complex mountain terrain site in the Western Ghats, India, using DSDs measured by a Joss-Waldvogel disdrometer.

  2. Quantitative Compactness Estimates for Hamilton-Jacobi Equations

    Science.gov (United States)

    Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.

    2016-02-01

We study quantitative compactness estimates in $W^{1,1}_{\mathrm{loc}}$ for the map $S_t$, $t > 0$, that associates with given initial data $u_0 \in \mathrm{Lip}(\mathbb{R}^N)$ the corresponding solution $S_t u_0$ of a Hamilton-Jacobi equation $$u_t + H(\nabla_x u) = 0, \qquad t \ge 0, \quad x \in \mathbb{R}^N,$$ with a uniformly convex Hamiltonian $H = H(p)$. We provide upper and lower estimates of order $1/\varepsilon^N$ on the Kolmogorov $\varepsilon$-entropy in $W^{1,1}$ of the image through the map $S_t$ of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws. XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.
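For context, when the Hamiltonian $H$ is convex the semigroup $S_t$ studied above is given explicitly by the classical Hopf-Lax formula (standard background, not stated in the abstract):

```latex
% Hopf-Lax representation of the viscosity solution S_t u_0,
% with H^* the Legendre transform of the convex Hamiltonian H
(S_t u_0)(x) \;=\; \min_{y \in \mathbb{R}^N}
  \left\{ u_0(y) + t\, H^{*}\!\left(\frac{x-y}{t}\right) \right\},
\qquad
H^{*}(q) \;=\; \sup_{p \in \mathbb{R}^N}
  \bigl\{ \langle p, q \rangle - H(p) \bigr\}.
```

This representation is what makes the image $S_t(\mathcal{C})$ of a bounded set of initial data amenable to entropy estimates.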

  3. Estimating Loess Plateau Average Annual Precipitation with Multiple Linear Regression Kriging and Geographically Weighted Regression Kriging

    Directory of Open Access Journals (Sweden)

    Qiutong Jin

    2016-06-01

Full Text Available Estimating the spatial distribution of precipitation is an important and challenging task in hydrology, climatology, ecology, and environmental science. In order to generate a highly accurate distribution map of average annual precipitation for the Loess Plateau in China, multiple linear regression Kriging (MLRK) and geographically weighted regression Kriging (GWRK) methods were employed using precipitation data from the period 1980-2010 from 435 meteorological stations. The predictors in regression Kriging were selected by stepwise regression analysis from many auxiliary environmental factors, such as elevation (DEM), normalized difference vegetation index (NDVI), solar radiation, slope, and aspect. All predictor distribution maps had a 500 m spatial resolution. Validation precipitation data from 130 hydrometeorological stations were used to assess the prediction accuracies of the MLRK and GWRK approaches. Results showed that both prediction maps, interpolated by MLRK and GWRK at a 500 m spatial resolution, were highly accurate and captured detailed spatial patterns; however, MLRK produced a lower prediction error and a higher explained variance than GWRK, although the differences were small, in contrast to conclusions from similar studies.
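Regression kriging, as used above, is a two-stage estimator: fit a deterministic trend on covariates, then spatially interpolate the residuals. A stdlib sketch of the idea, assuming a single elevation covariate and substituting inverse-distance weighting for the residual-kriging step (so this illustrates the two-stage structure, not the study's actual variogram-based method; `regression_idw` and its inputs are hypothetical names):

```python
import math

def regression_idw(stations, query, power=2.0):
    """Two-stage trend + residual interpolation.
    stations: list of (x, y, elevation, precip); query: (x, y, elevation).
    Stage 1: OLS linear trend precip ~ elevation.
    Stage 2: inverse-distance weighting of the trend residuals
    (a stand-in for ordinary kriging of the residuals)."""
    n = len(stations)
    mean_e = sum(s[2] for s in stations) / n
    mean_p = sum(s[3] for s in stations) / n
    sxx = sum((s[2] - mean_e) ** 2 for s in stations)
    sxy = sum((s[2] - mean_e) * (s[3] - mean_p) for s in stations)
    b = sxy / sxx                      # OLS slope
    a = mean_p - b * mean_e            # OLS intercept
    residuals = [s[3] - (a + b * s[2]) for s in stations]
    qx, qy, qe = query
    num = den = 0.0
    for (x, y, _, _), r in zip(stations, residuals):
        d = math.hypot(qx - x, qy - y)
        if d == 0.0:
            return a + b * qe + r      # exact interpolation at a station
        w = d ** -power
        num += w * r
        den += w
    return a + b * qe + num / den
```

Real MLRK would use several covariates (NDVI, slope, aspect) and a fitted variogram model for the residual field, but the trend-plus-residual decomposition is the same.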

  4. Estimating Probable Maximum Precipitation by Considering Combined Effect of Typhoon and Southwesterly Air Flow

    Directory of Open Access Journals (Sweden)

    Cheng-Chin Liu

    2016-01-01

Full Text Available Typhoon Morakot hit southern Taiwan in 2009, bringing 48 hr of heavy rainfall [close to the Probable Maximum Precipitation (PMP)] to the Tsengwen Reservoir catchment. This extreme rainfall event resulted from the combined (co-movement) effect of two climate systems (i.e., a typhoon and southwesterly air flow). Based on the traditional PMP estimation method (i.e., the storm transposition method, STM), two PMP estimation approaches that consider this combined effect, the Amplification Index (AI) and Independent System (IS) approaches, are proposed in this work. The AI approach assumes that the southwesterly air flow precipitation in a typhoon event could reach its maximum value. The IS approach assumes that the typhoon and southwesterly air flow are independent weather systems. Based on these assumptions, calculation procedures for the two approaches were constructed for a case study on the Tsengwen Reservoir catchment. The results show that the PMP estimates for 6- to 60-hr durations using the two approaches are approximately 30% larger than the PMP estimates using the traditional STM without considering the combined effect. This work pioneers a PMP estimation method that considers the combined effect of a typhoon and southwesterly air flow. Further studies on this issue are essential and encouraged.

  5. Quantitative study of substorm-associated VLF phase anomalies and precipitating energetic electrons on November 13, 1979

    International Nuclear Information System (INIS)

    Kikuchi, T.; Evans, D.S.

    1983-01-01

Phase anomalies associated with substorms are observed on VLF signals propagating along transauroral paths (transmitters OMEGA-ALDRA (13.6 kHz), GBR (16.0 kHz), and OMEGA-NORTH DAKOTA (13.6 kHz)), which were continually received at Inubo, Japan, during the events of November 13, 1979. Detailed comparisons are made between these phase anomalies and geomagnetic bays, and quantitative relations are obtained with precipitating energetic electrons (E>30, E>100, and E>300 keV) detected on board the TIROS-N and NOAA 6 satellites. It is concluded that two types of VLF phase anomalies exist which, in turn, are associated with two phases in the history of energetic electron precipitation into the atmosphere. The first type of phase anomaly is associated with the direct injection of energetic electrons into the outer magnetosphere and atmosphere and is completely correlated in time with the development of the auroral electrojet current system. The second type arises from energetic electrons which subsequently precipitate from a trapped electron population, and has a delayed onset and prolonged duration. An excellent quantitative correlation is obtained between the logarithm of the electron flux and the magnitude of the phase anomaly on the OMEGA-ALDRA signal. From the local time characteristics of this quantitative relation it is deduced that electrons with E>300 keV are the main source of the D region ionization responsible for the VLF phase anomaly.

  6. Near-real-time Estimation and Forecast of Total Precipitable Water in Europe

    Science.gov (United States)

    Bartholy, J.; Kern, A.; Barcza, Z.; Pongracz, R.; Ihasz, I.; Kovacs, R.; Ferencz, C.

    2013-12-01

Information about the amount and spatial distribution of atmospheric water vapor (or total precipitable water, TPW) is essential for understanding weather and the environment, including the greenhouse effect, the climate system with its feedbacks, and the hydrological cycle. Numerical weather prediction (NWP) models need accurate estimates of water vapor content to provide realistic forecasts, including the representation of clouds and precipitation. In the present study we introduce our research activity on the estimation and forecast of atmospheric water vapor in Central Europe using both observations and models. Eötvös Loránd University (Hungary) has operated a polar-orbiting satellite receiving station in Budapest since 2002. This station receives Earth observation data from polar-orbiting satellites, including the MODerate resolution Imaging Spectroradiometer (MODIS) Direct Broadcast (DB) data stream from the satellites Terra and Aqua. The received DB MODIS data are automatically processed using freely distributed software packages. Using the IMAPP Level2 software, TPW is calculated operationally by two different methods. The quality of the TPW estimates is a crucial question for further application of the results; thus, a validation of the remotely sensed TPW fields against radiosonde data is presented. In a current research project in Hungary we aim to compare different estimates of atmospheric water vapor content. Within the frame of the project we use an NWP model (DBCRAS; Direct Broadcast CIMSS Regional Assimilation System numerical weather prediction software developed by the University of Wisconsin, Madison) to forecast TPW. DBCRAS uses near-real-time Level2 products from the MODIS data processing chain. From the wide range of derived Level2 products, the MODIS TPW parameter found within the so-called mod07 results (Atmospheric Profiles Product) and the cloud top pressure and cloud effective emissivity parameters from the so

  7. Operational 0–3 h probabilistic quantitative precipitation forecasts: Recent performance and potential enhancements

    Czech Academy of Sciences Publication Activity Database

    Sokol, Zbyněk; Kitzmiller, D.; Pešice, Petr; Guan, S.

    2009-01-01

Roč. 92, č. 3 (2009), s. 318-330 ISSN 0169-8095. [International workshop on precipitation in urban areas /7./. St. Moritz, 07.12.2006-10.12.2006] R&D Projects: GA MŠk 1P05ME748 Institutional research plan: CEZ:AV0Z30420517 Keywords: Precipitation * Prediction * Convection * Radar * Nowcasting Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 1.811, year: 2009

  8. Evaluation of precipitation estimates over CONUS derived from satellite, radar, and rain gauge data sets at daily to annual scales (2002-2012)

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.

    2015-04-01

We use a suite of quantitative precipitation estimates (QPEs) derived from satellite, radar, and surface observations to derive precipitation characteristics over the contiguous United States (CONUS) for the period 2002-2012. This comparison effort includes satellite multi-sensor data sets (bias-adjusted TMPA 3B42, near-real-time 3B42RT), radar estimates (NCEP Stage IV), and rain gauge observations. Remotely sensed precipitation data sets are compared with surface observations from the Global Historical Climatology Network-Daily (GHCN-D) and from PRISM (Parameter-elevation Regressions on Independent Slopes Model). The comparisons are performed at the annual, seasonal, and daily scales over the River Forecast Centers (RFCs) for CONUS. Annual average rain rates agree satisfactorily with GHCN-D for all products over CONUS (±6%). However, differences at the RFC scale are larger, in particular for near-real-time 3B42RT precipitation estimates (-33 to +49%). At annual and seasonal scales, the bias-adjusted 3B42 showed marked improvement over its near-real-time counterpart 3B42RT. However, large biases remained for 3B42 over the western USA at higher average accumulations (≥ 5 mm day-1) with respect to GHCN-D surface observations. At the daily scale, 3B42RT performed poorly in capturing extreme daily precipitation (> 4 in. day-1) over the Pacific Northwest. Furthermore, the conditional and contingency analyses conducted illustrate the challenge of retrieving extreme precipitation from remote sensing estimates.
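The percentage figures above (±6% CONUS-wide, -33 to +49% per RFC) are relative-bias statistics computed from collocated accumulations. A minimal sketch of that statistic (illustrative, not the authors' code):

```python
def percent_bias(estimates, gauges):
    """Relative bias (%) of a QPE product against collocated gauge
    accumulations: positive values mean the product overestimates."""
    return 100.0 * (sum(estimates) - sum(gauges)) / sum(gauges)

# Illustrative: a product that reads 11 mm where gauges read 10 mm
print(percent_bias([11.0, 11.0], [10.0, 10.0]))  # → 10.0
```

Computed per RFC over annual accumulations, this is the kind of number behind the -33 to +49% range quoted for 3B42RT.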

  9. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    Science.gov (United States)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 in 69 pixels over the Urmia Lake basin in northwest Iran. Different analytical approaches and indexes are used to examine PERSIANN's precision in the detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss and FA (false alarm) estimation biases, while a continuous decomposition into systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, named "reliability of PERSIANN estimations", is introduced, while the changing behavior of existing categorical/statistical measures and error components is also analyzed seasonally over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following: - The analyzed contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates across rainfall rate categories is nearly invariant, a sign of systematic error. - A low level of reliability is observed for PERSIANN estimations at different categories, mostly associated with high levels of FA error. However, as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than bias due to rainfall uncertainties. The level of systematic error also increases considerably for heavier rainfalls. It is also important to note that PERSIANN error characteristics vary by season, following the conditions and rainfall patterns of each season, which shows the necessity of a seasonally different approach for the calibration of
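The Hit/Miss/FA decomposition mentioned above splits the total satellite-minus-gauge error by detection outcome. A minimal sketch, with an assumed 0.1 mm/day rain/no-rain threshold (the study's threshold is not given in the abstract):

```python
def decompose_error(sat, gauge, thresh=0.1):
    """Split total satellite-minus-gauge error (mm) into hit bias,
    missed precipitation, and false-alarm components. thresh is the
    rain/no-rain cutoff in mm/day (an assumed value)."""
    hit = miss = fa = 0.0
    for s, g in zip(sat, gauge):
        s_rain, g_rain = s >= thresh, g >= thresh
        if s_rain and g_rain:
            hit += s - g    # estimation error when both detect rain
        elif g_rain:
            miss -= g       # rain the satellite failed to detect
        elif s_rain:
            fa += s         # rain reported where gauges saw none
    return hit, miss, fa
```

When sub-threshold amounts are negligible, the three components sum to the total error, which is what makes the decomposition useful for attributing bias.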

  10. Errors and parameter estimation in precipitation-runoff modeling: 1. Theory

    Science.gov (United States)

    Troutman, Brent M.

    1985-01-01

    Errors in complex conceptual precipitation-runoff models may be analyzed by placing them into a statistical framework. This amounts to treating the errors as random variables and defining the probabilistic structure of the errors. By using such a framework, a large array of techniques, many of which have been presented in the statistical literature, becomes available to the modeler for quantifying and analyzing the various sources of error. A number of these techniques are reviewed in this paper, with special attention to the peculiarities of hydrologic models. Known methodologies for parameter estimation (calibration) are particularly applicable for obtaining physically meaningful estimates and for explaining how bias in runoff prediction caused by model error and input error may contribute to bias in parameter estimation.

  11. The impact of reflectivity correction and accounting for raindrop size distribution variability to improve precipitation estimation by weather radar for an extreme low-land mesoscale convective system

    Science.gov (United States)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2014-11-01

disdrometer information, the best results were obtained when no differentiation between precipitation types (convective, stratiform and undefined) was made, increasing the event accumulations to more than 80% of those observed by gauges. For the randomly optimized procedure, radar precipitation estimates improve further and closely resemble the observations when one differentiates between precipitation types. However, the optimal parameter sets are very different from those derived from disdrometer observations. It is therefore questionable whether single-disdrometer observations are suitable for large-scale quantitative precipitation estimation, especially if the disdrometer is located relatively far from the main rain event, as was the case in this study. In conclusion, this study shows the benefit of applying detailed error correction methods to improve the quality of the weather radar product, but also confirms the need for caution when using locally obtained disdrometer measurements.

  12. Quantitative Estimates of Bio-Remodeling on Coastal Rock Surfaces

    Directory of Open Access Journals (Sweden)

    Marta Pappalardo

    2016-05-01

Full Text Available Remodeling of rocky coasts and erosion rates have been widely studied in past years, but not all the processes acting on rock surfaces have yet been quantitatively evaluated. The first goal of this paper is to review the different methodologies employed to quantify the effect of biotic agents on rocks exposed to coastal morphologic agents, comparing their efficiency. Secondly, we focus on geological methods to assess and quantify bio-remodeling, presenting case studies from an area of the Mediterranean Sea in which different geological methods, inspired by the reviewed literature, have been tested in order to provide a quantitative assessment of the effects some biological covers exert on rocky platforms in tidal and supra-tidal environments. In particular, different experimental designs based on Schmidt hammer test results have been applied in order to estimate rock hardness related to different orders of littoral platforms and the bio-erosive/bio-protective role of Chthamalus spp. and Verrucaria adriatica. All collected data have been analyzed using statistical tests to evaluate the significance of the measurements and methodologies. The effectiveness of this approach is analyzed, and its limits are highlighted. In order to overcome the latter, a strategy combining geological and experimental-computational approaches is proposed, potentially capable of revealing novel clues on bio-erosion dynamics. An experimental-computational proposal to assess the indirect effects of biofilm coverage of rocky shores is presented in this paper, focusing on the shear forces exerted during hydration-dehydration cycles. The results of computational modeling can be compared to experimental evidence, from nanoscopic to macroscopic scales.

  13. Global estimate of lichen and bryophyte contributions to forest precipitation interception

    Science.gov (United States)

    Van Stan, John; Porada, Philipp; Kleidon, Axel

    2017-04-01

Interception of precipitation by forest canopies plays an important role in its partitioning into evaporation, transpiration and runoff. Field observations show that arboreal lichens and bryophytes can substantially enhance forests' precipitation storage and evaporation. However, representations of canopy interception in global land surface models currently ignore arboreal lichen and bryophyte contributions. This study uses the lichen and bryophyte model (LiBry) to provide the first process-based modelling approach estimating these organisms' contributions to canopy water storage and evaporation. The global mean forest water storage capacity increased significantly, from 0.87 mm to 1.33 mm, with the inclusion of arboreal poikilohydric organisms. Global forest canopy evaporation of intercepted precipitation was also greatly enhanced, by 44%. The ratio of total to bare-canopy global evaporation exceeded 2 in many forested regions. This altered global patterns in canopy water storage, evaporation, and ultimately the proportion of rainfall evaporated. A sensitivity analysis was also performed. The results indicate that rainfall interception is of larger magnitude than previously reported by global land surface modelling work because of the important role of lichens and bryophytes in rainfall interception.

  14. Quantitative diagnosis of moisture sources and transport pathways for summer precipitation over the mid-lower Yangtze River Basin

    Science.gov (United States)

    Wang, Ning; Zeng, Xin-Min; Guo, Wei-Dong; Chen, Chaohui; You, Wei; Zheng, Yiqun; Zhu, Jian

    2018-04-01

    Using a moisture tracking model with 32-year reanalysis data and station precipitation observations, we diagnosed the sources of moisture for summer (June 1-August 31) precipitation in mid-lower reaches of the Yangtze River Basin (YRB). Results indicate the dominant role of oceanic evaporation compared to terrestrial evapotranspiration, and the previously overlooked southern Indian Ocean, as a source region, is found to contribute more moisture than the well-known Arabian Sea or Bay of Bengal. Terrestrial evapotranspiration appears to be important for summer precipitation, especially in early June when moisture contribution is more than 50%. The terrestrial contribution then decreases and is generally less than 40% after late June. The Indian Ocean is the most important oceanic source before mid-July, with its largest contribution during the period of heavy precipitation, while the Pacific Ocean becomes the more important oceanic source after mid-July. To quantitatively analyze paths of moisture transport to YRB, we proposed the Trajectory Frequency Method. The most intense branch of water vapor transport to YRB stretches from the Arabian Sea through the Bay of Bengal, the Indochina Peninsula, the South China Sea, and South China. The other main transport branches are westerly moisture fluxes to the south of the Tibetan Plateau, cross-equatorial flows north of Australia, and separate branches located in the north and equatorial Pacific Ocean. Significant intraseasonal variability for these branches is presented. Additionally, the importance of the South China Sea for moisture transport to YRB, especially from the sea areas, is emphasized.

  15. Sensitivity of quantitative precipitation forecasts to boundary layer parameterization: a flash flood case study in the Western Mediterranean

    Directory of Open Access Journals (Sweden)

    M. Zampieri

    2005-01-01

Full Text Available The 'Montserrat-2000' severe flash flood event which occurred over Catalonia on 9 and 10 June 2000 is analyzed. Strong precipitation was generated by a mesoscale convective system associated with the development of a cyclone. The location of heavy precipitation depends on the position of the cyclone, which, in turn, is found to be very sensitive to various model characteristics and initial conditions. Numerical simulations of this case study using the hydrostatic BOLAM and the non-hydrostatic MOLOCH models are performed in order to test the effects of different formulations of the boundary layer parameterization: a modified version of the Louis (order 1) scheme and a custom version of the E-ℓ (order 1.5) scheme. Both require a diagnostic formulation of the mixing length, but the use of the turbulent kinetic energy equation in the E-ℓ scheme makes it possible to represent turbulence history and non-locality effects and to formulate a more physically based mixing length. The impact of the two schemes is different in the two models. The hydrostatic model, run at 1/5 degree resolution, is less sensitive, but its quantitative precipitation forecast is in any case unsatisfactory in terms of localization and amount. Conversely, the non-hydrostatic model, run at 1/50 degree resolution, is capable of realistically simulating the timing, position and amount of precipitation, with apparently superior results obtained with the E-ℓ parameterization.

  16. Novel whole brain segmentation and volume estimation using quantitative MRI

    Energy Technology Data Exchange (ETDEWEB)

    West, J. [Linkoeping University, Radiation Physics, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); SyntheticMR AB, Linkoeping (Sweden); Warntjes, J.B.M. [Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); SyntheticMR AB, Linkoeping (Sweden); Linkoeping University and Department of Clinical Physiology UHL, County Council of Oestergoetland, Clinical Physiology, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Lundberg, P. [Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); Linkoeping University and Department of Radiation Physics UHL, County Council of Oestergoetland, Radiation Physics, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Linkoeping University and Department of Radiology UHL, County Council of Oestergoetland, Radiology, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden)

    2012-05-15

    Brain segmentation and volume estimation of grey matter (GM), white matter (WM) and cerebro-spinal fluid (CSF) are important for many neurological applications. Volumetric changes are observed in multiple sclerosis (MS), Alzheimer's disease and dementia, and in normal aging. A novel method is presented to segment brain tissue based on quantitative magnetic resonance imaging (qMRI) of the longitudinal relaxation rate R{sub 1}, the transverse relaxation rate R{sub 2} and the proton density, PD. Previously reported qMRI values for WM, GM and CSF were used to define tissues and a Bloch simulation performed to investigate R{sub 1}, R{sub 2} and PD for tissue mixtures in the presence of noise. Based on the simulations a lookup grid was constructed to relate tissue partial volume to the R{sub 1}-R{sub 2}-PD space. The method was validated in 10 healthy subjects. MRI data were acquired using six resolutions and three geometries. Repeatability for different resolutions was 3.2% for WM, 3.2% for GM, 1.0% for CSF and 2.2% for total brain volume. Repeatability for different geometries was 8.5% for WM, 9.4% for GM, 2.4% for CSF and 2.4% for total brain volume. We propose a new robust qMRI-based approach which we demonstrate in a patient with MS. (orig.)

  17. Novel whole brain segmentation and volume estimation using quantitative MRI

    International Nuclear Information System (INIS)

    West, J.; Warntjes, J.B.M.; Lundberg, P.

    2012-01-01

    Brain segmentation and volume estimation of grey matter (GM), white matter (WM) and cerebro-spinal fluid (CSF) are important for many neurological applications. Volumetric changes are observed in multiple sclerosis (MS), Alzheimer's disease and dementia, and in normal aging. A novel method is presented to segment brain tissue based on quantitative magnetic resonance imaging (qMRI) of the longitudinal relaxation rate R 1 , the transverse relaxation rate R 2 and the proton density, PD. Previously reported qMRI values for WM, GM and CSF were used to define tissues and a Bloch simulation performed to investigate R 1 , R 2 and PD for tissue mixtures in the presence of noise. Based on the simulations a lookup grid was constructed to relate tissue partial volume to the R 1 -R 2 -PD space. The method was validated in 10 healthy subjects. MRI data were acquired using six resolutions and three geometries. Repeatability for different resolutions was 3.2% for WM, 3.2% for GM, 1.0% for CSF and 2.2% for total brain volume. Repeatability for different geometries was 8.5% for WM, 9.4% for GM, 2.4% for CSF and 2.4% for total brain volume. We propose a new robust qMRI-based approach which we demonstrate in a patient with MS. (orig.)

  18. Precipitation estimates and comparison of satellite rainfall data to in situ rain gauge observations to further develop the watershed-modeling capabilities for the Lower Mekong River Basin

    Science.gov (United States)

    Dandridge, C.; Lakshmi, V.; Sutton, J. R. P.; Bolten, J. D.

    2017-12-01

This study focuses on the lower region of the Mekong River Basin (MRB), an area including Burma, Cambodia, Vietnam, Laos, and Thailand. This region is home to expansive agriculture that relies heavily on annual precipitation over the basin for its prosperity. Annual precipitation amounts are regulated by the global monsoon system and therefore vary throughout the year. This research will lead to improved prediction of floods and management of floodwaters for the MRB. We compare different satellite estimates of precipitation to each other and to in-situ precipitation estimates for the basin. These comparisons will help us determine which satellite precipitation estimates are better at predicting precipitation in the MRB and will further our understanding of watershed-modeling capabilities for the basin. In this study we use: 1) NOAA's PERSIANN daily 0.25° precipitation estimate Climate Data Record (CDR), 2) NASA's Tropical Rainfall Measuring Mission (TRMM) daily 0.25° estimate, 3) NASA's Global Precipitation Measurement (GPM) daily 0.1° estimate, and 4) daily precipitation estimates from 488 in-situ stations located in the lower MRB. The PERSIANN CDR provides the longest data record, as it is available from 1983 to present; the TRMM estimate is available from 2000 to present, and the GPM estimates are available from 2015 to present. For this reason we provide several comparisons between the precipitation estimates. Comparisons were made between each satellite product and the in-situ estimates by geographical location and date, using the entire available data record of each satellite product, for daily, monthly, and yearly precipitation estimates. We found that monthly PERSIANN precipitation estimates were able to explain up to 90% of the variability in station precipitation, depending on station location.
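The "explains up to 90% of the variability" figure above corresponds to a coefficient of determination of R² = 0.9 between monthly satellite and station totals. A stdlib sketch of that statistic (illustrative; the study's exact procedure is not specified in the abstract):

```python
def r_squared(sat, gauge):
    """Squared Pearson correlation between a satellite series and a
    collocated gauge series; the fraction of gauge variance 'explained'
    by a linear relationship with the satellite product."""
    n = len(gauge)
    ms = sum(sat) / n
    mg = sum(gauge) / n
    cov = sum((s - ms) * (g - mg) for s, g in zip(sat, gauge))
    var_s = sum((s - ms) ** 2 for s in sat)
    var_g = sum((g - mg) ** 2 for g in gauge)
    return cov * cov / (var_s * var_g)
```

Computed per station over monthly totals, values near 1 indicate the product tracks the gauge's month-to-month variability even if it carries a constant bias.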

  19. Extreme Precipitation Estimation with Typhoon Morakot Using Frequency and Spatial Analysis

    Directory of Open Access Journals (Sweden)

    Hone-Jay Chu

    2011-01-01

    Typhoon Morakot lashed Taiwan and produced copious amounts of precipitation in 2009. From the viewpoint of hydrological statistics, the impact of the precipitation from Typhoon Morakot can be analyzed and discussed using frequency analysis. The frequency curve, fitted mathematically to historical observed data, can be used to estimate the probability of exceedance for runoff events of a certain magnitude. This study integrates frequency analysis and spatial analysis to assess the effect of the Typhoon Morakot event on rainfall frequency in the Gaoping River basin of southern Taiwan. First, extreme rainfall data were collected at sixteen stations for durations of 1, 3, 6, 12, and 24 hours, and an appropriate probability distribution was selected to analyze the impact of the extreme hydrological event. Spatial rainfall patterns for a return period of 200 years with a 24-hour duration, with and without Typhoon Morakot, were then estimated. Results show that for long durations the rainfall amounts estimated with and without the event differ significantly in the frequency analysis. Furthermore, spatial analysis shows that extreme rainfall for a 200-year return period is highly dependent on topography and is smaller in the southwest than in the east. The results not only demonstrate the distinct effect of Typhoon Morakot on frequency analysis, but also provide a reference for future planning of hydrological engineering.
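
    The return-level estimation step can be sketched as follows. The abstract does not state which probability distribution was selected; this sketch assumes a method-of-moments Gumbel fit, a common choice for annual rainfall maxima, with hypothetical data:

```python
import math
import numpy as np

def gumbel_return_level(annual_maxima_mm, return_period_yr):
    """T-year return level from a method-of-moments Gumbel fit
    to a series of annual maximum rainfall depths (mm)."""
    x = np.asarray(annual_maxima_mm, dtype=float)
    scale = math.sqrt(6.0) * x.std(ddof=1) / math.pi
    loc = x.mean() - 0.5772 * scale          # Euler-Mascheroni constant
    p = 1.0 - 1.0 / return_period_yr         # non-exceedance probability
    return loc - scale * math.log(-math.log(p))

# Hypothetical 24-hr annual maxima (mm) at one station.
maxima = [320.0, 410.0, 280.0, 500.0, 350.0, 450.0, 390.0]
print(gumbel_return_level(maxima, 200))
```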

  20. Qualitative and Quantitative Analysis of Heparin during Precipitation by Near-Infrared Spectroscopy

    OpenAIRE

    Lian Li; Jinfeng Wang; Hengchang Zang; Hui Zhang; Wei Jiang; Shang Chen; Fengshan Wang

    2016-01-01

    Heparin is a glycosaminoglycan (GAG) that plays an important role in the blood coagulation system. Its quality is of great importance, so it is necessary to develop a fast analytical method to assess the quality of heparin during the manufacturing process. In this study, the heparin contents of 80 samples collected from five batches during the precipitation process were analysed using near-infrared (NIR) spectroscopy and a chemometrics approach. This was done in order to improve the ef...

  1. Quantitative Analysis on Carbide Precipitation in V-Ti Microalloyed TRIP Steel Containing Aluminum

    Directory of Open Access Journals (Sweden)

    Fu Shiyu

    2016-01-01

    Introducing fine precipitates is an important way to enhance the properties of transformation-induced plasticity (TRIP) steels. In the present work, two V-Ti microalloyed TRIP steels containing different amounts of aluminum were compared. The average size, size distribution, and number of vanadium-titanium carbides were investigated in samples that were cold rolled, quenched after being held at 800°C, and quenched after intercritical annealing at 800°C followed by bainitic isothermal transformation at 400°C, using carbon extraction replicas, twin-jet chemical polishing and transmission electron microscopy. The carbides were identified as (Ti,V)C precipitates in steel A and VC in steel B, respectively, precipitated mainly within ferrite grains. The average equivalent radius was 3-6 nm. Comparison of the experimental results for steels A and B revealed that the low carbon diffusion rate caused by aluminum inhibited the coarsening of the vanadium-titanium carbides. The experimental results also showed that VC carbide dissolution occurred during the intercritical annealing at 800°C.

  2. Depth-area-duration characteristics of storm rainfall in Texas using Multi-Sensor Precipitation Estimates

    Science.gov (United States)

    McEnery, J. A.; Jitkajornwanich, K.

    2012-12-01

    This presentation will describe the methodology and overall system development by which a benchmark dataset of precipitation information has been used to characterize the depth-area-duration relations in heavy rain storms occurring over regions of Texas. Over the past two years project investigators along with the National Weather Service (NWS) West Gulf River Forecast Center (WGRFC) have developed and operated a gateway data system to ingest, store, and disseminate NWS multi-sensor precipitation estimates (MPE). As a pilot project of the Integrated Water Resources Science and Services (IWRSS) initiative, this testbed uses a Structured Query Language (SQL) server to maintain a full archive of current and historic MPE values within the WGRFC service area. These time-series values are made available for public access as web services in the standard WaterML format. Having this volume of information maintained in a comprehensive database now allows the use of relational analysis capabilities within SQL to leverage these multi-sensor precipitation values and produce a valuable derivative product. The area of focus for this study is North Texas, using values that originated from the WGRFC, one of three River Forecast Centers currently represented in the holdings of this data system. Over the past two decades, NEXRAD radar has dramatically improved the ability to record rainfall. The resulting hourly MPE values, distributed over an approximately 4 km by 4 km grid, are considered by the NWS to be the "best estimate" of rainfall. The data server provides an accepted standard interface for internet access to the largest time-series dataset of NEXRAD-based MPE values ever assembled. An automated script has been written to search and extract storms over the 18-year period of record from the contents of this massive historical precipitation database. Not only can it extract site-specific storms, but also duration-specific storms and
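
    The storm-extraction logic described for the MPE archive (implemented there against the SQL database) can be illustrated in Python: hourly values are grouped into events, and an event ends after a sustained dry gap. The gap length and wet threshold below are illustrative assumptions, not the project's actual criteria:

```python
def extract_storms(hourly_mm, min_gap_hr=6, wet_threshold_mm=0.1):
    """Split an hourly precipitation series into storm events: a storm
    ends when at least `min_gap_hr` consecutive hours fall below the
    wet threshold. Returns a list of (start_index, end_index, total_mm)."""
    storms, start, dry, total = [], None, 0, 0.0
    last_wet = None
    for i, v in enumerate(hourly_mm):
        if v >= wet_threshold_mm:
            if start is None:
                start, total = i, 0.0
            total += v
            dry = 0
            last_wet = i
        elif start is not None:
            dry += 1
            if dry >= min_gap_hr:          # dry spell long enough: close the storm
                storms.append((start, last_wet, total))
                start = None
    if start is not None:                   # series ended inside a storm
        storms.append((start, last_wet, total))
    return storms

print(extract_storms([0, 5, 3, 0, 0, 0, 0, 0, 0, 2, 1, 0]))
```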

  3. Radar rainfall estimation for the identification of debris-flow precipitation thresholds

    Science.gov (United States)

    Marra, Francesco; Nikolopoulos, Efthymios I.; Creutin, Jean-Dominique; Borga, Marco

    2014-05-01

    Identification of rainfall thresholds for the prediction of debris-flow occurrence is a common approach for warning procedures. Traditionally the debris-flow triggering rainfall is derived from the closest available raingauge. However, the spatial and temporal variability of intense rainfall in mountainous areas, where debris flows take place, may lead to large uncertainty in point-based estimates. Nikolopoulos et al. (2014) have shown that this uncertainty translates into a systematic underestimation of the rainfall thresholds, leading to a steep degradation of the performance of the rainfall threshold for identification of debris-flow occurrence under operational conditions. A potential solution to this limitation lies in the use of rainfall estimates from weather radar. Thanks to their high spatial and temporal resolutions, these estimates offer the advantage of providing rainfall information over the actual debris-flow location. The aim of this study is to analyze the value of radar precipitation estimates for the identification of debris-flow precipitation thresholds. Seven rainfall events that triggered debris flows in the Adige river basin (Eastern Italian Alps) are analyzed using data from a dense raingauge network and a C-band weather radar. Radar data are processed using a set of correction algorithms specifically developed for weather radar rainfall applications in mountainous areas. Rainfall thresholds for the triggering of debris flows are identified in the form of average intensity-duration power-law curves using a frequentist approach, based on both radar rainfall estimates and raingauge data. Sampling uncertainty associated with the derivation of the thresholds is assessed using a bootstrap technique (Peruccacci et al. 2012). Results show that radar-based rainfall thresholds largely exceed those obtained using raingauge data.
    Moreover, the differences between the two thresholds may be related to the spatial characteristics (i.e., spatial
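
    The intensity-duration threshold fitting mentioned above (power-law curves identified in a frequentist framework) can be sketched as a log-log least-squares fit. The bootstrap uncertainty step is omitted here, and in practice the inputs would be the triggering event durations and mean intensities:

```python
import numpy as np

def fit_id_threshold(durations_hr, intensities_mm_hr):
    """Fit the intensity-duration power law I = a * D**b by linear
    least squares in log-log space; returns (a, b)."""
    logd = np.log(np.asarray(durations_hr, dtype=float))
    logi = np.log(np.asarray(intensities_mm_hr, dtype=float))
    b, loga = np.polyfit(logd, logi, 1)    # slope = b, intercept = log(a)
    return float(np.exp(loga)), float(b)
```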

  4. Quantitative estimates of the volatility of ambient organic aerosol

    Directory of Open Access Journals (Sweden)

    C. D. Cappa

    2010-06-01

    Measurements of the sensitivity of organic aerosol (OA, and its components) mass to changes in temperature were recently reported by Huffman et al. (2009) using a tandem thermodenuder-aerosol mass spectrometer (TD-AMS) system in Mexico City and the Los Angeles area. Here, we use these measurements to derive quantitative estimates of aerosol volatility within the framework of absorptive partitioning theory using a kinetic model of aerosol evaporation in the TD. OA volatility distributions (or "basis-sets") are determined using several assumptions as to the enthalpy of vaporization (ΔHvap). We present two definitions of "non-volatile OA," one being a global and one a local definition. Based on these definitions, our analysis indicates that a substantial fraction of the organic aerosol is comprised of non-volatile components that will not evaporate under any atmospheric conditions; on the order of 50-80% when the most realistic ΔHvap assumptions are considered. The sensitivity of the total OA mass to dilution and ambient changes in temperature has been assessed for the various ΔHvap assumptions. The temperature sensitivity is relatively independent of the particular ΔHvap assumptions whereas dilution sensitivity is found to be greatest for the low (ΔHvap = 50 kJ/mol) and lowest for the high (ΔHvap = 150 kJ/mol) assumptions. This difference arises from the high ΔHvap assumptions yielding volatility distributions with a greater fraction of non-volatile material than the low ΔHvap assumptions. If the observations are fit using a 1 or 2-component model the sensitivity of the OA to dilution is unrealistically high. An empirical method introduced by Faulhaber et al. (2009) has also been used to independently estimate a volatility distribution for the ambient OA and is found to give results consistent with the

  5. Quantitative estimates of the volatility of ambient organic aerosol

    Science.gov (United States)

    Cappa, C. D.; Jimenez, J. L.

    2010-06-01

    Measurements of the sensitivity of organic aerosol (OA, and its components) mass to changes in temperature were recently reported by Huffman et al. (2009) using a tandem thermodenuder-aerosol mass spectrometer (TD-AMS) system in Mexico City and the Los Angeles area. Here, we use these measurements to derive quantitative estimates of aerosol volatility within the framework of absorptive partitioning theory using a kinetic model of aerosol evaporation in the TD. OA volatility distributions (or "basis-sets") are determined using several assumptions as to the enthalpy of vaporization (ΔHvap). We present two definitions of "non-volatile OA," one being a global and one a local definition. Based on these definitions, our analysis indicates that a substantial fraction of the organic aerosol is comprised of non-volatile components that will not evaporate under any atmospheric conditions; on the order of 50-80% when the most realistic ΔHvap assumptions are considered. The sensitivity of the total OA mass to dilution and ambient changes in temperature has been assessed for the various ΔHvap assumptions. The temperature sensitivity is relatively independent of the particular ΔHvap assumptions whereas dilution sensitivity is found to be greatest for the low (ΔHvap = 50 kJ/mol) and lowest for the high (ΔHvap = 150 kJ/mol) assumptions. This difference arises from the high ΔHvap assumptions yielding volatility distributions with a greater fraction of non-volatile material than the low ΔHvap assumptions. If the observations are fit using a 1 or 2-component model the sensitivity of the OA to dilution is unrealistically high. An empirical method introduced by Faulhaber et al. (2009) has also been used to independently estimate a volatility distribution for the ambient OA and is found to give results consistent with the high and variable ΔHvap assumptions. Our results also show that the amount of semivolatile gas-phase organics in equilibrium with the OA could range from ~20
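
    The partitioning calculation underlying such volatility basis-set analyses can be sketched as follows: each volatility bin's particle-phase fraction follows from its saturation concentration C*, shifted in temperature via the Clausius-Clapeyron relation with the assumed ΔHvap. The numbers below are illustrative, not values from the study:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def cstar_at_T(cstar_300, dhvap_kj_mol, T):
    """Clausius-Clapeyron shift of the saturation concentration C*
    (ug m^-3) from its 300 K reference value to temperature T (K)."""
    dh = dhvap_kj_mol * 1e3
    return cstar_300 * (300.0 / T) * math.exp(-dh / R * (1.0 / T - 1.0 / 300.0))

def particle_fraction(cstar, c_oa):
    """Equilibrium particle-phase fraction of one volatility bin given
    the total absorbing OA concentration c_oa (ug m^-3)."""
    return 1.0 / (1.0 + cstar / c_oa)

# Illustrative bin: C*(300 K) = 1 ug m^-3, dHvap = 100 kJ/mol, 10 ug m^-3 OA.
print(particle_fraction(cstar_at_T(1.0, 100.0, 290.0), 10.0))
```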

  6. Quantitative examination of carbide and sulphide precipitates in chemically complex steels processed by direct strip casting

    Energy Technology Data Exchange (ETDEWEB)

    Dorin, Thomas, E-mail: thomas.dorin@deakin.edu.au [Deakin University, Pigdons Road, Geelong, Victoria, 3216 (Australia); Wood, Kathleen [Australian Nuclear Science and Technology Organisation, Bragg Institute, New South Wales, 2234, Menai (Australia); Taylor, Adam; Hodgson, Peter; Stanford, Nicole [Deakin University, Pigdons Road, Geelong, Victoria, 3216 (Australia)

    2016-02-15

    A high strength low alloy steel composition has been melted and processed by two different routes: simulated direct strip casting and slow cooled ingot casting. The microstructures were examined with scanning and transmission electron microscopy, atom probe tomography and small angle neutron scattering (SANS). The formation of cementite (Fe₃C), manganese sulphides (MnS) and niobium carbo-nitrides (Nb(C,N)) was investigated in both casting conditions. The sulphides were found to be significantly refined by the higher cooling rate, developing an average diameter of only 100 nm in the fast cooled sample and a diameter too large to be measured with SANS (> 1.1 μm) in the slow cooled condition. Slow cooling resulted in the development of classical Nb(C,N) precipitation, with an average diameter of 7.2 nm. However, after rapid cooling both the SANS and atom probe tomography data indicated that the Nb was retained in the matrix as a random solid solution. There was also some evidence that O, N and S are retained in solid solution in levels not found during conventional processing. - Highlights: • The influence of cooling rate on microstructure is investigated in a HSLA steel. • SANS, TEM and APT are used to characterise the sulphides and Nb(C,N) precipitates. • The slow cooling rate results in the formation of Nb(C,N) precipitates. • The fast cooling rate results in a microstructure supersaturated in Nb, C and N. • The sulphides are 100 nm in the fast cooled sample and > 1 μm in the slow cooled one.

  7. Estimating and forecasting the precipitable water vapor from GOES satellite data at high altitude sites

    Science.gov (United States)

    Marín, Julio C.; Pozo, Diana; Curé, Michel

    2015-01-01

    In this work, we describe a method to estimate the precipitable water vapor (PWV) from Geostationary Operational Environmental Satellite (GOES) data at high altitude sites. The method was applied at the Atacama Pathfinder Experiment (APEX) and Cerro Toco sites, located above 5000 m altitude on the Chajnantor plateau in the north of Chile. It was validated using GOES-12 satellite data over the range 0-1.2 mm, since submillimeter/millimeter astronomical observations are only useful within this PWV range. The PWV estimated from GOES and the Final Analyses (FNL) at APEX for 2007 and 2009 shows root mean square error values of 0.23 mm and 0.36 mm over the ranges 0-0.4 mm and 0.4-1.2 mm, respectively. However, absolute relative errors of 51% and 33% were found over these PWV ranges, respectively. We recommend using high-resolution thermodynamic profiles from the Global Forecast System (GFS) model to estimate the PWV from GOES data, since they are available every three hours and at an earlier time than the FNL data. The PWV estimated from GOES/GFS agrees better with the observed PWV at both sites during nighttime; the largest errors occur during daytime. Short-term PWV forecasts were implemented at both sites by applying a simple persistence method to the PWV estimated from GOES/GFS. Evaluation of the 12 h and 24 h PWV forecasts from August to October 2009 indicates that 25% of them show very good agreement with observations, whereas 50% show reasonably good agreement. Transmission uncertainties calculated for PWV estimations and forecasts over the studied sites are larger over the range 0-0.4 mm than over the range 0.4-1.2 mm. Thus, the method can be used over the latter interval with more confidence.
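
    A persistence forecast and its error evaluation are simple to state in code; the PWV values below are hypothetical, for illustration only:

```python
import numpy as np

def persistence_forecast(pwv_series, lead_steps):
    """Persistence method: the forecast for time t is simply the value
    observed `lead_steps` earlier, so the series is shifted forward."""
    return pwv_series[:-lead_steps]

def rmse(forecast, observed):
    """Root mean square error between forecast and observed series."""
    f, o = np.asarray(forecast, float), np.asarray(observed, float)
    return float(np.sqrt(np.mean((f - o) ** 2)))

pwv = [0.3, 0.35, 0.4, 0.5, 0.45]      # hypothetical PWV series (mm)
fc = persistence_forecast(pwv, 1)      # 1-step-ahead persistence
print(rmse(fc, pwv[1:]))
```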

  8. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    Science.gov (United States)

    Tan, Elcin

    A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for the 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the highest-ranked flood event, the 1997 case, whereas the WRF model is validated for all 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 combinations of microphysics, atmospheric boundary layer, and cumulus parameterization schemes. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on the 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation.
Moreover, climate change effect on precipitation is discussed by emphasizing temperature increase in order to determine the

  9. A quantitative framework for estimating water resources in India

    Digital Repository Service at National Institute of Oceanography (India)

    Shankar, D.; Kotamraju, V.; Shetye, S.R

    of information on the variables associated with hydrology, and second, the absence of an easily accessible quantitative framework to put these variables in perspective. In this paper, we discuss a framework that has been assembled to address both these issues...

  10. Using damage data to estimate the risk from summer convective precipitation extremes

    Science.gov (United States)

    Schroeer, Katharina; Tye, Mari

    2017-04-01

    model to test whether the relationship between extreme rainfall events and damages is robust enough to estimate a potential underrepresentation of high intensity rainfall events in ungauged areas. Risk-relevant factors of socio-economic vulnerability, land cover, streamflow data, and weather type information are included to improve and sharpen the analysis. Within this study, we first aim to identify which rainfall events are most damaging and which factors affect the damages - seen as a proxy for the vulnerability - related to summer convective rainfall extremes in different catchment types. Secondly, we aim to detect potentially unreported damaging rainfall events and estimate the likelihood of such cases. We anticipate this damage perspective on summertime extreme convective precipitation to be beneficial for risk assessment, uncertainty management, and decision making with respect to weather and climate extremes on the regional-to-local level.

  11. Quantitative estimation of seafloor features from photographs and their application to nodule mining

    Digital Repository Service at National Institute of Oceanography (India)

    Sharma, R.

    Methods developed for quantitative estimation of seafloor features from seabed photographs and their application for estimation of nodule sizes, coverage, abundance, burial, sediment thickness, extent of rock exposure, density of benthic organisms...

  12. Atmospheric water vapor transport: Estimation of continental precipitation recycling and parameterization of a simple climate model. M.S. Thesis

    Science.gov (United States)

    Brubaker, Kaye L.; Entekhabi, Dara; Eagleson, Peter S.

    1991-01-01

    The advective transport of atmospheric water vapor and its role in global hydrology and the water balance of continental regions are discussed and explored. The data set consists of ten years of global wind and humidity observations interpolated onto a regular grid by objective analysis. Atmospheric water vapor fluxes across the boundaries of selected continental regions are displayed graphically. The water vapor flux data are used to investigate the sources of continental precipitation. The total amount of water that precipitates on large continental regions is supplied by two mechanisms: (1) advection from surrounding areas external to the region; and (2) evaporation and transpiration from the land surface, i.e., recycling of precipitation over the continental area. The degree to which regional precipitation is supplied by recycled moisture is a potentially significant climate feedback mechanism and land surface-atmosphere interaction, which may contribute to the persistence and intensification of droughts. A simplified model of the atmospheric moisture over continents and simultaneous estimates of regional precipitation are employed to estimate, for several large continental regions, the fraction of precipitation that is locally derived. In a separate but related study, estimates of ocean to land water vapor transport are used to parameterize an existing simple climate model, containing both land and ocean surfaces, that is intended to mimic the dynamics of continental climates.
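
    The recycled fraction mentioned above can be illustrated with a deliberately simplified two-source mixing estimate. Actual bulk recycling models involve the regional geometry and the moisture flux fields; this sketch conveys only the core idea and is not the study's formulation:

```python
def recycling_ratio(evap_flux, advected_flux):
    """Simplified bulk estimate of the locally recycled fraction of
    precipitation: moisture of local (evapotranspiration) origin divided
    by total available moisture (local plus advected). Both fluxes must
    be in the same units (e.g. kg per second over the region)."""
    return evap_flux / (evap_flux + advected_flux)

# Illustrative fluxes: equal local and advected supply -> 50% recycled.
print(recycling_ratio(1.0, 1.0))
```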

  13. Application of Statistical Methods of Rain Rate Estimation to Data From The TRMM Precipitation Radar

    Science.gov (United States)

    Meneghini, R.; Jones, J. A.; Iguchi, T.; Okamoto, K.; Liao, L.; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    The TRMM Precipitation Radar is well suited to statistical methods in that the measurements over any given region are sparsely sampled in time. Moreover, the instantaneous rain rate estimates are often of limited accuracy at high rain rates because of attenuation effects and at light rain rates because of receiver sensitivity. For the estimation of the time-averaged rain characteristics over an area both errors are relevant. By enlarging the space-time region over which the data are collected, the sampling error can be reduced. However, the bias and distortion of the estimated rain distribution generally will remain if estimates at the high and low rain rates are not corrected. In this paper we use the TRMM PR data to investigate the behavior of two statistical methods whose purpose is to estimate the rain rate over large space-time domains. Examination of large-scale rain characteristics provides a useful starting point. The high correlation between the mean and standard deviation of rain rate implies that the conditional distribution of this quantity can be approximated by a one-parameter distribution. This property is used to explore the behavior of the area-time-integral (ATI) methods, where the fractional area above a threshold is related to the mean rain rate. In the usual application of the ATI method a correlation is established between these quantities. However, if a particular form of the rain rate distribution is assumed and if the ratio of the mean to standard deviation is known, then not only the mean but the full distribution can be extracted from a measurement of fractional area above a threshold. The second method is an extension of this idea where the distribution is estimated from data over a range of rain rates chosen in an intermediate range where the effects of attenuation and poor sensitivity can be neglected. The advantage of estimating the distribution itself rather than the mean value is that it yields the fraction of rain contributed by
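
    The ATI idea described above (recovering the mean rain rate from the fractional area above a threshold, given an assumed one-parameter conditional distribution) can be sketched with a lognormal assumption. The lognormal choice and the function name are illustrative assumptions, not necessarily the distribution used in the paper:

```python
import math
from statistics import NormalDist

def mean_rain_from_area_fraction(frac_above, threshold, cv):
    """Recover the conditional mean rain rate from the fractional area
    above `threshold`, assuming rain rates are lognormal with known
    coefficient of variation cv = std/mean (a one-parameter family)."""
    sigma2 = math.log(1.0 + cv * cv)
    sigma = math.sqrt(sigma2)
    # P(R > threshold) = 1 - Phi((ln t - mu) / sigma)  ->  solve for mu.
    z = NormalDist().inv_cdf(1.0 - frac_above)
    mu = math.log(threshold) - sigma * z
    return math.exp(mu + 0.5 * sigma2)   # lognormal mean
```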

  14. A uniform quantitative stiff stability estimate for BDF schemes

    Directory of Open Access Journals (Sweden)

    Winfried Auzinger

    2006-01-01

    The concepts of stability regions, \(A\)- and \(A(\alpha)\)-stability - albeit based on scalar models - turned out to be essential for the identification of implicit methods suitable for the integration of stiff ODEs. However, for multistep methods, knowledge of the stability region provides no information on the quantitative stability behavior of the scheme. In this paper we fill this gap for the important class of Backward Differentiation Formulas (BDF). Quantitative stability bounds are derived which are uniformly valid in the stability region of the method. Our analysis is based on a study of the separation of the characteristic roots and a special similarity decomposition of the associated companion matrix.

  15. Oxygen and Hydrogen Isotopes of Precipitation in a Rocky Mountainous Area of Beijing to Distinguish and Estimate Spring Recharge

    Directory of Open Access Journals (Sweden)

    Ziqiang Liu

    2018-05-01

    Stable isotopes of oxygen and hydrogen were used to estimate seasonal contributions of precipitation to natural spring recharge in Beijing's mountainous area. Isotopic compositions were shown to be more positive in the dry season and more negative in the wet season, due to the seasonal patterns in the amount of precipitation. The local meteoric water line (LMWL) was δ2H = 7.0 δ18O − 2.3 for the dry season and δ2H = 5.9 δ18O − 10.4 for the wet season. The LMWL in the two seasons had a lower slope and intercept than the Global Meteoric Water Line (p < 0.01). The slope and intercept of the LMWL in the wet season were lower than those in the dry season because of the effect of precipitation amount during the wet season (p < 0.01). Mean precipitation-amount effects of −15‰ and −2‰ per 100 mm change in the amount of precipitation for δ2H and δ18O, respectively, were obtained from the monthly total precipitation and its average isotopic value. The isotopic composition of precipitation decreased as precipitation duration increased. Little change in the isotopic composition of the natural spring was found. By employing isotope conservation of mass, it could be derived that, on average, approximately 7.2% of the natural spring recharge came from dry-season precipitation and the remaining 92.8% came from wet-season precipitation.
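
    The isotope mass-balance step reduces to a two-end-member mixing calculation. The δ values below are hypothetical, chosen only so the example lands near the reported dry-season fraction of about 7.2%; they are not the study's data:

```python
def dry_season_fraction(delta_spring, delta_dry, delta_wet):
    """Two-end-member isotope mass balance: fraction of spring recharge
    supplied by dry-season precipitation, from the spring's isotopic
    composition and the two seasonal precipitation end members."""
    return (delta_spring - delta_wet) / (delta_dry - delta_wet)

# Hypothetical delta-18O values (permil).
print(dry_season_fraction(-9.424, -2.0, -10.0))
```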

  16. Methodologies for quantitative systems pharmacology (QSP) models : Design and Estimation

    NARCIS (Netherlands)

    Ribba, B.; Grimm, Hp; Agoram, B.; Davies, M.R.; Gadkar, K.; Niederer, S.; van Riel, N.; Timmis, J.; van der Graaf, Ph.

    2017-01-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early

  17. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation

    NARCIS (Netherlands)

    Ribba, B.; Grimm, H. P.; Agoram, B.; Davies, M. R.; Gadkar, K.; Niederer, S.; van Riel, N.; Timmis, J.; van der Graaf, P. H.

    2017-01-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early

  18. The concurrent multiplicative-additive approach for gauge-radar/satellite multisensor precipitation estimates

    Science.gov (United States)

    Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.

    2010-12-01

    Objective analysis schemes (OAS), also called ``successive correction methods'' or ``observation nudging'', have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based raingauge networks. However, unlike the more complex geostatistical approaches, the OAS techniques for this use are not optimized. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be soundly done. Here, we propose a new procedure (the concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of the CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through successive correction of residuals resulting from a Gaussian kernel smoother applied to spatial samples. The CMA-OAS first poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level. The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly to the first guess, and the relaxation shape is obtained from the sequential
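
    The successive-correction core of an OAS (before the CMA bias decomposition is added) can be sketched as a Barnes-type analysis: start from the radar first guess and iteratively spread gauge-minus-analysis residuals onto the grid with Gaussian weights of decreasing radius. The nearest-node sampling and the radii here are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

def successive_correction(grid_xy, first_guess, obs_xy, obs_val,
                          radii=(50.0, 25.0, 10.0)):
    """Barnes-style successive correction of a background field.

    grid_xy: (n_grid, 2) grid coordinates (km); first_guess: (n_grid,)
    background (e.g. radar rainfall); obs_xy/obs_val: gauge locations
    and values; radii: decreasing Gaussian length scales (km)."""
    analysis = np.asarray(first_guess, float).copy()
    gx = np.asarray(grid_xy, float)
    ox = np.asarray(obs_xy, float)
    val = np.asarray(obs_val, float)
    # distances between every gauge and every grid node: (n_obs, n_grid)
    d = np.linalg.norm(ox[:, None, :] - gx[None, :, :], axis=2)
    for r in radii:
        at_obs = analysis[d.argmin(axis=1)]          # analysis at gauges (nearest node)
        resid = val - at_obs                         # gauge-minus-analysis residuals
        w = np.exp(-(d ** 2) / (2.0 * r * r)).T      # (n_grid, n_obs) Gaussian weights
        analysis += (w * resid).sum(axis=1) / (w.sum(axis=1) + 1e-12)
    return analysis
```

    With each pass the analysis is pulled toward the gauges near them and relaxes to the first guess far from any observation, which is the "nudging" behaviour the abstract describes.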

  19. Impact of Precipitating Ice Hydrometeors on Longwave Radiative Effect Estimated by a Global Cloud-System Resolving Model

    Science.gov (United States)

    Chen, Ying-Wen; Seiki, Tatsuya; Kodama, Chihiro; Satoh, Masaki; Noda, Akira T.

    2018-02-01

    Satellite observation and general circulation model (GCM) studies suggest that precipitating ice makes nonnegligible contributions to the radiation balance of the Earth. However, in most GCMs, precipitating ice is diagnosed and its radiative effects are not taken into account. Here we examine the longwave radiative impact of precipitating ice using a global nonhydrostatic atmospheric model with a double-moment cloud microphysics scheme. An off-line radiation model is employed to determine cloud radiative effects according to the amount and altitude of each type of ice hydrometeor. Results show that the snow radiative effect reaches 2 W m-2 in the tropics, which is about half the value estimated by previous studies. This effect is strongly dependent on the vertical separation of ice categories and is partially generated by differences in terminal velocities, which are not represented in GCMs with diagnostic precipitating ice. Results from sensitivity experiments that artificially change the categories and altitudes of precipitating ice show that the simulated longwave heating profile and longwave radiation field are sensitive to the treatment of precipitating ice in models. This study emphasizes the importance of incorporating appropriate treatments for the radiative effects of precipitating ice in cloud and radiation schemes in GCMs in order to capture the cloud radiative effects of upper level clouds.

  20. ESTIMATION OF PHASE DELAY DUE TO PRECIPITABLE WATER FOR DINSARBASED LAND DEFORMATION MONITORING

    Directory of Open Access Journals (Sweden)

    J. Susaki

    2017-09-01

    Full Text Available In this paper, we present a method for using the estimated precipitable water (PW to mitigate atmospheric phase delay in order to improve the accuracy of land-deformation assessment with differential interferometric synthetic aperture radar (DInSAR. The phase difference obtained from multi-temporal synthetic aperture radar images contains errors of several types, and the atmospheric phase delay can be an obstacle to estimating surface subsidence. In this study, we calculate PW from external meteorological data. Firstly, we interpolate the data with regard to their spatial and temporal resolutions. Then, assuming a range direction between a target pixel and the sensor, we derive the cumulative amount of differential PW at the height of the slant range vector at pixels along that direction. The atmospheric phase delay of each interferogram is acquired by taking a residual after a preliminary determination of the linear deformation velocity and digital elevation model (DEM error, and by applying high-pass temporal and low-pass spatial filters. Next, we estimate a regression model that connects the cumulative amount of PW and the atmospheric phase delay. Finally, we subtract the contribution of the atmospheric phase delay from the phase difference of the interferogram, and determine the linear deformation velocity and DEM error. The experimental results show a consistent relationship between the cumulative amount of differential PW and the atmospheric phase delay. An improvement in land-deformation accuracy is observed at a point at which the deformation is relatively large. Although further investigation is necessary, we conclude at this stage that the proposed approach has the potential to improve the accuracy of the DInSAR technique.
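
    The correction step described above, regressing the atmospheric phase delay on the cumulative differential PW and subtracting the fitted contribution, can be sketched with synthetic numbers. The slope, offset, and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
pw = rng.uniform(10.0, 40.0, 100)          # cumulative differential PW (mm), synthetic
true_slope, true_offset = 0.12, -0.5       # hypothetical rad per mm, rad
phase_delay = true_slope * pw + true_offset + rng.normal(0.0, 0.02, 100)

# estimate the regression model connecting PW and atmospheric phase delay ...
slope, offset = np.polyfit(pw, phase_delay, 1)

# ... then subtract the fitted atmospheric contribution from the phase difference
corrected = phase_delay - (slope * pw + offset)
```

After subtraction only the (small) non-atmospheric residual remains, which is what allows the deformation velocity and DEM error to be re-determined.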

  1. Quantitative CT: technique dependence of volume estimation on pulmonary nodules

    Science.gov (United States)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan

    2012-03-01

    Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.

  2. Quantitative transmission electron microscopy and atom probe tomography study of Ag-dependent precipitation of Ω phase in Al-Cu-Mg alloys

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Song; Ying, Puyou [Key Laboratory of Nonferrous Metal Materials Science and Engineering, Ministry of Education, Central South University, Changsha 410083 (China); School of Material Science and Engineering, Central South University, Changsha 410083 (China); Liu, Zhiyi, E-mail: liuzhiyi@csu.edu.cn [Key Laboratory of Nonferrous Metal Materials Science and Engineering, Ministry of Education, Central South University, Changsha 410083 (China); School of Material Science and Engineering, Central South University, Changsha 410083 (China); Wang, Jian; Li, Junlin [Key Laboratory of Nonferrous Metal Materials Science and Engineering, Ministry of Education, Central South University, Changsha 410083 (China); School of Material Science and Engineering, Central South University, Changsha 410083 (China)

    2017-02-27

    The close association between Ω precipitation and various Ag additions is systematically investigated by quantitative transmission electron microscopy and atom probe tomography analysis. Our results suggest that the precipitation of the Ω phase is strongly dependent on the Ag content. Increasing the bulk Ag content favors denser Ω precipitation and hence leads to a greater age-hardening response of the Al-Cu-Mg-Ag alloy. This phenomenon, as revealed by proximity histograms, is directly related to the greater abundance of Ag solutes within Ω precursors, which lowers the nucleation barrier and increases the nucleation rate of the Ω phase, finally contributing to the enhanced Ω precipitation. It is also noted that increasing Ag remarkably restricts the precipitation of the θ' phase.

  3. Quantitative reconstruction of precipitation and runoff during MIS 5a, MIS 3a, and Holocene, arid China

    Science.gov (United States)

    Liu, Yuan; Li, Yu

    2017-11-01

    Marine oxygen isotope stage 5a (MIS 5a), MIS 3a, and the Holocene are key periods in paleoclimate studies. Numerous studies have addressed these periods, but most have been qualitative, and quantitative data are often lacking. In this paper, based on chronological evidence from a paleolake in arid China, the MIS 5a, MIS 3a, and Holocene lake areas, the precipitation over the drainage area, and the runoff of the rivers flowing into the lake were reconstructed with ArcGIS spatial analysis software and an improved water and energy balance model calibrated with modern meteorological and hydrological data from the Shiyang River drainage basin. The results showed that the paleolake areas were 1824, 1124, and 628 km2 for MIS 5a, MIS 3a, and the Holocene, respectively; the corresponding paleoprecipitation and runoff were 293.992-297.433, 271.105-274.294, and 249.431-252.373 mm and 29.103 × 108-29.496 × 108, 18.810 × 108-18.959 × 108, and 10.637 × 108-10.777 × 108 mm, respectively. These quantitative data not only strengthen our understanding of paleoclimatic characteristics but also highlight the complexity and diversity of the climate system.

  4. Introducing uncertainty of radar-rainfall estimates to the verification of mesoscale model precipitation forecasts

    Directory of Open Access Journals (Sweden)

    M. P. Mittermaier

    2008-05-01

    Full Text Available A simple measure of the uncertainty associated with using radar-derived rainfall estimates as "truth" has been introduced to the Numerical Weather Prediction (NWP) verification process to assess the effect on forecast skill and errors. Deterministic precipitation forecasts from the mesoscale version of the UK Met Office Unified Model for a two-day high-impact event and for a month were verified at the daily and six-hourly time scales using a spatially based intensity-scale method and various traditional skill scores such as the Equitable Threat Score (ETS) and the log-odds ratio. Radar-rainfall accumulations from the UK Nimrod radar composite were used.

    The results show that the inclusion of uncertainty has some effect, shifting the forecast errors and skill. The study also allowed for the comparison of results from the intensity-scale method and traditional skill scores. It showed that the two methods complement each other, one detailing the scale and rainfall accumulation thresholds where the errors occur, the other showing how skillful the forecast is. It was also found that for the six-hourly forecasts the error distributions remain similar with forecast lead time but skill decreases. This highlights the difference between forecast error and forecast skill, and that they are not necessarily the same.
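
    Both traditional scores mentioned above are computed from a 2×2 contingency table of forecast versus observed rain occurrence. A minimal sketch of the standard definitions (the counts are invented for illustration):

```python
import math

def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS: threat score corrected for hits expected by random chance."""
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

def log_odds_ratio(hits, misses, false_alarms, correct_negatives):
    """Natural log of the odds ratio of the contingency table."""
    return math.log((hits * correct_negatives) / (misses * false_alarms))

# toy contingency table for one rainfall threshold
ets = equitable_threat_score(50, 10, 20, 920)
lor = log_odds_ratio(50, 10, 20, 920)
```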

  5. Quantitative hyperbolicity estimates in one-dimensional dynamics

    International Nuclear Information System (INIS)

    Day, S; Kokubu, H; Pilarczyk, P; Luzzatto, S; Mischaikow, K; Oka, H

    2008-01-01

    We develop a rigorous computational method for estimating the Lyapunov exponents in uniformly expanding regions of the phase space for one-dimensional maps. Our method uses rigorous numerics and graph algorithms to provide results that are mathematically meaningful and can be achieved in an efficient way
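
    For intuition only, a non-rigorous floating-point counterpart of that computation can be sketched (the paper's method uses rigorous numerics and graph algorithms, which this sketch does not attempt): the Lyapunov exponent of a one-dimensional map is the orbit average of log|f'(x)|, and for the logistic map f(x) = 4x(1 - x) it equals ln 2.

```python
import math

def lyapunov_exponent(f, df, x0, n_iter=200_000, n_transient=1_000):
    """Time-average of log|f'(x)| along an orbit of a one-dimensional map."""
    x = x0
    for _ in range(n_transient):       # discard the transient
        x = f(x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(max(abs(df(x)), 1e-300))  # guard against df(x) == 0
        x = f(x)
    return total / n_iter

f = lambda x: 4.0 * x * (1.0 - x)      # logistic map at r = 4
df = lambda x: 4.0 - 8.0 * x
lam = lyapunov_exponent(f, df, 0.2)    # expected to approach ln 2
```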

  6. Qualitative and quantitative cost estimation : a methodology analysis

    NARCIS (Netherlands)

    Aram, S.; Eastman, C.; Beetz, J.; Issa, R.; Flood, I.

    2014-01-01

    This paper reports on the first part of ongoing research with the goal of designing a framework and a knowledge-based system for 3D parametric model-based quantity take-off and cost estimation in the Architecture, Engineering and Construction (AEC) industry. The authors have studied and analyzed

  7. On-line estimation of the dissolved zinc concentration during ZnS precipitation in a CSTR

    NARCIS (Netherlands)

    Grootscholten, T.I.M.; Keesman, K.J.; Lens, P.N.L.

    2007-01-01

    Abstract In this paper a method is presented to estimate the reaction term of zinc sulphide precipitation and the zinc concentration in a CSTR, using the read-out signal of a sulphide selective electrode. The reaction between zinc and sulphide is described by a non-linear model and therefore

  8. Where Does the Irrigation Water Go? An Estimate of the Contribution of Irrigation to Precipitation Using MERRA

    Science.gov (United States)

    Wei, Jiangfeng; Dirmeyer, Paul A.; Wisser, Dominik; Bosilovich, Michael G.; Mocko, David M.

    2013-01-01

    Irrigation is an important human activity that may impact local and regional climate, but current climate model simulations and data assimilation systems generally do not explicitly include it. The European Centre for Medium-Range Weather Forecasts (ECMWF) Interim Re-Analysis (ERA-Interim) shows more irrigation signal in surface evapotranspiration (ET) than the Modern-Era Retrospective Analysis for Research and Applications (MERRA), because ERA-Interim adjusts soil moisture according to the observed surface temperature and humidity while MERRA has no explicit consideration of irrigation at the surface. However, when compared with the results from a hydrological model with detailed treatment of agriculture, the ET from both reanalyses shows large deficiencies in capturing the impact of irrigation. Here, a back-trajectory method is used to estimate the contribution of irrigation to precipitation over local and surrounding regions, using MERRA with observation-based corrections and added irrigation-caused ET increase from the hydrological model. Results show substantial contributions of irrigation to precipitation over heavily irrigated regions in Asia, but the precipitation increase is much less than the ET increase over most areas, indicating that irrigation could lead to water deficits over these regions. For the same increase in ET, precipitation increases are larger over wetter areas where convection is more easily triggered, but the percentage increase in precipitation is similar for different areas. There are substantial regional differences in the patterns of irrigation impact, but, for all the studied regions, the highest percentage contribution to precipitation is over local land.

  9. Comparing the impact of time displaced and biased precipitation estimates for online updated urban runoff models

    DEFF Research Database (Denmark)

    Borup, Morten; Mikkelsen, Peter Steen; Borup, Morten

    2013-01-01

    When an online runoff model is updated from system measurements, the requirements of the precipitation input change. Using rain gauge data as precipitation input, there will be a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due...

  10. Modelling and on-line estimation of zinc sulphide precipitation in

    NARCIS (Netherlands)

    Grootscholten, T.I.M.; Keesman, K.J.; Lens, P.N.L.

    2008-01-01

    In this paper the ZnS precipitation in a continuously stirred tank reactor (CSTR) is modelled using mass balances. The dynamics analysis of the model reveals that the ZnS precipitation exhibits two-time-scale behaviour with inherent numerical stability problems, which therefore needs special

  11. Quantitative genetic tools for insecticide resistance risk assessment: estimating the heritability of resistance

    Science.gov (United States)

    Michael J. Firko; Jane Leslie Hayes

    1990-01-01

    Quantitative genetic studies of resistance can provide estimates of genetic parameters not available with other types of genetic analyses. Three methods are discussed for estimating the amount of additive genetic variation in resistance to individual insecticides and subsequent estimation of heritability (h2) of resistance. Sibling analysis and...

  12. The Effectiveness of Using Limited Gauge Measurements for Bias Adjustment of Satellite-Based Precipitation Estimation over Saudi Arabia

    Science.gov (United States)

    Alharbi, Raied; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan

    2018-01-01

    Precipitation is a key input variable for hydrological and climate studies. Rain gauges are capable of providing reliable precipitation measurements at the point scale. However, the uncertainty of rain measurements increases when the rain gauge network is sparse. Satellite-based precipitation estimates are an alternative source of precipitation measurements, but they are affected by systematic bias. In this study, a method for removing the bias from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) over a region with a sparse rain gauge network is investigated. The method consists of monthly empirical quantile mapping, climate classification, and inverse-distance weighting. Daily PERSIANN-CCS is selected to test the capability of the method to remove the bias over Saudi Arabia during the period 2010 to 2016. The first six years (2010-2015) are used for calibration and 2016 for validation. The results show that, for the validation year, the yearly correlation coefficient was enhanced by 12%, the yearly mean bias was reduced by 93%, and the root mean square error was reduced by 73%. The correlation coefficient, mean bias, and root mean square error show that the proposed method removes the bias in PERSIANN-CCS effectively and can be applied to other regions where the rain gauge network is sparse.
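
    The empirical quantile-mapping step works by mapping each satellite value through the satellite climatology's empirical CDF onto the gauge climatology's CDF. A generic sketch of the technique (not the paper's calibrated monthly mapping; the climatologies below are synthetic):

```python
import numpy as np

def quantile_map(satellite_clim, gauge_clim, value):
    """Empirical quantile mapping: find the non-exceedance probability of
    `value` in the satellite climatology, then return the gauge value at
    the same probability."""
    sat_sorted = np.sort(satellite_clim)
    p = np.searchsorted(sat_sorted, value, side="right") / len(sat_sorted)
    return np.quantile(np.sort(gauge_clim), min(p, 1.0))

# synthetic example: the satellite systematically doubles the gauge rainfall
gauge = np.linspace(0.0, 10.0, 101)
satellite = 2.0 * gauge
corrected = quantile_map(satellite, gauge, 6.0)   # maps back to about 3 mm
```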

  13. Statistical evaluation of the performance of gridded monthly precipitation products from reanalysis data, satellite estimates, and merged analyses over China

    Science.gov (United States)

    Deng, Xueliang; Nie, Suping; Deng, Weitao; Cao, Weihua

    2018-04-01

    In this study, we compared the following four different gridded monthly precipitation products: the National Centers for Environmental Prediction version 2 (NCEP-2) reanalysis data, the satellite-based Climate Prediction Center Morphing technique (CMORPH) data, the merged satellite-gauge Global Precipitation Climatology Project (GPCP) data, and the merged satellite-gauge-model data from the Beijing Climate Center Merged Estimation of Precipitation (BMEP). We evaluated the performances of these products using monthly precipitation observations spanning the period of January 2003 to December 2013 from a dense, national, rain gauge network in China. Our assessment involved several statistical techniques, including spatial pattern, temporal variation, bias, root-mean-square error (RMSE), and correlation coefficient (CC) analysis. The results show that NCEP-2, GPCP, and BMEP generally overestimate monthly precipitation at the national scale and CMORPH underestimates it. However, all of the datasets successfully characterized the northwest to southeast increase in the monthly precipitation over China. Because they include precipitation gauge information from the Global Telecommunication System (GTS) network, GPCP and BMEP have much smaller biases, lower RMSEs, and higher CCs than NCEP-2 and CMORPH. When the seasonal and regional variations are considered, NCEP-2 has a larger error over southern China during the summer. CMORPH poorly reproduces the magnitude of the precipitation over southeastern China and the temporal correlation over western and northwestern China during all seasons. BMEP has a lower RMSE and higher CC than GPCP over eastern and southern China, where the station network is dense. In contrast, BMEP has a lower CC than GPCP over western and northwestern China, where the gauge network is relatively sparse.
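
    The three headline statistics (bias, RMSE, CC) are straightforward to compute; a minimal helper with invented numbers, not the study's data:

```python
import numpy as np

def evaluate(product, gauges):
    """Bias, RMSE, and correlation coefficient of a gridded product vs gauges."""
    diff = np.asarray(product) - np.asarray(gauges)
    bias = diff.mean()
    rmse = np.sqrt((diff ** 2).mean())
    cc = np.corrcoef(product, gauges)[0, 1]
    return bias, rmse, cc

gauges = np.array([10.0, 25.0, 40.0, 80.0])     # monthly totals (mm), invented
product = gauges + 5.0                          # a product with a uniform wet bias
bias, rmse, cc = evaluate(product, gauges)
```

A uniform offset shows up entirely in the bias and RMSE while leaving the correlation perfect, which is why the study reports all three.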

  14. The APEX Quantitative Proteomics Tool: Generating protein quantitation estimates from LC-MS/MS proteomics results

    Directory of Open Access Journals (Sweden)

    Saeed Alexander I

    2008-12-01

    Full Text Available Abstract Background Mass spectrometry (MS) based label-free protein quantitation has mainly focused on analysis of ion peak heights and peptide spectral counts. Most analyses of tandem mass spectrometry (MS/MS) data begin with an enzymatic digestion of a complex protein mixture to generate smaller peptides that can be separated and identified by an MS/MS instrument. Peptide spectral counting techniques attempt to quantify protein abundance by counting the number of detected tryptic peptides and their corresponding MS spectra. However, spectral counting is confounded by the fact that peptide physicochemical properties severely affect MS detection, resulting in each peptide having a different detection probability. Lu et al. (2007) described a modified spectral counting technique, Absolute Protein Expression (APEX), which improves on basic spectral counting methods by including a correction factor for each protein (called the Oi value) that accounts for variable peptide detection by MS techniques. The technique uses machine learning classification to derive peptide detection probabilities that are used to predict the number of tryptic peptides expected to be detected for one molecule of a particular protein (Oi). This predicted spectral count is compared to the protein's observed MS total spectral count during APEX computation of protein abundances. Results The APEX Quantitative Proteomics Tool, introduced here, is a free open source Java application that supports the APEX protein quantitation technique. The APEX tool uses data from standard tandem mass spectrometry proteomics experiments and provides computational support for APEX protein abundance quantitation through a set of graphical user interfaces that partition the parameter controls for the various processing tasks. The tool also provides a Z-score analysis for identification of significant differential protein expression, a utility to assess APEX classifier performance via cross validation, and a
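
    The core APEX correction divides each protein's observed spectral count by its expected count per molecule (Oi) before normalising across proteins. A schematic sketch of that idea with toy counts and Oi values, not outputs of the actual tool:

```python
def apex_abundances(spectral_counts, o_values, total=1.0):
    """APEX-style abundance: observed spectral counts corrected by the
    predicted per-molecule count Oi, then normalised across proteins."""
    corrected = {p: c / o_values[p] for p, c in spectral_counts.items()}
    norm = sum(corrected.values())
    return {p: total * v / norm for p, v in corrected.items()}

# protein A's peptides are twice as detectable (Oi = 2), so equal raw
# counts imply half as many molecules of A as of B
abund = apex_abundances({"A": 10, "B": 10}, {"A": 2.0, "B": 1.0})
```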

  15. Quantitative measurement of precipitation using radar in comparison with ground-level measurements, taking orographic influences into account

    Energy Technology Data Exchange (ETDEWEB)

    Gysi, H. [Radar-Info, Karlsruhe (Germany)

    1998-01-01

    The methods of correction applied to the determination of the spatial distribution of precipitation on the basis of the volumes established by the Karlsruhe C-band precipitation radar distinctly enhance the quality of statements regarding precipitation intensities and their time integration both in summer and winter. (orig./KW)

  16. Quantitative PET Imaging in Drug Development: Estimation of Target Occupancy.

    Science.gov (United States)

    Naganawa, Mika; Gallezot, Jean-Dominique; Rossano, Samantha; Carson, Richard E

    2017-12-11

    Positron emission tomography, an imaging tool using radiolabeled tracers in humans and preclinical species, has been widely used in recent years in drug development, particularly in the central nervous system. One important goal of PET in drug development is assessing the occupancy of various molecular targets (e.g., receptors, transporters, enzymes) by exogenous drugs. The current linear mathematical approaches used to determine occupancy using PET imaging experiments are presented. These algorithms use results from multiple regions with different target content in two scans, a baseline (pre-drug) scan and a post-drug scan. New mathematical estimation approaches to determine target occupancy, using maximum likelihood, are presented. A major challenge in these methods is the proper definition of the covariance matrix of the regional binding measures, accounting for different variance of the individual regional measures and their nonzero covariance, factors that have been ignored by conventional methods. The novel methods are compared to standard methods using simulation and real human occupancy data. The simulation data showed the expected reduction in variance and bias using the proper maximum likelihood methods, when the assumptions of the estimation method matched those in simulation. Between-method differences for data from human occupancy studies were less obvious, in part due to small dataset sizes. These maximum likelihood methods form the basis for development of improved PET covariance models, in order to minimize bias and variance in PET occupancy studies.
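
    The conventional linear approach referred to above includes the occupancy (Lassen-style) plot: across regions with different target content, the baseline-minus-post-drug binding difference is regressed on the baseline binding, and the slope estimates occupancy. A sketch with synthetic regional values (the binding potentials, true occupancy, and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
bp_baseline = np.array([2.0, 1.5, 1.0, 0.6, 0.3])   # regional binding, pre-drug scan
true_occupancy = 0.4
bp_postdrug = (1.0 - true_occupancy) * bp_baseline + rng.normal(0.0, 0.01, 5)

# occupancy plot: slope of (baseline - post-drug) vs baseline estimates occupancy
occupancy, intercept = np.polyfit(bp_baseline, bp_baseline - bp_postdrug, 1)
```

This ordinary least-squares fit implicitly assumes equal, independent regional variances, which is exactly the covariance assumption the paper's maximum likelihood methods relax.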

  17. Spatial estimation of mean temperature and precipitation in areas of scarce meteorological information

    Energy Technology Data Exchange (ETDEWEB)

    Gomez, J.D. [Universidad Autonoma Chapingo, Chapingo (Mexico)]. E-mail: dgomez@correo.chapingo.mx; Etchevers, J.D. [Instituto de Recursos Naturales, Colegio de Postgraduados, Montecillo, Edo. de Mexico (Mexico); Monterroso, A.I. [departamento de Suelos, Universidad Autonoma Chapingo, Chapingo (Mexico); Gay, G. [Centro de Ciencias de la Atmosfera, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico); Campo, J. [Instituto de Ecologia, Universidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico); Martinez, M. [Instituto de Recursos Naturales, Montecillo, Edo. de Mexico (Mexico)

    2008-01-15

    In regions of complex relief and scarce meteorological information it becomes difficult to implement techniques and models of numerical interpolation to elaborate reliable maps of climatic variables essential for the study of natural resources using the new tools of the geographic information systems. This paper presents a method for estimating annual and monthly mean values of temperature and precipitation, taking elements from simple interpolation methods and complementing them with some characteristics of more sophisticated methods. To determine temperature, simple linear regression equations were generated associating temperature with altitude of weather stations in the study region, which had been previously subdivided in accordance with humidity conditions and then applying such equations to the area's digital elevation model to obtain temperatures. The estimation of precipitation was based on the graphic method through the analysis of the meteorological systems that affect the regions of the study area throughout the year and considering the influence of mountain ridges on the movement of prevailing winds. Weather stations with data in nearby regions were analyzed according to their position in the landscape, exposure to humid winds, and false color associated with vegetation types. Weather station sites were used to reference the amount of rainfall; interpolation was attained using analogies with satellite images of false color to which a model of digital elevation was incorporated to find similar conditions within the study area.
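
    The temperature part of the method reduces to a per-humidity-zone linear regression of station temperature on altitude, which is then applied to every DEM cell. A sketch with invented station data (the lapse rate that comes out is an artefact of these numbers, not a result from the paper):

```python
import numpy as np

# station altitude (m) and mean annual temperature (degrees C) for one humidity zone
altitude = np.array([200.0, 600.0, 1100.0, 1800.0, 2500.0])
temperature = np.array([24.1, 21.5, 18.2, 13.9, 9.4])

slope, intercept = np.polyfit(altitude, temperature, 1)  # roughly -6.4 C per km here

def temperature_at(elevation_m):
    """Apply the zone's regression to a DEM cell elevation."""
    return slope * elevation_m + intercept
```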

  18. Estimating drizzle drop size and precipitation rate using two-colour lidar measurements

    Directory of Open Access Journals (Sweden)

    C. D. Westbrook

    2010-06-01

    Full Text Available A method to estimate the size and liquid water content of drizzle drops using lidar measurements at two wavelengths is described. The method exploits the differential absorption of infrared light by liquid water at 905 nm and 1.5 μm, which leads to a different backscatter cross section for water drops larger than ≈50 μm. The ratio of backscatter measured from drizzle samples below cloud base at these two wavelengths (the colour ratio) provides a measure of the median volume drop diameter D0. This is a strong effect: for D0=200 μm, a colour ratio of ≈6 dB is predicted. Once D0 is known, the measured backscatter at 905 nm can be used to calculate the liquid water content (LWC) and other moments of the drizzle drop distribution.

    The method is applied to observations of drizzle falling from stratocumulus and stratus clouds. High resolution (32 s, 36 m profiles of D0, LWC and precipitation rate R are derived. The main sources of error in the technique are the need to assume a value for the dispersion parameter μ in the drop size spectrum (leading to at most a 35% error in R and the influence of aerosol returns on the retrieval (≈10% error in R for the cases considered here. Radar reflectivities are also computed from the lidar data, and compared to independent measurements from a colocated cloud radar, offering independent validation of the derived drop size distributions.

  19. Ranking GCM Estimates of Twentieth Century Precipitation Seasonality in the Western U.S. and its Influence on Floristic Provinces.

    Science.gov (United States)

    Cole, K. L.; Eischeid, J. K.; Garfin, G. M.; Ironside, K.; Cobb, N. S.

    2008-12-01

    Floristic provinces of the western United States (west of 100W) can be segregated into three regions defined by significant seasonal precipitation during the months of: 1) November-March (Mediterranean); 2) July-September (Monsoonal); or 3) May-June (Rocky Mountain). This third region is best defined by the absence of the late spring-early summer drought that affects regions 1 and 2. Each of these precipitation regimes is characterized by distinct vegetation types and fire seasonality adapted to that particular cycle of seasonal moisture availability and deficit. Further, areas where these regions blend into one another can support even more complex seasonal patterns and resulting distinctive vegetation types. As a result, modeling the effects of climate on these ecosystems requires confidence that GCMs can at least approximate these sub-continental seasonal precipitation patterns. We evaluated the late Twentieth Century (1950-1999 AD) estimates of annual precipitation seasonality produced by 22 GCMs contained within the IPCC Fourth Assessment (AR4). These modeled estimates were compared to values from the PRISM dataset, extrapolated from station data over the same historical period, for the three seasonal periods defined above. The correlations between GCM estimates and PRISM values were ranked using four measures: 1) a map pattern relationship based on the correlation coefficient; 2) a map pattern relationship based on the congruence coefficient; 3) the ratio of simulated to observed area-averaged seasonal precipitation amounts; and 4) the ratio of simulated to observed area-averaged seasonal precipitation percentages of the annual total. For each of the four metrics, the rank order of models was very similar, and it quantified aspects of model performance visible in the mapped results. While some models represented the seasonal patterns very well, others
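
    The distinction between the first two measures is worth making concrete: the congruence coefficient is an uncentred correlation, so unlike the ordinary correlation coefficient it also penalises differences in mean level, not just in spatial pattern. A minimal comparison on toy fields:

```python
import numpy as np

def correlation(x, y):
    """Ordinary (centred) Pearson correlation: pattern only."""
    return np.corrcoef(x, y)[0, 1]

def congruence(x, y):
    """Congruence coefficient (uncentred correlation): pattern and mean level."""
    return np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2))

obs = np.array([1.0, 2.0, 3.0])
model = obs + 10.0            # perfect pattern, large mean offset
r = correlation(obs, model)   # unaffected by the offset
phi = congruence(obs, model)  # penalised by the offset
```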

  20. Long-Term Large-Scale Bias-Adjusted Precipitation Estimates at High Spatial and Temporal Resolution Derived from the National Mosaic and Multi-Sensor QPE (NMQ/Q2) Precipitation Reanalysis over CONUS

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Seo, D. J.; Kim, B.

    2014-12-01

    The processing of radar-only precipitation via the reanalysis of the National Mosaic and Multi-Sensor QPE (NMQ/Q2), based on the WSR-88D Next-Generation Radar (NEXRAD) network over the Continental United States (CONUS), is nearly complete for the period 2000 to 2012. This important milestone constitutes a unique opportunity to study precipitation processes at a 1-km spatial and 5-min temporal resolution. However, in order to be suitable for hydrological, meteorological, and climatological applications, the radar-only product needs to be bias-adjusted and merged with in-situ rain gauge information. Rain gauge networks such as the Hydrometeorological Automated Data System (HADS), the Automated Surface Observing Systems (ASOS), the Climate Reference Network (CRN), and the Global Historical Climatology Network - Daily (GHCN-D) are used to adjust for those biases and to merge with the radar-only product to provide a multi-sensor estimate. The challenges related to incorporating non-homogeneous networks over a vast area and for a long-term record are enormous. Among them is the difficulty of incorporating surface measurements of differing resolution and quality to adjust gridded estimates of precipitation. Another challenge is the choice of adjustment technique. After assessing the bias and applying reduction or elimination techniques, we are investigating the kriging method and its variants, such as simple kriging (SK), ordinary kriging (OK), and conditional bias-penalized kriging (CBPK), among others. In addition we hope to generate estimates of uncertainty for the gridded estimate. In this work the methodology is presented as well as a comparison between the radar-only product and the final multi-sensor QPE product. The comparison is performed at various time scales, from sub-hourly to annual. In addition, comparisons over the same period with a suite of lower resolution QPEs derived from ground based radar
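
    Of the kriging variants mentioned, ordinary kriging is the simplest to sketch: a variogram-based linear system is solved with a Lagrange multiplier enforcing weights that sum to one. The exponential variogram and its parameters below are arbitrary choices for illustration, not those used in the reanalysis.

```python
import numpy as np

def ordinary_kriging(xy_obs, values, xy_target, sill=1.0, vrange=50.0):
    """Ordinary kriging of one target point with an exponential variogram."""
    def gamma(h):                       # variogram: gamma(0) = 0, tends to sill
        return sill * (1.0 - np.exp(-h / vrange))

    n = len(values)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))         # last row/column: sum-of-weights constraint
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    b = np.append(gamma(np.linalg.norm(xy_obs - xy_target, axis=-1)), 1.0)
    weights = np.linalg.solve(A, b)[:n]
    return float(weights @ values)

# four gauges at the corners of a 10 km square (coordinates in km)
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
est = ordinary_kriging(xy, vals, np.array([0.0, 0.0]))   # exact at a gauge location
```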

  1. Quantitative assessment of intermetallic phase precipitation in a super duplex stainless steel weld metal using automatic image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gregori, A. [AB Sandvik Steel, Sandviken (Sweden). R and D Centre; Nilsson, J.-O. [AB Sandvik Steel, R and D Centre, Sandviken (Sweden); Bonollo, F. [Univ. di Padova, DTGSI, Vicenza (Italy)

    1999-07-01

    The microstructure of weld metal of the type 25%Cr-10%Ni-4%Mo-0.28%N in both as-welded and isothermally heat treated (temperature range: 700-1050°C; time range: 10 s-72 h) conditions has been investigated. Multipass welding was performed in an Ar+2%N₂ atmosphere using GTAW. By means of electron diffraction, σ-phase and χ-phase were detected and investigated. χ-phase precipitated more readily than σ-phase and was found to be a precursor to σ-phase by providing suitable nucleation sites. Quantitative image analysis of ferrite and intermetallic phases was performed as well as manual point counting (ISO 9042). Automatic image analysis was found to be more accurate. The results were used to assess the TTT diagram with respect to intermetallic phase formation. On the basis of these results a CCT diagram was computed, with the intermetallic phase formation described by an Avrami-type equation and adopting the additivity rule. (orig.)

  2. Improving quantitative precipitation nowcasting with a local ensemble transform Kalman filter radar data assimilation system: observing system simulation experiments

    Directory of Open Access Journals (Sweden)

    Chih-Chien Tsai

    2014-03-01

    Full Text Available This study develops a Doppler radar data assimilation system, which couples the local ensemble transform Kalman filter with the Weather Research and Forecasting model. The benefits of this system to quantitative precipitation nowcasting (QPN are evaluated with observing system simulation experiments on Typhoon Morakot (2009, which brought record-breaking rainfall and extensive damage to central and southern Taiwan. The results indicate that the assimilation of radial velocity and reflectivity observations improves the three-dimensional winds and rain-mixing ratio most significantly because of the direct relations in the observation operator. The patterns of spiral rainbands become more consistent between different ensemble members after radar data assimilation. The rainfall intensity and distribution during the 6-hour deterministic nowcast are also improved, especially for the first 3 hours. The nowcasts with and without radar data assimilation have similar evolution trends driven by synoptic-scale conditions. Furthermore, we carry out a series of sensitivity experiments to develop proper assimilation strategies, in which a mixed localisation method is proposed for the first time and found to give further QPN improvement in this typhoon case.
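
    The ensemble-space analysis at the heart of the (local) ensemble transform Kalman filter can be sketched in a few lines. This follows the standard Hunt et al. (2007) formulation with a linear observation operator and omits the localisation and inflation that an operational radar assimilation system would add.

```python
import numpy as np

def etkf_update(X, y, H, R):
    """One ETKF analysis step. X: (n_state, m) forecast ensemble;
    y: (n_obs,) observations; H: (n_obs, n_state) linear observation
    operator; R: (n_obs, n_obs) observation-error covariance."""
    m = X.shape[1]
    xb = X.mean(axis=1, keepdims=True)
    Xp = X - xb                          # forecast perturbations
    Yp = H @ Xp                          # perturbations in observation space
    d = y - (H @ xb).ravel()             # innovation
    C = Yp.T @ np.linalg.inv(R)
    Pa = np.linalg.inv((m - 1) * np.eye(m) + C @ Yp)  # ensemble-space covariance
    wa = Pa @ C @ d                      # weights that update the mean
    evals, evecs = np.linalg.eigh((m - 1) * Pa)
    Wa = evecs @ np.diag(np.sqrt(evals)) @ evecs.T    # symmetric square root
    return xb + Xp @ (wa[:, None] + Wa)  # analysis ensemble
```

    Each analysis member is the background mean plus a linear combination of forecast perturbations, which is why the spiral-rainband patterns of different members become more consistent after assimilation.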

  3. Quantitative TEM study of the precipitation microstructure in aluminium alloy Al(MgSiCu) 6056 T6

    International Nuclear Information System (INIS)

    Delmas, F.; Casanove, M.J.; Lours, P.; Couret, A.; Coujou, A.

    2004-01-01

    The precipitate microstructure in the last-generation aluminium alloy 6056 T6 [AlMgSiCu] is investigated using three complementary techniques of transmission electron microscopy (TEM) with a special focus on the density and volume fraction of strengthening particles. High-resolution TEM allows the identification of the precipitates and the measurement of the precipitate sizes to be performed. Conventional TEM is used to evaluate the number of precipitates in the investigated area as well as their distribution in the matrix. In situ TEM straining, via the analysis of the dislocation slip traces, permits to determine precisely the thickness and the volume of the foil in the region where the precipitates are analysed. Taking into account the shape and the dimensions of precipitates with respect to the foil thickness, a novel methodology for measuring the volume density and the volume fraction of precipitates is proposed

  4. Quantitative precipitation climatology over the Himalayas by using Precipitation Radar on Tropical Rainfall Measuring Mission (TRMM) and a dense network of rain-gauges

    Science.gov (United States)

    Yatagai, A.

    2010-09-01

    Quantified grid observation data at a reasonable resolution are indispensable for environmental monitoring as well as for predicting future changes in the mountain environment. However, quantified datasets have not been available for the Himalayan region. Hence we evaluate climatological precipitation data around the Himalayas by using Precipitation Radar (PR) data acquired by the Tropical Rainfall Measuring Mission (TRMM) over 10 years of observation. To validate and adjust these patterns, we used a dense network of rain gauges collected by the Asian Precipitation—Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE Water Resources) project (http://www.chikyu.ac.jp/precip/). We used more than 2600 stations with more than 10 years of monthly precipitation over the Himalayan region (75E-105E, 20N-36N), including country data from Nepal, Bangladesh, Bhutan, Pakistan, India, Myanmar, and China. The region we studied is so topographically complicated that horizontal patterns are not uniform. Therefore, each path of PR 2A25 (near-surface rain) data was averaged onto a 0.05-degree grid and a 10-year monthly average was computed (hereafter called PR). For the rain gauges, we computed cell averages for each 0.05-degree grid cell with 10 or more years of observations; we refer to this 0.05-degree rain-gauge climatology as RG data. On the basis of comparisons between the RG and PR composite values, we defined regression parameters to correct the monthly climatology value based on the rain gauge observations. Compared with the RG, the PR systematically underestimated precipitation by 28-38% in summer (July-September). Significant correlation between TRMM/PR and rain-gauge data was found for all months, but the correlation is relatively low in winter. The relationship was investigated for different elevation zones, and the PR was found to underestimate RG data in most zones, except for certain zones in
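
    The gauge-based correction step amounts to fitting a regression between collocated PR and RG climatologies and applying it to the PR field. The linear, per-month form below is an illustrative assumption, since the abstract does not state the regression model used.

```python
import numpy as np

def fit_monthly_correction(pr_cells, rg_cells):
    """Least-squares fit of RG ~ a * PR + b over collocated
    0.05-degree cells for one calendar month."""
    a, b = np.polyfit(pr_cells, rg_cells, 1)
    return a, b

def correct_pr(pr_field, a, b):
    """Apply the fitted regression, clipping negative rainfall to zero."""
    return np.clip(a * pr_field + b, 0.0, None)
```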

  5. Does GPM-based multi-satellite precipitation enhance rainfall estimates over Pakistan and Bolivia arid regions?

    Science.gov (United States)

    Hussain, Y.; Satgé, F.; Bonnet, M. P.; Pillco, R.; Molina, J.; Timouk, F.; Roig, H.; Martinez-Carvajal, H., Sr.; Gulraiz, A.

    2016-12-01

    Arid regions are sensitive to rainfall variations, which are expressed in the form of flooding and droughts. Unfortunately, those regions are poorly monitored and high-quality rainfall estimates are still needed. The Global Precipitation Measurement (GPM) mission released two new satellite rainfall products, named Integrated Multi-satellitE Retrievals for GPM (IMERG) and Global Satellite Mapping of Precipitation version 6 (GSMaP-v6), bringing the possibility of accurate rainfall monitoring over these countries. This study assessed both products at the monthly scale over Pakistan, considering dry and wet seasons over the 4 main climatic zones, from 2014 to 2016. With similar climatic conditions, the Altiplano region of Bolivia is considered to quantify the influence of big lakes (Titicaca and Poopó) on rainfall estimates. For comparison, the widely used TRMM Multi-satellite Precipitation Analysis 3B43 (TMPA-3B43) version 7 is also included in the analysis to observe the potential enhancement in rainfall estimation brought by the GPM products. Rainfall estimates derived from 110 rain gauges are used as reference to compare IMERG, GSMaP-v6 and TMPA-3B43 at the 0.1° and 0.25° spatial resolutions. Over both regions, IMERG and GSMaP-v6 capture the spatial pattern of precipitation as well as TMPA-3B43 does. All products tend to overestimate rainfall over very arid regions. This feature is even more marked during the dry season. However, during this season, both reference and estimated rainfall remain very low and do not impact seasonal water budget computation. In general, IMERG slightly outperforms TMPA-3B43, while GSMaP-v6 provides the least accurate rainfall estimates. The TMPA-3B43 rainfall underestimation previously found over Lake Titicaca is still observed in the IMERG estimates. However, GSMaP-v6 considerably decreases the underestimation, providing the most accurate rainfall estimates over the lake. MOD11C3 Land Surface Temperature (LST) and the ASTER Global Emissivity Dataset reveal strong
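
    Product rankings like the one above rest on a handful of standard scores computed against the collocated gauge series; a minimal sketch:

```python
import numpy as np

def validation_scores(est, ref):
    """Bias, relative bias (%), RMSE and linear correlation of a
    satellite rainfall product against gauge references."""
    err = est - ref
    return {
        "bias": err.mean(),
        "rbias_pct": 100.0 * err.sum() / ref.sum(),
        "rmse": np.sqrt((err ** 2).mean()),
        "r": np.corrcoef(est, ref)[0, 1],
    }
```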

  6. New 2012 precipitation frequency estimation analysis for Alaska : musings on data used and the final product.

    Science.gov (United States)

    2013-06-01

    The major product of this study was a precipitation frequency atlas for the entire state of Alaska; this atlas is available at: http://dipper.nws.noaa.gov/hdsc/pfds/. The process of contributing to this study provided an opportunity to (1) evaluate ...

  7. A model for estimating understory vegetation response to fertilization and precipitation in loblolly pine plantations

    Science.gov (United States)

    Curtis L. VanderSchaaf; Ryan W. McKnight; Thomas R. Fox; H. Lee Allen

    2010-01-01

    A model form is presented, where the model contains regressors selected for inclusion based on biological rationale, to predict how fertilization, precipitation amounts, and overstory stand density affect understory vegetation biomass. Due to time, economic, and logistic constraints, datasets of large sample sizes generally do not exist for understory vegetation. Thus...

  8. Estimates of run off, evaporation and precipitation for the Bay of Bengal on seasonal basis

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.; Sastry, J.S.

    Mean seasonal river discharge rates (R) of the major rivers along the east coast of India, Bangladesh and Burma; evaporation rates (E) computed for 5-degree latitude-longitude squares from data on heat loss; and mean yearly precipitation (P) values at 5...

  9. GPM SLH: Convective Latent Heating Estimated with GPM Dual-frequency Precipitation Radar Data

    Science.gov (United States)

    Takayabu, Y. N.; Hamada, A.; Yokoyama, C.; Ikuta, Y.; Shige, S.; Yamaji, M.; Kubota, T.

    2017-12-01

    Three-dimensional diabatic heating distributions play essential roles in determining the large-scale circulation, as well as in generating the mesoscale circulation associated with tropical convection (e.g. Hartmann et al., 1984; Houze et al. 1982). For mid-latitude systems also, diabatic heating contributes to the generation of potential vorticity, resulting in, for example, explosive intensification of mid-latitude storms (Boettcher and Wernli, 2011). Previously, with TRMM PR data, we developed a Spectral Latent Heating algorithm (SLH; Shige et al. 2004, etc.) for the 36N-36S region. It was based on spectral LH tables produced from a simulation utilizing the Goddard Cloud Ensemble Model forced with the TOGA-COARE data. With GPM DPR, the observation region is extended to 65N-65S. Here, we introduce a new version of the SLH algorithm which is also applicable to mid-latitude precipitation. A new global GPM SLH ver. 5 product was released as one of the NASA/JAXA GPM standard products on July 11, 2017. For the GPM SLH mid-latitude algorithm, we employ the Japan Meteorological Agency (JMA)'s high-resolution (horizontally 2 km) Local Forecast Model (LFM) to construct the LUTs. In collaboration with JMA's forecast group, forecast data for 8 extratropical cyclone cases were collected and utilized. For mid-latitude precipitation, we have to deal with large temperature gradients and a complex relationship between the freezing level and cloud base levels. LUTs are constructed for LH, Q1-QR, and Q2 (Yanai et al. 1973) for six different precipitation types: convective and shallow stratiform LUTs are made against precipitation top heights, while for deep stratiform and other precipitation, LUTs are made against maximum precipitation to handle the unknown cloud bases. Finally, three-dimensional convective latent heating is retrieved utilizing the LUTs and precipitation profile data from GPM 2AKu. As a consistency check, we confirm that the retrieved LH looks very similar to the simulated LH. We also confirm a good continuity of

  10. Contributions of Precipitation and Soil Moisture Observations to the Skill of Soil Moisture Estimates in a Land Data Assimilation System

    Science.gov (United States)

    Reichle, Rolf H.; Liu, Qing; Bindlish, Rajat; Cosh, Michael H.; Crow, Wade T.; deJeu, Richard; DeLannoy, Gabrielle J. M.; Huffman, George J.; Jackson, Thomas J.

    2011-01-01

    The contributions of precipitation and soil moisture observations to the skill of soil moisture estimates from a land data assimilation system are assessed. Relative to baseline estimates from the Modern Era Retrospective-analysis for Research and Applications (MERRA), the study investigates soil moisture skill derived from (i) model forcing corrections based on large-scale, gauge- and satellite-based precipitation observations and (ii) assimilation of surface soil moisture retrievals from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E). Soil moisture skill is measured against in situ observations in the continental United States at 44 single-profile sites within the Soil Climate Analysis Network (SCAN) for which skillful AMSR-E retrievals are available and at four CalVal watersheds with high-quality distributed sensor networks that measure soil moisture at the scale of land model and satellite estimates. The average skill (in terms of the anomaly time series correlation coefficient R) of AMSR-E retrievals is R=0.39 versus SCAN and R=0.53 versus CalVal measurements. The skill of MERRA surface and root-zone soil moisture is R=0.42 and R=0.46, respectively, versus SCAN measurements, and MERRA surface moisture skill is R=0.56 versus CalVal measurements. Adding information from either precipitation observations or soil moisture retrievals increases surface soil moisture skill levels by DeltaR=0.06-0.08, and root zone soil moisture skill levels by DeltaR=0.05-0.07. Adding information from both sources increases surface soil moisture skill levels by DeltaR=0.13, and root zone soil moisture skill by DeltaR=0.11, demonstrating that precipitation corrections and assimilation of satellite soil moisture retrievals contribute similar and largely independent amounts of information.
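
    The skill metric R above is the correlation of anomaly time series: both series are first reduced to anomalies from their mean seasonal cycle before correlating. A sketch with an assumed day-of-year climatology (the paper's climatology window details are not reproduced here):

```python
import numpy as np

def anomaly_correlation(model, insitu, period=365):
    """Anomaly time-series correlation coefficient R between a model
    (or retrieval) series and in situ soil moisture, both sampled daily."""
    t = np.arange(len(model)) % period

    def anomalies(x):
        # mean seasonal cycle estimated per day-of-year, then removed
        clim = np.array([x[t == d].mean() for d in range(period)])
        return x - clim[t]

    return np.corrcoef(anomalies(model), anomalies(insitu))[0, 1]
```

    Removing the seasonal cycle matters: two series can share a strong annual cycle yet carry unrelated day-to-day information, which raw correlation would mask.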

  11. New method to estimate paleoprecipitation using fossil amphibians and reptiles and the middle and late Miocene precipitation gradients in Europe

    Science.gov (United States)

    Böhme, M.; Ilg, A.; Ossig, A.; Küchenhoff, H.

    2006-06-01

    Existing methods for determining paleoprecipitation are subject to large errors (±350-400 mm or more using mammalian proxies), or are restricted to wet climate systems due to their strong facies dependence (paleobotanical proxies). Here we describe a new paleoprecipitation tool based on an indexing of ecophysiological groups within herpetological communities. In recent communities these indices show a highly significant correlation to annual precipitation (r2 = 0.88), and yield paleoprecipitation estimates with average errors of ±250-280 mm. The approach was validated by comparison with published paleoprecipitation estimates from other methods. The method expands the application of paleoprecipitation tools to dry climate systems and in this way contributes to the establishment of a more comprehensive paleoprecipitation database. This method is applied to two high-resolution time intervals from the European Neogene: the early middle Miocene (early Langhian) and the early late Miocene (early Tortonian). The results indicate that both periods show significant meridional precipitation gradients in Europe, these being stronger in the early Langhian (threefold decrease toward the south) than in the early Tortonian (twofold decrease toward the south). This pattern indicates a strengthening of climatic belts during the middle Miocene climatic optimum due to Southern Hemisphere cooling and an increased contribution of Arctic low-pressure cells to the precipitation from the late Miocene onward due to Northern Hemisphere cooling.

  12. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    International Nuclear Information System (INIS)

    Bindschadler, Michael; Alessio, Adam M; Modgil, Dimple; La Riviere, Patrick J; Branch, Kelley R

    2014-01-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)^-1; cardiac output = 3, 5, 8 L min^-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features, including heterogeneous microvascular flow, permeability, and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by an average of 47.5%, while the quantitative models provided estimates with less than 6.5% average bias and variance that increased with increasing dose reduction. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This

  13. Dosing of low-activity strontium 90 in human bone ashes - A method based on the quantitative precipitation of strontium nitrate

    International Nuclear Information System (INIS)

    Patti, Francois; Bullier, Denise

    1969-02-01

    The specific separation of strontium nitrate in bone ash samples by red fuming nitric acid requires a succession of precipitations, varying in number according to the weight of ashes. The value of the technique lies in defining the experimental conditions required for a reproducible quantitative separation of strontium. The operating process, tested on over 1,500 samples, allowed chemical yields of about 90 per cent to be obtained. (authors) [fr

  14. Quantitative estimation of muscle fatigue using surface electromyography during static muscle contraction.

    Science.gov (United States)

    Soo, Yewguan; Sugi, Masao; Nishino, Masataka; Yokoi, Hiroshi; Arai, Tamio; Kato, Ryu; Nakamura, Tatsuhiro; Ota, Jun

    2009-01-01

    Muscle fatigue is commonly associated with musculoskeletal disorders. Previously, various techniques were proposed to index muscle fatigue from the electromyography signal. However, quantitative measurement is still difficult to achieve. This study aimed at proposing a method to estimate the degree of muscle fatigue quantitatively. A fatigue model was first constructed using a handgrip dynamometer by conducting a series of static contraction tasks. The degree of muscle fatigue can then be estimated from the electromyography signal with reasonable accuracy. The error of the estimated muscle fatigue was less than 10% MVC, and no significant difference was found between the estimated value and the one measured using a force sensor. Although the results were promising, there are still some limitations that need to be overcome in future studies.
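
    A standard quantitative fatigue indicator from surface EMG, of the kind such fatigue models build on, is the downward drift of the power spectrum's median frequency during sustained contraction; a sketch:

```python
import numpy as np

def median_frequency(emg, fs):
    """Median frequency of the EMG power spectrum: the frequency that
    splits total spectral power in half. It drifts toward lower values
    as the muscle fatigues (spectral compression)."""
    power = np.abs(np.fft.rfft(emg)) ** 2
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    cum = np.cumsum(power)
    return freqs[np.searchsorted(cum, cum[-1] / 2.0)]
```

    Tracking this value over successive contraction windows gives a simple time series from which a fatigue index can be derived.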

  15. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of anterior and

  16. Combining C- and X-band Weather Radars for Improving Precipitation Estimates over Urban Areas

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk

    of future system state. Accurate and reliable weather radar measurements are, therefore, important for future developments and achievements within urban drainage. This PhD study investigates two types of weather radars. Both systems are in operational use in Denmark today. A network of meteorological C...... individually and owned by local water utility companies. Although the two radar systems use similar working principles, the systems have significant differences regarding technology, temporal resolution, spatial resolution, range and scanning strategy. The focus of the research was to combine the precipitation...

  17. Quantitative investigation of precipitate growth during ageing of Al-(Mg,Si) alloys by energy-filtered electron diffraction

    DEFF Research Database (Denmark)

    Wollgarten, M.; Chang, C. S. T.; Duchstein, Linus Daniel Leonhard

    2011-01-01

    Besides other application fields, light-weight Al-(Mg, Si) (6XXX series) alloys are of substantial importance in automotive industries where they are used for the production of car body panels. The material gains its strength by precipitation of metastable Mg-Si-based phases. Though the general...... accepted that the early stages of precipitate growth are important for the understanding of this peculiar behaviour. During these stages, electron diffraction patterns of Al-(Mg, Si) alloys show diffuse features (Figure 1 (a) and (b)) which can be traced back to originate from β'' Mg5Si6 precipitates [5......-7]. In this paper, we use energy-filtered electron diffraction to determine dimensions of the β'' Mg5Si6 precipitates along their a, b and c-axes as a function of ageing time and alloy composition. In our contribution, we first derive that there is an optimal zone axis - - from the view point of practicability. We...

  18. Multi-scale Quantitative Precipitation Forecasting Using Nonlinear and Nonstationary Teleconnection Signals and Artificial Neural Network Models

    Science.gov (United States)

    Global sea surface temperature (SST) anomalies can affect terrestrial precipitation via ocean-atmosphere interaction known as climate teleconnection. Non-stationary and non-linear characteristics of the ocean-atmosphere system make the identification of the teleconnection signals...

  19. Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory

    Science.gov (United States)

    Deyi, Feng; Ichikawa, M.

    1989-11-01

    In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of the time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in the temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards on different time scales can be estimated. This paper mainly deals with medium- and short-term precursors observed in Japan and China.

  20. Comparing the impact of time displaced and biased precipitation estimates for online updated urban runoff models.

    Science.gov (United States)

    Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen

    2013-01-01

    When an online runoff model is updated from system measurements, the requirements on the precipitation input change. Using rain gauge data as precipitation input, there will be a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present in the system measurements, the data assimilation scheme might already have updated the model to include the impact of the particular rain cell by the time the rain data are forced upon the model, which will therefore end up including the same rain twice in the model run. This paper compares the forecast accuracy of updated models using time-displaced rain input to that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected by a bias. The results show that for a 10-minute forecast, time displacements of 5 and 10 minutes correspond to biases of 60 and 100%, respectively, independent of the catchment's time of concentration.
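
    The time-area model used for such a comparison can be sketched as a convolution of the rain input with a time-area curve, which makes the displaced-versus-biased experiment easy to reproduce; the rain cell, curve length and factors below are illustrative, not the paper's values.

```python
import numpy as np

def time_area_runoff(rain, time_area):
    """Runoff as the convolution of rainfall with a unit-sum
    time-area curve; the tail beyond the rain series is truncated."""
    return np.convolve(rain, time_area)[: len(rain)]

rain = np.zeros(30)
rain[5:10] = 2.0                                   # a 5-step rain cell
ta = np.ones(4) / 4.0                              # simple 4-step time-area curve
q_true = time_area_runoff(rain, ta)
q_late = time_area_runoff(np.roll(rain, 1), ta)    # 1-step time displacement
q_bias = time_area_runoff(1.6 * rain, ta)          # constant 60% bias
```

    Note the asymmetry the paper exploits: displacement conserves runoff volume while a bias does not, yet over a short forecast horizon both distort the hydrograph the updated model sees.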

  1. Dual respiratory and cardiac motion estimation in PET imaging: Methods design and quantitative evaluation.

    Science.gov (United States)

    Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W

    2018-04-01

    The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were estimated separately from the respiratory-gated-only and cardiac-gated-only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation; the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVFs using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two more noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images, and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of the quantitative accuracy of the estimated MVF. Methods 4 and 3 showed comparable results and achieved RMSE lower by up to 35% than that of Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be

  2. CMORPH 8 Km: A Method that Produces Global Precipitation Estimates from Passive Microwave and Infrared Data at High Spatial and Temporal Resolution

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A new technique is presented in which half-hourly global precipitation estimates derived from passive microwave satellite scans are propagated by motion vectors...

  3. A simulation study of the recession coefficient for antecedent precipitation index. [soil moisture and water runoff estimation

    Science.gov (United States)

    Choudhury, B. J.; Blanchard, B. J.

    1981-01-01

    The antecedent precipitation index (API) is a useful indicator of soil moisture conditions for watershed runoff calculations, and recent attempts to correlate this index with spaceborne microwave observations have been fairly successful. It is shown that the prognostic equation for soil moisture used in some atmospheric general circulation models, together with the Thornthwaite-Mather parameterization of actual evapotranspiration, leads to API equations. The recession coefficient for the API is found to depend on climatic factors through potential evapotranspiration and on soil texture through the field capacity and the permanent wilting point. Climatological data for Wisconsin, together with a recently developed model for global insolation, are used to simulate the annual trend of the recession coefficient. Good quantitative agreement is shown with the observed trend at the Fennimore and Colby watersheds in Wisconsin. It is suggested that the API could be a unifying vocabulary for watershed and atmospheric general circulation modelers.
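
    The API recurrence itself is one line, API_t = k * API_{t-1} + P_t; the sketch below uses a constant recession coefficient k, whereas the paper's point is precisely that k should vary seasonally with potential evapotranspiration and soil texture.

```python
def antecedent_precipitation_index(precip, k=0.9, api0=0.0):
    """API series from daily precipitation P_t and recession
    coefficient k (0 < k < 1): API_t = k * API_{t-1} + P_t."""
    api, series = api0, []
    for p in precip:
        api = k * api + p
        series.append(api)
    return series
```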

  4. Stochastic characterization of regional circulation patterns for climate model diagnosis and estimation of local precipitation

    International Nuclear Information System (INIS)

    Zorita, E.; Hughes, J.P.

    1993-01-01

    Two statistical approaches for linking large-scale atmospheric circulation patterns and daily local rainfall are described and applied to several GCM (general circulation model) climate simulations. The ultimate objective is to simulate local precipitation associated with alternative climates. The index stations are located near the West and East North American coasts. The first method is based on CART analysis (Classification and Regression Trees). It finds the classification of observed daily SLP (sea level pressure) fields into weather types that are most strongly associated with the presence/absence of rainfall at a set of index stations. The best results were obtained for winter rainfall for the West Coast, where a set of physically reasonable weather types could be identified, whereas for the East Coast the rainfall process seemed to be spatially less coherent. The GCM simulations were validated against observations in terms of probability of occurrence and survival time of these weather states. Some discrepancies were found, but there was no systematic bias, indicating that this behavior depends on the particular dynamics of each model. This classification method was then used for the generation of daily rainfall time series from daily SLP fields from historical observations and from the GCM simulations. Whereas the mean rainfall and probability distributions were rather well replicated, the simulated dry periods were in all cases shorter than in the rainfall observations. The second rainfall generator is based on the analog method and uses information on the evolution of the SLP field over several previous days. It was found to perform reasonably well, although some downward bias in the simulated rainfall persistence was still present. Rainfall changes in a 2xCO2 climate were investigated by applying both methods to the output of a greenhouse-gas experiment. The simulated precipitation changes were small. (orig.)
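
    The analog generator can be sketched as a nearest-neighbour search in SLP space; the single-day Euclidean distance below is a simplification of the paper's use of the field's evolution over several previous days.

```python
import numpy as np

def analog_rainfall(slp_today, slp_history, rain_history):
    """Return the rainfall of the historical day whose (flattened) SLP
    field is closest in Euclidean distance to the simulated day's field."""
    dist = np.linalg.norm(slp_history - slp_today, axis=1)
    return rain_history[np.argmin(dist)]
```

    Because each simulated day is assigned an actually observed rainfall value, the method reproduces marginal rainfall distributions well, while persistence (lengths of dry spells) depends on how well successive analogs chain together.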

  5. Physiological frailty index (PFI): quantitative in-life estimate of individual biological age in mice.

    Science.gov (United States)

    Antoch, Marina P; Wrobel, Michelle; Kuropatwinski, Karen K; Gitlin, Ilya; Leonova, Katerina I; Toshkov, Ilia; Gleiberman, Anatoli S; Hutson, Alan D; Chernova, Olga B; Gudkov, Andrei V

    2017-03-19

    The development of healthspan-extending pharmaceuticals requires quantitative estimation of age-related progressive physiological decline. In humans, individual health status can be quantitatively assessed by means of a frailty index (FI), a parameter which reflects the scale of accumulation of age-related deficits. However, adaptation of this methodology to animal models is a challenging task, since it includes multiple subjective parameters. Here we report the development of a quantitative non-invasive procedure to estimate the biological age of an individual animal by creating a physiological frailty index (PFI). We demonstrated the dynamics of PFI increase during chronological aging of male and female NIH Swiss mice. We also demonstrated accelerated growth of the PFI in animals placed on a high-fat diet, reflecting aging acceleration by obesity, and provide a tool for its quantitative assessment. Additionally, we showed that PFI could reveal the anti-aging effect of the mTOR inhibitor rapatar (a bioavailable formulation of rapamycin) prior to registration of its effects on longevity. PFI revealed substantial sex-related differences in normal chronological aging and in the efficacy of detrimental (high-fat diet) or beneficial (rapatar) aging-modulatory factors. Together, these data introduce PFI as a reliable, non-invasive, quantitative tool suitable for testing potential anti-aging pharmaceuticals in pre-clinical studies.
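
    A frailty index of this kind is typically computed as the mean of deficit scores, each normalized to [0, 1]. A minimal sketch of that bookkeeping (the item scores are invented for illustration):

```python
def frailty_index(deficits):
    """Frailty index as the mean of deficit scores, each normalized to [0, 1]
    (0 = deficit absent, 1 = fully expressed)."""
    if not deficits:
        raise ValueError("at least one deficit item is required")
    for d in deficits:
        if not 0.0 <= d <= 1.0:
            raise ValueError("deficit scores must lie in [0, 1]")
    return sum(deficits) / len(deficits)

# A young animal with few deficits vs. an aged one with many (made-up scores):
print(frailty_index([0.0, 0.0, 0.25, 0.0]))  # 0.0625
print(frailty_index([1.0, 0.5, 0.75, 0.5]))  # 0.6875
```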

  6. Quantitative Risk reduction estimation Tool For Control Systems, Suggested Approach and Research Needs

    Energy Technology Data Exchange (ETDEWEB)

    Miles McQueen; Wayne Boyer; Mark Flynn; Sam Alessi

    2006-03-01

    For the past year we have applied a variety of risk assessment technologies to evaluate the risk to critical infrastructure from cyber attacks on control systems. More recently, we identified the need for a stand-alone control system risk reduction estimation tool to provide owners and operators of control systems with a more usable, reliable, and credible method for managing the risks from cyber attack. Risk is defined as the probability of a successful attack times the value of the resulting loss, typically measured in lives and dollars. Qualitative and ad hoc techniques for measuring risk do not provide sufficient support for cost-benefit analyses associated with cyber security mitigation actions. To address the need for better quantitative risk reduction models, we surveyed previous quantitative risk assessment research; evaluated currently available tools; developed new quantitative techniques [17] [18]; implemented a prototype analysis tool to demonstrate how such a tool might be used; used the prototype to test a variety of underlying risk calculation engines (e.g., attack tree, attack graph); and identified technical and research needs. We concluded that significant gaps still exist and difficult research problems remain for quantitatively assessing the risk to control system components and networks, but that a usable quantitative risk reduction estimation tool is not beyond reach.
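
    The risk definition above (probability of a successful attack times the value of the resulting loss) can be written directly in code. A toy sketch of how a mitigation's risk reduction might be tallied; the numbers are hypothetical and this is not the tool's actual model:

```python
def risk(p_success, loss):
    """Risk as defined in the abstract: probability of a successful
    attack times the value of the resulting loss."""
    return p_success * loss

def risk_reduction(p_before, p_after, loss):
    """Reduction in expected loss from a mitigation that lowers the
    attack success probability (illustrative only)."""
    return risk(p_before, loss) - risk(p_after, loss)

# A mitigation that drops success probability from 10% to 2% for a $5M loss:
print(risk_reduction(0.10, 0.02, 5_000_000))  # about $400k of expected loss avoided
```

A comparison of this reduction against the mitigation's cost is exactly the cost-benefit support the abstract says qualitative techniques cannot provide.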

  7. Estimation of genetic parameters and detection of quantitative trait loci for metabolites in Danish Holstein milk

    DEFF Research Database (Denmark)

    Buitenhuis, Albert Johannes; Sundekilde, Ulrik; Poulsen, Nina Aagaard

    2013-01-01

    Small components and metabolites in milk are significant for the utilization of milk, not only in dairy food production but also as disease predictors in dairy cattle. This study focused on estimation of genetic parameters and detection of quantitative trait loci for metabolites in bovine milk. F...... for lactic acid to >0.8 for orotic acid and β-hydroxybutyrate. A single SNP association analysis revealed 7 genome-wide significant quantitative trait loci [malonate: Bos taurus autosome (BTA)2 and BTA7; galactose-1-phosphate: BTA2; cis-aconitate: BTA11; urea: BTA12; carnitine: BTA25...

  8. A quantitative history of precipitation and hydrologic variability for the last 45 ka: Lake Titicaca, Salar de Coipasa and Salar de Uyuni, Peru and Bolivia

    Science.gov (United States)

    Nunnery, A.; Baker, P. A.; Coe, M. T.; Fritz, S. C.; Rigsby, C. A.

    2011-12-01

    Precipitation on the Bolivian/Peruvian Altiplano is dominantly controlled by the South American summer monsoon (SASM). Over long timescales, moisture transport to the Altiplano by the SASM fluctuates in intensity due to precessional insolation forcing as well as teleconnections to millennial-scale abrupt temperature shifts in the North Atlantic. These long-term changes in moisture transport have been observed in multiple paleoclimate and paleo-lake-level records as advances and retreats of large lakes in the terminal basin (the Salar de Uyuni). Several previous studies have applied energy/water balance models to paleoclimate records in attempts to provide quantitative constraints on past precipitation and temperature (P and T). For example, Blodgett et al. concluded that high paleolake stands, first dated at ca. 16,000 cal. yr BP, required P 20% higher and T 5°C colder than modern. We expand on this work by conducting two experiments. The first uses a latitudinal paleohydrologic profile to reconstruct hydrological history. The second uses a terrestrial hydrology model (THMB) to "predict" lake level given changes in P and T. The profile is constructed using records from Lake Titicaca (LT), Salar de Coipasa (SC) and Salar de Uyuni (SU). LT carbonate and diatom records indicate a deep, overflowing lake for much of the last 100 ka with a distinct dry, closed-basin phase in the early to mid Holocene. A continuous sediment core from SC indicates lake-level fluctuations between deep and shallow phases for the last 45 ka. A natural gamma radiation log from SU, where large paleolakes alternated with shallow salt pans characteristic of drier and/or warmer periods, shows alternation between wet and dry phases through time. These three records attest to the complex nature of Altiplano hydrology, most notably the ability to sustain lakes in the SC basin while exhibiting dry conditions in SU. For the second experiment, THMB, which estimates water balance and
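
    The water-balance reasoning behind such lake-level models can be sketched as a single explicit bookkeeping step: precipitation onto the lake plus catchment runoff in, evaporation out. A toy stand-in for THMB with invented areas and rates:

```python
def lake_volume_step(volume, area_lake, area_catchment, p, e, runoff_coeff, dt=1.0):
    """One explicit water-balance step for a closed-basin lake:
    dV/dt = P*A_lake + c*P*A_catchment - E*A_lake (rates in m/yr, areas in m^2).
    A toy stand-in for the THMB terrestrial hydrology model."""
    inflow = p * area_lake + runoff_coeff * p * area_catchment
    outflow = e * area_lake
    return volume + (inflow - outflow) * dt

V0 = 1.0e11  # m^3, initial lake volume (illustrative)
# Wetter (higher P) vs. drier/warmer (lower P, higher E) climate states:
v_wet = lake_volume_step(V0, 8.0e9, 1.9e11, p=1.0, e=1.0, runoff_coeff=0.02)
v_dry = lake_volume_step(V0, 8.0e9, 1.9e11, p=0.5, e=1.5, runoff_coeff=0.02)
print(v_wet > V0, v_dry < V0)  # True True: the lake grows when wet, shrinks when dry
```

Inverting this balance (asking what P and T are needed to sustain a given lake stand) is the logic behind the quantitative P/T constraints discussed above.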

  9. Enhancing Global Land Surface Hydrology Estimates from the NASA MERRA Reanalysis Using Precipitation Observations and Model Parameter Adjustments

    Science.gov (United States)

    Reichle, Rolf; Koster, Randal; DeLannoy, Gabrielle; Forman, Barton; Liu, Qing; Mahanama, Sarith; Toure, Ally

    2011-01-01

    The Modern-Era Retrospective analysis for Research and Applications (MERRA) is a state-of-the-art reanalysis that provides, in addition to atmospheric fields, global estimates of soil moisture, latent heat flux, snow, and runoff for 1979-present. This study introduces a supplemental and improved set of land surface hydrological fields ('MERRA-Land') generated by replaying a revised version of the land component of the MERRA system. Specifically, the MERRA-Land estimates benefit from corrections to the precipitation forcing with the Global Precipitation Climatology Project pentad product (version 2.1) and from revised parameters in the rainfall interception model, changes that effectively correct for known limitations in the MERRA land surface meteorological forcings. The skill (defined as the correlation coefficient of the anomaly time series) of land surface hydrological fields from MERRA and MERRA-Land is assessed here against observations and compared to the skill of the state-of-the-art ERA-Interim reanalysis. MERRA-Land and ERA-Interim root zone soil moisture skills (against in situ observations at 85 US stations) are comparable and significantly greater than that of MERRA. Throughout the northern hemisphere, MERRA and MERRA-Land agree reasonably well with in situ snow depth measurements (from 583 stations) and with snow water equivalent from an independent analysis. Runoff skill (against naturalized streamflow observations from 15 basins in the western US) of MERRA and MERRA-Land is typically higher than that of ERA-Interim. With a few exceptions, the MERRA-Land data appear more accurate than the original MERRA estimates and are thus recommended for those interested in using MERRA output for land surface hydrological studies.
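
    The skill metric used here, the correlation coefficient of anomaly time series, can be computed by removing each series' mean seasonal cycle and correlating the residuals. A minimal sketch (a 2-step toy cycle stands in for the 12 calendar months):

```python
def anomalies(series, period=12):
    """Remove the mean seasonal cycle: subtract from every value the mean
    of all values sharing its position in the cycle (e.g. calendar month)."""
    clim = [sum(series[i::period]) / len(series[i::period]) for i in range(period)]
    return [x - clim[i % period] for i, x in enumerate(series)]

def corr(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Toy model and observation series sharing the same anomalies but offset means:
model = [1.0, 5.0, 2.0, 6.0, 3.0, 7.0]
obs   = [2.0, 6.0, 3.0, 7.0, 4.0, 8.0]
skill = corr(anomalies(model, period=2), anomalies(obs, period=2))
print(skill)  # 1.0: identical anomaly variations despite the constant bias
```

Because the climatology is removed first, a constant bias does not reduce skill; only the variability has to match.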

  10. Estimating spatially and temporally varying recharge and runoff from precipitation and urban irrigation in the Los Angeles Basin, California

    Science.gov (United States)

    Hevesi, Joseph A.; Johnson, Tyler D.

    2016-10-17

    A daily precipitation-runoff model, referred to as the Los Angeles Basin watershed model (LABWM), was used to estimate recharge and runoff for a 5,047 square kilometer study area that included the greater Los Angeles area and all surface-water drainages potentially contributing recharge to a 1,450 square kilometer groundwater-study area underlying the greater Los Angeles area, referred to as the Los Angeles groundwater-study area. The recharge estimates for the Los Angeles groundwater-study area included spatially distributed recharge in response to the infiltration of precipitation, runoff, and urban irrigation, as well as mountain-front recharge from surface-water drainages bordering the groundwater-study area. The recharge and runoff estimates incorporated a new method for estimating urban irrigation, consisting of residential and commercial landscape watering, based on land use and the percentage of pervious land area. The LABWM used a 201.17-meter gridded discretization of the study area to represent spatially distributed climate and watershed characteristics affecting the surface and shallow sub-surface hydrology for the Los Angeles groundwater-study area. Climate data from a local network of 201 monitoring sites and published maps of 30-year-average monthly precipitation and maximum and minimum air temperature were used to develop the climate inputs for the LABWM. Published maps of land use, land cover, soils, vegetation, and surficial geology were used to represent the physical characteristics of the LABWM area. The LABWM was calibrated to available streamflow records at six streamflow-gaging stations. Model results for a 100-year target-simulation period, from water years 1915 through 2014, were used to quantify and evaluate the spatial and temporal variability of water-budget components, including evapotranspiration (ET), recharge, and runoff. The largest outflow of water from the LABWM was ET; the 100-year average ET rate of 362 millimeters per year (mm

  11. EPSAT-SG: a satellite method for precipitation estimation; its concepts and implementation for the AMMA experiment

    Directory of Open Access Journals (Sweden)

    J. C. Bergès

    2010-01-01

    This paper presents a new rainfall estimation method, EPSAT-SG, which is also a framework for method design. The first implementation was carried out to meet the requirements of the AMMA database over a West African domain. The rainfall estimation relies on two intermediate products: a rainfall probability and a rainfall potential intensity. The first is computed from MSG/SEVIRI by a feed-forward neural network. First evaluation results show better properties than direct precipitation intensity assessment by geostationary satellite infrared sensors. The second product can be interpreted as a conditional rainfall intensity and, in the described implementation, it is extracted from GPCP-1dd. Various implementation options are discussed, and comparison of this embedded product with 3B42 estimates demonstrates the importance of properly managing the temporal discontinuity. The resulting accumulated rainfall field can be presented as a GPCP downscaling. A validation based on ground data supplied by AGRHYMET (Niamey) indicates that the estimation error has been reduced in this process. The described method could easily be adapted to other geographical areas and operational environments.
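
    The combination of the two intermediate products can be sketched as a per-pixel rule: where the rainfall probability is high enough, assign the conditional (potential) intensity, otherwise no rain. The threshold and values below are illustrative assumptions, not the paper's calibration:

```python
def epsat_sg_rate(p_rain, potential_intensity, threshold=0.5):
    """Combine the two intermediate products per pixel: assign the
    conditional intensity where rainfall probability exceeds a threshold,
    zero elsewhere. Threshold value is an assumption for illustration."""
    return [inten if p >= threshold else 0.0
            for p, inten in zip(p_rain, potential_intensity)]

prob      = [0.1, 0.7, 0.9, 0.3]   # rainfall probability per pixel (e.g. from the NN)
intensity = [2.0, 4.0, 6.0, 3.0]   # conditional intensity, mm/h (e.g. from GPCP-1dd)
print(epsat_sg_rate(prob, intensity))  # [0.0, 4.0, 6.0, 0.0]
```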

  12. Using the ''Epiquant'' automatic analyzer for quantitative estimation of grain size

    Energy Technology Data Exchange (ETDEWEB)

    Tsivirko, E I; Ulitenko, A N; Stetsenko, I A; Burova, N M [Zaporozhskij Mashinostroitel' nyj Inst. (Ukrainian SSR)

    1979-01-01

    The applicability of the ''Epiquant'' automatic analyzer for quantitative estimation of austenite grain size in 18Kh2N4VA steel has been investigated. Austenite grain was revealed using the methods of cementation, oxidation and etching of the grain boundaries. The average linear grain size over a traverse length of 15 mm was determined from the total length of the grain-intersection line and the number of intersections with grain boundaries. It is shown that the ''Epiquant'' analyzer ensures quantitative estimation of austenite grain size with a relative error of 2-4%.
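
    The grain-size computation described (total traverse length divided by the number of grain-boundary intersections) is the standard mean linear intercept; as a one-line sketch with made-up counts:

```python
def mean_intercept_length(total_line_length_mm, boundary_intersections):
    """Mean linear intercept grain size: traverse length divided by the
    number of grain-boundary intersections counted along it."""
    return total_line_length_mm / boundary_intersections

# A 15 mm traverse crossing 120 grain boundaries (count is illustrative):
print(mean_intercept_length(15.0, 120))  # 0.125 mm mean grain size
```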

  13. Estimation of microburst source location in precipitating electron fluxes according to Viking satellite data

    International Nuclear Information System (INIS)

    Khrushchinskij, A.A.; Ostapenko, A.A.; Gustafsson, G.; Eliasson, L.; Sandal, I.

    1989-01-01

    According to Viking satellite data on electron fluxes in the 0.1-300 keV energy range, the microburst source location is estimated. On the basis of experimental delays between detected peaks in different energy channels and theoretical calculations of these delays within the dipole field model (L ∼ 4-5.5), it is shown that the most probable source location is the equatorial region, with the centre shifted 5-10° towards the ionosphere

  14. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma.

    Science.gov (United States)

    Yu, Jinhua; Shi, Zhifeng; Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan; Chen, Liang; Mao, Ying

    2017-08-01

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the true IDH1 status from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantized. Of these, 110 features were selected by an improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. The area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. • Noninvasive IDH1 status estimation can be obtained with a radiomics approach. • Automatic and quantitative processes were established for noninvasive biomarker estimation. • High-throughput MRI features are highly correlated to IDH1 status. • The area under the ROC curve of the proposed estimation method reached 0.86.
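
    The LOOCV evaluation loop described above can be sketched in a few lines: hold each case out, train on the rest, and tally accuracy, sensitivity and specificity against the known labels. The 1-NN classifier and the data below are toy stand-ins for the paper's radiomics pipeline:

```python
def loocv_metrics(features, labels, classify):
    """Leave-one-out cross-validation: hold each case out in turn, predict it
    from the rest, and tally the confusion counts against the known labels."""
    tp = tn = fp = fn = 0
    for i in range(len(labels)):
        train_x = features[:i] + features[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        pred = classify(train_x, train_y, features[i])
        if labels[i] == 1 and pred == 1:
            tp += 1
        elif labels[i] == 1:
            fn += 1
        elif pred == 0:
            tn += 1
        else:
            fp += 1
    accuracy = (tp + tn) / len(labels)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity

def nearest_neighbour(train_x, train_y, x):
    """Toy 1-NN classifier on a single feature; the paper instead uses
    genetic-algorithm feature selection plus a trained classifier."""
    j = min(range(len(train_x)), key=lambda k: abs(train_x[k] - x))
    return train_y[j]

feats  = [0.1, 0.2, 0.3, 0.9, 1.0, 1.1]   # one radiomic feature (made up)
labels = [0, 0, 0, 1, 1, 1]               # 1 = IDH1 mutant (made up)
print(loocv_metrics(feats, labels, nearest_neighbour))
```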

  15. Applications of TRMM-based Multi-Satellite Precipitation Estimation for Global Runoff Simulation: Prototyping a Global Flood Monitoring System

    Science.gov (United States)

    Hong, Yang; Adler, Robert F.; Huffman, George J.; Pierce, Harold

    2008-01-01

    Advances in flood monitoring/forecasting have been constrained by the difficulty of estimating rainfall continuously over space (catchment-, national-, continental-, or even global-scale areas) and at flood-relevant time scales. With the recent availability of satellite rainfall estimates at fine time and space resolution, this paper describes a prototype research framework for global flood monitoring that combines real-time satellite observations with a database of global terrestrial characteristics through a hydrologically relevant modeling scheme. Four major components included in the framework are (1) real-time precipitation input from the NASA TRMM-based Multi-satellite Precipitation Analysis (TMPA); (2) a central geospatial database to preprocess land surface characteristics: water divides, slopes, soils, land use, flow directions, flow accumulation, drainage network, etc.; (3) a modified distributed hydrological model to convert rainfall to runoff and route the flow through the stream network in order to predict the timing and severity of the flood wave; and (4) an open-access web interface to quickly disseminate flood alerts for potential decision-making. Retrospective simulations for 1998-2006 demonstrate that the Global Flood Monitor (GFM) system performs consistently at both station and catchment levels. The GFM website (experimental version) has been running in near real-time in an effort to offer a cost-effective solution to the ultimate challenge of building natural-disaster early warning systems for the data-sparse regions of the world. The interactive GFM website shows close-up maps of the flood risks overlaid on topography/population or integrated with the Google-Earth visualization tool. One additional capability, which extends forecast lead time by assimilating quantitative precipitation forecasts (QPF) into the GFM, will also be implemented in the future.

  16. Estimates of increased black carbon emissions from electrostatic precipitators during powdered activated carbon injection for mercury emissions control.

    Science.gov (United States)

    Clack, Herek L

    2012-07-03

    The behavior of mercury sorbents within electrostatic precipitators (ESPs) is not well understood, despite a decade or more of full-scale testing. Recent laboratory results suggest that powdered activated carbon exhibits somewhat different collection behavior than fly ash in an ESP, and particulate filters located at the outlet of ESPs have shown evidence of powdered activated carbon penetration during full-scale tests of sorbent injection for mercury emissions control. The present analysis considers a range of assumed differential ESP collection efficiencies for powdered activated carbon as compared to fly ash. Estimated emission rates of submicrometer powdered activated carbon are compared to estimated emission rates of particulate carbon on submicrometer fly ash, each corresponding to its respective collection efficiency. To the extent that any emitted powdered activated carbon exhibits size and optical characteristics similar to black carbon, such emissions could effectively constitute an increase in black carbon emissions from coal-based stationary power generation. The results reveal that even for the low injection rates associated with chemically impregnated carbons, submicrometer particulate carbon emissions can easily double if the submicrometer fraction of the native fly ash has a low carbon content. Increasing sorbent injection rates, larger collection efficiency differentials as compared to fly ash, and decreasing sorbent particle size all lead to increases in the estimated submicrometer particulate carbon emissions.
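
    The emission comparison rests on simple arithmetic: emitted mass flow is the injected (or native) mass flow times the submicrometer fraction times ESP penetration (1 minus collection efficiency). A sketch with hypothetical values chosen to illustrate the "can easily double" case:

```python
def submicron_emission_rate(mass_rate, submicron_fraction, collection_eff):
    """Emitted submicrometer mass flow: feed rate times the submicrometer
    fraction times ESP penetration (1 - collection efficiency).
    All parameter values used below are hypothetical."""
    return mass_rate * submicron_fraction * (1.0 - collection_eff)

# Native fly-ash submicron carbon vs. injected sorbent (arbitrary mass units/h):
fly_ash_carbon = submicron_emission_rate(1000.0, 0.05, 0.999)  # well-collected ash
pac            = submicron_emission_rate(50.0,  0.20, 0.990)   # less well-collected PAC
print(pac / fly_ash_carbon)  # ≈ 2: emitted submicron carbon roughly doubles
```

Even a modest differential in collection efficiency (99.0% vs. 99.9%) multiplies penetration tenfold, which is why low injection rates can still dominate the emitted submicron carbon.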

  17. Intercomparison of PERSIANN-CDR and TRMM-3B42V7 precipitation estimates at monthly and daily time scales

    Science.gov (United States)

    Katiraie-Boroujerdy, Pari-Sima; Akbari Asanjan, Ata; Hsu, Kuo-lin; Sorooshian, Soroosh

    2017-09-01

    In the first part of this paper, monthly precipitation data from Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR) and the Tropical Rainfall Measuring Mission 3B42 algorithm Version 7 (TRMM-3B42V7) are evaluated over Iran using the Generalized Three-Cornered Hat (GTCH) method, which requires no reference data as input. The Climatic Research Unit (CRU) dataset is added to the GTCH evaluations as an independent gauge-based dataset; thus, the minimum requirement of three datasets for the method is satisfied. To ensure consistency of all datasets, the two satellite products were aggregated to 0.5° spatial resolution, which is the minimum resolution of CRU. The results show that PERSIANN-CDR has a higher signal-to-noise ratio (SNR) than TRMM-3B42V7 for monthly rainfall estimation, especially in the northern half of the country. All datasets showed low SNR in the mountainous area of southwestern Iran, as well as in the arid parts of the southeast region of the country. Additionally, in order to evaluate the efficacy of PERSIANN-CDR and TRMM-3B42V7 in capturing extreme daily-precipitation amounts, an in-situ rain-gauge dataset collected by the Islamic Republic of Iran Meteorological Organization (IRIMO) was employed. Given the sparsity of the rain gauges, only 0.25° pixels containing three or more gauges were used for this evaluation. There were 228 such pixels where daily and extreme rainfall from PERSIANN-CDR and TRMM-3B42V7 could be compared. TRMM-3B42V7 overestimates most of the intensity indices (correlation coefficients R between 0.7648 and 0.8311; root mean square errors, RMSE, between 3.29 mm/day and 21.2 mm/5day), whereas PERSIANN-CDR underestimates these extremes (R between 0.6349 and 0.7791; RMSE between 3.59 mm/day and 30.56 mm/5day). Both satellite products show higher correlation coefficients and lower RMSEs for the annual mean of consecutive dry spells than wet spells. The results show that TRMM-3B42V7

  18. Epithelium percentage estimation facilitates epithelial quantitative protein measurement in tissue specimens.

    Science.gov (United States)

    Chen, Jing; Toghi Eshghi, Shadi; Bova, George Steven; Li, Qing Kay; Li, Xingde; Zhang, Hui

    2013-12-01

    The rapid advancement of high-throughput tools for quantitative measurement of proteins has demonstrated the potential for the identification of proteins associated with cancer. However, quantitative results on cancer tissue specimens are usually confounded by tissue heterogeneity, e.g. regions with cancer usually have significantly higher epithelium content yet lower stromal content. It is therefore necessary to develop a tool to facilitate the interpretation of the results of protein measurements in tissue specimens. Epithelial cell adhesion molecule (EpCAM) and cathepsin L (CTSL) are two epithelial proteins whose expression in normal and tumorous prostate tissues was confirmed by measuring staining intensity with immunohistochemical (IHC) staining. The expression of these proteins was measured by ELISA in protein extracts from OCT-embedded frozen prostate tissues. To eliminate the influence of tissue heterogeneity on epithelial protein quantification measured by ELISA, a color-based segmentation method was developed in-house for estimation of epithelium content using H&E histology slides from the same prostate tissues, and the estimated epithelium percentage was used to normalize the ELISA results. The epithelium contents of the same slides were also estimated by a pathologist and used to normalize the ELISA results. The computer-based results were compared with the pathologist's reading. We found that both EpCAM and CTSL levels, as measured by the ELISA assays themselves, were greatly affected by the epithelium content of the tissue specimens. Without adjusting for epithelium percentage, both EpCAM and CTSL levels appeared significantly higher in tumor tissues than normal tissues with a p value less than 0.001. However, after normalization by the epithelium percentage, ELISA measurements of both EpCAM and CTSL were in agreement with IHC staining results, showing a significant increase only in EpCAM with no difference in CTSL expression in cancer tissues.
These results
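
    The normalization step described in this record is a division by the estimated epithelium fraction; a minimal sketch with made-up numbers showing how an apparent tumor/normal difference can vanish after adjustment:

```python
def normalize_by_epithelium(elisa_value, epithelium_percent):
    """Express a bulk-tissue ELISA measurement per unit of epithelium
    by dividing by the estimated epithelium fraction."""
    if not 0 < epithelium_percent <= 100:
        raise ValueError("epithelium percentage must be in (0, 100]")
    return elisa_value / (epithelium_percent / 100.0)

# Same per-epithelium expression, but the tumor section contains more epithelium
# (both raw signals and percentages are invented for illustration):
tumor  = normalize_by_epithelium(80.0, 80.0)  # ≈ 100
normal = normalize_by_epithelium(40.0, 40.0)  # ≈ 100: no real difference remains
print(tumor, normal)
```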

  19. A direct method for estimating the alpha/beta ratio from quantitative dose-response data

    International Nuclear Information System (INIS)

    Stuschke, M.

    1989-01-01

    A one-step optimization method based on a least-squares fit of the linear-quadratic model to quantitative tissue response data after fractionated irradiation is proposed. Suitable end-points that can be analysed by this method are growth delay, host survival and quantitative biochemical or clinical laboratory data. The functional dependence between the transformed dose and the measured response is approximated by a polynomial. The method allows for the estimation of the alpha/beta ratio and its confidence limits from all observed responses of the different fractionation schedules. Censored data can be included in the analysis. A method to test the appropriateness of the fit is presented. A computer simulation illustrates the method and its accuracy, as exemplified by the growth-delay end-point. A comparison with a fit of the linear-quadratic model to interpolated isoeffect doses shows the advantages of the direct method. (orig./HP)
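
    The least-squares fit of the linear-quadratic model can be sketched by solving the 2x2 normal equations for alpha and beta directly. This bare-bones version omits the paper's polynomial response link, censoring and confidence limits; the synthetic data are generated from known coefficients:

```python
def fit_alpha_beta(schedules, responses):
    """Least-squares fit of the linear-quadratic model
    E = alpha*(n*d) + beta*(n*d**2), where each schedule gives n fractions
    of dose d; solves the 2x2 normal equations and returns
    (alpha, beta, alpha/beta)."""
    x1 = [n * d for n, d in schedules]       # total dose term
    x2 = [n * d * d for n, d in schedules]   # dose-squared term
    s11 = sum(a * a for a in x1)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s22 = sum(b * b for b in x2)
    t1 = sum(a * y for a, y in zip(x1, responses))
    t2 = sum(b * y for b, y in zip(x2, responses))
    det = s11 * s22 - s12 * s12
    alpha = (t1 * s22 - t2 * s12) / det
    beta = (t2 * s11 - t1 * s12) / det
    return alpha, beta, alpha / beta

# Synthetic responses generated with alpha = 0.3 Gy^-1, beta = 0.03 Gy^-2:
schedules = [(1, 10.0), (5, 4.0), (10, 2.0), (20, 1.0)]   # (n fractions, d Gy)
responses = [0.3 * n * d + 0.03 * n * d * d for n, d in schedules]
print(fit_alpha_beta(schedules, responses)[2])  # alpha/beta ≈ 10
```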

  20. Analytical performance of refractometry in quantitative estimation of isotopic concentration of heavy water in nuclear reactor

    International Nuclear Information System (INIS)

    Dhole, K.; Ghosh, S.; Datta, A.; Tripathy, M.K.; Bose, H.; Roy, M.; Tyagi, A.K.

    2011-01-01

    The method of refractometry has been investigated for the quantitative estimation of the isotopic concentration of D2O (heavy water) in a simulated water sample. The viability of refractometry as an excellent analytical technique for rapid and non-invasive determination of D2O concentration in water samples has been demonstrated. The temperature of the samples was precisely controlled to eliminate the effect of temperature fluctuation on refractive index measurement. Calibration performance by this technique exhibited reasonable analytical response over a wide range (1-100%) of D2O concentration. (author)
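
    A refractometric calibration of this kind amounts, to first approximation, to a two-point linear relation between refractive index and D2O concentration. The endpoint indices below are nominal literature values (near 20 °C, 589 nm) used only as placeholders, not the paper's calibration:

```python
# Nominal refractive indices of the pure components (placeholder values):
N_H2O, N_D2O = 1.3330, 1.3284

def d2o_percent(n_measured, n_h2o=N_H2O, n_d2o=N_D2O):
    """Invert a two-point linear calibration to get D2O concentration (%)
    from a measured refractive index at controlled temperature."""
    return 100.0 * (n_measured - n_h2o) / (n_d2o - n_h2o)

print(round(d2o_percent(1.3307), 1))  # midpoint index -> 50.0 % D2O
```

Temperature control matters here because the refractive index shift between H2O and D2O is small compared with its temperature dependence.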

  1. Quantitative estimation of hemorrhage in chronic subdural hematoma using the 51Cr erythrocyte labeling method

    International Nuclear Information System (INIS)

    Ito, H.; Yamamoto, S.; Saito, K.; Ikeda, K.; Hisada, K.

    1987-01-01

    Red cell survival studies using an infusion of chromium-51-labeled erythrocytes were performed to quantitatively estimate hemorrhage in the chronic subdural hematoma cavity of 50 patients. The amount of hemorrhage was determined during craniotomy. Between 6 and 24 hours after infusion of the labeled red cells, hemorrhage accounted for a mean of 6.7% of the hematoma content, indicating continuous or intermittent hemorrhage into the cavity. The clinical state of the patients and the density of the chronic subdural hematoma on computerized tomography scans were related to the amount of hemorrhage. Chronic subdural hematomas with a greater amount of hemorrhage frequently consisted of clots rather than fluid

  2. Comparison of NEXRAD multisensor precipitation estimates to rain gage observations in and near DuPage County, Illinois, 2002–12

    Science.gov (United States)

    Spies, Ryan R.; Over, Thomas M.; Ortel, Terry W.

    2018-05-21

    In this report, precipitation data from 2002 to 2012 from the hourly gridded Next-Generation Radar (NEXRAD)-based Multisensor Precipitation Estimate (MPE) precipitation product are compared to precipitation data from two rain gage networks—an automated tipping-bucket network of 25 rain gages operated by the U.S. Geological Survey (USGS) and 51 rain gages from the volunteer-operated Community Collaborative Rain, Hail, and Snow (CoCoRaHS) network—in and near DuPage County, Illinois, at a daily time step to test for long-term differences in space, time, and distribution. The NEXRAD–MPE data used are from the fifty 2.5-mile grid cells overlying the rain gages of the other networks. Because of the challenges of measuring frozen precipitation, the analysis period is separated between days with or without the chance of freezing conditions. The NEXRAD–MPE and tipping-bucket rain gage precipitation data are adjusted to account for undercatch by multiplying by a previously determined factor of 1.14. Under nonfreezing conditions, the three precipitation datasets are broadly similar in cumulative depth and distribution of daily values when the data are combined spatially across the networks. However, the NEXRAD–MPE data indicate a significant trend relative to both rain gage networks as a function of distance from the NEXRAD radar just south of the study area. During freezing conditions, only the heated gages of the USGS network were considered, and these gages indicate substantial mean undercatch of 50 and 61 percent compared to the NEXRAD–MPE and the CoCoRaHS gages, respectively. The heated USGS rain gages also indicate substantially lower quantile values during freezing conditions, except during the most extreme (highest) events. Because NEXRAD precipitation products are continually evolving, the report concludes with a discussion of recent changes in those products and their potential for improved precipitation estimation. An appendix
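
    The undercatch adjustment is a single multiplicative correction; a minimal sketch using the report's factor of 1.14:

```python
UNDERCATCH_FACTOR = 1.14  # previously determined correction factor from the report

def adjust_for_undercatch(depths_mm, factor=UNDERCATCH_FACTOR):
    """Scale measured precipitation depths upward to compensate for gauge
    and radar undercatch, as applied to the NEXRAD-MPE and tipping-bucket data."""
    return [d * factor for d in depths_mm]

print(adjust_for_undercatch([10.0, 0.0, 2.5]))  # each nonzero depth scaled up by 14%
```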

  3. A Quantitative Property-Property Relationship for Estimating Packaging-Food Partition Coefficients of Organic Compounds

    DEFF Research Database (Denmark)

    Huang, L.; Ernstoff, Alexi; Xu, H.

    2017-01-01

    Organic chemicals encapsulated in beverage and food packaging can migrate to the food and lead to human exposures via ingestion. The packaging-food (Kpf) partition coefficient is a key parameter to estimate the chemical migration from packaging materials. Previous studies have simply set Kpf to 1...... or 1000, or provided separate linear correlations for several discrete values of ethanol equivalencies of food simulants (EtOH-eq). The aim of the present study is to develop a single quantitative property-property relationship (QPPR) valid for different chemical-packaging combinations and for water...... because only two packaging types are included. This preliminary QPPR demonstrates that the Kpf for various chemical-packaging-food combinations can be estimated by a single linear correlation. Based on more than 1000 collected Kpf in 15 materials, we will present extensive results for other packaging types...

  4. Combination of methylated-DNA precipitation and methylation-sensitive restriction enzymes (COMPARE-MS) for the rapid, sensitive and quantitative detection of DNA methylation.

    Science.gov (United States)

    Yegnasubramanian, Srinivasan; Lin, Xiaohui; Haffner, Michael C; DeMarzo, Angelo M; Nelson, William G

    2006-02-09

    Hypermethylation of CpG island (CGI) sequences is a nearly universal somatic genome alteration in cancer. Rapid and sensitive detection of DNA hypermethylation would aid in cancer diagnosis and risk stratification. We present a novel technique, called COMPARE-MS, that can rapidly and quantitatively detect CGI hypermethylation with high sensitivity and specificity in hundreds of samples simultaneously. To quantitate CGI hypermethylation, COMPARE-MS uses real-time PCR of DNA that was first digested by methylation-sensitive restriction enzymes and then precipitated by methyl-binding domain polypeptides immobilized on a magnetic solid matrix. We show that COMPARE-MS could detect five genome equivalents of methylated CGIs in a 1000- to 10,000-fold excess of unmethylated DNA. COMPARE-MS was used to rapidly quantitate hypermethylation at multiple CGIs in >155 prostate tissues, including benign and malignant prostate specimens, and prostate cell lines. This analysis showed that GSTP1, MDR1 and PTGS2 CGI hypermethylation as determined by COMPARE-MS could differentiate between malignant and benign prostate with sensitivities >95% and specificities approaching 100%. This novel technology could significantly improve our ability to detect CGI hypermethylation.

  5. Health Impacts of Increased Physical Activity from Changes in Transportation Infrastructure: Quantitative Estimates for Three Communities

    Science.gov (United States)

    2015-01-01

    Recently, two quantitative tools have emerged for predicting the health impacts of projects that change population physical activity: the Health Economic Assessment Tool (HEAT) and Dynamic Modeling for Health Impact Assessment (DYNAMO-HIA). HEAT has been used to support health impact assessments of transportation infrastructure projects, but DYNAMO-HIA has not been previously employed for this purpose nor have the two tools been compared. To demonstrate the use of DYNAMO-HIA for supporting health impact assessments of transportation infrastructure projects, we employed the model in three communities (urban, suburban, and rural) in North Carolina. We also compared DYNAMO-HIA and HEAT predictions in the urban community. Using DYNAMO-HIA, we estimated benefit-cost ratios of 20.2 (95% C.I.: 8.7–30.6), 0.6 (0.3–0.9), and 4.7 (2.1–7.1) for the urban, suburban, and rural projects, respectively. For a 40-year time period, the HEAT predictions of deaths avoided by the urban infrastructure project were three times as high as DYNAMO-HIA's predictions due to HEAT's inability to account for changing population health characteristics over time. Quantitative health impact assessment coupled with economic valuation is a powerful tool for integrating health considerations into transportation decision-making. However, to avoid overestimating benefits, such quantitative HIAs should use dynamic, rather than static, approaches. PMID:26504832

  6. Precipitation Data Merging over Mountainous Areas Using Satellite Estimates and Sparse Gauge Observations (PDMMA-USESGO) for Hydrological Modeling — A Case Study over the Tibetan Plateau

    Science.gov (United States)

    Yang, Z.; Hsu, K. L.; Sorooshian, S.; Xu, X.

    2017-12-01

    Precipitation in mountain regions generally occurs with high frequency and intensity, yet it is poorly captured by sparsely distributed rain gauges, which poses a great challenge for water management. Satellite-based Precipitation Estimation (SPE) provides a global high-resolution alternative for hydro-climatic studies, but is subject to considerable biases. In this study, a model named PDMMA-USESGO for Precipitation Data Merging over Mountainous Areas Using Satellite Estimates and Sparse Gauge Observations is developed to support precipitation mapping and hydrological modeling in mountainous catchments. The PDMMA-USESGO framework includes two calculation steps—adjusting SPE biases and merging satellite-gauge estimates—using the quantile mapping approach, a two-dimensional Gaussian weighting scheme (considering the elevation effect), and an inverse root mean square error weighting method. The model is applied and evaluated over the Tibetan Plateau (TP) with the PERSIANN-CCS precipitation retrievals (daily, 0.04°×0.04°) and sparse observations from 89 gauges, for the 11-yr period of 2003-2013. To assess the effects of data merging on streamflow modeling, a hydrological evaluation is conducted over a watershed in southeast TP based on the Soil and Water Assessment Tool (SWAT). Evaluation results indicate the effectiveness of the model in generating high-resolution, high-accuracy precipitation estimates over mountainous terrain, with the merged estimates (Mer-SG) presenting consistently improved correlation coefficients, root mean square errors and absolute mean biases relative to the original satellite estimates (Ori-CCS). Streamflow simulations forced with Mer-SG exhibit substantial improvements over those using Ori-CCS, with the coefficient of determination (R2) and Nash-Sutcliffe efficiency reaching 0.8 and 0.65, respectively. The presented model and case study serve as valuable references for hydro-climatic applications using remote sensing-gauge information in
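    The record's two calculation steps, bias adjustment by quantile mapping and satellite-gauge merging by inverse-RMSE weighting, can be sketched minimally in Python. This is an illustrative reconstruction with toy climatologies, not the paper's implementation; the elevation-aware Gaussian weighting scheme is omitted.

```python
import numpy as np

def quantile_map(sat_value, sat_clim, gauge_clim):
    """Bias-adjust a satellite estimate by CDF matching: find its quantile
    within the satellite climatology, then read off the gauge climatology
    at that same quantile."""
    q = np.searchsorted(np.sort(sat_clim), sat_value) / len(sat_clim)
    return float(np.quantile(gauge_clim, min(max(q, 0.0), 1.0)))

def inverse_rmse_merge(estimates, rmses):
    """Merge competing precipitation estimates with weights proportional
    to 1/RMSE (lower error -> higher weight)."""
    w = 1.0 / np.asarray(rmses, dtype=float)
    return float(np.dot(w / w.sum(), estimates))
```

    For example, a satellite value at the median of its own climatology is mapped to the gauge median, and an estimate with one third the RMSE of its competitor receives three times the weight.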

  7. Estimation of the number of fluorescent end-members for quantitative analysis of multispectral FLIM data.

    Science.gov (United States)

    Gutierrez-Navarro, Omar; Campos-Delgado, Daniel U; Arce-Santana, Edgar R; Maitland, Kristen C; Cheng, Shuna; Jabbour, Joey; Malik, Bilal; Cuenca, Rodrigo; Jo, Javier A

    2014-05-19

    Multispectral fluorescence lifetime imaging (m-FLIM) can potentially identify the endogenous fluorophores present in biological tissue. Quantitative description of such data requires estimating the number of components in the sample, their characteristic fluorescent decays, and their relative contributions or abundances. Unfortunately, this inverse problem usually requires prior knowledge about the data, which is seldom available in biomedical applications. This work presents a new methodology to estimate the number of potential endogenous fluorophores present in biological tissue samples from time-domain m-FLIM data. Furthermore, a completely blind linear unmixing algorithm is proposed. The method was validated using both synthetic and experimental m-FLIM data. The experimental m-FLIM data include in-vivo measurements from healthy and cancerous hamster cheek-pouch epithelial tissue, and ex-vivo measurements from human coronary atherosclerotic plaques. The analysis of m-FLIM data from in-vivo hamster oral mucosa distinguished healthy tissue from precancerous lesions, based on the relative concentration of their characteristic fluorophores. The algorithm also provided a better description of atherosclerotic plaques in terms of their endogenous fluorophores. These results demonstrate the potential of this methodology to provide a quantitative description of tissue biochemical composition.

  8. Quantitative pre-surgical lung function estimation with SPECT/CT

    International Nuclear Information System (INIS)

    Bailey, D. L.; Willowson, K. P.; Timmins, S.; Harris, B. E.; Bailey, E. A.; Roach, P. J.

    2009-01-01

    Full text: Objectives: To develop methodology to predict lobar lung function based on SPECT/CT ventilation and perfusion (V/Q) scanning in candidates for lobectomy for lung cancer. Methods: This study combines two development areas from our group: quantitative SPECT based on CT-derived corrections for scattering and attenuation of photons, and SPECT V/Q scanning with lobar segmentation from CT. Eight patients underwent baseline pulmonary function testing (PFT) including spirometry, measurement of DLCO and cardio-pulmonary exercise testing. A SPECT/CT V/Q scan was acquired at baseline. Using in-house software, each lobe was anatomically defined using CT to provide lobar ROIs which could be applied to the SPECT data. From these, the individual lobar contribution to overall function was calculated from counts within the lobe, and post-operative FEV1, DLCO and VO2 peak were predicted. This was compared with the quantitative planar scan method using 3 rectangular ROIs over each lung. Results: Post-operative FEV1 most closely matched that predicted by the planar quantification method, with SPECT V/Q over-estimating the loss of function by 8% (range -7% to +23%). However, post-operative DLCO and VO2 peak were both accurately predicted by SPECT V/Q (average error of 0 and 2%, respectively) compared with planar. Conclusions: More accurate anatomical definition of lobar anatomy provides better estimates of post-operative loss of function for DLCO and VO2 peak than traditional planar methods. SPECT/CT provides the tools for accurate anatomical definition of the surgical target as well as being useful in producing quantitative 3D functional images for ventilation and perfusion.
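    The prediction step described above, scaling a baseline measurement by the functional share of the lobe to be resected, can be sketched as follows. This is an illustrative reconstruction, not the authors' in-house software; the lobe names and counts are hypothetical.

```python
def predict_postop_function(baseline_value, lobe_counts, resected_lobe):
    """Predict a post-operative value (e.g. FEV1, DLCO, or VO2 peak) by
    removing the resected lobe's share of total counts, where each lobe's
    counts come from a CT-defined ROI applied to the SPECT data."""
    total_counts = sum(lobe_counts.values())
    lost_fraction = lobe_counts[resected_lobe] / total_counts
    return baseline_value * (1.0 - lost_fraction)
```

    A right-upper-lobe resection in a patient whose RUL holds 20% of total counts would thus be predicted to retain 80% of baseline function.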

  9. Quantitative estimation of carbonation and chloride penetration in reinforced concrete by laser-induced breakdown spectroscopy

    Science.gov (United States)

    Eto, Shuzo; Matsuo, Toyofumi; Matsumura, Takuro; Fujii, Takashi; Tanaka, Masayoshi Y.

    2014-11-01

    The penetration profile of chlorine in a reinforced concrete (RC) specimen was determined by laser-induced breakdown spectroscopy (LIBS). The concrete core was prepared from RC beams with cracking damage induced by bending load and salt water spraying. LIBS was performed using a specimen that was obtained by splitting the concrete core, and the line scan of laser pulses gave the two-dimensional emission intensity profiles of 100 × 80 mm² within one hour. The two-dimensional profile of the emission intensity suggests that the presence of the crack had less effect on the emission intensity when the measurement interval was larger than the crack width. The chlorine emission spectrum was measured without using the buffer gas, which is usually used for chlorine measurement, by collinear double-pulse LIBS. The apparent diffusion coefficient, which is one of the most important parameters for chloride penetration in concrete, was estimated using the depth profile of chlorine emission intensity and Fick's law. The carbonation depth was estimated on the basis of the relationship between carbon and calcium emission intensities. When the carbon emission intensity was statistically higher than the calcium emission intensity at the measurement point, we determined that the point was carbonated. The estimation results were consistent with the spraying test results using phenolphthalein solution. These results suggest that the quantitative estimation by LIBS of carbonation depth and chloride penetration can be performed simultaneously.
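    The apparent diffusion coefficient is obtained from the error-function solution of Fick's second law for a constant surface concentration. A minimal sketch of that fitting idea, using an illustrative grid search over D (values and units are placeholders, not the paper's):

```python
import math
import numpy as np

def erfc_profile(depth, surface_conc, diff_coeff, time):
    """C(x, t) = C_s * erfc(x / (2 * sqrt(D * t))): the solution of
    Fick's second law for a constant surface concentration."""
    scale = 2.0 * math.sqrt(diff_coeff * time)
    return np.array([surface_conc * math.erfc(x / scale) for x in depth])

def fit_diffusion_coefficient(depth, conc, surface_conc, time, d_candidates):
    """Pick the apparent diffusion coefficient whose theoretical profile
    best matches a measured (e.g. LIBS-derived) concentration-depth
    profile, by least squares over a candidate grid."""
    sse = [float(np.sum((erfc_profile(depth, surface_conc, d, time) - conc) ** 2))
           for d in d_candidates]
    return d_candidates[int(np.argmin(sse))]
```

    In practice the emission-intensity profile is first converted to relative concentration; a continuous optimizer would replace the grid search.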

  10. Noninvasive IDH1 mutation estimation based on a quantitative radiomics approach for grade II glioma

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jinhua [Fudan University, Department of Electronic Engineering, Shanghai (China); Computing and Computer-Assisted Intervention, Key Laboratory of Medical Imaging, Shanghai (China); Shi, Zhifeng; Chen, Liang; Mao, Ying [Fudan University, Department of Neurosurgery, Huashan Hospital, Shanghai (China); Lian, Yuxi; Li, Zeju; Liu, Tongtong; Gao, Yuan; Wang, Yuanyuan [Fudan University, Department of Electronic Engineering, Shanghai (China)

    2017-08-15

    The status of isocitrate dehydrogenase 1 (IDH1) is highly correlated with the development, treatment and prognosis of glioma. We explored a noninvasive method to reveal IDH1 status by using a quantitative radiomics approach for grade II glioma. A primary cohort consisting of 110 patients pathologically diagnosed with grade II glioma was retrospectively studied. The radiomics method developed in this paper includes image segmentation, high-throughput feature extraction, radiomics sequencing, feature selection and classification. Using the leave-one-out cross-validation (LOOCV) method, the classification result was compared with the real IDH1 situation from Sanger sequencing. Another independent validation cohort containing 30 patients was utilised to further test the method. A total of 671 high-throughput features were extracted and quantified. 110 features were selected by an improved genetic algorithm. In LOOCV, the noninvasive IDH1 status estimation based on the proposed approach presented an estimation accuracy of 0.80, sensitivity of 0.83 and specificity of 0.74. Area under the receiver operating characteristic curve reached 0.86. Further validation on the independent cohort of 30 patients produced similar results. Radiomics is a potentially useful approach for estimating IDH1 mutation status noninvasively using conventional T2-FLAIR MRI images. The estimation accuracy could potentially be improved by using multiple imaging modalities. (orig.)
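    The classification result in this record is scored by leave-one-out cross-validation. A generic LOOCV sketch, independent of the authors' feature selection and classifier (the nearest-mean predictor used in the example below is purely illustrative):

```python
def loocv_accuracy(samples, labels, train_and_predict):
    """Leave-one-out cross-validation: for each sample, train on the
    remaining n-1 samples and test on the held-out one; report the
    fraction of correct predictions."""
    hits = 0
    for i in range(len(samples)):
        train_x = samples[:i] + samples[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        hits += int(train_and_predict(train_x, train_y, samples[i]) == labels[i])
    return hits / len(samples)
```

    With only 110 patients, LOOCV makes the most of the data, at the cost of retraining the classifier once per patient.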

  11. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

    Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and are frequently only available in form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/

  12. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and are frequently only available in form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  13. Quantifying the Extent of Emphysema : Factors Associated with Radiologists' Estimations and Quantitative Indices of Emphysema Severity Using the ECLIPSE Cohort

    NARCIS (Netherlands)

    Gietema, Hester A.; Mueller, Nestor L.; Fauerbach, Paola V. Nasute; Sharma, Sanjay; Edwards, Lisa D.; Camp, Pat G.; Coxson, Harvey O.

    Rationale and Objectives: This study investigated what factors radiologists take into account when estimating emphysema severity and assessed quantitative computed tomography (CT) measurements of low attenuation areas. Materials and Methods: CT scans and spirometry were obtained on 1519 chronic

  14. Empirical model for mean temperature for Indian zone and estimation of precipitable water vapor from ground based GPS measurements

    Directory of Open Access Journals (Sweden)

    C. Suresh Raju

    2007-10-01

    Estimation of precipitable water (PW) in the atmosphere from ground-based Global Positioning System (GPS) measurements essentially involves modeling the zenith hydrostatic delay (ZHD) in terms of surface pressure (Ps) and subtracting it from the corresponding values of zenith tropospheric delay (ZTD) to estimate the zenith wet (non-hydrostatic) delay (ZWD). This further involves establishing an appropriate model connecting PW and ZWD, which in the simplest case is assumed to be similar to that of ZHD. When temperature variations are large, however, the variation of the proportionality constant connecting PW and ZWD must be accounted for to obtain an accurate estimate of PW. For this purpose a water-vapor-weighted mean temperature (Tm) has been defined in many investigations, which has to be modeled on a regional basis. For estimating PW over the Indian region from GPS data, a region-specific model for Tm in terms of surface temperature (Ts) is developed using radiosonde measurements from eight India Meteorological Department (IMD) stations spread over the sub-continent within a latitude range of 8.5°–32.6° N. Following a similar procedure, Tm-based models are also evolved for each of these stations, and the features of these site-specific models are compared with those of the region-specific model. The applicability of the region-specific and site-specific Tm-based models in retrieving PW from GPS data recorded at the IGS sites Bangalore and Hyderabad is tested by comparing the retrieved values of PW with those estimated from the altitude profile of water vapor measured using radiosonde. The values of ZWD estimated at 00:00 UTC and 12:00 UTC are used to test the validity of the models by estimating the PW using the models and comparing it with those obtained from radiosonde data. The region-specific Tm-based model is found to be on par with if not better than a
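    The retrieval chain in this record (ZWD = ZTD − ZHD, then PW = Π·ZWD with Π computed from Tm) can be sketched numerically. The Saastamoinen ZHD model and the wet refractivity constants below are standard literature values; the linear Tm coefficients are Bevis-style placeholders standing in for the region-specific fit, not the paper's own model:

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_deg=15.0, height_km=0.0):
    """Zenith hydrostatic delay (m) from surface pressure (Saastamoinen)."""
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 0.00028 * height_km)

def precipitable_water_mm(ztd_m, pressure_hpa, ts_k, a=70.2, b=0.72):
    """PW (mm) from GPS ZTD: subtract the ZHD to get ZWD, model
    Tm = a + b*Ts, and convert with the dimensionless factor Pi built
    from the wet refractivity constants."""
    zwd_m = ztd_m - saastamoinen_zhd(pressure_hpa)
    tm = a + b * ts_k
    k2_prime, k3 = 22.1, 3.739e5          # K/hPa, K^2/hPa
    rho_w, r_v = 1000.0, 461.5            # kg/m^3, J/(kg K)
    pi_factor = 1.0e8 / (rho_w * r_v * (k3 / tm + k2_prime))
    return pi_factor * zwd_m * 1000.0     # metres of water -> mm
```

    With these placeholder constants, ZTD = 2.5 m, surface pressure 1000 hPa, and Ts = 300 K give a PW of roughly 35 mm, a plausible tropical value; Π itself comes out near 0.16.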

  15. An improved export coefficient model to estimate non-point source phosphorus pollution risks under complex precipitation and terrain conditions.

    Science.gov (United States)

    Cheng, Xian; Chen, Liding; Sun, Ranhao; Jing, Yongcai

    2018-05-15

    To control non-point source (NPS) pollution, it is important to estimate NPS pollution exports and identify sources of pollution. Precipitation and terrain have large impacts on the export and transport of NPS pollutants. We established an improved export coefficient model (IECM) to estimate the amount of agricultural and rural NPS total phosphorus (TP) exported from the Luanhe River Basin (LRB) in northern China. The TP concentrations of rivers from 35 selected catchments in the LRB were used to test the model's explanatory capacity and accuracy. The simulation results showed that, in 2013, the average TP export was 57.20 t at the catchment scale. The mean TP export intensity in the LRB was 289.40 kg/km², which was much higher than those of other basins in China. In the LRB topographic regions, the TP export intensity was the highest in the south Yanshan Mountains and was followed by the plain area, the north Yanshan Mountains, and the Bashang Plateau. Among the three pollution categories, the contribution ratios to TP export were, from high to low, the rural population (59.44%), livestock husbandry (22.24%), and land-use types (18.32%). Among all ten pollution sources, the contribution ratios from the rural population (59.44%), pigs (14.40%), and arable land (10.52%) ranked as the top three sources. This study provides information that decision makers and planners can use to develop sustainable measures for the prevention and control of NPS pollution in semi-arid regions.
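    The export-coefficient principle underlying the IECM, total load as the sum of per-source export coefficients times source extents, optionally scaled by precipitation and terrain factors, can be written compactly. The simple multiplicative factors below are a generic sketch, not the paper's actual precipitation and terrain adjustments:

```python
def export_coefficient_load(sources, precip_factor=1.0, terrain_factor=1.0):
    """Export-coefficient load L = sum_i E_i * A_i, scaled by illustrative
    multiplicative precipitation and terrain factors. `sources` holds
    (export_coefficient, extent) pairs, e.g. kg/km2 * km2 for land use,
    or kg/head * heads for livestock and population."""
    base_load = sum(coeff * extent for coeff, extent in sources)
    return base_load * precip_factor * terrain_factor
```

    Source-wise contribution ratios, as reported in the record, follow by evaluating each (coeff, extent) term separately and dividing by the total.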

  16. A Novel Method of Quantitative Anterior Chamber Depth Estimation Using Temporal Perpendicular Digital Photography.

    Science.gov (United States)

    Zamir, Ehud; Kong, George Y X; Kowalski, Tanya; Coote, Michael; Ang, Ghee Soon

    2016-07-01

    We hypothesize that: (1) Anterior chamber depth (ACD) is correlated with the relative anteroposterior position of the pupillary image, as viewed from the temporal side. (2) Such a correlation may be used as a simple quantitative tool for estimation of ACD. Two hundred sixty-six phakic eyes had lateral digital photographs taken from the temporal side, perpendicular to the visual axis, and underwent optical biometry (Nidek AL scanner). The relative anteroposterior position of the pupillary image was expressed using the ratio between: (1) lateral photographic temporal limbus to pupil distance ("E") and (2) lateral photographic temporal limbus to cornea distance ("Z"). In the first chronological half of patients (Correlation Series), the E:Z ratio (EZR) was correlated with optical biometric ACD. The correlation equation was then used to predict ACD in the second half of patients (Prediction Series) and compared to their biometric ACD for agreement analysis. A strong linear correlation was found between EZR and ACD, R = -0.91, R² = 0.81. Bland-Altman analysis showed good agreement between the predicted ACD using this method and the optical biometric ACD. The mean error was -0.013 mm (range -0.377 to 0.336 mm), standard deviation 0.166 mm. The 95% limits of agreement were ±0.33 mm. Lateral digital photography with EZR calculation is a novel method to quantitatively estimate ACD, requiring minimal equipment and training. The EZ ratio may be employed in screening for angle closure glaucoma. It may also be helpful in outpatient medical clinic settings, where doctors need to judge the safety of topical or systemic pupil-dilating medications versus their risk of triggering acute angle closure glaucoma. Similarly, non-ophthalmologists may use it to estimate the likelihood of acute angle closure glaucoma in emergency presentations.
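    The two-stage design, fitting a line on the Correlation Series and applying it to the Prediction Series, amounts to least-squares regression of ACD on EZR. A sketch with synthetic coefficients (the study's fitted equation is not reproduced here):

```python
import numpy as np

def fit_ezr_model(ezr, acd):
    """Least-squares line ACD = slope * EZR + intercept, fitted on the
    Correlation Series; the study reports a strong negative correlation
    (R = -0.91), so the slope is expected to be negative."""
    slope, intercept = np.polyfit(ezr, acd, 1)
    return float(slope), float(intercept)

def predict_acd(ezr_value, slope, intercept):
    """Apply the fitted line to a new eye (Prediction Series)."""
    return slope * ezr_value + intercept
```

    Agreement between predicted and biometric ACD would then be checked Bland-Altman style, via the mean and spread of the prediction errors.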

  17. Application of short-wave infrared (SWIR) spectroscopy in quantitative estimation of clay mineral contents

    International Nuclear Information System (INIS)

    You, Jinfeng; Xing, Lixin; Pan, Jun; Meng, Tao; Liang, Liheng

    2014-01-01

    Clay minerals are significant constituents of soil which are necessary for life. This paper studied three types of clay minerals, kaolinite, illite, and montmorillonite, because they are not only the most common soil-forming materials, but also important indicators of soil expansion and shrinkage potential. These clay minerals showed diagnostic absorption bands resulting from vibrations of hydroxyl groups and structural water molecules in the SWIR wavelength region. The short-wave infrared reflectance spectra of the soil were obtained from a Portable Near Infrared Spectrometer (PNIS, spectrum range: 1300∼2500 nm, interval: 2 nm). Owing to its simplicity, speed, and non-destructive nature, SWIR spectroscopy has been widely used in geological prospecting, chemical engineering and many other fields. The aim of this study was to use multiple linear regression (MLR) and partial least squares (PLS) regression to establish optimized quantitative estimation models of the kaolinite, illite and montmorillonite contents from soil reflectance spectra. Here, the soil reflectance spectra mainly refer to the spectral reflectivity of soil (SRS) corresponding to the absorption-band positions (AP) of the kaolinite, illite, and montmorillonite representative spectra from the USGS spectral library, the SRS corresponding to the AP of the soil spectra, and the soil overall spectrum reflectance values. The optimal estimation models of the three clay mineral contents showed that the retrieval accuracy was satisfactory (Kaolinite content: a Root Mean Square Error of Calibration (RMSEC) of 1.671 with a coefficient of determination (R²) of 0.791; Illite content: a RMSEC of 1.126 with an R² of 0.616; Montmorillonite content: a RMSEC of 1.814 with an R² of 0.707). Thus, the reflectance spectra of soil obtained from the PNIS could be used for quantitative estimation of kaolinite, illite and montmorillonite contents in soil.
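    Of the two calibration approaches compared, MLR is simple to write down: regress mineral content on reflectance values at the selected bands and report the calibration error. A generic least-squares sketch (PLS, which handles collinear bands better, is omitted; nothing here reproduces the paper's fitted models):

```python
import numpy as np

def fit_mlr(spectra, contents):
    """Multiple linear regression of mineral content on reflectance at
    selected bands; returns coefficients including an intercept term."""
    design = np.hstack([np.ones((spectra.shape[0], 1)), spectra])
    coeffs, *_ = np.linalg.lstsq(design, contents, rcond=None)
    return coeffs

def rmsec(spectra, contents, coeffs):
    """Root Mean Square Error of Calibration for a fitted model."""
    design = np.hstack([np.ones((spectra.shape[0], 1)), spectra])
    residuals = contents - design @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2)))
```

    One such model would be fitted per mineral, with the RMSEC and R² reported per model as in the record.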

  18. On-line estimation of the dissolved zinc concentration during ZnS precipitation in a continuous stirred tank reactor (CSTR)

    NARCIS (Netherlands)

    Grootscholten, T.I.M.; Keesman, K.J.; Lens, P.N.L.

    2008-01-01

    In this paper a method is presented to estimate the reaction term of zinc sulphide precipitation and the zinc concentration in a CSTR, using the read-out signal of a sulphide selective electrode. The reaction between zinc and sulphide is described by a non-linear model and therefore classical

  19. Quantitative ultrasound characterization of locally advanced breast cancer by estimation of its scatterer properties

    Energy Technology Data Exchange (ETDEWEB)

    Tadayyon, Hadi [Physical Sciences, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Medical Biophysics, Faculty of Medicine, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Sadeghi-Naini, Ali; Czarnota, Gregory, E-mail: Gregory.Czarnota@sunnybrook.ca [Physical Sciences, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Medical Biophysics, Faculty of Medicine, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Radiation Oncology, Faculty of Medicine, University of Toronto, Toronto, Ontario M5T 1P5 (Canada); Wirtzfeld, Lauren [Department of Physics, Ryerson University, Toronto, Ontario M5B 2K3 (Canada); Wright, Frances C. [Division of Surgical Oncology, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada)

    2014-01-15

    Purpose: Tumor grading is an important part of breast cancer diagnosis and currently requires biopsy as its standard. Here, the authors investigate quantitative ultrasound parameters in locally advanced breast cancers that can potentially separate tumors from normal breast tissue and differentiate tumor grades. Methods: Ultrasound images and radiofrequency data from 42 locally advanced breast cancer patients were acquired and analyzed. Parameters related to the linear regression of the power spectrum—midband fit, slope, and 0-MHz-intercept—were determined from breast tumors and normal breast tissues. Mean scatterer spacing was estimated from the spectral autocorrelation, and the effective scatterer diameter and effective acoustic concentration were estimated from the Gaussian form factor. Parametric maps of each quantitative ultrasound parameter were constructed from the gated radiofrequency segments in tumor and normal tissue regions of interest. In addition to the mean values of the parametric maps, higher order statistical features, computed from gray-level co-occurrence matrices were also determined and used for characterization. Finally, linear and quadratic discriminant analyses were performed using combinations of quantitative ultrasound parameters to classify breast tissues. Results: Quantitative ultrasound parameters were found to be statistically different between tumor and normal tissue (p < 0.05). The combination of effective acoustic concentration and mean scatterer spacing could separate tumor from normal tissue with 82% accuracy, while the addition of effective scatterer diameter to the combination did not provide significant improvement (83% accuracy). Furthermore, the two advanced parameters, including effective scatterer diameter and mean scatterer spacing, were found to be statistically differentiating among grade I, II, and III tumors (p = 0.014 for scatterer spacing, p = 0.035 for effective scatterer diameter). The separation of the tumor

  20. Quantitative ultrasound characterization of locally advanced breast cancer by estimation of its scatterer properties

    International Nuclear Information System (INIS)

    Tadayyon, Hadi; Sadeghi-Naini, Ali; Czarnota, Gregory; Wirtzfeld, Lauren; Wright, Frances C.

    2014-01-01

    Purpose: Tumor grading is an important part of breast cancer diagnosis and currently requires biopsy as its standard. Here, the authors investigate quantitative ultrasound parameters in locally advanced breast cancers that can potentially separate tumors from normal breast tissue and differentiate tumor grades. Methods: Ultrasound images and radiofrequency data from 42 locally advanced breast cancer patients were acquired and analyzed. Parameters related to the linear regression of the power spectrum—midband fit, slope, and 0-MHz-intercept—were determined from breast tumors and normal breast tissues. Mean scatterer spacing was estimated from the spectral autocorrelation, and the effective scatterer diameter and effective acoustic concentration were estimated from the Gaussian form factor. Parametric maps of each quantitative ultrasound parameter were constructed from the gated radiofrequency segments in tumor and normal tissue regions of interest. In addition to the mean values of the parametric maps, higher order statistical features, computed from gray-level co-occurrence matrices were also determined and used for characterization. Finally, linear and quadratic discriminant analyses were performed using combinations of quantitative ultrasound parameters to classify breast tissues. Results: Quantitative ultrasound parameters were found to be statistically different between tumor and normal tissue (p < 0.05). The combination of effective acoustic concentration and mean scatterer spacing could separate tumor from normal tissue with 82% accuracy, while the addition of effective scatterer diameter to the combination did not provide significant improvement (83% accuracy). Furthermore, the two advanced parameters, including effective scatterer diameter and mean scatterer spacing, were found to be statistically differentiating among grade I, II, and III tumors (p = 0.014 for scatterer spacing, p = 0.035 for effective scatterer diameter). The separation of the tumor

  1. Developing Daily Quantitative Damage Estimates From Geospatial Layers To Support Post Event Recovery

    Science.gov (United States)

    Woods, B. K.; Wei, L. H.; Connor, T. C.

    2014-12-01

    With the growth of natural hazard data available in near real-time it is increasingly feasible to deliver damage estimates caused by natural disasters. These estimates can be used in a disaster management setting or by commercial entities to optimize the deployment of resources and/or routing of goods and materials. This work outlines an end-to-end, modular process to generate estimates of damage caused by severe weather. The processing stream consists of five generic components: 1) Hazard modules that provide quantitative data layers for each peril. 2) Standardized methods to map the hazard data to an exposure layer based on atomic geospatial blocks. 3) Peril-specific damage functions that compute damage metrics at the atomic geospatial block level. 4) Standardized data aggregators, which map damage to user-specific geometries. 5) Data dissemination modules, which provide the resulting damage estimates in a variety of output forms. This presentation provides a description of this generic tool set, and an illustrated example using HWRF-based hazard data for Hurricane Arthur (2014). In this example, the Python-based real-time processing ingests GRIB2 output from the HWRF numerical model and dynamically downscales it in conjunction with a land cover database using a multiprocessing pool and a just-in-time (JIT) compiler. The resulting wind fields are contoured and ingested into a PostGIS database using OGR. Finally, the damage estimates are calculated at the atomic block level and aggregated to user-defined regions using PostgreSQL queries to construct application-specific tabular and graphical output.
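    The five generic components can be reduced to a code skeleton covering stages 2 through 4: hazard values mapped onto atomic blocks, a peril-specific damage function evaluated per block, and aggregation to user-defined geometries (ingest and dissemination are left to the caller). All names here are illustrative, not the authors' software:

```python
def aggregate_block_damage(block_hazard, block_exposure, block_region, damage_fn):
    """Stages 2-4 of the pipeline sketch: look up the hazard value on each
    atomic geospatial block, evaluate a peril-specific damage function
    against the block's exposure, then sum damage into user regions."""
    regional_damage = {}
    for block_id, exposure in block_exposure.items():
        hazard = block_hazard.get(block_id, 0.0)   # blocks outside the hazard field get 0
        damage = damage_fn(hazard, exposure)
        region = block_region[block_id]
        regional_damage[region] = regional_damage.get(region, 0.0) + damage
    return regional_damage
```

    In the production system described above, the per-block step runs inside the database (PostGIS) rather than in Python dictionaries, but the data flow is the same.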

  2. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits.

    Directory of Open Access Journals (Sweden)

    Noah Zaitlen

    2013-05-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays.

  3. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits.

    Science.gov (United States)

    Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L

    2013-05-01

    Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays.
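
The phenotypic-correlation-versus-relatedness idea above can be illustrated with a Haseman-Elston-style regression. This is not the authors' estimator, but it shows the core principle: for standardized phenotypes, the expected cross-product of a pair equals h2 times their relatedness, so regressing pair cross-products on relatedness recovers h2. All inputs below (pair counts, relatedness classes, the true h2) are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated pairs with known relatedness (siblings 0.5 ... unrelated 0.0)
n_pairs = 5000
r = rng.choice([0.5, 0.25, 0.125, 0.0], size=n_pairs)
h2_true = 0.4  # assumed narrow-sense heritability

# Standardized phenotypes whose pairwise correlation is h2 * r
y1 = rng.standard_normal(n_pairs)
y2 = h2_true * r * y1 + np.sqrt(1 - (h2_true * r) ** 2) * rng.standard_normal(n_pairs)

# Regress cross-products on relatedness: the slope estimates h2
cp = y1 * y2
slope = np.sum((r - r.mean()) * (cp - cp.mean())) / np.sum((r - r.mean()) ** 2)
print(f"h2 estimate: {slope:.2f}")
```

With thousands of pairs the slope lands near the simulated h2; shared environment would inflate it, which is exactly the confounding the abstract discusses.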

  4. Improved dose–volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    International Nuclear Information System (INIS)

    Cheng Lishui; Hobbs, Robert F; Sgouros, George; Frey, Eric C; Segars, Paul W

    2013-01-01

In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose–volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator–detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less

  5. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    Science.gov (United States)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less
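
A cumulative DVH of the kind compared in this study is straightforward to compute from a 3D dose array and an organ mask: for each dose level, record the fraction of the organ volume receiving at least that dose. The sketch below uses a toy spherical "organ" in a synthetic dose field, not the NCAT phantom or QSPECT reconstructions from the paper.

```python
import numpy as np

def cumulative_dvh(dose, mask, bins=100):
    """Cumulative DVH: fraction of the organ volume receiving >= each dose level."""
    d = dose[mask]
    edges = np.linspace(0.0, d.max(), bins)
    frac = np.array([(d >= e).mean() for e in edges])
    return edges, frac

# Toy 3D dose array and a spherical "organ" mask (illustrative only)
z, y, x = np.mgrid[:32, :32, :32]
r2 = (z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2
mask = r2 < 10 ** 2
dose = np.exp(-r2 / 200.0)  # smooth dose fall-off from the center

edges, frac = cumulative_dvh(dose, mask)
print(frac[0])  # 1.0: every voxel receives at least zero dose
```

The curve is monotonically non-increasing by construction; noise and partial volume effects in a reconstructed activity image distort it exactly as the abstract describes.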

  6. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    Science.gov (United States)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-04-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  7. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    Science.gov (United States)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-06-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.
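
For readers unfamiliar with the setting, the linearized Boltzmann equation under Grad's angular cutoff is conventionally written as follows (standard notation and potential classification, not reproduced from the paper itself):

```latex
\partial_t f + \xi \cdot \nabla_x f = L f, \qquad
L f = -\nu(\xi)\, f + K f,
\]
\[
\nu(\xi) \sim \nu_0 \,(1 + |\xi|)^{\gamma}, \quad
\begin{cases}
0 < \gamma \le 1 & \text{hard potentials},\\
\gamma = 0 & \text{Maxwellian molecules},\\
-3 < \gamma < 0 & \text{soft potentials},
\end{cases}
```

where $\nu$ is the collision frequency and $K$ is a compact operator; the fluid/non-fluid splitting of the solution discussed above comes from the spectral structure of $L$.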

  8. Reef-associated crustacean fauna: biodiversity estimates using semi-quantitative sampling and DNA barcoding

    Science.gov (United States)

    Plaisance, L.; Knowlton, N.; Paulay, G.; Meyer, C.

    2009-12-01

The cryptofauna associated with coral reefs accounts for a major part of the biodiversity in these ecosystems but has been largely overlooked in biodiversity estimates because the organisms are hard to collect and identify. We combine a semi-quantitative sampling design and a DNA barcoding approach to provide metrics for the diversity of reef-associated crustaceans. Twenty-two similar-sized dead heads of Pocillopora were sampled at 10 m depth from five central Pacific Ocean localities (four atolls in the Northern Line Islands, plus Moorea, French Polynesia). All crustaceans were removed, and partial cytochrome oxidase subunit I was sequenced from 403 individuals, yielding 135 distinct taxa using a species-level criterion of 5% similarity. Most crustacean species were rare; 44% of the OTUs were represented by a single individual, and an additional 33% were represented by several specimens found only in one of the five localities. The Northern Line Islands and Moorea shared only 11 OTUs. Total numbers estimated by species richness statistics (Chao1 and ACE) suggest at least 90 species of crustaceans in Moorea and 150 in the Northern Line Islands for this habitat type. However, rarefaction curves for each region failed to approach an asymptote, and the Chao1 and ACE estimators did not stabilize after sampling eight heads in Moorea, so even these diversity figures are underestimates. Nevertheless, even this modest sampling effort from a very limited habitat resulted in surprisingly high species numbers.
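
The Chao1 estimator used above has a simple closed form driven by the counts of singleton and doubleton OTUs, which is why the abundance of rare species matters so much in this study. A minimal sketch with hypothetical abundance data (the bias-corrected variant shown here is one common form):

```python
def chao1(otu_counts):
    """Bias-corrected Chao1 richness: observed species plus a correction
    derived from singleton (f1) and doubleton (f2) counts."""
    s_obs = len(otu_counts)
    f1 = sum(1 for c in otu_counts if c == 1)  # singletons
    f2 = sum(1 for c in otu_counts if c == 2)  # doubletons
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# Hypothetical abundance list: many rare OTUs, a few common ones
counts = [1] * 59 + [2] * 20 + [5] * 10 + [30] * 3
print(len(counts), chao1(counts))  # 92 observed OTUs; Chao1 well above 92
```

When many OTUs are singletons, as in the abstract (44%), the estimated richness far exceeds the observed count, signalling undersampling.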

  9. A comparison of monthly precipitation point estimates at 6 locations in Iran using integration of soft computing methods and GARCH time series model

    Science.gov (United States)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2017-11-01

    Precipitation plays an important role in determining the climate of a region. Precise estimation of precipitation is required to manage and plan water resources, as well as other related applications such as hydrology, climatology, meteorology and agriculture. Time series of hydrologic variables such as precipitation are composed of deterministic and stochastic parts. Despite this fact, the stochastic part of the precipitation data is not usually considered in modeling of precipitation process. As an innovation, the present study introduces three new hybrid models by integrating soft computing methods including multivariate adaptive regression splines (MARS), Bayesian networks (BN) and gene expression programming (GEP) with a time series model, namely generalized autoregressive conditional heteroscedasticity (GARCH) for modeling of the monthly precipitation. For this purpose, the deterministic (obtained by soft computing methods) and stochastic (obtained by GARCH time series model) parts are combined with each other. To carry out this research, monthly precipitation data of Babolsar, Bandar Anzali, Gorgan, Ramsar, Tehran and Urmia stations with different climates in Iran were used during the period of 1965-2014. Root mean square error (RMSE), relative root mean square error (RRMSE), mean absolute error (MAE) and determination coefficient (R2) were employed to evaluate the performance of conventional/single MARS, BN and GEP, as well as the proposed MARS-GARCH, BN-GARCH and GEP-GARCH hybrid models. It was found that the proposed novel models are more precise than single MARS, BN and GEP models. Overall, MARS-GARCH and BN-GARCH models yielded better accuracy than GEP-GARCH. The results of the present study confirmed the suitability of proposed methodology for precise modeling of precipitation.
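
The four evaluation metrics named above have standard definitions; the sketch below implements one common form of each. Note that the paper's "determination coefficient" may be the squared Pearson correlation rather than the Nash-Sutcliffe-style R2 used here, so treat this as an illustrative assumption.

```python
import numpy as np

def metrics(obs, pred):
    """RMSE, relative RMSE, MAE and a determination coefficient (R^2)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    rmse = np.sqrt(np.mean(err ** 2))
    rrmse = rmse / np.mean(obs)              # relative to the observed mean
    mae = np.mean(np.abs(err))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, rrmse, mae, r2

# Hypothetical monthly precipitation (mm): observations vs model output
obs = [30.0, 55.0, 80.0, 20.0, 10.0]
pred = [28.0, 60.0, 75.0, 25.0, 12.0]
rmse, rrmse, mae, r2 = metrics(obs, pred)
```

Comparing these scores between the single models (MARS, BN, GEP) and their GARCH hybrids is exactly the evaluation protocol the abstract describes.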

  10. Comparison of conventional, model-based quantitative planar, and quantitative SPECT image processing methods for organ activity estimation using In-111 agents

    International Nuclear Information System (INIS)

    He, Bin; Frey, Eric C

    2006-01-01

Accurate quantification of organ radionuclide uptake is important for patient-specific dosimetry. The quantitative accuracy from conventional conjugate view methods is limited by overlap of projections from different organs and background activity, and attenuation and scatter. In this work, we propose and validate a quantitative planar (QPlanar) processing method based on maximum likelihood (ML) estimation of organ activities using 3D organ VOIs and a projector that models the image degrading effects. Both a physical phantom experiment and Monte Carlo simulation (MCS) studies were used to evaluate the new method. In these studies, the accuracies and precisions of organ activity estimates for the QPlanar method were compared with those from conventional planar (CPlanar) processing methods with various corrections for scatter, attenuation and organ overlap, and a quantitative SPECT (QSPECT) processing method. Experimental planar and SPECT projections and registered CT data from an RSD Torso phantom were obtained using a GE Millenium VH/Hawkeye system. The MCS data were obtained from the 3D NCAT phantom with organ activity distributions that modelled the uptake of 111In ibritumomab tiuxetan. The simulations were performed using parameters appropriate for the same system used in the RSD torso phantom experiment. The organ activity estimates obtained from the CPlanar, QPlanar and QSPECT methods from both experiments were compared. From the results of the MCS experiment, even with ideal organ overlap correction and background subtraction, CPlanar methods provided limited quantitative accuracy. The QPlanar method with accurate modelling of the physical factors increased the quantitative accuracy at the cost of requiring estimates of the organ VOIs in 3D. The accuracy of QPlanar approached that of QSPECT, but required much less acquisition and computation time. Similar results were obtained from the physical phantom experiment. We conclude that the QPlanar method, based
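
The QPlanar idea of estimating organ activities by ML fitting of modeled organ projections can be caricatured with a linear Poisson model and ML-EM updates: the expected planar image is a weighted sum of per-organ projection templates, and the weights (activities) are iterated to maximize the Poisson likelihood. The projector, templates and activities below are random stand-ins, not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Columns of A: projection "templates" of 3 organ VOIs through a toy
# system model (random stand-in for a modeled projector).
n_pix, n_org = 400, 3
A = rng.uniform(0.0, 1.0, (n_pix, n_org))
a_true = np.array([200.0, 50.0, 120.0])        # organ activities
y = rng.poisson(A @ a_true).astype(float)      # noisy planar projection

# ML-EM updates for the organ activities under a Poisson likelihood
a = np.ones(n_org)
sens = A.sum(axis=0)
for _ in range(500):
    a *= (A.T @ (y / (A @ a))) / sens

print(np.round(a, 1))  # should be close to a_true
```

Because only a handful of parameters are estimated from hundreds of pixels, the fit is far better conditioned than voxel-wise reconstruction, which is the intuition behind QPlanar's speed advantage over QSPECT.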

  11. Unit rupture work as a criterion for quantitative estimation of hardenability in steel

    International Nuclear Information System (INIS)

    Kramarov, M.A.; Orlov, E.D.; Rybakov, A.B.

    1980-01-01

The high sensitivity of the fracture resistance of structural steel to the degree of hardenability achieved during hardening is shown to enable a quantitative estimate of the latter. A criterion kappa is proposed: the ratio of the unit rupture work for the incomplete hardenability under investigation (a_T(ih)) to the analogous value obtained for complete hardenability (A_T(ch)) at the testing temperature corresponding to the critical temperature T_100(M). The high sensitivity of the criterion to the structure of hardened steel is confirmed by experimental investigation of the 40Kh, 38KhNM and 38KhNMFA steels after isothermal holding at different temperatures, corresponding to the production of various products of austenite decomposition

  12. The effect of volume-of-interest misregistration on quantitative planar activity and dose estimation

    International Nuclear Information System (INIS)

    Song, N; Frey, E C; He, B

    2010-01-01

In targeted radionuclide therapy (TRT), dose estimation is essential for treatment planning and tumor dose response studies. Dose estimates are typically based on a time series of whole-body conjugate view planar or SPECT scans of the patient acquired after administration of a planning dose. Quantifying the activity in the organs from these studies is an essential part of dose estimation. The quantitative planar (QPlanar) processing method involves accurate compensation for image degrading factors and correction for organ and background overlap via the combination of computational models of the image formation process and 3D volumes of interest defining the organs to be quantified. When the organ VOIs are accurately defined, the method intrinsically compensates for attenuation, scatter and partial volume effects, as well as overlap with other organs and the background. However, alignment between the 3D organ volumes of interest (VOIs) used in QPlanar processing and the true organ projections in the planar images is required. The aim of this research was to study the effects of VOI misregistration on the accuracy and precision of organ activity estimates obtained using the QPlanar method. In this work, we modeled the degree of residual misregistration that would be expected after an automated registration procedure by randomly misaligning 3D SPECT/CT images, from which the VOI information was derived, and planar images. Mutual information-based image registration was used to align the realistic simulated 3D SPECT images with the 2D planar images. The residual image misregistration was used to simulate realistic levels of misregistration and allow investigation of the effects of misregistration on the accuracy and precision of the QPlanar method. We observed that accurate registration is especially important for small organs or ones with low activity concentrations compared to neighboring organs. In addition, residual misregistration gave rise to a loss of precision.

  13. Accuracy in the estimation of quantitative minimal area from the diversity/area curve.

    Science.gov (United States)

    Vives, Sergi; Salicrú, Miquel

    2005-05-01

The problem of representativity is fundamental in ecological studies. A qualitative minimal area that gives a good representation of the species pool [C.M. Bouderesque, Methodes d'etude qualitative et quantitative du benthos (en particulier du phytobenthos), Tethys 3(1) (1971) 79] can be discerned from a quantitative minimal area which reflects the structural complexity of the community [F.X. Niell, Sobre la biologia de Ascophyllum nodosum (L.) Le Jolis en Galicia, Invest. Pesq. 43 (1979) 501]. This suggests that the populational diversity can be considered as the value of the horizontal asymptote of the sample diversity/biomass curve [F.X. Niell, Les applications de l'index de Shannon a l'etude de la vegetation intertidale, Soc. Phycol. Fr. Bull. 19 (1974) 238]. In this study we develop an expression to determine minimal areas and use it to obtain information about community structure from diversity/area curve graphs. This expression is based on the functional relationship between the expected value of the diversity and the sample size used to estimate it. In order to establish the quality of the estimation process, we obtained confidence intervals as a particularization of the (h,phi)-entropies proposed in [M. Salicru, M.L. Menendez, D. Morales, L. Pardo, Asymptotic distribution of (h,phi)-entropies, Commun. Stat. (Theory Methods) 22 (7) (1993) 2015]. As an example to demonstrate the possibilities of this method, and only for illustrative purposes, data from a study of rocky intertidal seaweed populations in the Ria of Vigo (N.W. Spain) are analyzed [F.X. Niell, Estudios sobre la estructura, dinamica y produccion del Fitobentos intermareal (Facies rocosa) de la Ria de Vigo, Ph.D. Mem., University of Barcelona, Barcelona, 1979].

  14. Fatalities in high altitude mountaineering: a review of quantitative risk estimates.

    Science.gov (United States)

    Weinbruch, Stephan; Nordby, Karl-Christian

    2013-12-01

    Quantitative estimates for mortality in high altitude mountaineering are reviewed. Special emphasis is placed on the heterogeneity of the risk estimates and on confounding. Crude estimates for mortality are on the order of 1/1000 to 40/1000 persons above base camp, for both expedition members and high altitude porters. High altitude porters have mostly a lower risk than expedition members (risk ratio for all Nepalese peaks requiring an expedition permit: 0.73; 95 % confidence interval 0.59-0.89). The summit bid is generally the most dangerous part of an expedition for members, whereas most high altitude porters die during route preparation. On 8000 m peaks, the mortality during descent from summit varies between 4/1000 and 134/1000 summiteers (members plus porters). The risk estimates are confounded by human and environmental factors. Information on confounding by gender and age is contradictory and requires further work. There are indications for safety segregation of men and women, with women being more risk averse than men. Citizenship appears to be a significant confounder. Prior high altitude mountaineering experience in Nepal has no protective effect. Commercial expeditions in the Nepalese Himalayas have a lower mortality than traditional expeditions, though after controlling for confounding, the difference is not statistically significant. The overall mortality is increasing with increasing peak altitude for expedition members but not for high altitude porters. In the Nepalese Himalayas and in Alaska, a significant decrease of mortality with calendar year was observed. A few suggestions for further work are made at the end of the article.
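
A risk ratio with a 95% confidence interval, like the 0.73 (0.59-0.89) quoted above for high altitude porters versus expedition members, is conventionally computed on the log scale with a Wald interval. The counts below are hypothetical, chosen only to demonstrate the calculation, not taken from the study.

```python
import math

def risk_ratio_ci(d1, n1, d2, n2, z=1.96):
    """Risk ratio of group 1 vs group 2 with a Wald 95% CI on the log scale."""
    rr = (d1 / n1) / (d2 / n2)
    se = math.sqrt(1 / d1 - 1 / n1 + 1 / d2 - 1 / n2)
    lo, hi = rr * math.exp(-z * se), rr * math.exp(z * se)
    return rr, lo, hi

# Hypothetical counts: deaths among porters vs expedition members
rr, lo, hi = risk_ratio_ci(15, 10000, 21, 10000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With small death counts the interval is wide and may straddle 1, which is why the abstract stresses that some differences lose significance after controlling for confounding.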

  15. First estimates of the contribution of CaCO3 precipitation to the release of CO2 to the atmosphere during young sea ice growth

    Science.gov (United States)

    Geilfus, N.-X.; Carnat, G.; Dieckmann, G. S.; Halden, N.; Nehrke, G.; Papakyriakou, T.; Tison, J.-L.; Delille, B.

    2013-01-01

We report measurements of pH, total alkalinity, air-ice CO2 fluxes (chamber method), and the CaCO3 content of frost flowers (FF) and thin landfast sea ice. As the temperature decreases, the concentration of solutes in the brine skim increases. During this gradual concentration process, some salts reach their solubility threshold and start precipitating. The precipitation of ikaite (CaCO3·6H2O) was confirmed in the FF and throughout the ice by Raman spectroscopy and X-ray analysis. The amount of ikaite precipitated was estimated at 25 µmol kg-1 melted FF in the frost flowers, decreasing from 19 µmol kg-1 melted ice in the upper part of the ice to 15 µmol kg-1 at the bottom. The CO2 release due to precipitation of CaCO3 is estimated to be 50 µmol kg-1 melted sample. The dissolved inorganic carbon (DIC) normalized to a salinity of 10 exhibits significant depletion in the upper layer of the ice and in the FF. This DIC loss is estimated to be 2069 µmol kg-1 melted sample and corresponds to a CO2 release from the ice to the atmosphere ranging from 20 to 40 mmol m-2 d-1. This estimate is consistent with flux measurements of air-ice CO2 exchange. Our measurements confirm previous laboratory findings that growing young sea ice acts as a source of CO2 to the atmosphere. CaCO3 precipitation during early ice growth appears to promote the release of CO2 to the atmosphere; however, its contribution to the overall release by newly formed ice is most likely minor.
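
The salinity normalization of DIC used above removes pure concentration/dilution effects so that the remaining depletion can be attributed to carbon loss. It is a simple scaling to a reference salinity; the sample values below are hypothetical.

```python
def dic_normalized(dic, salinity, s_ref=10.0):
    """Normalize DIC to a reference salinity to remove concentration/dilution
    effects (here S_ref = 10, as in the abstract)."""
    return dic * s_ref / salinity

# Hypothetical brine-skim sample: DIC in umol/kg at in-situ salinity 35
print(dic_normalized(2100.0, 35.0))  # -> 600.0 umol/kg at S = 10
```

Comparing normalized DIC between ice layers then isolates the loss caused by CaCO3 precipitation and CO2 outgassing rather than brine concentration.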

  16. Proficiency testing as a basis for estimating uncertainty of measurement: application to forensic alcohol and toxicology quantitations.

    Science.gov (United States)

    Wallace, Jack

    2010-05-01

    While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
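
Approach (ii) above, deriving uncertainty from a laboratory's deviations from consensus means across many proficiency tests, can be sketched as a root-mean-square of those differences expanded by a coverage factor. This is one plausible formulation for illustration, not necessarily the author's exact procedure; the result values are invented.

```python
import math

def uncertainty_from_pt(lab_results, consensus_means, k=2):
    """Expanded uncertainty (coverage factor k) taken as the root-mean-square
    deviation of the lab's results from the participant consensus means."""
    diffs = [lab - ref for lab, ref in zip(lab_results, consensus_means)]
    rms = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return k * rms

# Hypothetical blood-alcohol proficiency-test history (g/100 mL)
lab = [0.081, 0.118, 0.159, 0.202]
ref = [0.080, 0.120, 0.160, 0.200]
print(round(uncertainty_from_pt(lab, ref), 4))
```

The RMS form folds both the lab's bias and its scatter into a single empirical figure, matching the abstract's point that such estimates are simple and grounded in real interlaboratory data.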

  17. Performance of refractometry in quantitative estimation of isotopic concentration of heavy water in nuclear reactor

    International Nuclear Information System (INIS)

    Dhole, K.; Roy, M.; Ghosh, S.; Datta, A.; Tripathy, M.K.; Bose, H.

    2013-01-01

Highlights: ► Rapid analysis of heavy water samples, with precise temperature control. ► Entire composition range covered. ► Both variations in mole% and wt.% of D2O in the heavy water sample studied. ► Standard errors of calibration and prediction were estimated. - Abstract: The method of refractometry has been investigated for the quantitative estimation of the isotopic concentration of heavy water (D2O) in a simulated water sample. The feasibility of refractometry as an analytical technique for rapid and non-invasive determination of D2O concentration in water samples has been amply demonstrated. The temperature of the samples was precisely controlled to eliminate the effect of temperature fluctuation on refractive index measurement. The method exhibits a reasonable analytical response over the purity range of 0–100% D2O. An accuracy of better than ±1% in the measurement of the isotopic purity of heavy water could be achieved over the entire range
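
The calibration underlying such a method is typically a fit of refractive index against D2O purity, inverted to predict unknown samples. The index values below are illustrative placeholders roughly spanning the H2O-D2O range, not measured data from the study.

```python
import numpy as np

# Assumed calibration set: refractive index vs D2O purity (wt.%);
# the index values are illustrative, not measurements.
purity = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
n_index = np.array([1.3330, 1.3318, 1.3307, 1.3295, 1.3284])

# Linear calibration n = m * purity + c, then inverted for prediction
m, c = np.polyfit(purity, n_index, 1)

def predict_purity(n_measured):
    """Invert the calibration line to estimate D2O purity (wt.%)."""
    return (n_measured - c) / m

print(round(predict_purity(1.3307), 1))  # ~50 wt.%
```

The standard errors of calibration and prediction mentioned in the highlights would come from the residuals of this fit and the propagation of the index measurement uncertainty through the inverted line.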

  18. Using GRACE to constrain precipitation amount over cold mountainous basins

    Science.gov (United States)

    Behrangi, Ali; Gardner, Alex S.; Reager, John T.; Fisher, Joshua B.

    2017-01-01

Despite the importance for hydrology and climate-change studies, current quantitative knowledge on the amount and distribution of precipitation in mountainous and high-elevation regions is limited due to instrumental and retrieval shortcomings. Here, by focusing on two large endorheic basins in High Mountain Asia, we show that satellite gravimetry (the Gravity Recovery and Climate Experiment, GRACE) can be used to provide an independent estimate of monthly accumulated precipitation using a mass balance equation. Results showed that the GRACE-based precipitation estimate has the highest agreement with most of the commonly used precipitation products in summer, but it deviates from them in cold months, when the other products are expected to have larger errors. It was found that most of the products capture about or less than 50% of the total precipitation estimated using GRACE in winter. Overall, the Global Precipitation Climatology Project (GPCP) showed better agreement with the GRACE estimate than other products. Yet on average GRACE showed 30% more annual precipitation than GPCP in the study basins. In basins of appropriate size with an absence of dense ground measurements, as is typically the case in cold mountainous regions, we find GRACE can be a viable alternative to constrain monthly and seasonal precipitation estimates from other remotely sensed precipitation products that show large bias.
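
For an endorheic basin with negligible surface outflow, the mass balance above reduces to P = dS/dt + ET, with the storage change dS/dt supplied by GRACE. A minimal sketch with hypothetical monthly values (mm of water equivalent):

```python
def precip_from_grace(ds_dt, et, runoff=0.0):
    """Monthly precipitation from the water balance P = dS/dt + ET + R.
    For an endorheic basin, surface outflow R ~ 0 (assumption)."""
    return ds_dt + et + runoff

# Hypothetical month: storage gain of 12 mm and evapotranspiration of 30 mm
print(precip_from_grace(12.0, 30.0))  # -> 42.0 mm
```

The approach inherits the errors of the ET estimate and GRACE's coarse footprint, which is why the abstract restricts it to basins of appropriate size.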

  19. SU-F-I-33: Estimating Radiation Dose in Abdominal Fat Quantitative CT

    Energy Technology Data Exchange (ETDEWEB)

    Li, X; Yang, K; Liu, B [Massachusetts General Hospital, Boston, MA (United States)

    2016-06-15

Purpose: To compare the size-specific dose estimate (SSDE) in abdominal fat quantitative CT with another dose estimate, D_size,L, that also takes into account scan length. Methods: This study complied with the requirements of the Health Insurance Portability and Accountability Act. At our institution, abdominal fat CT is performed with scan length = 1 cm and CTDI_vol = 4.66 mGy (referenced to the body CTDI phantom). A previously developed CT simulation program was used to simulate single-rotation axial scans of 6–55 cm diameter water cylinders, and the integral of the longitudinal dose profile over the central 1 cm length was used to predict the dose at the center of the one-cm scan range. SSDE and D_size,L were assessed for 182 consecutive abdominal fat CT examinations with a mean water-equivalent diameter (WED) of 27.8 cm ± 6.0 (range, 17.9 - 42.2 cm). Patient age ranged from 18 to 75 years, and weight ranged from 39 to 163 kg. Results: Mean SSDE was 6.37 mGy ± 1.33 (range, 3.67–8.95 mGy); mean D_size,L was 2.99 mGy ± 0.85 (range, 1.48 - 4.88 mGy); and the mean D_size,L/SSDE ratio was 0.46 ± 0.04 (range, 0.40 - 0.55). Conclusion: The conversion factors for the size-specific dose estimate in AAPM Report No. 204 were generated using 15 - 30 cm scan lengths. One needs to be cautious in applying SSDE to CT scans with short scan lengths. For abdominal fat CT, SSDE was 80–150% higher than the dose for a 1 cm scan length.
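
SSDE is obtained by scaling CTDIvol with a size-dependent conversion factor; AAPM Report No. 204 provides an exponential fit in water-equivalent diameter for the 32 cm body phantom. The coefficients below are the commonly quoted published values, but they should be verified against the report before any real use.

```python
import math

# AAPM Report No. 204 exponential fit for the 32 cm body CTDI phantom
# (commonly quoted coefficients; verify against the report).
A, B = 3.704369, 0.03671937

def ssde(ctdi_vol, wed_cm):
    """Size-specific dose estimate from CTDIvol and water-equivalent diameter."""
    return ctdi_vol * A * math.exp(-B * wed_cm)

# Mean patient from the study: CTDIvol 4.66 mGy, WED 27.8 cm
print(round(ssde(4.66, 27.8), 2))  # ~6.2 mGy, near the reported mean SSDE
```

This factor corrects only for patient size, not for the very short scan length, which is exactly the gap the D_size,L estimate in the abstract is meant to close.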

  20. Assessment of Evolving TRMM-Based Real-Time Precipitation Estimation Methods and Their Impacts on Hydrologic Prediction in a High-Latitude Basin

    Science.gov (United States)

    Yong, Bin; Hong, Yang; Ren, Li-Liang; Gourley, Jonathan; Huffman, George J.; Chen, Xi; Wang, Wen; Khan, Sadiq I.

    2013-01-01

    The real-time availability of satellite-derived precipitation estimates provides hydrologists an opportunity to improve current hydrologic prediction capability for medium to large river basins. Due to the availability of new satellite data and upgrades to the precipitation algorithms, the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis real-time estimates (TMPA-RT) have been undergoing several important revisions over the past ten years. In this study, the changes of the relative accuracy and hydrologic potential of TMPA-RT estimates over its three major evolving periods were evaluated and inter-compared at daily, monthly and seasonal scales in the high-latitude Laohahe basin in China. Assessment results show that the performance of TMPA-RT in terms of precipitation estimation and streamflow simulation was significantly improved after 3 February 2005. Overestimation during winter months was noteworthy and consistent, which is suggested to be a consequence from interference of snow cover to the passive microwave retrievals. Rainfall estimated by the new version 6 of TMPA-RT starting from 1 October 2008 to present has higher correlations with independent gauge observations and tends to perform better in detecting rain compared to the prior periods, although it suffers larger mean error and relative bias. After a simple bias correction, this latest dataset of TMPA-RT exhibited the best capability in capturing hydrologic response among the three tested periods. In summary, this study demonstrated that there is an increasing potential in the use of TMPA-RT in hydrologic streamflow simulations over its three algorithm upgrade periods, but still with significant challenges during the winter snowing events.
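
The "simple bias correction" mentioned above is often a multiplicative rescaling that matches the satellite total to the gauge total over a calibration sample; the study's exact correction method is not specified here, and the daily values below are hypothetical.

```python
import numpy as np

def bias_correct(sat, gauge):
    """Multiplicative bias correction: scale satellite estimates so their
    total matches the gauge total over the calibration sample."""
    factor = np.sum(gauge) / np.sum(sat)
    return sat * factor, factor

# Hypothetical daily precipitation series (mm): satellite overestimates
sat = np.array([4.0, 0.0, 10.0, 6.0, 2.0])
gauge = np.array([3.0, 0.0, 8.0, 5.0, 1.5])
corrected, f = bias_correct(sat, gauge)
print(round(f, 3))  # -> 0.795
```

A single factor removes the mean bias but not the seasonal structure (e.g., snow-season overestimation), so seasonally varying factors are a natural refinement.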

  1. Quantitative estimation of brain atrophy and function with PET and MRI two-dimensional projection images

    International Nuclear Information System (INIS)

    Saito, Reiko; Uemura, Koji; Uchiyama, Akihiko; Toyama, Hinako; Ishii, Kenji; Senda, Michio

    2001-01-01

The purpose of this paper is to estimate the extent of atrophy and the decline in brain function objectively and quantitatively. Two-dimensional (2D) projection images of three-dimensional (3D) transaxial images of positron emission tomography (PET) and magnetic resonance imaging (MRI) were made by means of the Mollweide method, which preserves the area of the brain surface. A correlation image was generated between the 2D projection images of MRI and the cerebral blood flow (CBF) or 18F-fluorodeoxyglucose (FDG) PET images, and the sulci were extracted from the correlation image clustered by the K-means method. Furthermore, the extent of atrophy was evaluated from the extracted sulci on the 2D-projection MRI, cerebral cortical function such as blood flow or glucose metabolic rate was assessed in the cortex excluding sulci on the 2D-projection PET image, and the relationship between cerebral atrophy and function was then evaluated. This method was applied to two groups, young and aged normal subjects, and the relationship between age and the rate of atrophy or the cerebral blood flow was investigated. The method was also applied to FDG-PET and MRI studies in normal controls and in patients with corticobasal degeneration. The mean rate of atrophy in the aged group was found to be higher than that in the young. The mean value and the variance of the cerebral blood flow for the young were greater than those of the aged. The sulci were extracted similarly using either CBF or FDG PET images. The proposed method using 2D projection images of MRI and PET is clinically useful for quantitative assessment of atrophic change and functional disorder of the cerebral cortex. (author)
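The Mollweide method named above is the standard equal-area map projection; a minimal sketch of its cartographic formula (solved by Newton iteration) is below. This is the textbook projection, not the authors' exact implementation for brain surfaces.

```python
import math

# Standard Mollweide equal-area projection: solve
#   2*theta + sin(2*theta) = pi * sin(lat)
# then  x = (2*sqrt(2)/pi) * lon * cos(theta),  y = sqrt(2) * sin(theta).

def mollweide(lat, lon, n_iter=50):
    """Return (x, y) Mollweide coordinates for latitude/longitude in radians."""
    theta = lat
    for _ in range(n_iter):  # Newton iteration on the auxiliary angle
        denom = 2 + 2 * math.cos(2 * theta)
        if abs(denom) < 1e-12:
            break  # converged at a pole
        theta -= (2 * theta + math.sin(2 * theta) - math.pi * math.sin(lat)) / denom
    x = (2 * math.sqrt(2) / math.pi) * lon * math.cos(theta)
    y = math.sqrt(2) * math.sin(theta)
    return x, y

print(mollweide(0.0, 0.0))  # equator, central meridian -> (0.0, 0.0)
```

Because the projection preserves area, cortical-surface areas measured on the flattened map remain proportional to those on the sphere, which is what makes it attractive for atrophy quantification.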

  2. Myocardial blood flow estimates from dynamic contrast-enhanced magnetic resonance imaging: three quantitative methods

    Science.gov (United States)

    Borrazzo, Cristian; Galea, Nicola; Pacilio, Massimiliano; Altabella, Luisa; Preziosi, Enrico; Carnì, Marco; Ciolina, Federica; Vullo, Francesco; Francone, Marco; Catalano, Carlo; Carbone, Iacopo

    2018-02-01

Dynamic contrast-enhanced cardiovascular magnetic resonance imaging can be used to quantitatively assess myocardial blood flow (MBF) by recovering the tissue impulse response function for the transit of a gadolinium bolus through the myocardium. Several deconvolution techniques are available, using various models for the impulse response. The method of choice may influence the results, producing differences that have not yet been deeply investigated. Three methods for quantifying myocardial perfusion were compared: Fermi function modelling (FFM), the Tofts model (TM) and the gamma function model (GF), the latter traditionally used in brain perfusion MRI. Thirty human subjects were studied at rest as well as under cold pressor test stress (submerging hands in ice-cold water), and a single bolus of gadolinium (0.1 ± 0.05 mmol kg-1) was injected. Perfusion estimate differences between the methods were analysed by paired comparisons with Student's t-test, linear regression analysis and Bland-Altman plots, as well as by two-way ANOVA, considering the MBF values of all patients grouped according to two categories: calculation method and rest/stress condition. Perfusion estimates obtained by the various methods in both rest and stress conditions were not significantly different, and were in good agreement with the literature. The results obtained during the first-pass transit time (20 s) yielded p-values in the range 0.20-0.28 for Student's t-test, linear regression slopes between 0.98 and 1.03, and R values between 0.92 and 1.01. From the Bland-Altman plots, the paired comparisons yielded a bias (and a 95% CI), expressed as ml/min/g, for FFM versus TM of -0.01 (-0.20, 0.17) at rest or 0.02 (-0.49, 0.52) under stress; for FFM versus GF of -0.05 (-0.29, 0.20) at rest or -0.07 (-0.55, 0.41) under stress; and for TM versus GF of -0.03 (-0.30, 0.24) at rest or -0.09 (-0.43, 0.26) under stress. With the
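The bias and 95% limits reported above come from standard Bland-Altman analysis of paired measurements; a minimal self-contained sketch of that computation (with invented toy data, not the study's perfusion values) is:

```python
import math

# Bland-Altman agreement analysis between two paired measurement methods:
# bias = mean difference, limits of agreement = bias +/- 1.96 * SD(differences).

def bland_altman(a, b):
    """Return (bias, lower 95% limit, upper 95% limit) for paired lists a, b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# invented paired MBF-like values from two hypothetical methods
bias, low, high = bland_altman([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
print(round(bias, 3))  # 0.0 -- no systematic difference in this toy example
```

A bias near zero with narrow limits, as in the abstract's FFM/TM/GF comparisons, indicates the methods can be used interchangeably at the reported precision.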

  3. The impact of reflectivity correction and conversion methods to improve precipitation estimation by weather radar for an extreme low-land Mesoscale Convective System

    Science.gov (United States)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2014-05-01

Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands. For most of the country this led to over 15 hours of near-continuous precipitation, which resulted in total event accumulations exceeding 150 mm in the eastern part of the Netherlands. Such accumulations are among the largest ever recorded in this country and gave rise to local flooding. Measuring precipitation by weather radar within such mesoscale convective systems is known to be a challenge, since the measurements are affected by multiple sources of error. For the current event the operational weather radar rainfall product estimated only about 30% of the actual amount of precipitation as measured by rain gauges. In the current presentation we try to identify what gave rise to such a large underestimation. In general, weather radar measurement errors can be subdivided into two groups: 1) errors affecting the volumetric reflectivity measurements, and 2) errors related to the conversion of reflectivity values into rainfall intensity and attenuation estimates. To correct for the first group of errors, the quality of the weather radar reflectivity data was improved by successively correcting for 1) clutter and anomalous propagation, 2) radar calibration, 3) wet radome attenuation, 4) signal attenuation and 5) the vertical profile of reflectivity. Such consistent corrections are generally not performed by operational meteorological services. The results show a large improvement in the quality of the precipitation data; however, still only ~65% of the actually observed accumulations was estimated. To further improve the quality of the precipitation estimates, the second group of errors was corrected for by making use of disdrometer measurements taken in close vicinity of the radar. Based on these data the parameters of a normalized drop size distribution are estimated for the total event as well as for each precipitation type separately (convective
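The reflectivity-to-rain-rate conversion that both this record and the section head revolve around is the power-law Z-R relationship, Z = a R^b, inverted after converting from dBZ. A minimal sketch with the classic Marshall-Palmer stratiform coefficients (a = 200, b = 1.6; convective and tropical regimes use different pairs) is:

```python
# Z-R inversion: Z = a * R**b  =>  R = (Z / a)**(1 / b),
# where Z (mm^6 m^-3) is recovered from the logarithmic dBZ scale.
# a=200, b=1.6 are the classic Marshall-Palmer stratiform defaults.

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate (mm/h)."""
    z_linear = 10.0 ** (dbz / 10.0)   # dBZ -> mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)

print(round(rain_rate_from_dbz(40.0), 1))  # 40 dBZ -> ~11.5 mm/h
```

Because the relation is a power law, a modest calibration offset in dBZ translates into a large relative error in rain rate, which is one reason a single default Z-R pair can underestimate convective events so badly.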

  4. Estimating bioerosion rate on fossil corals: a quantitative approach from Oligocene reefs (NW Italy)

    Science.gov (United States)

    Silvestri, Giulia

    2010-05-01

Bioerosion of coral reefs, especially when related to the activity of macroborers, is considered one of the major processes influencing framework development in present-day reefs. Macroboring communities affecting both living and dead corals are also widely distributed in the fossil record, and their role is presumed to have been similarly important in determining the flourishing versus demise of coral bioconstructions. Nevertheless, many aspects concerning the environmental factors controlling the incidence of bioerosion, shifts in the composition of macroboring communities, and the estimation of bioerosion rates in different contexts are still poorly documented and understood. This study presents an attempt to quantify bioerosion rates on reef limestones characteristic of some Oligocene outcrops of the Tertiary Piedmont Basin (NW Italy), deposited under terrigenous sedimentation within prodelta and delta fan systems. Branching coral rubble-dominated facies have been recognized as prevailing in this context. Depositional patterns, textures, and the generally low incidence of taphonomic features such as fragmentation and abrasion suggest relatively quiet waters where coral remains were deposited almost in situ. Thus the taphonomic signatures occurring on corals can reliably be used to reconstruct environmental parameters affecting these particular branching coral assemblages during their life and to compare them with those typical of classical clear-water reefs. Bioerosion is sparsely distributed within the coral facies and consists of a limited suite of traces, mostly referred to clionid sponges and polychaete and sipunculid worms. The incidence of boring bivalves seems to be generally lower. Together with semi-quantitative analysis of bioerosion rates along vertical logs and horizontal levels, two quantitative methods have been assessed and compared. These consist of processing high-resolution scanned thin sections with image-analysis software (Photoshop CS3) and point
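Quantifying a feature's areal share from a thin section is often done by point counting: overlay a grid and count hits on borings. A minimal sketch, with invented counts and a binomial standard error for the proportion (the paper's image-analysis workflow is richer than this), is:

```python
import math

# Point-count estimate of bioerosion on a scanned thin section: the share of
# grid points landing on borings, with a binomial one-sigma uncertainty.
# The counts below are invented for illustration.

def bioerosion_estimate(boring_hits, total_points):
    """Return (percent bored, one-sigma error in percent) from a point count."""
    p = boring_hits / total_points
    se = math.sqrt(p * (1.0 - p) / total_points)
    return 100.0 * p, 100.0 * se

pct, err = bioerosion_estimate(87, 1000)
print(round(pct, 1), round(err, 1))  # 8.7 0.9  (percent bored, one-sigma)
```

The error term makes explicit how many grid points are needed: halving the uncertainty requires roughly four times as many counted points.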

  5. Quantitative Analysis of VIIRS DNB Nightlight Point Source for Light Power Estimation and Stability Monitoring

    Directory of Open Access Journals (Sweden)

    Changyong Cao

    2014-12-01

The high sensitivity and advanced onboard calibration of the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) enable accurate measurements of low-light radiances, which lead to enhanced quantitative applications at night. The finer spatial resolution of the DNB also allows users to examine socioeconomic activities at urban scales. Given the growing interest in the use of DNB data, there is a pressing need for a better understanding of the calibration stability and absolute accuracy of the DNB at low radiances. The low-light calibration accuracy was previously estimated at a moderate 15% using extended sources, while the long-term stability has yet to be characterized. There are also several science-related questions to be answered, for example: how the Earth's atmosphere and surface variability contribute to the stability of the DNB-measured radiances; how to separate them from instrument calibration stability; whether SI (International System of Units) traceable active light sources can be designed and installed at selected sites to monitor the calibration stability, radiometric and geolocation accuracy, and point spread functions of the DNB; and, furthermore, whether such active light sources can be used for detecting environmental changes, such as aerosols. This paper explores the quantitative analysis of nightlight point sources, such as those from fishing vessels, bridges, and cities, using fundamental radiometry and radiative transfer, which would be useful for a number of applications including search and rescue in severe weather events, as well as calibration/validation of the DNB. Time series of the bridge light data are used to assess the stability of the light measurements and the calibration of the VIIRS DNB. It was found that the light radiant power computed from the VIIRS DNB data matched relatively well with independent assessments based on the in situ light installations, although estimates have to be
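A back-of-envelope version of the "radiant power from radiance" step might look as follows. It assumes the pixel-averaged radiance comes entirely from one unresolved source that emits uniformly into the upward hemisphere; this geometric idealization and all numbers are assumptions for illustration, not the paper's radiative-transfer treatment (which also accounts for the atmosphere).

```python
import math

# Unresolved point-source power from a pixel-averaged radiance, assuming the
# source emits uniformly into the upward hemisphere (2*pi steradians) and
# atmospheric effects are ignored. Numbers are illustrative only.

def point_source_power_w(radiance_w_cm2_sr, pixel_area_cm2):
    """Estimate radiant power (W) of a point source filling one pixel."""
    intensity = radiance_w_cm2_sr * pixel_area_cm2  # W/sr toward the sensor
    return 2.0 * math.pi * intensity

# a DNB-like 750 m x 750 m pixel (5.625e9 cm^2) at 1e-9 W cm^-2 sr^-1
print(round(point_source_power_w(1e-9, 5.625e9), 2))  # ~35.34 W
```

Even this crude estimate shows why bridge and vessel lights, at tens to hundreds of watts, sit comfortably above the DNB's low-light detection floor.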

  6. Estimating long-term statistics for annual precipitation for six regions of the United States from tree-ring data

    International Nuclear Information System (INIS)

    Fritts, H.C.; DeWitt, E.; Gordon, G.A.; Hunt, J.H.; Lofgren, G.R.

    1979-12-01

Spatial anomalies of seasonal precipitation for the United States and southwestern Canada have been reconstructed from 1602 through 1961 using dendrochronological and multivariate techniques on 65 arid-site tree-ring chronologies from western North America. Seasonal reconstructions are averaged to obtain mean annual precipitation values for six regions of importance to the Nuclear Regulatory Commission (NRC) Nuclear Waste Management Program (NWMP). Statistics calculated from the regionally averaged annual values for 25-year and longer intervals show annual precipitation in the seventeenth through nineteenth centuries to have been lower than in the twentieth century for three regions in the American Southwest and higher for one region in the Northwest and two regions in the East. The variability of precipitation was generally higher in the past three centuries than in the present century. Twenty-five-year intervals with noteworthy statistics are identified, and important results are summarized and tabulated for use in the hydrologic modeling of the NWMP. Additional research is recommended to incorporate temperature and precipitation into a single hydrologic parameter.

  7. QUANTITATIVE ESTIMATION OF VOLUMETRIC ICE CONTENT IN FROZEN GROUND BY DIPOLE ELECTROMAGNETIC PROFILING METHOD

    Directory of Open Access Journals (Sweden)

    L. G. Neradovskiy

    2018-01-01

Volumetric estimation of ice content in frozen soils is known as one of the main problems in engineering geocryology and permafrost geophysics. A new way to use the known method of dipole electromagnetic profiling for quantitative estimation of the volumetric ice content in frozen soils is discussed. Investigations of the foundation of a railroad in Yakutia (i.e., in the permafrost zone) serve as an example of this new approach. Unlike the conventional approach, in which permafrost is investigated through its resistivity and the construction of geo-electrical cross-sections, the new approach studies the dynamics of attenuation in the layer of annual heat cycle in the field of a high-frequency vertical magnetic dipole. The task is simplified if, rather than all the characteristics of the polarization ellipse, only the vertical component of the dipole field, which is the most easily measured, is recorded. The collected measurements were used to analyze the computational errors of the average volumetric ice content derived from the amplitude attenuation of the vertical component of the dipole field. Note that the volumetric ice content is very important for construction. It is shown that the relative error of computing this characteristic of a frozen soil usually does not exceed 20% if the work is performed by the above procedure using the key-site methodology. This level of accuracy meets the requirements of design-and-survey work for quick, inexpensive, and environmentally friendly zoning of remote and sparsely populated built-up territories of the Russian permafrost zone according to the degree of ice content in the frozen foundations of engineering constructions.

  8. A quantitative model for estimating mean annual soil loss in cultivated land using 137Cs measurements

    International Nuclear Information System (INIS)

    Yang Hao; Zhao Qiguo; Du Mingyuan; Minami, Katsuyuki; Hatta, Tamao

    2000-01-01

The radioisotope 137Cs has been widely used to determine rates of cultivated soil loss. Many calibration relationships (including both empirical relationships and theoretical models) have been employed to estimate erosion rates from the amount of 137Cs lost from the cultivated soil profile. However, there are important limitations which restrict the reliability of these models, which consider only the uniform distribution of 137Cs in the plough layer and its depth. As a result, erosion rates may be overestimated or underestimated. This article presents a quantitative model for the relation between the amount of 137Cs lost from the cultivated soil profile and the rate of soil erosion. Following a mass balance model, during the construction of this model we considered the following parameters: the remaining fraction of the surface enrichment layer (F_R), the thickness of the surface enrichment layer (H_s), the depth of the plough layer (H_p), the input fraction of the total 137Cs fallout deposition during a given year t (F_t), the radioactive decay of 137Cs (k), and the sampling year (t). The simulation results showed that the erosion rates estimated using this model were very sensitive to changes in the values of the parameters F_R, H_s, and H_p. We also observed that the relationship between the rate of soil loss and 137Cs depletion is neither linear nor logarithmic, and is very complex. Although the model is an improvement over existing approaches to deriving calibration relationships for cultivated soil, it requires empirical information on local soil properties and the behavior of 137Cs in the soil profile. There is clearly still a need for more precise information on the latter aspect and, in particular, on the retention of 137Cs fallout in the top few millimeters of the soil profile and on the enrichment and depletion effects associated with soil redistribution (i.e. for determining accurate values of F_R and H_s). (author)
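For contrast with the refined model above, the simplest of the calibration relationships it improves upon, the classical "proportional" model, can be sketched in a few lines: soil loss is assumed proportional to the fraction of the reference 137Cs inventory missing from the plough layer. This is the standard textbook model, not the authors', and the numbers are invented.

```python
# Classical 'proportional' 137Cs model: mean annual soil loss is assumed
# proportional to the depleted fraction of the reference inventory, spread
# over the plough-layer mass and the years since fallout. Illustrative only.

def proportional_erosion_rate(cs_ratio, plough_depth_m, bulk_density, years):
    """
    Mean annual soil loss (kg m^-2 yr^-1).
    cs_ratio: measured / reference 137Cs inventory (0..1)
    plough_depth_m: plough layer depth (m)
    bulk_density: soil bulk density (kg m^-3)
    years: years since the onset of 137Cs fallout accumulation
    """
    depleted_fraction = 1.0 - cs_ratio
    plough_mass = bulk_density * plough_depth_m  # kg m^-2 in the plough layer
    return depleted_fraction * plough_mass / years

# e.g. 30% depletion, 0.2 m plough depth, 1300 kg/m^3, 40 years
print(round(proportional_erosion_rate(0.7, 0.2, 1300.0, 40.0), 2))  # 1.95
```

The abstract's point is precisely that this linear assumption breaks down: enrichment-layer effects (F_R, H_s) make the true depletion-loss relationship neither linear nor logarithmic.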

  9. Precipitation-induced runoff and leaching from milled peat mining mires by peat types : a comparative method for estimating the loading of water bodies during peat pruduction

    OpenAIRE

    Svahnbäck, Lasse

    2007-01-01

Precipitation-induced runoff and leaching from milled peat mining mires by peat types: a comparative method for estimating the loading of water bodies during peat production. This research project in environmental geology arose out of an observed need to be able to predict more accurately the loading of watercourses with detrimental organic substances and nutrients from existing and planned peat production areas, since the authorities' capacity for insisting on such predicti...


  10. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade.

    Science.gov (United States)

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-06-22

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1-3.9 ppm or 3-9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990-2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations.
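The "simple CO2 Impulse Response Function" named above is typically a multi-exponential fit: the airborne fraction of a CO2 pulse decays as a constant plus a sum of exponentials, and a concentration anomaly is the emissions series convolved with it. A minimal sketch follows; the coefficients are the widely used Joos et al.-style values, assumed here for illustration and not necessarily the paper's.

```python
import math

# Multi-exponential CO2 impulse response function (IRF) and its convolution
# with a yearly emissions series. Coefficients follow a Joos et al.-style
# fit (assumed, not taken from this paper).
A = [0.217, 0.259, 0.338, 0.186]
TAU = [float("inf"), 172.9, 18.51, 1.186]  # e-folding times in years

def irf(t):
    """Fraction of a CO2 pulse still airborne after t years."""
    return sum(a * (1.0 if math.isinf(tau) else math.exp(-t / tau))
               for a, tau in zip(A, TAU))

def concentration_anomaly(emissions_ppm):
    """Convolve yearly emissions (ppm-equivalent) with the IRF."""
    out = []
    for t in range(len(emissions_ppm)):
        out.append(sum(e * irf(t - s) for s, e in enumerate(emissions_ppm[:t + 1])))
    return out

print(round(irf(0.0), 3))  # 1.0: the entire pulse is airborne at t = 0
```

Attributing a transferred emissions series to the importing rather than the exporting region and convolving each with the IRF is what yields the 0.1-3.9 ppm "migration" the abstract quantifies.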

  12. Quantitative estimation of land surface evapotranspiration in Taiwan based on MODIS data

    Directory of Open Access Journals (Sweden)

    Che-sheng Zhan

    2011-09-01

Land surface evapotranspiration (ET) determines the local and regional water-heat balances. Accurate estimation of regional surface ET provides a scientific basis for the formulation and implementation of water conservation programs. This study set up a table of momentum roughness length and zero-plane displacement values by land cover, together with an empirical relationship between land surface temperature and air temperature. A revised quantitative remote sensing ET model, the SEBS-Taiwan model, was developed. Based on Moderate Resolution Imaging Spectroradiometer (MODIS) data, SEBS-Taiwan was used to simulate and evaluate typical actual daily ET values in different seasons of 2002 and 2003 in Taiwan. SEBS-Taiwan generally performed well and could accurately simulate actual daily ET; the simulated daily ET values matched the observed values satisfactorily. The results indicate that the net regional solar radiation, evaporation ratio, and surface ET values for the whole area of Taiwan are larger in summer than in spring, and larger in autumn than in winter. The results also show that the regional average daily ET values of 2002 are slightly higher than those of 2003. Through analysis of the ET values from different types of land cover, we found that forest has the largest ET value, while water areas, bare land, and urban areas have the lowest ET values. Generally, the northern Taiwan area, including Ilan County, Nantou County, and Hualien County, has higher ET values, while other cities, such as Chiayi, Taichung, and Tainan, have lower ET values.
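SEBS-type models obtain latent heat as the residual of the surface energy balance, LE = Rn - G - H, and convert it to a water depth. A minimal sketch of that final conversion step (illustrative fluxes, not the SEBS-Taiwan implementation) is:

```python
# Daily ET from a surface energy-balance residual, SEBS-style:
# latent heat LE = Rn - G - H (W/m^2), converted to mm/day of water.
# Flux values below are invented for illustration.

def daily_et_mm(net_radiation, soil_heat_flux, sensible_heat):
    """Daily evapotranspiration (mm/day) from daily-mean fluxes in W/m^2."""
    LAMBDA = 2.45e6  # latent heat of vaporization, J/kg
    le = net_radiation - soil_heat_flux - sensible_heat  # W/m^2
    # 1 kg of water per m^2 equals a 1 mm depth, so J/m^2/day / (J/kg) = mm/day
    return le * 86400.0 / LAMBDA

print(round(daily_et_mm(150.0, 20.0, 60.0), 2))  # ~2.47 mm/day
```

The seasonal ordering reported in the abstract (summer > spring, autumn > winter) follows directly from the seasonal cycle of the available energy Rn - G in this balance.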

  13. Quantitative estimation of the right ventricular overloading by thallium-201 myocardial scintigraphy

    International Nuclear Information System (INIS)

    Owada, Kenji; Machii, Kazuo; Tsukahara, Yasunori

    1982-01-01

Thallium-201 myocardial scintigraphy was performed on 55 patients with various types of right ventricular overloading. The right ventricular (RV) free wall was visualized in 39 of the 55 patients (71%). The mean values of right ventricular systolic pressure (RVSP) and pulmonary artery mean pressure (PAMP) in the visualized cases (uptakers) were 54.6 ± 24.1 and 30.5 ± 15.3 mmHg, respectively. These values were significantly higher than those of the non-visualized cases (non-uptakers). There were 12 RVSP-"normotensive" uptakers and 15 PAMP-"normotensive" uptakers. The RV free wall images were classified into three types according to their morphological features. Type I was predominantly seen in cases of RV pressure overloading, type II in RV volume overloading, and type III in combined ventricular overloading. RVSP in the type III group was significantly higher than that in the other two groups. The ratio of radioactivity in the RV free wall to that in the interventricular septum (IVS), the RV/IVS uptake ratio, was calculated using left anterior oblique (LAO) view images. The RV/IVS uptake ratio closely correlated with RVSP and PAMP (r = 0.88 and 0.82, respectively). In each group of RV free wall images, there were also close correlations between the RV/IVS uptake ratio and both RVSP and PAMP. Our results indicate that the RV/IVS uptake ratio can be used as a parameter for the semi-quantitative estimation of right ventricular overloading. (author)

  14. Development and testing of transfer functions for generating quantitative climatic estimates from Australian pollen data

    Science.gov (United States)

    Cook, Ellyn J.; van der Kaars, Sander

    2006-10-01

We review attempts to derive quantitative climatic estimates from Australian pollen data, including the climatic envelope, climatic indicator and modern analogue approaches, and outline the need to pursue alternatives for use as input to, or validation of, simulations by models of past, present and future climate patterns. To this end, we have constructed and tested modern pollen-climate transfer functions for mainland southeastern Australia and Tasmania, using the existing southeastern Australian pollen database, and for northern Australia, using a new pollen database we are developing. After testing for statistical significance, 11 parameters were selected for mainland southeastern Australia, seven for Tasmania and six for northern Australia. The functions are based on weighted-averaging partial least squares regression, and their predictive ability was evaluated against modern observational climate data using leave-one-out cross-validation. Functions for summer, annual and winter rainfall and temperatures are most robust for southeastern Australia, while in Tasmania functions for minimum temperature of the coldest period, mean winter and mean annual temperature are the most reliable. In northern Australia, annual and summer rainfall and annual and summer moisture indexes are the strongest. Because all the functions validated, they can be applied with confidence to Quaternary pollen records from these three areas.
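A minimal sketch of plain weighted-averaging (WA) calibration, the simpler cousin of the WA-PLS regression used above, illustrates the transfer-function idea: each taxon's climatic optimum is its abundance-weighted mean over the training set, and a fossil sample is reconstructed as the abundance-weighted mean of those optima. The tiny training set below is invented.

```python
# Plain weighted-averaging (WA) pollen-climate calibration, a simplified
# stand-in for WA-PLS. Taxa, proportions and rainfall values are invented.

def wa_optima(samples, climate):
    """samples: one dict per site mapping taxon -> pollen proportion."""
    num, den = {}, {}
    for sample, value in zip(samples, climate):
        for taxon, p in sample.items():
            num[taxon] = num.get(taxon, 0.0) + p * value
            den[taxon] = den.get(taxon, 0.0) + p
    return {t: num[t] / den[t] for t in num}

def wa_reconstruct(sample, optima):
    """Abundance-weighted mean of taxon optima for one fossil sample."""
    shared = {t: p for t, p in sample.items() if t in optima}
    total = sum(shared.values())
    return sum(p * optima[t] for t, p in shared.items()) / total

training = [{"Eucalyptus": 0.8, "Poaceae": 0.2},
            {"Eucalyptus": 0.3, "Poaceae": 0.7}]
annual_rain = [1200.0, 600.0]  # mm, invented
optima = wa_optima(training, annual_rain)
print(round(wa_reconstruct({"Eucalyptus": 0.5, "Poaceae": 0.5}, optima), 1))
```

Leave-one-out cross-validation, as used in the abstract, would refit `wa_optima` with each training site withheld and score the prediction for the held-out site.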

  15. An Experimental Study for Quantitative Estimation of Rebar Corrosion in Concrete Using Ground Penetrating Radar

    Directory of Open Access Journals (Sweden)

    Md Istiaque Hasan

    2016-01-01

Corrosion of steel rebar in reinforced concrete is one of the most important durability issues in the service life of a structure. In this paper, an investigation is conducted to find the relationship between the amount of reinforced concrete corrosion and the GPR maximum positive amplitude. Accelerated corrosion was simulated in the lab by impressing direct current into steel rebar submerged in a 5% salt water solution. The amount of corrosion was varied in the rebars, with levels of mass loss ranging from 0% to 45%. The corroded rebars were then placed into three different oil emulsion tanks having dielectric properties similar to concrete. The maximum amplitudes from the corroded bars were recorded. A linear relationship between the maximum positive amplitudes and the amount of corrosion, in terms of percentage loss of area, was observed. It was proposed that this relationship between the GPR maximum amplitude and the amount of corrosion can be used as the basis of an NDE technique for quantitative estimation of corrosion.
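An NDE technique built on such a linear relationship amounts to fitting a calibration line and inverting it: least-squares fit of mass loss against amplitude on known specimens, then prediction for a new amplitude. The sketch below uses invented calibration numbers, not the paper's measurements.

```python
# Ordinary least-squares calibration line: corrosion mass loss (%) as a
# linear function of GPR maximum positive amplitude. Numbers are invented.

def linear_fit(x, y):
    """OLS slope and intercept for y ~ slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# hypothetical calibration: normalized max amplitude vs. rebar mass loss (%)
amplitude = [1.00, 0.90, 0.80, 0.70, 0.60]
mass_loss = [0.0, 11.0, 23.0, 34.0, 45.0]
slope, intercept = linear_fit(amplitude, mass_loss)
print(round(slope * 0.75 + intercept, 1))  # predicted mass loss at amplitude 0.75
```

The negative slope encodes the physical effect reported in the abstract: more corrosion product means a weaker reflected amplitude.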

  16. Improved infrared precipitation estimation approaches based on k-means clustering: Application to north Algeria using MSG-SEVIRI satellite data

    Science.gov (United States)

    Mokdad, Fatiha; Haddad, Boualem

    2017-06-01

In this paper, two new infrared precipitation estimation approaches based on the concept of k-means clustering are first proposed, named the NAW-Kmeans and the GPI-Kmeans methods. They are then adapted to the southern Mediterranean basin, where a subtropical climate prevails. The infrared data (10.8 μm channel) acquired by the MSG-SEVIRI sensor in winter and spring 2012 are used. Tests were carried out in eight areas distributed over northern Algeria: Sebra, El Bordj, Chlef, Blida, Bordj Menael, Sidi Aich, Beni Ourthilane, and Beni Aziz. Validation was performed by comparing the estimated rainfall to rain gauge observations collected by the National Office of Meteorology in Dar El Beida (Algeria). Despite the complexity of the subtropical climate, the results indicate that the NAW-Kmeans and GPI-Kmeans approaches gave satisfactory results for the considered rain rates. The proposed schemes also improve precipitation estimation performance when compared to the original algorithms NAW (Negri, Adler, and Wetzel) and GPI (GOES Precipitation Index).
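The clustering step underlying both schemes can be illustrated with a 1-D k-means over infrared brightness temperatures: colder cluster centres correspond to deeper cloud and would be mapped to higher rain rates. This sketch uses synthetic temperatures and a deterministic quantile initialization (assuming k >= 2), not the papers' exact configuration.

```python
# 1-D k-means on infrared brightness temperatures (K). Synthetic data;
# quantile initialization keeps the toy example deterministic (k >= 2).

def kmeans_1d(values, k, n_iter=100):
    srt = sorted(values)
    # spread initial centres across the sorted values
    centres = [srt[i * (len(srt) - 1) // (k - 1)] for i in range(k)]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centres[i]))].append(v)
        # recompute each centre as its cluster mean (keep old centre if empty)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

temps = [195, 200, 205, 240, 245, 250, 285, 290, 295]  # deep cloud .. clear sky
print(kmeans_1d(temps, 3))  # [200.0, 245.0, 290.0]
```

In an NAW-Kmeans-style scheme, each pixel would then inherit a rain-rate class from the cluster its brightness temperature falls into.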

  17. Quantitative estimation of the extent of alkylation of DNA following treatment of mammalian cells with non-radioactive alkylating agents

    Energy Technology Data Exchange (ETDEWEB)

    Snyder, R.D. (Univ. of Tennessee, Oak Ridge); Regan, J.D.

    1981-01-01

Alkaline sucrose sedimentation has been used to quantitate phosphotriester formation following treatment of human cells with the monofunctional alkylating agents methyl and ethyl methanesulfonate. These persistent alkali-labile lesions are not repaired during short-term culture conditions and thus serve as a useful and precise index of the total alkylation of the DNA. Estimates of alkylation by this procedure compare favorably with direct estimates obtained using labeled alkylating agents.

  18. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    Science.gov (United States)

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.

  19. Quantitative estimation of groundwater recharge with special reference to the use of natural radioactive isotopes and hydrological simulation

    International Nuclear Information System (INIS)

    Bredenkamp, D.B.

    1978-01-01

    Methods of quantitative estimation of groundwater recharge have been reviewed to 1) illustrate the uncertainties associated with the methods usually applied, 2) indicate some of the simplifying assumptions inherent in each specific method, 3) promote the use of more than one technique in order to improve the reliability of the combined recharge estimate, and 4) propose a hydrological model by which the annual recharge and the annual variability of recharge could be ascertained. Classical methods such as the water balance equation and flow nets have been reviewed. The use of environmental tritium and radiocarbon has been illustrated as a means of obtaining qualitative answers on the occurrence of recharge and of revealing the effective mechanism of groundwater recharge through the soil. Quantitative estimation of recharge from the ratio of recharge to storage has been demonstrated for the Kuruman recharge basin. Methods of interpreting tritium profiles in order to obtain a quantitative estimate of recharge have been shown, with application of the technique to Rietondale and a dolomitic aquifer in the Western Transvaal. The major part of the thesis has been devoted to the use of a hydrological model as a means of estimating groundwater recharge. Following a general discussion of the conceptual logic, various models have been proposed and tested.

  20. Estimation of quantitative levels of diesel exhaust exposure and the health impact in the contemporary Australian mining industry

    NARCIS (Netherlands)

    Peters, Susan; de Klerk, Nicholas; Reid, Alison; Fritschi, Lin; Musk, Aw Bill; Vermeulen, Roel

    2017-01-01

    OBJECTIVES: To estimate quantitative levels of exposure to diesel exhaust expressed by elemental carbon (EC) in the contemporary mining industry and to describe the excess risk of lung cancer that may result from those levels. METHODS: EC exposure has been monitored in Western Australian miners

  1. GENE ACTION AND HERITABILITY ESTIMATES OF QUANTITATIVE CHARACTERS AMONG LINES DERIVED FROM VARIETAL CROSSES OF SOYBEAN

    Directory of Open Access Journals (Sweden)

    Lukman Hakim

    2017-09-01

    Full Text Available The knowledge of gene action, heritability and genetic variability is useful and permits plant breeders to design efficient breeding strategies in soybean. The objectives of this study were to determine the gene action, genetic variability, heritability and genetic advance of quantitative characters that could be realized through selection of segregating progenies. The F1 population and F2 progenies of six crosses among five soybean varieties were evaluated at Muneng Experimental Station, East Java, during the dry season of 2014. The lines were planted in a randomized block design with four replications. The seeds of each F1 and F2 progeny and of the parents were planted in four rows 3 m long, with 40 cm x 20 cm plant spacing and one plant per hill. The results showed that pod number per plant, seed yield, plant yield and harvest index were predominantly controlled by additive gene effects. Seed size was also controlled by additive gene effects, with small seed dominant to large seed size. Plant height was found to be controlled by both additive and nonadditive gene effects. Similarly, days to maturity was due mainly to additive and nonadditive gene effects, with earliness dominant to lateness. Days to maturity had the highest heritability estimate (49.3%), followed by seed size (47.0%), harvest index (45.8%) and pod number per plant (45.5%). These characters could therefore be used in the selection of high yielding soybean genotypes in the F3 generation.

  2. Mean precipitation estimation, rain gauge network evaluation and quantification of the hydrologic balance in the River Quito basin in Choco, state of Colombia

    International Nuclear Information System (INIS)

    Cordoba, Samir; Zea, Jorge A; Murillo, W

    2006-01-01

    In this work the calculation of the average precipitation in the Quito River basin, state of Choco, Colombia, is presented using diverse techniques, among them those suggested by Thiessen and those based on isohyet analysis, in order to select the one most appropriate for quantifying the rainwater available to the basin. Also included is an estimation of the error with which the average precipitation in the studied zone is measured, obtained by means of the methodology proposed by Gandin (1970) and Kagan (WMO, 1966), which at the same time allows the representativeness of each of the stations that make up the rain gauge network in the area to be evaluated. The study concludes with a calculation of the hydrologic balance for the Quito River basin based on the pilot procedure suggested in the UNESCO publication on the study of the South American hydrologic balance, from which the great contribution of rainfall to a greatly enhanced run-off may be appreciated.
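Once the Thiessen polygons are drawn, the technique mentioned above reduces to an area-weighted average of the gauge readings. A minimal sketch, with hypothetical gauge values and polygon areas:

```python
def thiessen_mean(rain_mm, polygon_area_km2):
    """Basin-average precipitation as a Thiessen area-weighted mean:
    each gauge's rainfall is weighted by the fraction of the basin
    covered by its Thiessen polygon."""
    total_area = sum(polygon_area_km2)
    return sum(r * a for r, a in zip(rain_mm, polygon_area_km2)) / total_area

# Three hypothetical gauges covering 50, 30 and 20 km2 of the basin
print(thiessen_mean([120.0, 200.0, 160.0], [50.0, 30.0, 20.0]))  # 152.0 mm
```

The isohyet-based alternative differs only in how the weights are derived (areas between isohyets rather than polygon areas).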

  3. First Evaluation of the Climatological Calibration Algorithm in the Real-time TMPA Precipitation Estimates over Two Basins at High and Low Latitudes

    Science.gov (United States)

    Yong, Bin; Ren, Liliang; Hong, Yang; Gourley, Jonathan; Tian, Yudong; Huffman, George J.; Chen, Xi; Wang, Weiguang; Wen, Yixin

    2013-01-01

    The TRMM Multi-satellite Precipitation Analysis (TMPA) system underwent a crucial upgrade in early 2009 to include a climatological calibration algorithm (CCA) to its real-time product 3B42RT, and this algorithm will continue to be applied in the future Global Precipitation Measurement era constellation precipitation products. In this study, efforts are focused on the comparison and validation of the Version 6 3B42RT estimates before and after the climatological calibration is applied. The evaluation is accomplished using independent rain gauge networks located within the high-latitude Laohahe basin and the low-latitude Mishui basin, both in China. The analyses indicate the CCA can effectively reduce the systematic errors over the low-latitude Mishui basin but misrepresent the intensity distribution pattern of medium-high rain rates. This behavior could adversely affect TMPA's hydrological applications, especially for extreme events (e.g., floods and landslides). Results also show that the CCA tends to perform slightly worse, in particular, during summer and winter, over the high-latitude Laohahe basin. This is possibly due to the simplified calibration-processing scheme in the CCA that directly applies the climatological calibrators developed within 40 degrees latitude to the latitude belts of 40 degrees N-50 degrees N. Caution should therefore be exercised when using the calibrated 3B42RT for heavy rainfall-related flood forecasting (or landslide warning) over high-latitude regions, as the employment of the smooth-fill scheme in the CCA bias correction could homogenize the varying rainstorm characteristics. Finally, this study highlights that accurate detection and estimation of snow at high latitudes is still a challenging task for the future development of satellite precipitation retrievals.

  4. Quantitative estimation of compliance of human systemic veins by occlusion plethysmography with radionuclide

    International Nuclear Information System (INIS)

    Takatsu, Hisato; Gotoh, Kohshi; Suzuki, Takahiko; Ohsumi, Yukio; Yagi, Yasuo; Tsukamoto, Tatsuo; Terashima, Yasushi; Nagashima, Kenshi; Hirakawa, Senri

    1989-01-01

    Volume-pressure relationships and compliance of human systemic veins were estimated quantitatively and noninvasively using radionuclide. The effect of nitroglycerin (NTG) on these parameters was examined. Plethysmography with radionuclide (RN) was performed using the occlusion method on the forearm in 56 patients with various cardiac diseases after RN angiocardiography with 99mTc-RBC. The RN counts-venous pressure curve was constructed from (1) the changes in radioactivity from the region of interest on the forearm, considered to reflect the changes in the blood volume of the forearm, and (2) the changes in the pressure of the forearm vein (fv) due to venous occlusion. The specific compliance of the forearm veins (Csp.fv; (1/V)·(ΔV/ΔP)) was obtained graphically from this curve at each patient's venous pressure (Pv). Csp.fv was 0.044±0.012 mmHg⁻¹ in class I (mean±SD; n=13), 0.033±0.007 mmHg⁻¹ in class II (n=30), and 0.019±0.007 mmHg⁻¹ in class III (n=13) of the previous NYHA classification of work tolerance. There were significant differences in Csp.fv among the three classes. The systemic venous blood volume (Vsv) was determined by subtracting the central blood volume, measured by RN angiocardiography, from the total blood volume, measured by the indicator dilution method utilizing 131I-human serum albumin. Systemic venous compliance (Csv) was calculated from Csv=Csp.fv·Vsv. Csv was 127.2±24.8 ml·mmHg⁻¹ (mean±SD) in class I, 101.1±24.1 ml·mmHg⁻¹ in class II and 62.2±28.1 ml·mmHg⁻¹ in class III. There were significant differences in Csv among the three classes. In class I, Csv/body weight was 2.3±0.7 ml·mmHg⁻¹·kg⁻¹. The administration of NTG increased Csv significantly in all cases. (J.P.N.)
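The compliance relation used in the abstract, Csv = Csp.fv · Vsv, is a direct multiplication. A small sketch using the class I figures reported above (the Vsv value is back-calculated here for illustration; it is not reported in the abstract):

```python
def systemic_venous_compliance(csp_fv, vsv_ml):
    """Csv = Csp.fv * Vsv, in ml/mmHg."""
    return csp_fv * vsv_ml

# Class I values from the abstract: Csp.fv = 0.044 /mmHg, Csv = 127.2 ml/mmHg,
# so the implied systemic venous volume is roughly 127.2 / 0.044 ~ 2890 ml.
vsv_ml = 127.2 / 0.044
print(round(systemic_venous_compliance(0.044, vsv_ml), 1))  # 127.2 ml/mmHg
```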

  5. A quantitative framework for estimating risk of collision between marine mammals and boats

    Science.gov (United States)

    Martin, Julien; Sabatier, Quentin; Gowan, Timothy A.; Giraud, Christophe; Gurarie, Eliezer; Calleson, Scott; Ortega-Ortiz, Joel G.; Deutsch, Charles J.; Rycyk, Athena; Koslovsky, Stacie M.

    2016-01-01

    Speed regulations of watercraft in protected areas are designed to reduce lethal collisions with wildlife but can have economic consequences. We present a quantitative framework for investigating the risk of deadly collisions between boats and wildlife.

  6. Estimating the Seasonal Importance of Precipitation to Plant Source Water over Time and Space with Water Isotopes

    Science.gov (United States)

    Nelson, D. B.; Kahmen, A.

    2017-12-01

    The stable isotopic composition of hydrogen and oxygen are physical properties of water molecules that can carry information on their sources or transport histories. This provides a useful tool for assessing the importance of rainfall at different times of the year for plant growth, provided that rainwater values vary over time and that waters do not partially evaporate after deposition. We tested the viability of this approach using data from samples collected at nineteen sites throughout Europe at monthly intervals over two consecutive growing seasons in 2014 and 2015. We compared isotope measurements of plant xylem water with soil water from multiple depths, and measured and modeled precipitation isotope values. Paired analyses of oxygen and hydrogen isotope values were used to screen out a limited number of water samples that were influenced by evaporation, with the majority of all water samples indicating meteoric sources. The isotopic composition of soil and xylem waters varied over the course of an individual growing season, with many trending towards more enriched values, suggesting integration of the plant-relevant water pool at a timescale shorter than the annual mean. We then quantified how soil water residence times varied at each site by calculating the interval between measured xylem water and the most recently preceding match in modeled precipitation isotope values. Results suggest a generally increasing interval between rainfall and plant uptake throughout each year, with source water corresponding to dates in the spring, likely reflecting a combination of spring rain, and mixing with winter and summer precipitation. The seasonally evolving spatial distribution of source water-precipitation lag values was then modeled as a function of location and climatology to develop continental-scale predictions. This spatial portrait of the average date for filling the plant source water pool provides insights on the seasonal importance of rainfall for plant
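The lag-matching step described above, i.e. finding the most recent preceding month whose modeled precipitation isotope value matches the measured xylem water, can be sketched as a simple backward search. The tolerance, month indexing and isotope values below are assumptions for illustration, not the authors' algorithm:

```python
def source_water_lag(sample_month, xylem_d18o, precip_d18o_by_month, tol=0.5):
    """Lag (in months) back to the most recent month whose modeled
    precipitation delta-18O matches the measured xylem water within
    `tol` per mil; None if no preceding month matches."""
    for lag in range(sample_month + 1):
        month = sample_month - lag
        if abs(precip_d18o_by_month[month] - xylem_d18o) <= tol:
            return lag
    return None

# Hypothetical monthly series (index 0 = January)
precip = {0: -14.0, 1: -12.5, 2: -10.0, 3: -8.0, 4: -6.5, 5: -5.0, 6: -4.0}
print(source_water_lag(6, -8.2, precip))  # 3: spring rain feeding July xylem
```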

  7. Comparison of different statistical downscaling methods to estimate changes in hourly extreme precipitation using RCM projections from ENSEMBLES

    DEFF Research Database (Denmark)

    Sunyer Pinya, Maria Antonia; Gregersen, Ida Bülow; Rosbjerg, Dan

    2015-01-01

    change method for extreme events, a weather generator combined with a disaggregation method, and a climate analogue method. All three methods rely on different assumptions and use different outputs from the regional climate models (RCMs). The results of the three methods point towards an increase in extreme precipitation, but the magnitude of the change varies depending on the RCM used and the spatial location. In general, a similar mean change is obtained for the three methods. This adds confidence in the results, as each method uses different information from the RCMs. The results of this study…

  8. Constraining frequency–magnitude–area relationships for rainfall and flood discharges using radar-derived precipitation estimates: example applications in the Upper and Lower Colorado River basins, USA

    Directory of Open Access Journals (Sweden)

    C. A. Orem

    2016-11-01

    Full Text Available Flood-envelope curves (FECs) are useful for constraining the upper limit of possible flood discharges within drainage basins in a particular hydroclimatic region. Their usefulness, however, is limited by their lack of a well-defined recurrence interval. In this study we use radar-derived precipitation estimates to develop an alternative to the FEC method, i.e., the frequency–magnitude–area-curve (FMAC) method, which incorporates recurrence intervals. The FMAC method is demonstrated in two well-studied US drainage basins, i.e., the Upper and Lower Colorado River basins (UCRB and LCRB, respectively), using Stage III Next-Generation-Radar (NEXRAD) gridded products and the diffusion-wave flow-routing algorithm. The FMAC method can be applied worldwide using any radar-derived precipitation estimates. In the FMAC method, idealized basins of similar contributing area are grouped together for frequency–magnitude analysis of precipitation intensity. These data are then routed through the idealized drainage basins of different contributing areas, using contributing-area-specific estimates for channel slope and channel width. Our results show that FMACs of precipitation discharge are power-law functions of contributing area with an average exponent of 0.82 ± 0.06 for recurrence intervals from 10 to 500 years. We compare our FMACs to published FECs and find that for wet antecedent-moisture conditions, the 500-year FMAC of flood discharge in the UCRB is on par with the US FEC for contributing areas of ∼10² to 10³ km². FMACs of flood discharge for the LCRB exceed the published FEC for the LCRB for contributing areas in the range of ∼10³ to 10⁴ km². The FMAC method retains the power of the FEC method for constraining flood hazards in basins that are ungauged or have short flood records, yet it has the added advantage that it includes the recurrence-interval information necessary for estimating event probabilities.
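The reported power-law form, Q = c·A^b with b ≈ 0.82, is the kind of relationship recovered by least squares in log-log space. A sketch on synthetic data (the coefficient 5.0 is arbitrary; only the exponent echoes the paper):

```python
import numpy as np

def fit_fmac_power_law(area_km2, discharge):
    """Fit Q = c * A**b by ordinary least squares in log-log space."""
    b, log10_c = np.polyfit(np.log10(area_km2), np.log10(discharge), 1)
    return 10 ** log10_c, b

# Synthetic FMAC with the paper's average exponent of 0.82
area = np.array([1e2, 1e3, 1e4, 1e5])
q500 = 5.0 * area ** 0.82
c, b = fit_fmac_power_law(area, q500)
print(round(c, 2), round(b, 2))  # 5.0 0.82
```

Fitting in log space weights relative (rather than absolute) errors, which is the usual choice for quantities spanning several orders of magnitude of contributing area.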

  9. Quantitative analysis of low-density SNP data for parentage assignment and estimation of family contributions to pooled samples.

    Science.gov (United States)

    Henshall, John M; Dierens, Leanne; Sellars, Melony J

    2014-09-02

    While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are
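Under the simplest reading of the pooled-DNA analysis above, the allele frequency of a pool is the mean quantitative genotype dosage (expected copies of the reference allele, 0 to 2 per individual) divided by two. A toy sketch with hypothetical soft dosage calls:

```python
import numpy as np

def pooled_allele_freq(dosages):
    """Allele frequency per SNP from quantitative genotype dosages
    (expected copies of the reference allele, 0..2 per individual):
    the mean dosage across the pool divided by two."""
    return np.mean(np.asarray(dosages), axis=0) / 2.0

# Four individuals x three SNPs, soft dosage calls rather than hard 0/1/2
dosages = np.array([[0.1, 1.0, 2.0],
                    [0.0, 1.1, 1.9],
                    [0.2, 0.9, 2.0],
                    [0.1, 1.0, 2.1]])
print(pooled_allele_freq(dosages))  # frequencies near 0.05, 0.5, 1.0
```

Keeping the dosages quantitative, instead of rounding to discrete genotypes, is what lets the same machinery handle DNA pooled from many individuals.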

  10. Model developments for quantitative estimates of the benefits of the signals on nuclear power plant availability and economics

    International Nuclear Information System (INIS)

    Seong, Poong Hyun

    1993-01-01

    A novel framework for quantitative estimates of the benefits of signals on nuclear power plant availability and economics has been developed in this work. The models developed in this work quantify how the perfect signals affect the human operator's success in restoring the power plant to the desired state when it enters undesirable transients. Also, the models quantify the economic benefits of these perfect signals. The models have been applied to the condensate feedwater system of the nuclear power plant for demonstration. (Author)

  11. Reproducibility of CSF quantitative culture methods for estimating rate of clearance in cryptococcal meningitis.

    Science.gov (United States)

    Dyal, Jonathan; Akampurira, Andrew; Rhein, Joshua; Morawski, Bozena M; Kiggundu, Reuben; Nabeta, Henry W; Musubire, Abdu K; Bahr, Nathan C; Williams, Darlisha A; Bicanic, Tihana; Larsen, Robert A; Meya, David B; Boulware, David R

    2016-05-01

    Quantitative cerebrospinal fluid (CSF) cultures provide a measure of disease severity in cryptococcal meningitis. The fungal clearance rate by quantitative cultures has become a primary endpoint for phase II clinical trials. This study determined the inter-assay accuracy of three different quantitative culture methodologies. Among 91 participants with meningitis symptoms in Kampala, Uganda, during August-November 2013, 305 CSF samples were prospectively collected from patients at multiple time points during treatment. Samples were simultaneously cultured by three methods: (1) the St. George's method, using a 100 mcl input volume of CSF with five 1:10 serial dilutions; (2) the AIDS Clinical Trials Group (ACTG) method, using 1000, 100, and 10 mcl input volumes and two 1:100 dilutions with 100 and 10 mcl input volume per dilution on seven agar plates; and (3) a 10 mcl calibrated loop of undiluted and 1:100 diluted CSF (loop). Quantitative culture values did not statistically differ between the St. George and ACTG methods (P = .09) but did differ between the St. George and 10 mcl loop methods. Correlation between methods was high (r ≥ 0.88). For detecting sterility, the ACTG method had the highest negative predictive value of 97% (91% St. George, 60% loop), but the ACTG method had occasional (∼10%) difficulties in quantification due to colony clumping. For CSF clearance rate, the St. George and ACTG methods did not differ overall (mean -0.05 ± 0.07 log10 CFU/ml/day; P = .14) on a group level; however, individual-level clearance varied. The St. George and ACTG quantitative CSF culture methods produced comparable but not identical results. Quantitative cultures can inform treatment management strategies. © The Author 2016. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
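The clearance-rate endpoint, expressed in log10 CFU/ml/day, is the slope of log-transformed quantitative culture counts against treatment day. A sketch with hypothetical serial cultures:

```python
import numpy as np

def clearance_rate(days, cfu_per_ml):
    """Rate of clearance: least-squares slope of log10(CFU/ml)
    against day of treatment, in log10 CFU/ml/day."""
    slope, _intercept = np.polyfit(days, np.log10(cfu_per_ml), 1)
    return slope

# Hypothetical serial quantitative cultures for one patient
days = [0, 3, 7, 10, 14]
cfu = [1e5, 3e4, 5e3, 1e3, 2e2]
print(round(clearance_rate(days, cfu), 2))  # about -0.2 log10 CFU/ml/day
```

A more negative slope means faster fungal clearance; sterile cultures require special handling (e.g., a detection-limit substitution) before the log transform, which this sketch omits.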

  12. THE EVOLUTION OF ANNUAL MEAN TEMPERATURE AND PRECIPITATION QUANTITY VARIABILITY BASED ON ESTIMATED CHANGES BY THE REGIONAL CLIMATIC MODELS

    Directory of Open Access Journals (Sweden)

    Paula Furtună

    2013-03-01

    Full Text Available Climatic changes represent one of the major challenges of our century. They are forecast according to climate scenarios and models, which represent plausible and concrete images of future climatic conditions. The results of comparing climate models with regard to future water resources and temperature regime trends can become a useful instrument for decision makers in choosing the most effective decisions at economic, social and ecologic levels. The aim of this article is the analysis of temperature and pluviometric variability at the grid point closest to Cluj-Napoca, based on data provided by six different regional climate models (RCMs). Analysed over 30-year periods (2001-2030, 2031-2060 and 2061-2090), the mean temperature has an ascending general trend, with great variability between periods. The precipitation, expressed through percentage deviation, shows a descending general trend, which is more emphasized during 2031-2060 and 2061-2090.

  13. [Quantitative estimation of vegetation cover and management factor in USLE and RUSLE models by using remote sensing data: a review].

    Science.gov (United States)

    Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie

    2012-06-01

    Soil loss prediction models such as universal soil loss equation (USLE) and its revised universal soil loss equation (RUSLE) are the useful tools for risk assessment of soil erosion and planning of soil conservation at regional scale. To make a rational estimation of vegetation cover and management factor, the most important parameters in USLE or RUSLE, is particularly important for the accurate prediction of soil erosion. The traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly extract the vegetation cover and management factor at macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for the estimation of vegetation cover and management factor over broad geographic areas. This paper summarized the research findings on the quantitative estimation of vegetation cover and management factor by using remote sensing data, and analyzed the advantages and the disadvantages of various methods, aimed to provide reference for the further research and quantitative estimation of vegetation cover and management factor at large scale.
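One widely cited way to derive the cover-management (C) factor from remote sensing, consistent with the NDVI-based methods this review surveys, is the exponential formulation of van der Knijff et al.; the parameter values below are the commonly quoted defaults, assumed here for illustration rather than endorsed by the review:

```python
import math

def c_factor_from_ndvi(ndvi, alpha=2.0, beta=1.0):
    """Cover-management (C) factor from NDVI, using the exponential
    scaling C = exp(-alpha * NDVI / (beta - NDVI)). alpha=2, beta=1
    are commonly quoted defaults, assumed here for illustration."""
    return math.exp(-alpha * ndvi / (beta - ndvi))

# Denser vegetation (higher NDVI) -> smaller C -> less predicted soil loss
for ndvi in (0.1, 0.4, 0.7):
    print(round(c_factor_from_ndvi(ndvi), 3))
```

Applied pixel-by-pixel to a satellite NDVI grid, this yields the spatially distributed C-factor layer that USLE/RUSLE multiplies with the other erosion factors.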

  14. Qualitative and quantitative estimations of the effect of geomagnetic field variations on human brain functional state

    International Nuclear Information System (INIS)

    Belisheva, N.K.; Popov, A.N.; Petukhova, N.V.; Pavlova, L.P.; Osipov, K.S.; Tkachenko, S.Eh.; Baranova, T.I.

    1995-01-01

    The comparison of functional dynamics of the human brain with reference to qualitative and quantitative characteristics of local geomagnetic field (GMF) variations was conducted. Steady and unsteady states of the human brain can be determined: by geomagnetic disturbances before the observation period; by the structure and doses of GMF variations; and by different combinations of qualitative and quantitative characteristics of GMF variations. A decrease in the optimal level of GMF activity and the appearance of aperiodic GMF disturbances can be causes of an unsteady brain state. 18 refs.; 3 figs

  15. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part II: Evaluation of Estimates Using Independent Data

    Science.gov (United States)

    Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.

    2006-01-01

    Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). In Part I of this series, improvements of the TMI algorithm that are required to introduce latent heating as an additional algorithm product are described. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r ≈ 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain-rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous 0.5°-resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons with collocated

  16. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 2; Evaluation of Estimates Using Independent Data

    Science.gov (United States)

    Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.

    2004-01-01

    Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). Part I of this study describes improvements in the TMI algorithm that are required to introduce cloud latent heating and drying as additional algorithm products. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r ≈ 0.88 over the Tropics), but bias reduction is the most significant improvement over forerunning algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm, and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly, 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous, 0.5°-resolution rain rate estimates appears to be consistent with the levels of error determined from TMI comparisons to collocated radar

  17. Simultaneous imaging of aurora on small scale in OI (777.4 nm and N21P to estimate energy and flux of precipitation

    Directory of Open Access Journals (Sweden)

    N. Ivchenko

    2009-07-01

    Full Text Available Simultaneous images of the aurora in three emissions, N21P (673.0 nm), OII (732.0 nm) and OI (777.4 nm), have been analysed; the ratio of atomic oxygen to molecular nitrogen has been used to provide estimates of the changes in energy and flux of precipitation within scale sizes of 100 m, and with temporal resolution of 32 frames per second. The choice of filters for the imagers is discussed, with particular emphasis on the choice of the atomic oxygen line at 777.4 nm as one of the three emissions measured. The optical measurements have been combined with radar measurements and compared with the results of an auroral model, hence showing that the ratio of emission rates OI/N2 can be used to estimate the energy within the smallest auroral structures. In the event chosen, measurements were made from mainland Norway, near Tromsø (69.6° N, 19.2° E). The peak energies of precipitation were between 1–15 keV. In a narrow curling arc, it was found that the arc filaments resulted from energies in excess of 10 keV and fluxes of approximately 7 mW/m². These filaments, of the order of 100 m in width, were embedded in a region of lower energies (about 5–10 keV) and fluxes of about 3 mW/m². The modelling results show that the method promises to be most powerful for detecting low-energy precipitation, more prevalent at the higher latitudes of Svalbard, where the multispectral imager, known as ASK, is now installed.

  18. Simultaneous imaging of aurora on small scale in OI (777.4 nm and N21P to estimate energy and flux of precipitation

    Directory of Open Access Journals (Sweden)

    B. S. Lanchester

    2009-07-01

    Full Text Available Simultaneous images of the aurora in three emissions, N21P (673.0 nm), OII (732.0 nm) and OI (777.4 nm), have been analysed; the ratio of atomic oxygen to molecular nitrogen has been used to provide estimates of the changes in energy and flux of precipitation within scale sizes of 100 m, and with temporal resolution of 32 frames per second. The choice of filters for the imagers is discussed, with particular emphasis on the choice of the atomic oxygen line at 777.4 nm as one of the three emissions measured. The optical measurements have been combined with radar measurements and compared with the results of an auroral model, hence showing that the ratio of emission rates OI/N2 can be used to estimate the energy within the smallest auroral structures. In the event chosen, measurements were made from mainland Norway, near Tromsø (69.6° N, 19.2° E). The peak energies of precipitation were between 1–15 keV. In a narrow curling arc, it was found that the arc filaments resulted from energies in excess of 10 keV and fluxes of approximately 7 mW/m². These filaments, of the order of 100 m in width, were embedded in a region of lower energies (about 5–10 keV) and fluxes of about 3 mW/m². The modelling results show that the method promises to be most powerful for detecting low-energy precipitation, more prevalent at the higher latitudes of Svalbard, where the multispectral imager, known as ASK, is now installed.

  19. An operational procedure for precipitable and cloud liquid water estimate in non-raining conditions over sea Study on the assessment of the nonlinear physical inversion algorithm

    CERN Document Server

    Nativi, S; Mazzetti, P

    2004-01-01

    In a previous work, an operational procedure to estimate precipitable and cloud liquid water in non-raining conditions over sea was developed and assessed. The procedure is based on a fast non-linear physical inversion scheme and a forward model; it is valid for most satellite microwave radiometers and also estimates effective water profiles. This paper presents two improvements to the procedure: first, a refinement that provides modularity of the software components and portability across different computation system architectures; second, the adoption of the CERN MINUIT minimisation package, which addresses the problem of global minimisation but is computationally more demanding. Together with the increased computational performance, which made it possible to impose stricter requirements on the quality of fit, these refinements improved fitting precision and reliability and made it possible to relax the requirements on the initial guesses for the model parameters. The re-analysis of the same data-set considered in the previous pap...

  20. Quantitative analysis of the impacts of terrestrial environmental factors on precipitation variation over the Beibu Gulf Economic Zone in Coastal Southwest China

    Science.gov (United States)

    Zhao, Yinjun; Deng, Qiyu; Lin, Qing; Cai, Chunting

    2017-03-01

    Taking the Guangxi Beibu Gulf Economic Zone as the study area, this paper utilizes the geographical detector model to quantify the feedback effects from the terrestrial environment on precipitation variation from 1985 to 2010 with a comprehensive consideration of natural factors (forest coverage rate, vegetation type, terrain, terrestrial ecosystem types, land use and land cover change) and social factors (population density, farmland rate, GDP and urbanization rate). First, we found that the precipitation trend rate in the Beibu Gulf Economic Zone is between -47 and 96 mm/10a. Second, forest coverage rate change (FCRC), urbanization rate change (URC), GDP change (GDPC) and population density change (PDC) have a larger contribution to precipitation change through land-surface feedback, which makes them the leading factors. Third, the human element is found to primarily account for the precipitation changes in this region, as humans are the active media linking and enhancing these impact factors. Finally, it can be concluded that the interaction of impact factor pairs has a significant effect compared to the corresponding single factor on precipitation changes. The geographical detector model offers an analytical framework to reveal the terrestrial factors affecting the precipitation change, which gives direction for future work on regional climate modeling and analyses.

  1. Constraining precipitation amount and distribution over cold regions using GRACE

    Science.gov (United States)

    Behrangi, A.; Reager, J. T., II; Gardner, A. S.; Fisher, J.

    2017-12-01

    Current quantitative knowledge of the amount and distribution of precipitation in high-elevation and high-latitude regions is limited by instrumental and retrieval shortcomings. Here we demonstrate that satellite gravimetry (Gravity Recovery and Climate Experiment, GRACE) can be used to provide an independent estimate of monthly accumulated precipitation using mass balance. Results showed that the GRACE-based precipitation estimate agrees best with most of the commonly used precipitation products in summer, but deviates from them in cold months, when the other products are expected to have larger errors. We also observed that as near-surface temperature decreases, the products increasingly underestimate the accumulated precipitation retrieved from GRACE. The analysis was performed using various products, such as GPCP, GPCC, TRMM, and gridded station data, over vast regions in high latitudes and two large endorheic basins in High Mountain Asia. Over High Mountain Asia, most of the products captured about 50% or less of the total precipitation estimated using GRACE in winter. Overall, GPCP showed better agreement with the GRACE estimate than the other products; yet, on average, GRACE showed 30% more annual precipitation than GPCP in the study basins.
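    The mass-balance idea can be sketched as a simple water-budget identity: over a month, precipitation P equals the GRACE-observed change in total water storage plus evapotranspiration and runoff. A minimal sketch under that assumption (function name and numbers are illustrative, not from the study):

```python
def precip_from_mass_balance(delta_storage_mm, et_mm, runoff_mm):
    """Monthly accumulated precipitation from the water-budget identity
    P = dS + ET + R, where dS is the GRACE-derived change in total
    water storage over the month (all quantities in mm/month)."""
    return delta_storage_mm + et_mm + runoff_mm

# Example: a 20 mm storage gain with 30 mm evapotranspiration and
# 10 mm runoff implies 60 mm of accumulated precipitation.
p = precip_from_mass_balance(20.0, 30.0, 10.0)
```

    In cold months, gauge-based products can be checked against this GRACE-derived P; a persistent shortfall points to gauge catch deficits rather than to the mass balance itself.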

  2. Validation and measurement uncertainty estimation in food microbiology: differences between quantitative and qualitative methods

    Directory of Open Access Journals (Sweden)

    Vesna Režić Dereani

    2010-09-01

    Full Text Available The aim of this research is to describe quality control procedures, validation procedures, and measurement uncertainty (MU) determination as important elements of quality assurance in a food microbiology laboratory, for both qualitative and quantitative types of analysis. Accreditation is conducted according to ISO 17025:2007 (General requirements for the competence of testing and calibration laboratories), which guarantees compliance with standard operating procedures and the technical competence of the staff involved in the tests; the standard has recently been widely introduced in food microbiology laboratories in Croatia. Beyond the introduction of a quality manual and many general documents, some of the most demanding procedures in routine microbiology laboratories are the measurement uncertainty procedures and the design of validation experiments. These procedures are not yet standardized, even at the international level, and they require practical microbiological knowledge together with statistical competence. Differences between validation experiment designs for quantitative and qualitative food microbiology analysis are discussed in this research, and practical solutions are briefly described. MU for quantitative determinations is a more demanding issue than qualitative MU calculation. MU calculations are based on external proficiency testing data and internal validation data. In this paper, practical schematic descriptions of both procedures are shown.

  3. Stereological estimation of nuclear volume and other quantitative histopathological parameters in the prognostic evaluation of supraglottic laryngeal squamous cell carcinoma

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Bennedbaek, O; Pilgaard, J

    1989-01-01

    The aim of this study was to investigate various approaches to the grading of malignancy in pre-treatment biopsies from patients with supraglottic laryngeal squamous cell carcinoma. The prospects of objective malignancy grading based on stereological estimation of the volume-weighted mean nuclear...... observers of the latter was poor in the material which consisted of 35 biopsy specimens. Unbiased estimates of nuclear Vv were on the average 385 microns3 (CV = 0.44), with more than 90% of the associated variance attributable to differences in nuclear Vv among individual lesions. Nuclear Vv was positively....... None of the investigated categorical and quantitative parameters (cutoff points = means) reached the level of significance with respect to prognostic value. However, nuclear Vv showed the best information concerning survival (2p = 0.08), and this estimator offers optimal features for objective...

  4. Semi-quantitative estimation by IR of framework, extraframework and defect Al species of HBEA zeolites.

    Science.gov (United States)

    Marques, João P; Gener, Isabelle; Ayrault, Philippe; Lopes, José M; Ribeiro, F Ramôa; Guisnet, Michel

    2004-10-21

    A simple method, based on the characterization (composition, Brønsted and Lewis acidities) of acid-treated HBEA zeolites, was developed for estimating the concentrations of framework, extraframework and defect Al species.

  5. Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation

    OpenAIRE

    Pillai S; Singhvi I

    2008-01-01

    Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for simultaneous estimation of two component drug mixture of itopride hydrochloride and rabeprazole sodium from combined capsule dosage form have been developed. First developed method involves formation and solving of simultaneous equations using 265.2 nm and 290.8 nm as two wavelengths. Second method is based on two wavelength calculation, wavelengths selected for estimation of itopride hydro...

  6. Quantitative estimation of defects from measurement obtained by remote field eddy current inspection

    International Nuclear Information System (INIS)

    Davoust, M.E.; Fleury, G.

    1999-01-01

    The remote field eddy current technique is used for dimensioning grooves that may occur in ferromagnetic pipes. This paper proposes a method to estimate the depth and length of corrosion grooves from measurements of the phase of a pick-up coil signal at different positions close to the defect. Dimensioning the grooves requires knowledge of the physical relation between the measurements and the defect dimensions, so finite element calculations are performed to obtain a parametric algebraic function of the physical phenomena. By means of this model and a previously defined general approach, an estimate of groove size may be given. In this approach, the algebraic function parameters and the groove dimensions are linked through a polynomial function. In order to validate this estimation procedure, a statistical study has been performed. The approach is proved to be suitable for real measurements. (authors)

  7. Observation-based Quantitative Uncertainty Estimation for Realtime Tsunami Inundation Forecast using ABIC and Ensemble Simulation

    Science.gov (United States)

    Takagawa, T.

    2016-12-01

    An ensemble forecasting scheme for tsunami inundation is presented. The scheme consists of three elemental methods. The first is a hierarchical Bayesian inversion using Akaike's Bayesian Information Criterion (ABIC). The second is Monte Carlo sampling from the probability density function of a multidimensional normal distribution. The third is ensemble analysis of tsunami inundation simulations with multiple tsunami sources. Simulation-based validation of the model was conducted, with a tsunami scenario of an M9.1 Nankai earthquake chosen as the target. Tsunami inundation around Nagoya Port was estimated using synthetic tsunami waveforms at offshore GPS buoys. The error in the estimated tsunami inundation area was about 10% even when only ten minutes of observation data were used. The estimation accuracy of waveforms on/off land and of the spatial distribution of maximum tsunami inundation depth is demonstrated.
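    The second and third steps can be sketched as follows: draw source-parameter samples from a multivariate normal distribution (whose mean and covariance would come from the ABIC-based inversion), run an inundation simulation per sample, and summarize the ensemble. Everything below is illustrative; the placeholder `simulate_max_depth` stands in for a full tsunami inundation model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior mean and covariance of tsunami-source
# parameters, as would come from the hierarchical Bayesian inversion.
mean = np.array([1.0, 2.0, 0.5])
cov = np.diag([0.1, 0.2, 0.05])

# Step 2: Monte Carlo sampling of source parameters.
samples = rng.multivariate_normal(mean, cov, size=200)

def simulate_max_depth(params):
    """Placeholder 'simulator' mapping a source sample to a maximum
    inundation depth; the real step runs a tsunami model per sample."""
    return float(np.sum(params))

# Step 3: ensemble analysis of the per-sample simulations.
depths = np.array([simulate_max_depth(s) for s in samples])
depth_mean, depth_p95 = depths.mean(), np.percentile(depths, 95)
```

    The ensemble spread (e.g. the 95th percentile of depth) is what turns a single deterministic forecast into a quantitative uncertainty estimate.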

  8. THE QUADRANTS METHOD TO ESTIMATE QUANTITATIVE VARIABLES IN MANAGEMENT PLANS IN THE AMAZON

    Directory of Open Access Journals (Sweden)

    Gabriel da Silva Oliveira

    2015-12-01

    Full Text Available This work aimed to evaluate the accuracy of estimates of abundance, basal area and commercial volume per hectare obtained by the quadrants method applied to an area of 1,000 hectares of rain forest in the Amazon. Samples were simulated by random and systematic processes with different sample sizes, ranging from 100 to 200 sampling points. The values estimated from the samples were compared with the parametric values recorded in the census. In the analysis, we considered as the population all trees with a diameter at breast height equal to or greater than 40 cm. The quadrants method did not reach the desired level of accuracy for basal area and commercial volume, overestimating the values recorded in the census; however, the accuracy of the abundance estimates was satisfactory for applying the method in forest inventories for management plans in the Amazon.
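    As a concrete illustration of the quadrants (point-centred quarter) idea, the classical Cottam–Curtis estimator derives stem density from the mean point-to-tree distance over the four quadrants at each sampling point: density is one over the squared mean distance. This is a generic sketch of that estimator, not necessarily the exact variant used in the study:

```python
def pcq_density(distances_by_point):
    """Point-centred quarter (quadrants) estimate of stem density.

    distances_by_point: one 4-tuple per sampling point, giving the
    distance (m) to the nearest qualifying tree in each quadrant.
    Returns trees per square metre via the Cottam-Curtis estimator
    density = 1 / mean_distance**2.
    """
    all_d = [d for quadrants in distances_by_point for d in quadrants]
    mean_d = sum(all_d) / len(all_d)
    return 1.0 / (mean_d ** 2)

# A mean point-to-tree distance of 5 m implies 0.04 trees/m^2,
# i.e. 400 trees per hectare.
density = pcq_density([(4.0, 5.0, 6.0, 5.0), (5.0, 5.0, 5.0, 5.0)])
```

    Multiplying by basal area or volume per stem then yields the per-hectare basal-area and volume estimates evaluated in the study.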

  9. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions.

    Science.gov (United States)

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely

  10. Methods for the quantitative comparison of molecular estimates of clade age and the fossil record.

    Science.gov (United States)

    Clarke, Julia A; Boyd, Clint A

    2015-01-01

    Approaches quantifying the relative congruence, or incongruence, of molecular divergence estimates and the fossil record have been limited. Previously proposed methods are largely node specific, assessing incongruence at particular nodes for which both fossil data and molecular divergence estimates are available. These existing metrics, and other methods that quantify incongruence across topologies including entirely extinct clades, have so far not taken into account uncertainty surrounding both the divergence estimates and the ages of fossils. They have also treated molecular divergence estimates younger than previously assessed fossil minimum estimates of clade age as if they were the same as cases in which they were older. However, these cases are not the same. Recovered divergence dates younger than compared oldest known occurrences require prior hypotheses regarding the phylogenetic position of the compared fossil record and standard assumptions about the relative timing of morphological and molecular change to be incorrect. Older molecular dates, by contrast, are consistent with an incomplete fossil record and do not require prior assessments of the fossil record to be unreliable in some way. Here, we compare previous approaches and introduce two new descriptive metrics. Both metrics explicitly incorporate information on uncertainty by utilizing the 95% confidence intervals on estimated divergence dates and data on stratigraphic uncertainty concerning the age of the compared fossils. Metric scores are maximized when these ranges are overlapping. MDI (minimum divergence incongruence) discriminates between situations where molecular estimates are younger or older than known fossils, reporting both absolute fit values and a number score for incompatible nodes. DIG range (divergence implied gap range) allows quantification of the minimum increase in implied missing fossil record induced by enforcing a given set of molecular-based estimates. These metrics are used

  11. Estimating marginal properties of quantitative real-time PCR data using nonlinear mixed models

    DEFF Research Database (Denmark)

    Gerhard, Daniel; Bremer, Melanie; Ritz, Christian

    2014-01-01

    A unified modeling framework based on a set of nonlinear mixed models is proposed for flexible modeling of gene expression in real-time PCR experiments. Focus is on estimating the marginal or population-based derived parameters: cycle thresholds and ΔΔc(t), but retaining the conditional mixed mod...

  12. Soil carbon storage estimation in a forested watershed using quantitative soil-landscape modeling

    Science.gov (United States)

    James A. Thompson; Randall K. Kolka

    2005-01-01

    Carbon storage in soils is important to forest ecosystems. Moreover, forest soils may serve as important C sinks for ameliorating excess atmospheric CO2. Spatial estimates of soil organic C (SOC) storage have traditionally relied upon soil survey maps and laboratory characterization data. This approach does not account for inherent variability...

  13. A subagging regression method for estimating the qualitative and quantitative state of groundwater

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young

    2017-08-01

    A subsample aggregating (subagging) regression (SBR) method is proposed for the analysis of groundwater data and the uncertainty associated with trend estimation. The SBR method is validated against synthetic data competitively with other conventional robust and non-robust methods. The results verify that the estimation accuracy of the SBR method is consistent and superior to that of the other methods, and that the uncertainties are reasonably estimated; the other methods offer no uncertainty analysis option. For further validation, actual groundwater data are employed and analyzed in comparison with Gaussian process regression (GPR). In all cases, the trend and the associated uncertainties are reasonably estimated by both SBR and GPR, regardless of whether the data are Gaussian or non-Gaussian skewed. However, GPR is expected to be limited in applications to data severely corrupted by outliers, owing to its non-robustness. From these implementations, it is concluded that the SBR method has the potential to be further developed as an effective tool for anomaly detection or outlier identification in groundwater state data such as groundwater level and contaminant concentration.
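    The subagging idea can be sketched in a few lines: fit the same simple estimator (here an ordinary least-squares line, standing in for whatever base learner the method actually uses) to many random subsamples drawn without replacement, then aggregate. The mean of the subsample slopes is the trend estimate and their spread is a simple uncertainty measure. A minimal sketch with hypothetical synthetic data:

```python
import numpy as np

def subagging_trend(t, y, n_sub=100, frac=0.5, seed=0):
    """Subsample-aggregating (subagging) linear-trend estimate.

    Fits a straight line to n_sub random subsamples (each a fraction
    frac of the data, drawn without replacement) and aggregates:
    returns (mean slope, standard deviation of slopes).
    """
    rng = np.random.default_rng(seed)
    n = len(t)
    m = max(2, int(frac * n))
    slopes = []
    for _ in range(n_sub):
        idx = rng.choice(n, size=m, replace=False)
        slope, _ = np.polyfit(t[idx], y[idx], 1)
        slopes.append(slope)
    slopes = np.asarray(slopes)
    return slopes.mean(), slopes.std()

# Noiseless synthetic "groundwater level" with a 0.3-per-step trend.
t = np.arange(20.0)
y = 0.3 * t + 1.0
est, spread = subagging_trend(t, y)
```

    Because each line is fitted to only part of the data, a single outlier influences only the subsamples that contain it, which is the source of the method's robustness relative to a single global fit.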

  14. Estimation of precipitation rates by measurements of ³⁶Cl in the GRIP ice core with the PSI/ETH tandem accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, G.; Baumgartner, S.; Beer, J. [EAWAG, Duebendorf (Switzerland); Synal, H.A. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Suter, M. [Eidgenoessische Technische Hochschule, Zurich (Switzerland)

    1997-09-01

    Within the European Greenland Ice Core Project (GRIP), ³⁶Cl AMS measurements have been performed on ice core samples from Summit (Greenland, 73°N, 37°W). Most data analysed so far are from the lower part of the ice core. The ³⁶Cl concentration is well correlated with δ¹⁸O, which is considered a proxy for paleotemperatures. Assuming that the deposition rate of radionuclides is independent of δ¹⁸O, ³⁶Cl is used to estimate the relationship between accumulation and δ¹⁸O. The results confirm that the rapid changes of δ¹⁸O, the so-called Dansgaard-Oeschger events, are also reflected in the precipitation rate. (author) 1 fig., 3 refs.

  15. FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.

    Science.gov (United States)

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is constant demand for better productivity with high quality at low cost. The contribution of this work is the development of an FPGA-based fused smart-sensor to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier and a 3-axis accelerometer. Results from experimentation show that fusing both parameters makes it possible to obtain three times better accuracy than is obtained from the current and vibration signals used individually.

  16. Theoretical implications of quantitative properties of interval timing and probability estimation in mouse and rat.

    Science.gov (United States)

    Kheifets, Aaron; Freestone, David; Gallistel, C R

    2017-07-01

    In three experiments with mice (Mus musculus) and rats (Rattus norvegicus), we used a switch paradigm to measure quantitative properties of the interval-timing mechanism. We found that: 1) rodents adjusted the precision of their timed switches in response to changes in the interval between the short and long feed latencies (the temporal goalposts); 2) the variability in the timing of the switch response was reduced or unchanged in the face of large trial-to-trial random variability in the short and long feed latencies; and 3) the adjustment in the distribution of switch latencies in response to changes in the relative frequency of short and long trials was sensitive to the asymmetry in the Kullback-Leibler divergence. These three results suggest that durations are represented with adjustable precision, that they are timed by multiple timers, and that there is a trial-by-trial (episodic) record of feed latencies in memory. © 2017 Society for the Experimental Analysis of Behavior.
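    The asymmetry mentioned in point 3 is easy to see numerically: for discrete distributions P and Q, D(P||Q) generally differs from D(Q||P). A small sketch with hypothetical short/long trial frequencies (the numbers are invented for illustration):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(P||Q) for discrete distributions
    given as equal-length lists of probabilities."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical relative frequencies of (short, long) trials.
p = [0.5, 0.5]   # baseline: equal numbers of short and long trials
q = [0.9, 0.1]   # mostly short trials

d_pq = kl(p, q)  # divergence of Q from P
d_qp = kl(q, p)  # divergence of P from Q; generally a different number
```

    Because D(P||Q) and D(Q||P) differ, a behavioral adjustment sensitive to this asymmetry implies more than a simple symmetric distance between the old and new trial-frequency distributions.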

  17. Evaluation of radar-derived precipitation estimates using runoff simulation : report for the NFR Energy Norway funded project 'Utilisation of weather radar data in atmospheric and hydrological models'

    Energy Technology Data Exchange (ETDEWEB)

    Abdella, Yisak; Engeland, Kolbjoern; Lepioufle, Jean-Marie

    2012-11-01

    This report presents the results from the project 'Utilisation of weather radar data in atmospheric and hydrological models', funded by NFR and Energy Norway. Three precipitation products (radar-derived, interpolated, and a combination of the two) were generated as input for hydrological models. All three products were evaluated by comparing simulated and observed runoff at catchments. In order to expose any bias in the precipitation inputs, no precipitation correction factors were applied. Three criteria were used to measure performance: the Nash-Sutcliffe efficiency, the correlation coefficient, and the bias. The results show that simulations with the combined precipitation input give the best performance. We also see that the radar-derived precipitation estimates give reasonable runoff simulations even without region-specific parameters for the Z-R relationship. All three products resulted in an underestimation of the estimated runoff, revealing a systematic bias in the measurements (e.g. catch deficit, orographic effects, Z-R relationships) that can be improved. There is an important potential for using radar-derived precipitation in runoff simulation, especially in catchments without precipitation gauges. (Author)
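    The three evaluation criteria have standard definitions: the Nash-Sutcliffe efficiency compares the squared simulation errors to the variance of the observations (1 is a perfect fit, 0 means no better than the observed mean), the correlation coefficient measures linear co-variation, and the bias here is taken as the mean difference between simulated and observed runoff. A minimal sketch with made-up runoff series:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def bias(obs, sim):
    """Mean difference between simulated and observed runoff."""
    return float(np.mean(np.asarray(sim, float) - np.asarray(obs, float)))

# Made-up daily runoff values (m^3/s), for illustration only.
obs = [1.0, 2.0, 3.0, 4.0]
sim = [1.1, 1.9, 3.2, 3.8]

nse = nash_sutcliffe(obs, sim)          # close to 1 for this toy series
r = float(np.corrcoef(obs, sim)[0, 1])  # linear correlation coefficient
b = bias(obs, sim)                      # here over- and under-estimates cancel
```

    A product with good correlation but negative bias is exactly the "systematic underestimation" signature the report describes: timing is right, volume is short.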

  18. Ultrasonic 3-D Vector Flow Method for Quantitative In Vivo Peak Velocity and Flow Rate Estimation

    DEFF Research Database (Denmark)

    Holbek, Simon; Ewertsen, Caroline; Bouzari, Hamed

    2017-01-01

    Current clinical ultrasound (US) systems are limited to show blood flow movement in either 1-D or 2-D. In this paper, a method for estimating 3-D vector velocities in a plane using the transverse oscillation method, a 32×32 element matrix array, and the experimental US scanner SARUS is presented...... is validated in two phantom studies, where flow rates are measured in a flow-rig, providing a constant parabolic flow, and in a straight-vessel phantom ( ∅=8 mm) connected to a flow pump capable of generating time varying waveforms. Flow rates are estimated to be 82.1 ± 2.8 L/min in the flow-rig compared...

  19. Greenhouse effect and waste sector in Italy: Analysis and quantitative estimates of methane emissions

    International Nuclear Information System (INIS)

    Pizzullo, Marcello; Tognotti, Leonardo

    1997-01-01

    Methane is the most important atmospheric gas with a considerable effect on climate change after carbon dioxide. In this work, methane emissions from waste have been evaluated. The estimates include emissions resulting from the anaerobic degradation of landfilled municipal solid waste and from the anaerobic treatment of industrial and municipal wastewater. The adopted methodology follows guidelines issued by the IPCC (Intergovernmental Panel on Climate Change), the scientific reference body for the Framework Convention on Climate Change signed in 1992 during the Earth Summit in Rio de Janeiro. Some factors used in the methodology for landfill emissions have been modified and adapted to the Italian situation. Estimating the emissions resulting from anaerobic treatment of industrial wastewater required a preliminary evaluation of the annual wastewater quantities produced by some significant industrial sectors.

  20. Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation.

    Science.gov (United States)

    Pillai, S; Singhvi, I

    2008-09-01

    Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method have been developed for the simultaneous estimation of the two-component drug mixture of itopride hydrochloride and rabeprazole sodium from a combined capsule dosage form. The first method involves forming and solving simultaneous equations using 265.2 nm and 290.8 nm as the two wavelengths. The second method is based on two-wavelength calculation; the wavelengths selected for the estimation of itopride hydrochloride were 278.0 nm and 298.8 nm, and for rabeprazole sodium 253.6 nm and 275.2 nm. The HPLC method is a reversed-phase chromatographic method using a Phenomenex C(18) column and acetonitrile:phosphate buffer (35:65 v/v, pH 7.0) as the mobile phase. All of the developed methods obey Beer's law in the concentration ranges employed. The results of the analysis were validated statistically and by recovery studies.
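    The simultaneous-equations step amounts to solving a 2x2 linear system: at each wavelength, the mixture absorbance is the sum of each drug's absorptivity times its concentration. The absorptivity and absorbance numbers below are invented for illustration; real values would be measured from standard solutions of each drug:

```python
import numpy as np

# Hypothetical A(1%, 1 cm) absorptivities of itopride hydrochloride (x)
# and rabeprazole sodium (y) at the two analytical wavelengths.
E = np.array([[400.0, 150.0],   # absorptivities of x and y at 265.2 nm
              [120.0, 350.0]])  # absorptivities of x and y at 290.8 nm

# Measured mixture absorbances at the two wavelengths (also invented).
A = np.array([0.55, 0.47])

# Solving E @ c = A gives the two concentrations (g/100 mL).
conc = np.linalg.solve(E, A)
```

    The method is well-conditioned only when the two absorptivity ratios differ appreciably between the wavelengths, which is why the analytical wavelengths are chosen where the spectra diverge.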

  1. Distribution and Quantitative Estimates of Variant Creutzfeldt-Jakob Disease Prions in Tissues of Clinical and Asymptomatic Patients.

    Science.gov (United States)

    Douet, Jean Y; Lacroux, Caroline; Aron, Naima; Head, Mark W; Lugan, Séverine; Tillier, Cécile; Huor, Alvina; Cassard, Hervé; Arnold, Mark; Beringue, Vincent; Ironside, James W; Andréoletti, Olivier

    2017-06-01

    In the United Kingdom, approximately 1 in 2,000 persons could be infected with variant Creutzfeldt-Jakob disease (vCJD). Therefore, the risk of transmission of vCJD by medical procedures remains a major concern for public health authorities. In this study, we used in vitro amplification of prions by protein misfolding cyclic amplification (PMCA) to estimate the distribution and level of the vCJD agent in 21 tissues from 4 patients who died of clinical vCJD and from 1 asymptomatic person with vCJD. PMCA identified major levels of vCJD prions in a range of tissues, including liver, salivary gland, kidney, lung, and bone marrow. Bioassays confirmed that the quantitative estimates of vCJD prion accumulation provided by PMCA are indicative of vCJD infectivity levels in tissues. These findings provide critical data for the design of measures to minimize the risk of iatrogenic transmission of vCJD.

  2. The Influence of Reconstruction Kernel on Bone Mineral and Strength Estimates Using Quantitative Computed Tomography and Finite Element Analysis.

    Science.gov (United States)

    Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K

    2017-10-17

    Quantitative computed tomography has been proposed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways-fall-on-the-hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured from images reconstructed with the standard kernel was significantly lower (by 6.7%) than that measured with the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured from images reconstructed with the standard kernel were significantly lower (by 16.5%) than those measured with the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  3. NOAA Climate Data Record (CDR) of Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN-CDR), Version 1 Revision 1

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — PERSIANN Precipitation Climate Data Record (PERSIANN-CDR) is a daily quasi-global precipitation product for the period of 1982 to 2011. The data covers from 60...

  4. Optimisation of information influences on problems of consequences of Chernobyl accident and quantitative criteria for estimation of information actions

    International Nuclear Information System (INIS)

    Sobaleu, A.

    2004-01-01

    The consequences of the Chernobyl NPP accident are still very important for Belarus. About 2 million Belarusians live in districts polluted by Chernobyl radionuclides. Modern approaches to solving post-Chernobyl problems in Belarus assume more active use of information and educational actions to foster a new radiological culture. This makes it possible to reduce the internal radiation dose without spending a lot of money and other resources. Experience of information work with the population affected by Chernobyl from 1986 to 2004 has shown that information and educational influences do not always reach their final aim: the application of the acquired radiation-safety knowledge in practice and a change in lifestyle. Given limited funds and facilities, information work should be optimized. The optimization can be achieved on the basis of quantitative estimates of the effectiveness of information actions. Two parameters can be used for these quantitative estimates: 1) the increase in knowledge of the population and experts on radiation safety, calculated by a new method based on applied information theory (Claude E. Shannon's Mathematical Theory of Communication), and 2) the reduction of the internal radiation dose, calculated on the basis of measurements with a human irradiation counter (HIC) before and after an information or educational influence. (author)

  5. Quantitative estimation of myocardial thickness by the wall thickness map with Tl-201 myocardial SPECT and its clinical use

    International Nuclear Information System (INIS)

    Sekiai, Yasuhiro; Sawai, Michihiko; Murayama, Susumu

    1988-01-01

    To estimate the wall thickness of the left ventricular myocardium objectively and quantitatively, we adopted the device of a wall thickness map (WTM) with Tl-201 myocardial SPECT. To validate the measurement of left ventricular wall thickness with SPECT, fundamental studies were carried out with phantom models, and clinical studies were performed in 10 cases comparing the results from SPECT with those from echocardiography. To draw the WTM, left ventricular wall thickness was measured using the cutoff method from SPECT images obtained at 5.6 mm intervals: short-axis images for the base and middle of the left ventricle, and vertical and horizontal long-axis images for the apical region. Wall thickness was defined from the number of pixels above the cutoff level. The fundamental studies disclosed that it is impossible to evaluate thicknesses of less than 10 mm by Tl-201 myocardial SPECT, but possible to discriminate wall thicknesses of 10 mm, 15 mm, and 20 mm. Echocardiographic results supported the validity of the WTM, showing a good linear correlation (r = 0.96) between the two methods for measuring left ventricular wall thickness. We conclude that the WTM applied in this report may be useful for the objective and quantitative estimation of myocardial hypertrophy. (author)
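    The pixel-counting step can be sketched as follows; the cutoff fraction and the in-plane pixel spacing are hypothetical placeholders, since the abstract does not give the exact values:

```python
import numpy as np

def wall_thickness_mm(count_profile, cutoff_fraction=0.5, pixel_mm=5.0):
    """Estimate wall thickness as (pixels above cutoff) * pixel spacing.

    count_profile : 1-D counts sampled across the myocardial wall.
    cutoff_fraction : cutoff level as a fraction of the profile maximum
        (hypothetical; the paper's exact cutoff is not stated in the abstract).
    pixel_mm : assumed in-plane pixel spacing in mm (also hypothetical).
    """
    cutoff = cutoff_fraction * np.max(count_profile)
    return np.count_nonzero(count_profile >= cutoff) * pixel_mm

# Synthetic profile: background, then an elevated plateau three pixels wide.
profile = np.array([10, 12, 80, 95, 88, 14, 11], dtype=float)
print(wall_thickness_mm(profile))  # 3 pixels * 5 mm
```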

  6. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    International Nuclear Information System (INIS)

    Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A

    2011-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ∼0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.
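    A cubic spline path of this kind can be built as a cubic Hermite interpolant that matches the measured entrance and exit positions and directions; a minimal sketch with toy numbers (not GEANT4 output):

```python
import numpy as np

def cubic_spline_path(p0, d0, p1, d1, n=5):
    """Cubic Hermite interpolant between entrance (p0, direction d0) and
    exit (p1, direction d1), evaluated at n points.

    Directions are scaled by the chord length so the curve's tangents match
    the measured proton directions at both ends.
    """
    p0, d0, p1, d1 = map(np.asarray, (p0, d0, p1, d1))
    L = np.linalg.norm(p1 - p0)
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2*t**3 - 3*t**2 + 1          # Hermite basis functions
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*L*d0 + h01*p1 + h11*L*d1

# Proton entering along +x, deflected slightly in y by multiple scattering.
path = cubic_spline_path([0, 0], [1, 0], [100, 3], [1, 0.06], n=5)
print(path[0], path[-1])  # endpoints match entrance and exit exactly
```

    The straight-line path is the special case obtained by ignoring the direction terms; the most-probable path additionally uses a scattering model, which is beyond this sketch.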

  7. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Energy Technology Data Exchange (ETDEWEB)

    Wang Dongxu; Mackie, T Rockwell; Tome, Wolfgang A, E-mail: tome@humonc.wisc.edu [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI 53705 (United States)

    2011-02-07

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy.

  8. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Science.gov (United States)

    Wang, Dongxu; Mackie, T Rockwell

    2015-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy. PMID:21212472

  9. Raman spectroscopy of human skin: looking for a quantitative algorithm to reliably estimate human age

    Science.gov (United States)

    Pezzotti, Giuseppe; Boffelli, Marco; Miyamori, Daisuke; Uemura, Takeshi; Marunaka, Yoshinori; Zhu, Wenliang; Ikegaya, Hiroshi

    2015-06-01

    The possibility of examining soft tissues by Raman spectroscopy is challenged in an attempt to probe human age for the changes in biochemical composition of skin that accompany aging. We present a proof-of-concept report for explicating the biophysical links between vibrational characteristics and the specific compositional and chemical changes associated with aging. The actual existence of such links is then phenomenologically proved. In an attempt to foster the basics for a quantitative use of Raman spectroscopy in assessing aging from human skin samples, a precise spectral deconvolution is performed as a function of donors' ages on five cadaveric samples, which emphasizes the physical significance and the morphological modifications of the Raman bands. The outputs suggest the presence of spectral markers for age identification from skin samples. Some of them appeared as authentic "biological clocks" for the apparent exactness with which they are related to age. Our spectroscopic approach yields clear compositional information of protein folding and crystallization of lipid structures, which can lead to a precise identification of age from infants to adults. Once statistically validated, these parameters might be used to link vibrational aspects at the molecular scale for practical forensic purposes.

  10. Quantitative estimation of pulegone in Mentha longifolia growing in Saudi Arabia. Is it safe to use?

    Science.gov (United States)

    Alam, Prawez; Saleh, Mahmoud Fayez; Abdel-Kader, Maged Saad

    2016-03-01

    Our TLC study of the volatile oil isolated from Mentha longifolia showed a major UV-active spot with a higher Rf value than menthol. Based on the fact that the components of the oil from the same plant differ quantitatively due to environmental conditions, the major spot was isolated using different chromatographic techniques and identified by spectroscopic means as pulegone. The presence of pulegone in M. longifolia, a plant widely used in Saudi Arabia, has raised a heated debate due to its known toxicity. The Scientific Committee on Food, Health & Consumer Protection Directorate General, European Commission, set a limit for the presence of pulegone in foodstuffs and beverages. In this paper we determined the exact amount of pulegone in different extracts, in the volatile oil, as well as in tea flavoured with M. longifolia (Habak), by validated densitometric HPTLC methods using normal-phase (Method I) and reverse-phase (Method II) TLC plates. The study indicated that the way Habak is used in Saudi Arabia results in much less pulegone than the allowed limit.

  11. Quantitative Estimation of Temperature Variations in Plantar Angiosomes: A Study Case for Diabetic Foot

    Directory of Open Access Journals (Sweden)

    H. Peregrina-Barreto

    2014-01-01

    Full Text Available Thermography is a useful tool since it provides information that may help in the diagnosis of several diseases in a noninvasive and fast way. In particular, thermography has been applied to the study of the diabetic foot. However, most of these studies report only qualitative information, making it difficult to measure significant parameters such as temperature variations. These variations are important in the analysis of the diabetic foot since they could provide knowledge, for instance, regarding ulceration risks. The early detection of ulceration risks is considered an important research topic in medicine, as its objective is to avoid major complications that might lead to limb amputation. The absence of symptoms in the early phase of ulceration is the main obstacle to a timely diagnosis in subjects with neuropathy. Since the relation between temperature and ulceration risk is well established in the literature, a methodology that obtains quantitative temperature differences in the plantar area of the diabetic foot to detect ulceration risks is proposed in this work. The methodology is based on the angiosome concept and image processing.
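    A minimal sketch of the quantitative step the methodology aims at, computing per-angiosome mean temperatures and contralateral differences; the angiosome names, temperature values, and the ~2.2 °C asymmetry threshold (a value commonly cited in the diabetic-foot literature, not taken from this paper) are illustrative assumptions:

```python
import numpy as np

# Hypothetical mean temperatures (deg C) per plantar angiosome for each foot;
# real values would come from segmented thermograms.
angiosomes = ["MPA", "LPA", "MCA", "LCA"]  # medial/lateral plantar, medial/lateral calcaneal
left  = np.array([29.1, 28.7, 30.2, 29.5])
right = np.array([29.3, 28.9, 32.6, 29.6])

# Contralateral asymmetry per angiosome; differences above ~2.2 deg C are
# commonly cited as a warning sign for ulceration risk.
delta = np.abs(left - right)
at_risk = [name for name, d in zip(angiosomes, delta) if d > 2.2]
print(at_risk)  # ['MCA']
```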

  12. Quantitative estimation of intestinal dilation as a predictor of obstruction in the dog.

    Science.gov (United States)

    Graham, J P; Lord, P F; Harrison, J M

    1998-11-01

    Mechanical obstruction is a major differential diagnosis for dogs presented with gastrointestinal problems. Small intestinal dilation is a cardinal sign of obstruction but its recognition depends upon the observer's experience and anecdotally derived parameters for normal small intestinal diameter. The objective of this study was to formulate a quantitative index for normal intestinal diameter and evaluate its usefulness in predicting small intestinal obstruction. The material consisted of survey abdominal radiographs of 50 normal dogs, 44 cases of intestinal obstruction and 86 patients which subsequently had an upper gastrointestinal examination. A ratio of the maximum small intestinal diameter (SI) and the height of the body of the fifth lumbar vertebra at its narrowest point (L5) was used, and a logistic regression model employed to determine the probability of an obstruction existing with varying degrees of intestinal dilation. A value of 1.6 for SI/L5 is recommended as the upper limit of normal intestinal diameter for clinical use. The model showed that obstruction is very unlikely if the SI/L5 value is less than this. Higher values were significantly associated with obstruction.
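    The ratio and its use as a predictor can be sketched directly; the logistic coefficients below are hypothetical, since the abstract reports only the 1.6 upper limit of normal, not the fitted model:

```python
from math import exp

def si_l5_ratio(max_si_diameter_mm, l5_height_mm):
    """Ratio of maximum small-intestinal diameter to L5 vertebral body height."""
    return max_si_diameter_mm / l5_height_mm

def obstruction_probability(ratio, b0=-8.0, b1=5.0):
    """Logistic model P(obstruction | SI/L5); b0 and b1 are hypothetical
    coefficients for illustration -- the paper's fitted values are not
    reproduced in the abstract."""
    return 1.0 / (1.0 + exp(-(b0 + b1 * ratio)))

r = si_l5_ratio(38.0, 20.0)          # 1.9, above the 1.6 upper limit of normal
print(round(r, 2), r > 1.6, round(obstruction_probability(r), 2))
```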

  13. Quantitative estimation of intestinal dilation as a predictor of obstruction in the dog

    International Nuclear Information System (INIS)

    Graham, J.P.; Lord, P.F.; Harrison, J.M.

    1998-01-01

    Mechanical obstruction is a major differential diagnosis for dogs presented with gastrointestinal problems. Small intestinal dilation is a cardinal sign of obstruction but its recognition depends upon the observer's experience and anecdotally derived parameters for normal small intestinal diameter. The objective of this study was to formulate a quantitative index for normal intestinal diameter and evaluate its usefulness in predicting small intestinal obstruction. The material consisted of survey abdominal radiographs of 50 normal dogs, 44 cases of intestinal obstruction and 86 patients which subsequently had an upper gastrointestinal examination. A ratio of the maximum small intestinal diameter (SI) and the height of the body of the fifth lumbar vertebra at its narrowest point (L5) was used, and a logistic regression model employed to determine the probability of an obstruction existing with varying degrees of intestinal dilation. A value of 1.6 for SI/L5 is recommended as the upper limit of normal intestinal diameter for clinical use. The model showed that obstruction is very unlikely if the SI/L5 value is less than this. Higher values were significantly associated with obstruction

  14. Estimation of quantitative levels of diesel exhaust exposure and the health impact in the contemporary Australian mining industry.

    Science.gov (United States)

    Peters, Susan; de Klerk, Nicholas; Reid, Alison; Fritschi, Lin; Musk, Aw Bill; Vermeulen, Roel

    2017-03-01

    To estimate quantitative levels of exposure to diesel exhaust expressed as elemental carbon (EC) in the contemporary mining industry and to describe the excess risk of lung cancer that may result from those levels. EC exposure has been monitored in Western Australian miners since 2003. Mixed-effects models were used to estimate EC levels for five surface and five underground occupation groups (as a fixed effect) and specific jobs within each group (as a random effect). Further fixed effects included sampling year and duration, and mineral mined. On the basis of published risk functions, we estimated excess lifetime risk of lung cancer mortality for several employment scenarios. Personal EC measurements (n=8614) were available for 146 different jobs at 124 mine sites. The mean estimated EC exposure level for surface occupations in 2011 was 14 µg/m³ for 12-hour shifts. Levels for underground occupation groups ranged from 18 to 44 µg/m³. Underground diesel loader operators held the specific job with the highest exposure: 59 µg/m³. A lifetime career (45 years) as a surface worker or underground miner, at the exposure levels estimated for 2011 (14 and 44 µg/m³ EC), was associated with 5.5 and 38 extra lung cancer deaths per 1000 males, respectively. EC exposure levels in the contemporary Australian mining industry are still substantial, particularly for underground workers. The estimated excess numbers of lung cancer deaths associated with these exposures support the need for stringent occupational exposure limits for diesel exhaust.

  15. Quantitative falls risk estimation through multi-sensor assessment of standing balance.

    Science.gov (United States)

    Greene, Barry R; McGrath, Denise; Walsh, Lorcan; Doheny, Emer P; McKeown, David; Garattini, Chiara; Cunningham, Clodagh; Crosby, Lisa; Caulfield, Brian; Kenny, Rose A

    2012-12-01

    Falls are the most common cause of injury and hospitalization and one of the principal causes of death and disability in older adults worldwide. Measures of postural stability have been associated with the incidence of falls in older adults. The aim of this study was to develop a model that accurately classifies fallers and non-fallers using novel multi-sensor quantitative balance metrics that can be easily deployed into a home or clinic setting. We compared the classification accuracy of our model with an established method for falls risk assessment, the Berg balance scale. Data were acquired using two sensor modalities--a pressure sensitive platform sensor and a body-worn inertial sensor, mounted on the lower back--from 120 community dwelling older adults (65 with a history of falls, 55 without, mean age 73.7 ± 5.8 years, 63 female) while performing a number of standing balance tasks in a geriatric research clinic. Results obtained using a support vector machine yielded a mean classification accuracy of 71.52% (95% CI: 68.82-74.28) in classifying falls history, obtained using one model classifying all data points. Considering male and female participant data separately yielded classification accuracies of 72.80% (95% CI: 68.85-77.17) and 73.33% (95% CI: 69.88-76.81) respectively, leading to a mean classification accuracy of 73.07% in identifying participants with a history of falls. Results compare favourably to those obtained using the Berg balance scale (mean classification accuracy: 59.42% (95% CI: 56.96-61.88)). Results from the present study could lead to a robust method for assessing falls risk in both supervised and unsupervised environments.
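    The classification step can be illustrated with a minimal linear SVM trained by sub-gradient descent on synthetic balance features; the feature values, labels, and hyperparameters below are invented for illustration and are not the study's data or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic balance features per participant: [sway RMS (cm), sway velocity (cm/s)].
# "Fallers" (label +1) are drawn with larger sway than "non-fallers" (label -1);
# purely illustrative numbers.
fallers     = rng.normal([2.5, 4.0], 0.4, size=(60, 2))
non_fallers = rng.normal([1.0, 2.0], 0.4, size=(60, 2))
X = np.vstack([fallers, non_fallers])
y = np.concatenate([np.ones(60), -np.ones(60)])

# Minimal linear SVM: sub-gradient descent on the regularized hinge loss.
w, b, lam = np.zeros(2), 0.0, 0.01
for epoch in range(200):
    for i in rng.permutation(len(y)):
        margin = y[i] * (X[i] @ w + b)
        if margin < 1:                       # point inside margin: hinge gradient
            w += 0.01 * (y[i] * X[i] - lam * w)
            b += 0.01 * y[i]
        else:
            w -= 0.01 * lam * w

accuracy = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {accuracy:.2f}")
```

    The study used a support vector machine with cross-validated accuracy on real sensor data; this sketch only shows the mechanics of the classifier on separable toy clusters.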

  16. Quantitative falls risk estimation through multi-sensor assessment of standing balance

    International Nuclear Information System (INIS)

    Greene, Barry R; McGrath, Denise; Walsh, Lorcan; Doheny, Emer P; McKeown, David; Garattini, Chiara; Cunningham, Clodagh; Crosby, Lisa; Caulfield, Brian; Kenny, Rose A

    2012-01-01

    Falls are the most common cause of injury and hospitalization and one of the principal causes of death and disability in older adults worldwide. Measures of postural stability have been associated with the incidence of falls in older adults. The aim of this study was to develop a model that accurately classifies fallers and non-fallers using novel multi-sensor quantitative balance metrics that can be easily deployed into a home or clinic setting. We compared the classification accuracy of our model with an established method for falls risk assessment, the Berg balance scale. Data were acquired using two sensor modalities—a pressure sensitive platform sensor and a body-worn inertial sensor, mounted on the lower back—from 120 community dwelling older adults (65 with a history of falls, 55 without, mean age 73.7 ± 5.8 years, 63 female) while performing a number of standing balance tasks in a geriatric research clinic. Results obtained using a support vector machine yielded a mean classification accuracy of 71.52% (95% CI: 68.82–74.28) in classifying falls history, obtained using one model classifying all data points. Considering male and female participant data separately yielded classification accuracies of 72.80% (95% CI: 68.85–77.17) and 73.33% (95% CI: 69.88–76.81) respectively, leading to a mean classification accuracy of 73.07% in identifying participants with a history of falls. Results compare favourably to those obtained using the Berg balance scale (mean classification accuracy: 59.42% (95% CI: 56.96–61.88)). Results from the present study could lead to a robust method for assessing falls risk in both supervised and unsupervised environments. (paper)

  17. Estimating soil hydrological response by combining precipitation-runoff modeling and hydro-functional soil homogeneous units

    Science.gov (United States)

    Aroca-Jimenez, Estefania; Bodoque, Jose Maria; Diez-Herrero, Andres

    2015-04-01

    Flash floods constitute one of the natural hazards best able to generate risk, particularly with regard to society. The complexity of this process and its dependence on various factors related to the characteristics of the basin and rainfall make flash floods difficult to characterize in terms of their hydrological response. To do this, a proper analysis of the so-called 'initial abstractions' is essential. Among these processes, infiltration plays a crucial role in explaining the occurrence of floods in mountainous basins. For its characterization, the Green-Ampt model, which depends on the characteristics of rainfall and the physical properties of the soil, has been used in this work. This method makes it possible to simulate floods in mountainous basins where the hydrological response is sub-daily. However, it has the disadvantage of being based on physical properties of the soil that have a high spatial variability. To address this difficulty, soil mapping units have been delineated according to geomorphological landforms and elements. They represent hydro-functional mapping units that are theoretically homogeneous from the perspective of the pedostructure parameters of the pedon. The soil texture of each homogeneous group of landform units was therefore studied by granulometric analyses using standardized sieves and Sedigraph devices. In addition, the uncertainty associated with the parameterization of the Green-Ampt method has been estimated by implementing a Monte Carlo approach, which required assigning a proper distribution function to each parameter. The suitability of this method was assessed by calibrating and validating a hydrological model, in which the runoff hydrograph has been simulated using the SCS unit hydrograph (HEC-GeoHMS software), while flood wave routing has been characterized using the Muskingum-Cunge method.
    Calibration and validation of the model were performed with an automatic routine based on the search algorithm
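    The Green-Ampt step at the core of the method can be sketched by solving its implicit cumulative-infiltration equation with fixed-point iteration; the soil parameters below are generic textbook-style values, not those of the study basin:

```python
from math import log

def green_ampt_cumulative(t_hr, K=1.0, psi=16.7, d_theta=0.3, iters=50):
    """Cumulative infiltration F (cm) after t_hr hours under ponded conditions,
    from the implicit Green-Ampt relation
        F = K*t + psi*d_theta * ln(1 + F/(psi*d_theta)).
    K: saturated hydraulic conductivity (cm/h); psi: wetting-front suction
    head (cm); d_theta: moisture deficit. Illustrative values only.
    """
    F = K * t_hr  # initial guess; the fixed-point map is contractive here
    for _ in range(iters):
        F = K * t_hr + psi * d_theta * log(1.0 + F / (psi * d_theta))
    return F

F2 = green_ampt_cumulative(2.0)
rate = 1.0 * (1.0 + 16.7 * 0.3 / F2)   # infiltration rate f = K(1 + psi*d_theta/F)
print(round(F2, 2), round(rate, 2))
```

    In the Monte Carlo approach described above, K, psi and d_theta would each be drawn from a fitted distribution and this calculation repeated per draw.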

  18. Quantitative estimation of lithofacies from seismic data in a tertiary turbidite system in the North Sea

    Energy Technology Data Exchange (ETDEWEB)

    Joerstad, A.K.; Avseth, P.Aa; Mukerji, T.; Mavko, G.; Granli, J.R.

    1998-12-31

    Deep-water clastic systems and associated turbidite reservoirs are often characterized by very complex sand distributions, and reservoir description based on conventional seismic and well-log stratigraphic analysis may be very uncertain in these depositional environments. Reservoirs in turbidite systems have been shown to produce very inefficiently under conventional development: more than 70% of the mobile oil is commonly left behind because of the heterogeneous nature of these reservoirs. This study examines a turbidite system in the North Sea, with five available wells and 3-D seismic near- and far-offset stacks, to establish the most likely estimates of facies and pore fluid within the cube. 5 figs.

  19. Precipitation Indices Low Countries

    Science.gov (United States)

    van Engelen, A. F. V.; Ynsen, F.; Buisman, J.; van der Schrier, G.

    2009-09-01

    Since 1995, KNMI published a series of books(1), presenting an annual reconstruction of weather and climate in the Low Countries, covering the period AD 763-present, or roughly, the last millennium. The reconstructions are based predominantly on the interpretation of documentary sources and on comparison with other proxies and instrumental observations. The series also comprises a number of classifications, among them annual classifications for winter and summer temperature and for winter and summer dryness-wetness. The classifications of temperature have been reworked into peer-reviewed (2) series (AD 1000-present) of seasonal temperatures and temperature indices, the so-called LCT (Low Countries Temperature) series, now incorporated in the Millennium databases. Recently we started a study to convert the dryness-wetness classifications into a series of precipitation: the so-called LCP (Low Countries Precipitation) series. A brief outline of the applied methodology and preliminary results is given here. The WMO definition of meteorological drought has been followed, namely that a period is called wet or dry when the amount of precipitation is considerably more or less, respectively, than usual (normal). To gain a more quantitative insight, for four locations geographically spread over the Low Countries area (De Bilt, Vlissingen, Maastricht and Uccle), we analysed the statistics of daily precipitation series covering the period 1900-present. This brought us to the following definition, valid for the Low Countries: a period is considered (very) dry or (very) wet, respectively, if over a continuous period of at least 60 days (~two months) or 90 days (~three months), at least two of the four locations measured 50% less or 50% more, respectively, than the normal amount for the location (based on the 1961-1990 normal period).
    This results in the following classification into five drought classes that could be applied to non-instrumental observations: Very wet period
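    The stated definition translates directly into a classification rule; a sketch with invented station totals:

```python
def classify_period(days, totals, normals):
    """Classify a period per the Low Countries rule: (very) dry / (very) wet
    when, over >= 60 (>= 90) continuous days, at least two of the four
    stations record 50% less / more precipitation than their 1961-1990 normal."""
    if days < 60:
        return "normal"
    ratios = [t / n for t, n in zip(totals, normals)]
    n_dry = sum(r <= 0.5 for r in ratios)
    n_wet = sum(r >= 1.5 for r in ratios)
    if n_dry >= 2:
        return "very dry" if days >= 90 else "dry"
    if n_wet >= 2:
        return "very wet" if days >= 90 else "wet"
    return "normal"

# 75-day window; De Bilt, Vlissingen, Maastricht, Uccle (mm observed vs normal).
print(classify_period(75, [40, 35, 80, 30], [100, 90, 110, 95]))  # dry
```

    The station totals and normals here are invented; the real series come from the 1900-present daily records at the four locations.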

  20. Systematic feasibility analysis of a quantitative elasticity estimation for breast anatomy using supine/prone patient postures.

    Science.gov (United States)

    Hasse, Katelyn; Neylon, John; Sheng, Ke; Santhanam, Anand P

    2016-03-01

    Breast elastography is a critical tool for improving the targeted radiotherapy treatment of breast tumors. Current breast radiotherapy imaging protocols only involve prone and supine CT scans. There is a lack of knowledge on the quantitative accuracy with which breast elasticity can be systematically measured using only prone and supine CT datasets. The purpose of this paper is to describe a quantitative elasticity estimation technique for breast anatomy using only these supine/prone patient postures. Using biomechanical, high-resolution breast geometry obtained from CT scans, a systematic assessment was performed in order to determine the feasibility of this methodology for clinically relevant elasticity distributions. A model-guided inverse analysis approach is presented in this paper. A graphics processing unit (GPU)-based linear elastic biomechanical model was employed as a forward model for the inverse analysis with the breast geometry in a prone position. The elasticity estimation was performed using a gradient-based iterative optimization scheme and a fast-simulated annealing (FSA) algorithm. Numerical studies were conducted to systematically analyze the feasibility of elasticity estimation. For simulating gravity-induced breast deformation, the breast geometry was anchored at its base, resembling the chest-wall/breast tissue interface. Ground-truth elasticity distributions were assigned to the model, representing tumor presence within breast tissue. Model geometry resolution was varied to estimate its influence on convergence of the system. A priori information was approximated and utilized to record the effect on time and accuracy of convergence. The role of the FSA process was also recorded. A novel error metric that combined elasticity and displacement error was used to quantify the systematic feasibility study. For the authors' purposes, convergence was set to be obtained when each voxel of tissue was within 1 mm of ground-truth deformation. 
The authors
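    The fast-simulated-annealing (FSA) search can be illustrated on a deliberately tiny inverse problem: recovering elastic parameters of a two-element model from an observed displacement, using Cauchy-distributed steps with a 1/(1+t) temperature schedule. Everything here (forward model, step scale, schedule constants) is a toy stand-in for the paper's GPU biomechanical model:

```python
import math
import random

random.seed(1)

# Toy forward model: two serial elastic elements under a unit gravity load;
# each element stretches by 1/k_i, so tip displacement is 1/k1 + 1/k2.
def forward(k):
    return sum(1.0 / ki for ki in k)

k_true = [2.0, 5.0]
u_obs = forward(k_true)                  # "measured" displacement

def objective(k):
    return (forward(k) - u_obs) ** 2

# Fast simulated annealing: Cauchy-like steps with temperature T = 1/(1 + t).
k = [1.0, 1.0]
best, best_cost = list(k), objective(k)
for t in range(5000):
    T = 1.0 / (1 + t)
    cand = [max(0.1, ki + T * math.tan(math.pi * (random.random() - 0.5)))
            for ki in k]
    dc = objective(cand) - objective(k)
    if dc < 0 or random.random() < math.exp(-dc / max(T, 1e-12)):
        k = cand
        if objective(k) < best_cost:
            best, best_cost = list(k), objective(k)

print(round(forward(best), 3), round(u_obs, 3))
```

    Note the toy objective only constrains the total compliance, so the individual stiffnesses are not identifiable here; the paper's inverse analysis uses a full displacement field, which removes that ambiguity.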

  1. A generalized estimating equations approach to quantitative trait locus detection of non-normal traits

    Directory of Open Access Journals (Sweden)

    Thomson Peter C

    2003-05-01

    Full Text Available Abstract To date, most statistical developments in QTL detection methodology have been directed at continuous traits with an underlying normal distribution. This paper presents a method for QTL analysis of non-normal traits using a generalized linear mixed model approach. Development of this method has been motivated by a backcross experiment involving two inbred lines of mice that was conducted in order to locate a QTL for litter size. A Poisson regression form is used to model litter size, with allowances made for under- as well as over-dispersion, as suggested by the experimental data. In addition to fixed parity effects, random animal effects have also been included in the model. However, the method is not fully parametric as the model is specified only in terms of means, variances and covariances, and not as a full probability model. Consequently, a generalized estimating equations (GEE approach is used to fit the model. For statistical inferences, permutation tests and bootstrap procedures are used. This method is illustrated with simulated as well as experimental mouse data. Overall, the method is found to be quite reliable, and with modification, can be used for QTL detection for a range of other non-normally distributed traits.
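    With an independence working correlation, the GEE for a Poisson log-linear model reduces to the ordinary Poisson score equations, which a few lines of IRLS can solve; a sketch on synthetic backcross-style data (the genotype and parity effects are invented). A full GEE would additionally iterate on a working correlation across repeated litters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic backcross data: litter size modeled as Poisson with a QTL
# genotype indicator (0/1) and a parity covariate; toy data, not the mouse study.
n = 400
genotype = rng.integers(0, 2, n)
parity = rng.integers(1, 4, n)
eta = 1.8 + 0.3 * genotype + 0.05 * parity
y = rng.poisson(np.exp(eta))

# Independence-working-correlation GEE for a Poisson log link reduces to the
# score equations X'(y - mu) = 0, solved here by iteratively reweighted
# least squares (IRLS).
X = np.column_stack([np.ones(n), genotype, parity])
beta = np.zeros(3)
for _ in range(25):
    mu = np.exp(X @ beta)
    W = mu                                    # Poisson variance function: var = mu
    z = X @ beta + (y - mu) / mu              # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

print(np.round(beta, 2))   # should be near the generating values [1.8, 0.3, 0.05]
```

    Over- or under-dispersion, as modelled in the paper, would be handled by scaling the working variance W rather than changing the point estimates.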

  2. Quantitative Estimation of Risks for Production Unit Based on OSHMS and Process Resilience

    Science.gov (United States)

    Nyambayar, D.; Koshijima, I.; Eguchi, H.

    2017-06-01

    Three principal elements in the production field of chemical/petrochemical industry are (i) Production Units, (ii) Production Plant Personnel and (iii) Production Support System (computer system introduced for improving productivity). Each principal element has production process resilience, i.e. a capability to restrain disruptive signals occurred in and out of the production field. In each principal element, risk assessment is indispensable for the production field. In a production facility, the occupational safety and health management system (Hereafter, referred to as OSHMS) has been introduced to reduce a risk of accidents and troubles that may occur during production. In OSHMS, a risk assessment is specified to reduce a potential risk in the production facility such as a factory, and PDCA activities are required for a continual improvement of safety production environments. However, there is no clear statement to adopt the OSHMS standard into the production field. This study introduces a metric to estimate the resilience of the production field by using the resilience generated by the production plant personnel and the result of the risk assessment in the production field. A method for evaluating how OSHMS functions are systematically installed in the production field is also discussed based on the resilience of the three principal elements.

  3. A scintillation camera technique for quantitative estimation of separate kidney function and its use before nephrectomy

    International Nuclear Information System (INIS)

    Larsson, I.; Lindstedt, E.; Ohlin, P.; Strand, S.E.; White, T.

    1975-01-01

    A scintillation camera technique was used for measuring renal uptake of [131I]Hippuran 80-110 s after injection. Externally measured Hippuran uptake was markedly influenced by kidney depth, which was measured by lateral-view image after injection of [99Tc]iron ascorbic acid complex or [197Hg]chlormerodrine. When one kidney was nearer to the dorsal surface of the body than the other, it was necessary to correct the externally measured Hippuran uptake for kidney depth to obtain reliable information on the true partition of Hippuran between the two kidneys. In some patients the glomerular filtration rate (GFR) was measured before and after nephrectomy. Measured postoperative GFR was compared with preoperative predicted GFR, which was calculated by multiplying the preoperative Hippuran uptake of the kidney to be left in situ, as a fraction of the preoperative Hippuran uptake of both kidneys, by the measured preoperative GFR. The measured postoperative GFR was usually moderately higher than the preoperatively predicted GFR. The difference could be explained by a postoperative compensatory increase in function of the remaining kidney. Thus, the present method offers a possibility of estimating separate kidney function without arterial or ureteric catheterization. (auth)
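    The prediction rule described above is simple arithmetic; a sketch with invented uptake counts:

```python
def predicted_postop_gfr(uptake_remaining, uptake_total, preop_gfr):
    """Predicted post-nephrectomy GFR: the remaining kidney's share of total
    Hippuran uptake times the measured preoperative GFR (as in the abstract)."""
    return (uptake_remaining / uptake_total) * preop_gfr

# Example: the remaining kidney accounts for 75% of depth-corrected counts,
# preoperative GFR 90 ml/min. The measured postoperative value is typically
# somewhat higher, due to compensatory hypertrophy of the remaining kidney.
print(predicted_postop_gfr(6000, 8000, 90.0))  # 67.5
```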

  4. Usefulness of the automatic quantitative estimation tool for cerebral blood flow: clinical assessment of the application software tool AQCEL.

    Science.gov (United States)

    Momose, Mitsuhiro; Takaki, Akihiro; Matsushita, Tsuyoshi; Yanagisawa, Shin; Yano, Kesato; Miyasaka, Tadashi; Ogura, Yuka; Kadoya, Masumi

    2011-01-01

    AQCEL enables automatic reconstruction of single-photon emission computed tomography (SPECT) images without image degradation and quantitative analysis of cerebral blood flow (CBF) after the input of simple parameters. We ascertained the usefulness and quality of images obtained by the application software AQCEL in clinical practice. Twelve patients underwent brain perfusion SPECT using technetium-99m ethyl cysteinate dimer at rest and after acetazolamide (ACZ) loading. Images reconstructed using AQCEL were compared with those reconstructed using the conventional filtered back projection (FBP) method for qualitative estimation. Two experienced nuclear medicine physicians rated image quality using the following visual scores: 0, same; 1, slightly superior; 2, superior. For quantitative estimation, the mean CBF values of the normal hemisphere of the 12 patients using ACZ calculated by the AQCEL method were compared with those calculated by the conventional method. The CBF values of the 24 regions of the 3-dimensional stereotaxic region of interest template (3DSRT) calculated by the AQCEL method at rest and after ACZ loading were compared to those calculated by the conventional method. No significant qualitative difference was observed between the AQCEL and conventional FBP methods in the rest study. The average score by the AQCEL method was 0.25 ± 0.45 and that by the conventional method was 0.17 ± 0.39 (P = 0.34). There was a significant qualitative difference between the AQCEL and conventional methods in the ACZ loading study. The average score for AQCEL was 0.83 ± 0.58 and that for the conventional method was 0.08 ± 0.29 (P = 0.003). In quantitative estimation using ACZ, the mean CBF values of the 12 patients calculated by the AQCEL method were 3-8% higher than those calculated by the conventional method. The square of the correlation coefficient between these methods was 0.995. While comparing the 24 3DSRT regions of 12 patients, the squares of the correlation
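
    The quantitative agreement reported above was summarized by the square of the correlation coefficient between the two methods; a minimal sketch with hypothetical paired regional CBF values (the data below are illustrative, not the study's):

```python
import numpy as np

# Hypothetical paired regional CBF values (mL/100 g/min) from two methods:
# the second method reads slightly higher with small random deviations.
conventional = np.array([35.0, 42.0, 50.0, 61.0, 55.0, 47.0])
aqcel = conventional * 1.05 + np.array([0.3, -0.2, 0.1, -0.4, 0.2, 0.0])

# Pearson correlation coefficient and its square (the r^2 reported above).
r = np.corrcoef(conventional, aqcel)[0, 1]
r_squared = r ** 2  # close to 1 when the two methods agree linearly
```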

  5. The Global Precipitation Climatology Project (GPCP) Combined Precipitation Dataset

    Science.gov (United States)

    Huffman, George J.; Adler, Robert F.; Arkin, Philip; Chang, Alfred; Ferraro, Ralph; Gruber, Arnold; Janowiak, John; McNab, Alan; Rudolf, Bruno; Schneider, Udo

    1997-01-01

    The Global Precipitation Climatology Project (GPCP) has released the GPCP Version 1 Combined Precipitation Data Set, a global, monthly precipitation dataset covering the period July 1987 through December 1995. The primary product in the dataset is a merged analysis incorporating precipitation estimates from low-orbit-satellite microwave data, geosynchronous-orbit-satellite infrared data, and rain gauge observations. The dataset also contains the individual input fields, a combination of the microwave and infrared satellite estimates, and error estimates for each field. The data are provided on 2.5 deg x 2.5 deg latitude-longitude global grids. Preliminary analyses show general agreement with prior studies of global precipitation and extend prior studies of El Nino-Southern Oscillation precipitation patterns. At the regional scale there are systematic differences with standard climatologies.

  6. Quantitative and qualitative estimates of cross-border tobacco shopping and tobacco smuggling in France.

    Science.gov (United States)

    Lakhdar, C Ben

    2008-02-01

    In France, cigarette sales have fallen sharply, especially in border areas, since the price increases of 2003 and 2004. It was proposed that these falls were due not to people quitting smoking but rather to increased cross-border sales of tobacco and/or smuggling. This paper aims to test this proposition. Three approaches were used. First, cigarette sales data from French sources for the period 1999-2006 were collected, and a simulation of the changes seen within these sales was carried out in order to estimate what the sales situation would have looked like without the presence of foreign tobacco. Second, the amounts of tobacco the French population reported consuming were compared with registered tobacco sales. Finally, in order to identify the countries of origin of foreign tobacco entering France, we collected a random sample of cigarette packs from a waste collection centre. According to the first method, cross-border shopping and smuggling of tobacco accounted for 8,635 tonnes of tobacco in 2004, 9,934 in 2005, and 9,930 in 2006, i.e., between 14% and 17% of total sales. The second method gave larger results: the difference between registered cigarette sales and cigarettes declared as being smoked was around 12,000 to 13,000 tonnes in 2005, equivalent to 20% of legal sales. The collection of cigarette packs at a waste collection centre showed that foreign cigarettes accounted for 18.6% of our sample in 2005 and 15.5% in 2006. France seems mainly to be a victim of cross-border purchasing of tobacco products, with the contraband market for tobacco remaining modest. In order to discourage cross-border purchases, increased harmonization of national policies on the taxation of tobacco products needs to be envisaged by the European Union.

  7. Precipitation and measurements of precipitation

    NARCIS (Netherlands)

    Schmidt, F.H.; Bruin, H.A.R. de; Attmannspacher, W.; Harrold, T.W.; Kraijenhoff van de Leur, D.A.

    1977-01-01

    In Western Europe, precipitation is a normal phenomenon; it is of importance to all aspects of society, particularly to agriculture and cattle breeding, and, of course, it is a subject of hydrological research. Precipitation is an essential part of the hydrological cycle. How disastrous local

  8. Stereological estimates of nuclear volume and other quantitative variables in supratentorial brain tumors. Practical technique and use in prognostic evaluation

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Braendgaard, H; Chistiansen, A O

    1991-01-01

    The use of morphometry and modern stereology in malignancy grading of brain tumors is only poorly investigated. The aim of this study was to present these quantitative methods. A retrospective feasibility study of 46 patients with supratentorial brain tumors was carried out to demonstrate...... the practical technique. The continuous variables were correlated with the subjective, qualitative WHO classification of brain tumors, and the prognostic value of the parameters was assessed. Well differentiated astrocytomas (n = 14) had smaller estimates of the volume-weighted mean nuclear volume and mean...... nuclear profile area than those of anaplastic astrocytomas (n = 13) (2p = 3.1 × 10⁻³ and 2p = 4.8 × 10⁻³, respectively). No differences were seen between the latter type of tumor and glioblastomas (n = 19). The nuclear index was of the same magnitude in all three tumor types, whereas the mitotic index...

  9. Agreement between clinical estimation and a new quantitative analysis by Photoshop software in fundus and angiographic image variables.

    Science.gov (United States)

    Ramezani, Alireza; Ahmadieh, Hamid; Azarmina, Mohsen; Soheilian, Masoud; Dehghan, Mohammad H; Mohebbi, Mohammad R

    2009-12-01

    To evaluate the validity of a new method for the quantitative analysis of fundus or angiographic images using Photoshop 7.0 (Adobe, USA) software by comparison with clinical evaluation. Four hundred and eighteen fundus and angiographic images of diabetic patients were evaluated by three retina specialists and then by computing using Photoshop 7.0 software. Four variables were selected for comparison: amount of hard exudates (HE) on color pictures, amount of HE on red-free pictures, severity of leakage, and the size of the foveal avascular zone (FAZ). The coefficients of agreement (kappa) between the two methods for the amount of HE on color and red-free photographs were 85% (0.69) and 79% (0.59), respectively. The agreement for severity of leakage was 72% (0.46). For the evaluation of the FAZ size using the magic wand and lasso software tools, the agreement was 54% (0.09) and 89% (0.77), respectively. Agreement in the estimation of the FAZ size by the magnetic lasso tool was excellent, and it was almost as good in the quantification of HE on color and red-free images. Considering the agreement of this new technique for the measurement of variables in fundus images using Photoshop software with the clinical evaluation, this method seems to have sufficient validity to be used for the quantitative analysis of HE, leakage, and FAZ size on the angiograms of diabetic patients.
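
    The coefficient of agreement (kappa) used above corrects the observed agreement for agreement expected by chance; a minimal sketch of Cohen's kappa computed from a hypothetical 2 x 2 agreement table (the counts are illustrative, not the study's):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix of category counts."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_observed = np.trace(confusion) / n
    # Chance agreement from the marginal totals.
    p_expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical 2x2 table: clinical grading (rows) vs. software grading (columns).
kappa = cohens_kappa([[40, 5], [7, 48]])
```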

  10. Quantitative assessment of the microbial risk of leafy greens from farm to consumption: preliminary framework, data, and risk estimates.

    Science.gov (United States)

    Danyluk, Michelle D; Schaffner, Donald W

    2011-05-01

    This project was undertaken to relate what is known about the behavior of Escherichia coli O157:H7 under laboratory conditions and integrate this information with what is known regarding the 2006 E. coli O157:H7 spinach outbreak in the context of a quantitative microbial risk assessment. The risk model explicitly assumes that all contamination arises from exposure in the field. Extracted data, models, and user inputs were entered into an Excel spreadsheet, and the modeling software @RISK was used to perform Monte Carlo simulations. The model predicts that cut leafy greens that are temperature abused will support the growth of E. coli O157:H7, and populations of the organism may increase by as much as 1 log CFU/day under optimal temperature conditions. When the risk model used a starting level of -1 log CFU/g, with 0.1% of incoming servings contaminated, the predicted numbers of cells per serving were within the range of best available estimates of pathogen levels during the outbreak. The model predicts that levels in the field of -1 log CFU/g and 0.1% prevalence could have resulted in an outbreak approximately the size of the 2006 E. coli O157:H7 outbreak. This quantitative microbial risk assessment model represents a preliminary framework that identifies available data and provides initial risk estimates for pathogenic E. coli in leafy greens. Data gaps include retail storage times, correlations between storage time and temperature, determining the importance of E. coli O157:H7 in leafy greens lag time models, and validation of the importance of cross-contamination during the washing process.
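
    The Monte Carlo structure of such a risk model can be sketched as repeated sampling of uncertain inputs (storage time, growth rate) and propagation through the growth equation; the uniform distributions and parameter ranges below are illustrative, not those of the @RISK spreadsheet model:

```python
import random

def simulate_growth(n_sims=10000, seed=42):
    """Monte Carlo sketch of E. coli O157:H7 growth in cut leafy greens.
    Starting level of -1 log CFU/g; growth of up to 1 log CFU/g per day
    under temperature abuse (hypothetical uniform distributions)."""
    random.seed(seed)
    finals = []
    for _ in range(n_sims):
        days = random.uniform(0.0, 3.0)   # storage time (days), illustrative
        rate = random.uniform(0.0, 1.0)   # growth rate (log CFU/g per day)
        finals.append(-1.0 + days * rate) # final contamination level (log CFU/g)
    return finals

levels = simulate_growth()
mean_level = sum(levels) / len(levels)
```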

  11. Precipitous Birth

    Directory of Open Access Journals (Sweden)

    Jennifer Yee

    2017-09-01

    Audience: This scenario was developed to educate emergency medicine residents on the management of a precipitous birth in the emergency department (ED). The case is also appropriate for teaching medical students and advanced practice providers, as well as for reviewing the principles of crisis resource management, teamwork, and communication. Introduction: Patients with precipitous birth require providers to manage two patients simultaneously with limited time and resources. Crisis resource management skills will be tested once the baby is delivered, and the neonate will require assessment for potential neonatal resuscitation. Objectives: At the conclusion of the simulation session, learners will be able to manage women who have precipitous deliveries, as well as perform neonatal assessment and management. Method: This session was conducted using high-fidelity simulation, followed by a debriefing session and a lecture on precipitous birth management and neonatal evaluation.

  12. TCA precipitation.

    Science.gov (United States)

    Koontz, Laura

    2014-01-01

    Trichloroacetic acid (TCA) precipitation of proteins is commonly used to concentrate protein samples or remove contaminants, including salts and detergents, prior to downstream applications such as SDS-PAGE or 2D-gels. TCA precipitation denatures the protein, so it should not be used if the protein must remain in its folded state (e.g., if you want to measure a biochemical activity of the protein). © 2014 Elsevier Inc. All rights reserved.

  13. STRONTIUM PRECIPITATION

    Science.gov (United States)

    McKenzie, T.R.

    1960-09-13

    A process is given for improving the precipitation of strontium from an aqueous phosphoric-acid-containing solution with nickel or cobalt ferrocyanide by simultaneously precipitating strontium or calcium phosphate. This is accomplished by adding calcium or strontium nitrate to the ferrocyanide-containing solution in a quantity yielding a concentration of 0.004 to 0.03 and adjusting the pH of the solution to a value above 8.

  14. An appraisal of precipitation distribution in the high-altitude catchments of the Indus basin.

    Science.gov (United States)

    Dahri, Zakir Hussain; Ludwig, Fulco; Moors, Eddy; Ahmad, Bashir; Khan, Asif; Kabat, Pavel

    2016-04-01

    Scarcity of in-situ observations coupled with high orographic influences has prevented a comprehensive assessment of precipitation distribution in the high-altitude catchments of the Indus basin. Available data are generally fragmented and scattered among different organizations and mostly cover the valleys. Here, we combine most of the available station data with indirect precipitation estimates at the accumulation zones of major glaciers to analyse the altitudinal dependency of precipitation in the high-altitude Indus basin. The available observations signified the importance of orography in each sub-hydrological basin but could not infer an accurate distribution of precipitation with altitude. We used the Kriging with External Drift (KED) interpolation scheme with elevation as a predictor to appraise the spatiotemporal distribution of mean monthly, seasonal and annual precipitation for the period 1998-2012. The KED-based annual precipitation estimates are verified by the corresponding basin-wide observed specific runoffs, which show good agreement. In contrast to earlier studies, our estimates reveal substantially higher precipitation in most of the sub-basins, indicating two distinct rainfall maxima: the first along the southern and lowermost slopes of the Chenab, Jhelum, Indus main and Swat basins, and the second around the north-west corner of the Shyok basin in the central Karakoram. The study demonstrated that the selected gridded precipitation products covering this region are prone to significant errors. In terms of quantitative estimates, ERA-Interim is relatively close to the observations, followed by WFDEI and TRMM, while APHRODITE gives highly underestimated precipitation estimates in the study area. Basin-wide seasonal and annual correction factors introduced for each gridded dataset can be useful for lumped hydrological modelling studies, while the estimated precipitation distribution can serve as a basis for bias correction of any gridded precipitation products for the study area.
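
    Kriging with External Drift treats elevation as a covariate of the precipitation field. As a simplified, hypothetical stand-in for KED (not the full kriging system the authors solved), one can fit a linear precipitation-elevation drift and interpolate the drift residuals by inverse-distance weighting; station coordinates and values below are synthetic:

```python
import numpy as np

def drift_plus_idw(xy, z, elev, xy_new, elev_new, power=2.0):
    """Simplified stand-in for Kriging with External Drift: fit a linear
    precipitation-elevation drift, then interpolate the residuals by
    inverse-distance weighting (IDW)."""
    a, b = np.polyfit(elev, z, 1)                  # linear drift z ~ a*elev + b
    resid = z - (a * elev + b)                     # station residuals
    d = np.linalg.norm(xy_new[:, None, :] - xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                        # avoid division by zero
    w = 1.0 / d**power
    w /= w.sum(axis=1, keepdims=True)              # normalized IDW weights
    return a * elev_new + b + w @ resid            # drift + interpolated residual

# Synthetic stations: precipitation increases linearly with elevation.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
elev = np.array([100.0, 300.0, 500.0, 700.0])
precip = 0.5 * elev + 50.0
est = drift_plus_idw(xy, precip, elev, np.array([[0.5, 0.5]]), np.array([400.0]))
```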

  15. Motor unit number estimation in the quantitative assessment of severity and progression of motor unit loss in Hirayama disease.

    Science.gov (United States)

    Zheng, Chaojun; Zhu, Yu; Zhu, Dongqing; Lu, Feizhou; Xia, Xinlei; Jiang, Jianyuan; Ma, Xiaosheng

    2017-06-01

    To investigate motor unit number estimation (MUNE) as a method to quantitatively evaluate the severity and progression of motor unit loss in Hirayama disease (HD). Multipoint incremental MUNE was performed bilaterally on both abductor digiti minimi and abductor pollicis brevis muscles in 46 patients with HD and 32 controls, along with handgrip strength examination. MUNE was re-evaluated approximately 1 year after the initial examination in 17 patients with HD. The MUNE values were significantly lower in all the tested muscles in the HD group and decreased with disease duration. Follow-up examination revealed further motor unit loss in the patients with HD within approximately 1 year, particularly in those with a disease duration under 4 years. A reduction in the functioning motor units was found in patients with HD compared with that in controls, even in the early asymptomatic stages. Moreover, the motor unit loss in HD progresses gradually as the disease advances. These results have provided evidence for the application of MUNE in estimating the reduction of motor units in HD and confirming the validity of MUNE for tracking the progression of HD in a clinical setting. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
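
    The multipoint incremental MUNE value is, in essence, the maximal compound muscle action potential (CMAP) amplitude divided by the mean single motor unit potential (SMUP) amplitude estimated from incremental responses; a minimal sketch with hypothetical amplitudes (not data from the study):

```python
def multipoint_incremental_mune(cmap_max_mv, smup_increments_mv):
    """Sketch of the multipoint incremental MUNE calculation: maximal CMAP
    amplitude divided by the mean SMUP amplitude from incremental responses."""
    mean_smup = sum(smup_increments_mv) / len(smup_increments_mv)
    return cmap_max_mv / mean_smup

# Hypothetical: 10 mV maximal CMAP; SMUP increments averaging 0.1 mV.
mune = multipoint_incremental_mune(10.0, [0.08, 0.10, 0.12, 0.09, 0.11])
```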

  16. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies.

    Directory of Open Access Journals (Sweden)

    Wenchao Zhang

    2016-05-01

    The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the 'missing heritability,' which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome-wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publicly available at http://bioinfo.noble.org/PolyGenic_QTL/.
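
    The kinship-matrix step in such pipelines can be sketched as a marker-based (VanRaden-style) calculation from a genotype matrix; PEPIS's exact formulation may differ, and the genotype codes below are hypothetical:

```python
import numpy as np

def kinship_matrix(genotypes):
    """Marker-based kinship sketch (VanRaden-style): center genotype codes
    (0/1/2 copies of an allele) by twice the allele frequency, then
    K = Z Z^T / normalizer."""
    X = np.asarray(genotypes, dtype=float)   # individuals x markers
    p = X.mean(axis=0) / 2.0                 # per-marker allele frequencies
    Z = X - 2.0 * p                          # centered genotype matrix
    norm = 2.0 * np.sum(p * (1.0 - p))       # standard normalizer
    return Z @ Z.T / norm

# Hypothetical genotypes: 3 individuals x 4 markers.
K = kinship_matrix([[0, 1, 2, 1], [1, 1, 2, 0], [2, 0, 0, 1]])
```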

  17. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies.

    Science.gov (United States)

    Zhang, Wenchao; Dai, Xinbin; Wang, Qishan; Xu, Shizhong; Zhao, Patrick X

    2016-05-01

    The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the 'missing heritability,' which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome-wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publicly available at http://bioinfo.noble.org/PolyGenic_QTL/.

  18. Estimating true human and animal host source contribution in quantitative microbial source tracking using the Monte Carlo method.

    Science.gov (United States)

    Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan

    2010-09-01

    Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). Further improvement on the precision of sample processing and q
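
    The proposed correction can be sketched as Monte Carlo sampling of the uncertain assay parameters; the one-equation model and uniform distributions below are a deliberately simplified illustration, not the paper's full Law-of-Total-Probability model:

```python
import random

def true_concentration_samples(measured, p_false_pos, sensitivity, n=20000, seed=1):
    """Monte Carlo sketch of correcting a qPCR-measured marker concentration:
    assume measured = true * sensitivity + false_positive_contribution, with
    both assay parameters drawn from (illustrative) uniform distributions."""
    random.seed(seed)
    out = []
    for _ in range(n):
        sens = random.uniform(*sensitivity)   # fraction of true signal recovered
        fp = random.uniform(*p_false_pos)     # false-positive contribution
        out.append(max((measured - fp) / sens, 0.0))
    return out

# Hypothetical measurement of 100 gene copies per reaction.
samples = true_concentration_samples(100.0, p_false_pos=(0.0, 10.0),
                                     sensitivity=(0.8, 1.0))
expected = sum(samples) / len(samples)  # expected true concentration
```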

  19. Improved accuracy of quantitative parameter estimates in dynamic contrast-enhanced CT study with low temporal resolution

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Haider, Masoom A. [Department of Medical Imaging, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5, Canada and Department of Medical Imaging, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Jaffray, David A. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Yeung, Ivan W. T. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Department of Medical Physics, Stronach Regional Cancer Centre, Southlake Regional Health Centre, Newmarket, Ontario L3Y 2P9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2016-01-15

    quantitative histogram parameters of volume transfer constant [standard deviation (SD), 98th percentile, and range], rate constant (SD), blood volume fraction (mean, SD, 98th percentile, and range), and blood flow (mean, SD, median, 98th percentile, and range) for sampling intervals between 10 and 15 s. Conclusions: The proposed method of PCA filtering combined with the AIF estimation technique allows low frequency scanning for DCE-CT study to reduce patient radiation dose. The results indicate that the method is useful in pixel-by-pixel kinetic analysis of DCE-CT data for patients with cervical cancer.

  20. Towards a quantitative, measurement-based estimate of the uncertainty in photon mass attenuation coefficients at radiation therapy energies

    Science.gov (United States)

    Ali, E. S. M.; Spencer, B.; McEwen, M. R.; Rogers, D. W. O.

    2015-02-01

    In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy—i.e. 100 keV (orthovoltage) to 25 MeV—using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990-6003) for 10-30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ~0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative ‘envelope of uncertainty’ of the order of 1-2%, as given by Hubbell (1999 Phys. Med. Biol. 44 R1-22).
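
    The optimum energy-independent scaling factor described above can be illustrated as a weighted least-squares fit of measured to calculated values; the closed-form minimizer and the data below are a hedged sketch, not the paper's exact procedure:

```python
import numpy as np

def optimal_scaling_factor(measured, calculated, sigma):
    """Weighted least-squares, energy-independent scaling factor s that
    minimizes sum(((measured - s*calculated)/sigma)^2); closed-form solution
    s = sum(m*c/sigma^2) / sum(c^2/sigma^2)."""
    m = np.asarray(measured)
    c = np.asarray(calculated)
    s2 = np.asarray(sigma) ** 2
    return np.sum(m * c / s2) / np.sum(c * c / s2)

# Hypothetical measured/calculated ratios with ~0.5% uncertainty.
measured = np.array([1.02, 0.99, 1.01])
calculated = np.array([1.00, 1.00, 1.00])
s = optimal_scaling_factor(measured, calculated, sigma=np.array([0.005, 0.005, 0.005]))
```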

  1. Merging Satellite Precipitation Products for Improved Streamflow Simulations

    Science.gov (United States)

    Maggioni, V.; Massari, C.; Barbetta, S.; Camici, S.; Brocca, L.

    2017-12-01

    Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this new bottom-up approach makes use of two consecutive soil moisture measurements to obtain an estimate of the precipitation fallen within the interval between two satellite overpasses. As a result, the nature of the measurement is different and complementary to that of classical precipitation products and could provide a valid perspective to substitute or improve current rainfall estimates. Therefore, we propose to merge SM2RAIN and the widely used TMPA 3B42RT product across Italy for a 6-year period (2010-2015) at a daily/0.25° temporal/spatial scale. Two conceptually different merging techniques are compared to each other and evaluated in terms of different statistical metrics, including hit bias, threat score, false alarm rates, and missed rainfall volumes. The first is based on the maximization of the temporal correlation with a reference dataset, while the second is based on a Bayesian approach, which provides a probabilistic satellite precipitation estimate derived from the joint probability distribution of observations and satellite estimates. The merged precipitation products show better performance than the parental satellite-based products in terms of categorical
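
    The first merging technique (maximizing temporal correlation with a reference dataset) can be sketched as a one-dimensional search over a convex combination weight; the synthetic rainfall series and weight grid below are illustrative, not the paper's implementation:

```python
import numpy as np

def best_merge_weight(a, b, reference, n_grid=101):
    """Sketch of correlation-maximization merging: choose w in [0, 1] so that
    merged = w*a + (1-w)*b maximizes Pearson correlation with a reference
    rainfall series (simple grid search)."""
    best_w, best_r = 0.0, -np.inf
    for w in np.linspace(0.0, 1.0, n_grid):
        merged = w * a + (1.0 - w) * b
        r = np.corrcoef(merged, reference)[0, 1]
        if r > best_r:
            best_w, best_r = w, r
    return best_w, best_r

# Synthetic daily rainfall "truth" and two noisy products.
rng = np.random.default_rng(0)
truth = rng.gamma(2.0, 2.0, size=365)
a = truth + rng.normal(0.0, 0.5, size=365)   # low-noise product
b = truth + rng.normal(0.0, 3.0, size=365)   # high-noise product
w, r = best_merge_weight(a, b, truth)        # weight should favor product a
```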

  2. Quantitative microbial risk assessment to estimate the health risk from exposure to noroviruses in polluted surface water in South Africa.

    Science.gov (United States)

    Van Abel, Nicole; Mans, Janet; Taylor, Maureen B

    2017-10-01

    This study assessed the risks posed by noroviruses (NoVs) in surface water used for drinking, domestic, and recreational purposes in South Africa (SA), using a quantitative microbial risk assessment (QMRA) methodology that took a probabilistic approach coupling an exposure assessment with four dose-response models to account for uncertainty. Water samples from three rivers were found to be contaminated with NoV GI (80-1,900 gc/L) and GII (420-9,760 gc/L), leading to risk estimates that were lower for GI than GII. The volume of water consumed and the probabilities of infection were lower for domestic (2.91 × 10⁻⁸ to 5.19 × 10⁻¹) than drinking water exposures (1.04 × 10⁻⁵ to 7.24 × 10⁻¹). The annual probabilities of illness varied depending on the type of recreational water exposure, with boating (3.91 × 10⁻⁶ to 5.43 × 10⁻¹) and swimming (6.20 × 10⁻⁶ to 6.42 × 10⁻¹) being slightly greater than playing next to/in the river (5.30 × 10⁻⁷ to 5.48 × 10⁻¹). The QMRA was sensitive to the choice of dose-response model. The risk of NoV infection or illness from contaminated surface water is extremely high in SA, especially for lower socioeconomic individuals, but is similar to reported risks from limited international studies.
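
    A standard QMRA step behind annual probabilities such as those above is converting a per-exposure infection risk into an annual risk under an assumption of independent exposures; the per-event risk and exposure frequency below are hypothetical:

```python
def annual_risk(per_event_risk, exposures_per_year):
    """Convert a per-exposure probability of infection into an annual
    probability, assuming independent exposure events (standard QMRA step)."""
    return 1.0 - (1.0 - per_event_risk) ** exposures_per_year

# Hypothetical: per-event risk of 1e-3 from swimming, 50 events per year.
p_annual = annual_risk(1e-3, 50)
```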

  3. Assessment of estimated precipitation fields for distributed hydrologic modeling

    Directory of Open Access Journals (Sweden)

    Adriano Rolim da Paz

    2011-03-01

    There is an increasing availability and application of precipitation fields estimated by remote sensing or calculated by atmospheric circulation models, which are frequently used as input for distributed hydrological models. The spatial distribution of the estimated precipitation fields is extremely important and must be verified against observed precipitation fields. This paper proposes a method for spatiotemporal comparison between observed and estimated precipitation fields based on a pixel-by-pixel comparison and on contingency tables. Two distinct approaches are used: (i) the spatially integrated analysis yields performance indices that portray how well the estimated precipitation field reproduces the occurrence of observed precipitation over time; (ii) the temporally integrated analysis produces maps of the performance indices that summarize the skill of the rain-occurrence estimates at each pixel. As an application example, the precipitation estimated in the climatology of the CPTEC/COLA global atmospheric circulation model is analyzed over the Rio Grande basin. Using five performance indices, the proposed method identified seasonal variations and spatial patterns in the performance of the precipitation estimates relative to precipitation fields derived from rain gauge observations.
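
    At each pixel, the contingency-table comparison reduces to a 2 x 2 table of estimated versus observed rain occurrence. A minimal sketch of common categorical verification scores follows; the paper's five indices are not specified here, so the usual scores are shown with hypothetical counts:

```python
def contingency_scores(hits, false_alarms, misses, correct_negatives):
    """Rain/no-rain verification scores from a 2x2 contingency table:
    hits (estimated and observed), false alarms (estimated only),
    misses (observed only), correct negatives (neither)."""
    total = hits + false_alarms + misses + correct_negatives
    return {
        "POD": hits / (hits + misses),                    # probability of detection
        "FAR": false_alarms / (hits + false_alarms),      # false alarm ratio
        "BIAS": (hits + false_alarms) / (hits + misses),  # frequency bias
        "CSI": hits / (hits + misses + false_alarms),     # critical success index
        "ACC": (hits + correct_negatives) / total,        # overall accuracy
    }

# Hypothetical counts accumulated over all pixels and time steps.
scores = contingency_scores(hits=60, false_alarms=20, misses=15,
                            correct_negatives=205)
```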

  4. Precipitation Matters

    Science.gov (United States)

    McDuffie, Thomas

    2007-01-01

    Although weather, including its role in the water cycle, is included in most elementary science programs, any further examination of raindrops and snowflakes is rare. Together rain and snow make up most of the precipitation that replenishes Earth's life-sustaining fresh water supply. When viewed individually, raindrops and snowflakes are quite…

  5. Seasonal to Interannual Variability of Satellite-Based Precipitation Estimates in the Pacific Ocean Associated with ENSO from 1998 to 2014

    Directory of Open Access Journals (Sweden)

    Xueyan Hou

    2016-10-01

    Full Text Available Based on a widely used satellite precipitation product (TRMM Multi-satellite Precipitation Analysis 3B43), we analyzed the spatiotemporal variability of precipitation over the Pacific Ocean for 1998–2014 at seasonal and interannual timescales, separately, using the conventional empirical orthogonal function (EOF) and investigated the seasonal patterns associated with El Niño–Southern Oscillation (ENSO) cycles using season-reliant empirical orthogonal function (SEOF) analysis. Lagged correlation analysis was also applied to derive the lead/lag correlations of the first two SEOF modes for precipitation with Pacific Decadal Oscillation (PDO) and two types of El Niño, i.e., central Pacific (CP) El Niño and eastern Pacific (EP) El Niño. We found that: (1) The first two seasonal EOF modes for precipitation represent the annual cycle of precipitation variations for the Pacific Ocean and the first interannual EOF mode shows the spatiotemporal variability associated with ENSO; (2) The first SEOF mode for precipitation is simultaneously associated with the development of El Niño and most likely coincides with CP El Niño. The second SEOF mode lagged behind ENSO by one year and is associated with post-El Niño years. PDO modulates precipitation variability significantly only when ENSO occurs, by strengthening and prolonging the impacts of ENSO; (3) Seasonally evolving patterns of the first two SEOF modes represent the consecutive precipitation patterns associated with the entire development of EP El Niño and the following recovery year. The most significant variation occurs over the tropical Pacific, especially in the Intertropical Convergence Zone (ITCZ) and South Pacific Convergence Zone (SPCZ); (4) Dry conditions in the western basin of the warm pool and wet conditions along the ITCZ and SPCZ bands during the mature phase of El Niño are associated with warm sea surface temperatures in the central tropical Pacific, and a subtropical anticyclone dominating
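The conventional EOF analysis used in this record is, computationally, a singular value decomposition of the space-time anomaly matrix. A minimal sketch on synthetic data (the dimensions and random field below are purely illustrative):

```python
import numpy as np

# EOF decomposition sketch: rows = time steps, columns = grid points.
# The EOFs are the spatial patterns; the principal components (PCs)
# are their time series; squared singular values give the variance
# explained by each mode.

rng = np.random.default_rng(0)
field = rng.standard_normal((60, 100))   # e.g. 60 months x 100 pixels

anomalies = field - field.mean(axis=0)   # remove the time mean per pixel
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)

eofs = vt                                # spatial patterns (mode x pixel)
pcs = u * s                              # PC time series (time x mode)
explained = s**2 / np.sum(s**2)          # fraction of variance per mode

# modes come out ordered by explained variance, and PCs x EOFs
# reconstruct the anomaly field exactly
assert np.all(np.diff(explained) <= 1e-12)
```

The SEOF variant applies the same machinery to a matrix whose rows stack the four seasons of each year, so that a single mode captures a seasonally evolving pattern.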

  6. Rapid non-destructive quantitative estimation of urania/ thoria in mixed thorium uranium di-oxide pellets by high-resolution gamma-ray spectrometry

    International Nuclear Information System (INIS)

    Shriwastwa, B.B.; Kumar, Anil; Raghunath, B.; Nair, M.R.; Abani, M.C.; Ramachandran, R.; Majumdar, S.; Ghosh, J.K.

    2001-01-01

    A non-destructive technique using high-resolution gamma-ray spectrometry has been standardised for quantitative estimation of uranium/thorium in mixed (ThO2-UO2) fuel pellets of varying composition. Four gamma energies were selected; two each from the uranium and thorium series, and the time of counting has been optimised. This technique can be used for rapid estimation of U/Th percentage in a large number of mixed fuel pellets from a production campaign.

  7. Bio-precipitation of uranium by two bacterial isolates recovered from extreme environments as estimated by potentiometric titration, TEM and X-ray absorption spectroscopic analyses.

    Science.gov (United States)

    Merroun, Mohamed L; Nedelkova, Marta; Ojeda, Jesus J; Reitz, Thomas; Fernández, Margarita López; Arias, José M; Romero-González, María; Selenska-Pobell, Sonja

    2011-12-15

    This work describes the mechanisms of uranium biomineralization at acidic conditions by Bacillus sphaericus JG-7B and Sphingomonas sp. S15-S1 both recovered from extreme environments. The U-bacterial interaction experiments were performed at low pH values (2.0-4.5) where the uranium aqueous speciation is dominated by highly mobile uranyl ions. X-ray absorption spectroscopy (XAS) showed that the cells of the studied strains precipitated uranium at pH 3.0 and 4.5 as a uranium phosphate mineral phase belonging to the meta-autunite group. Transmission electron microscopic (TEM) analyses showed strain-specific localization of the uranium precipitates. In the case of B. sphaericus JG-7B, the U(VI) precipitate was bound to the cell wall. Whereas for Sphingomonas sp. S15-S1, the U(VI) precipitates were observed both on the cell surface and intracellularly. The observed U(VI) biomineralization was associated with the activity of indigenous acid phosphatase detected at these pH values in the absence of an organic phosphate substrate. The biomineralization of uranium was not observed at pH 2.0, and U(VI) formed complexes with organophosphate ligands from the cells. This study increases the number of bacterial strains that have been demonstrated to precipitate uranium phosphates at acidic conditions via the activity of acid phosphatase. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Intra-rater reliability of motor unit number estimation and quantitative motor unit analysis in subjects with amyotrophic lateral sclerosis.

    Science.gov (United States)

    Ives, Colleen T; Doherty, Timothy J

    2014-01-01

    To assess the intra-rater reliability of decomposition-enhanced spike-triggered averaging (DE-STA) motor unit number estimation (MUNE) and quantitative motor unit potential analysis in the upper trapezius (UT) and biceps brachii (BB) of subjects with amyotrophic lateral sclerosis (ALS) and to compare the results from the UT to control data. Patients diagnosed with clinically probable or definite ALS completed the experimental protocol twice with the same evaluator for the UT (n=10) and BB (n=9). Intra-rater reliability for the UT was good for the maximum compound muscle action potential (CMAP) (ICC=0.88), mean surface-detected motor unit potential (S-MUP) (ICC=0.87) and MUNE (ICC=0.88), and for the BB was moderate for maximum CMAP (ICC=0.61), and excellent for mean S-MUP (ICC=0.94) and MUNE (ICC=0.93). A significant difference between tests was found for UT MUNE. Comparing subjects with ALS to control subjects, UT maximum CMAP (p<0.01) and MUNE (p<0.001) values were significantly lower, and mean S-MUP values significantly greater (p<0.05) in subjects with ALS. This study has demonstrated the ability of the DE-STA MUNE technique to collect highly reliable data from two separate muscle groups and to detect the underlying pathophysiology of the disease. This was the first study to examine the reliability of this technique in subjects with ALS, and demonstrates its potential for future use as an outcome measure in ALS clinical trials and studies of ALS disease severity and natural history. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  9. Data Analysis of GPM Constellation Satellites-IMERG and ERA-Interim precipitation products over West of Iran

    Science.gov (United States)

    Sharifi, Ehsan; Steinacker, Reinhold; Saghafian, Bahram

    2016-04-01

    Precipitation is a critical component of the Earth's hydrological cycle. The primary requirement in precipitation measurement is to know where and how much precipitation is falling at any given time. Especially in data-sparse regions with insufficient radar coverage, satellite information can provide a spatial and temporal context. Nonetheless, evaluation of satellite precipitation is essential prior to operational use. This is why many previous studies are devoted to the validation of satellite estimation. Accurate quantitative precipitation estimation over mountainous basins is of great importance because of their susceptibility to hazards. In situ observations over mountainous areas are mostly limited, but currently available satellite precipitation products can potentially provide the precipitation estimation needed for meteorological and hydrological applications. One of the newest blended methods, using multiple satellites and sensors, has been developed for estimating global precipitation. The considered data set, known as Integrated Multi-satellitE Retrievals (IMERG) for GPM (Global Precipitation Measurement), is routinely produced by the GPM constellation satellites. Moreover, recent efforts have been put into the improvement of the precipitation products derived from reanalysis systems, which has led to significant progress. One of the best and most widely used models is developed by the European Centre for Medium-Range Weather Forecasts (ECMWF). They have produced global reanalysis daily precipitation, known as ERA-Interim. This study has evaluated one year of precipitation data from the GPM-IMERG and ERA-Interim reanalysis daily time series over West of Iran. Both IMERG and ERA-Interim underestimate the observed values, although IMERG underestimated only slightly and performed better when precipitation was greater than 10 mm.
Furthermore, with respect to evaluation of probability of detection (POD), threat score (TS), false alarm ratio (FAR) and probability

  10. Bio-precipitation of uranium by two bacterial isolates recovered from extreme environments as estimated by potentiometric titration, TEM and X-ray absorption spectroscopic analyses

    Energy Technology Data Exchange (ETDEWEB)

    Merroun, Mohamed L., E-mail: merroun@ugr.es [Institute of Radiochemistry, Helmholtz Centre Dresden-Rossendorf, Dresden (Germany); Departamento de Microbiologia, Universidad de Granada, Campus Fuentenueva s/n 18071, Granada (Spain); Nedelkova, Marta [Institute of Radiochemistry, Helmholtz Centre Dresden-Rossendorf, Dresden (Germany); Ojeda, Jesus J. [Cell-Mineral Interface Research Programme, Kroto Research Institute, University of Sheffield, Broad Lane, Sheffield S3 7HQ (United Kingdom); Experimental Techniques Centre, Brunel University, Uxbridge, Middlesex UB8 3PH (United Kingdom); Reitz, Thomas [Institute of Radiochemistry, Helmholtz Centre Dresden-Rossendorf, Dresden (Germany); Fernandez, Margarita Lopez; Arias, Jose M. [Departamento de Microbiologia, Universidad de Granada, Campus Fuentenueva s/n 18071, Granada (Spain); Romero-Gonzalez, Maria [Cell-Mineral Interface Research Programme, Kroto Research Institute, University of Sheffield, Broad Lane, Sheffield S3 7HQ (United Kingdom); Selenska-Pobell, Sonja [Institute of Radiochemistry, Helmholtz Centre Dresden-Rossendorf, Dresden (Germany)

    2011-12-15

    Highlights: ► Precipitation of uranium as U phosphates by natural bacterial isolates. ► The uranium biomineralization involves the activity of acidic phosphatase. ► Uranium bioremediation could be achieved via the biomineralization of U(VI) in phosphate minerals. - Abstract: This work describes the mechanisms of uranium biomineralization at acidic conditions by Bacillus sphaericus JG-7B and Sphingomonas sp. S15-S1 both recovered from extreme environments. The U-bacterial interaction experiments were performed at low pH values (2.0-4.5) where the uranium aqueous speciation is dominated by highly mobile uranyl ions. X-ray absorption spectroscopy (XAS) showed that the cells of the studied strains precipitated uranium at pH 3.0 and 4.5 as a uranium phosphate mineral phase belonging to the meta-autunite group. Transmission electron microscopic (TEM) analyses showed strain-specific localization of the uranium precipitates. In the case of B. sphaericus JG-7B, the U(VI) precipitate was bound to the cell wall. Whereas for Sphingomonas sp. S15-S1, the U(VI) precipitates were observed both on the cell surface and intracellularly. The observed U(VI) biomineralization was associated with the activity of indigenous acid phosphatase detected at these pH values in the absence of an organic phosphate substrate. The biomineralization of uranium was not observed at pH 2.0, and U(VI) formed complexes with organophosphate ligands from the cells. This study increases the number of bacterial strains that have been demonstrated to precipitate uranium phosphates at acidic conditions via the activity of acid phosphatase.

  11. The Possibility of Making a Quantitative Study of the Precipitin Reaction by Gamma-Radioactive Tracers; Possibilite d'une Etude Quantitative de la Reaction de Precipitation par Marquage a l'Aide d'Emetteurs Gamma; Vozmozhnost' kolichestvennogo opredeleniya reaktsii osazhdeniya s pomoshch'yu gamma-radioaktivnykh indikatorov; Estudio Cuantitativo de la Reaccion de Precipitacion con Ayuda de Indicadores Gamma

    Energy Technology Data Exchange (ETDEWEB)

    Bonev, L.; Todorov, S.; Robev, S. [Nauchno-Issledovatel' skij Institut Radiologii i Radiacionnoj Bezopasnosti, Sofija (Bulgaria)

    1965-10-15

    The paper presents the first results of the quantitative determination of the precipitin reaction (formation of an antigen-antibody complex) by labelling the precipitating components with gamma-radioactive tracers which do not chemically interact with albuminous molecules. As tracers it is possible to use chrome-manganese and nickel-copper compounds, whose tendency to hydrolyze permits the fixation of the radioactive tracer on the antigen and antibody respectively. The radioactivity of the components is determined by a multichannel pulse-height analyser. The results obtained show that the precipitation curve, plotted on the basis of radiometric data, closely corresponds to the curve plotted by the well-known quantitative methods used to determine albumin. The paper discusses the possibilities of using the method described. (author)

  12. Utilizing the Vertical Variability of Precipitation to Improve Radar QPE

    Science.gov (United States)

    Gatlin, Patrick N.; Petersen, Walter A.

    2016-01-01

    Characteristics of the melting layer and raindrop size distribution can be exploited to further improve radar quantitative precipitation estimation (QPE). Using dual-polarimetric radar and disdrometers, we found that the characteristic size of raindrops reaching the ground in stratiform precipitation often varies linearly with the depth of the melting layer. As a result, a radar rainfall estimator was formulated using Dm (the mass-weighted mean drop diameter) that can be employed by polarimetric as well as dual-frequency radars (e.g., space-based radars such as the GPM DPR) to lower the bias and uncertainty of conventional single-parameter radar rainfall estimates by as much as 20%. Polarimetric radar also suffers from issues associated with sampling the vertical distribution of precipitation. Hence, we characterized the effect of the vertical profile of polarimetric parameters (VP3), a radar manifestation of the evolving size and shape of hydrometeors as they fall to the ground, on dual-polarimetric rainfall estimation. The VP3 revealed that the profile of ZDR in stratiform rainfall can bias dual-polarimetric rainfall estimators by as much as 50%, even after correction for the vertical profile of reflectivity (VPR). The VP3 correction technique that we developed can improve operational dual-polarimetric rainfall estimates by 13% beyond that offered by a VPR correction alone.
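A conventional single-parameter rainfall estimate of the kind this record improves upon is the Z-R power law. A minimal sketch of the inversion, using the familiar WSR-88D convective relationship Z = 300 R^1.4 (the record itself does not specify coefficients):

```python
def rain_rate_from_dbz(dbz, a=300.0, b=1.4):
    """Invert the power law Z = a * R**b for rain rate R (mm/h).
    a=300, b=1.4 is the WSR-88D default convective Z-R relationship;
    other regimes (stratiform, tropical) use different coefficients."""
    z = 10.0 ** (dbz / 10.0)        # reflectivity factor Z in mm^6/m^3
    return (z / a) ** (1.0 / b)
```

Swapping (a, b) per precipitation type is exactly what classification-based QPE schemes do before applying this conversion.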

  13. Use of the tritium thermonuclear peak in the deep unsaturated zone for quantitative estimate of aquifer recharge under semi-arid conditions: first application in Sahel

    International Nuclear Information System (INIS)

    Cheikh Becaye Gaye; Aranyossy, J.F.

    1992-01-01

    The location of the bomb tritium signal at 20 and 12 m depth in the unsaturated sand dunes in the semi-arid part of North Senegal leads to a quantitative estimate of the effective infiltration of 22 and 26 mm/yr. These figures correspond respectively to 6.5 and 8% of the total precipitation since 1963. The tritium content distribution in interstitial water is modelled by convolution of the analytical solution of the dispersion equation. Best fitting of the complete 12 m depth tritium peak is obtained with a dispersion coefficient of 0.03 m2/yr
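The order of magnitude of such a recharge estimate can be checked with a simple piston-flow calculation. The volumetric water content and elapsed time below are illustrative assumptions (the record reports neither), chosen only to show that a 12 m peak displacement is consistent with the reported ~26 mm/yr:

```python
def recharge_rate_mm_per_yr(peak_depth_m, volumetric_water_content, years_since_1963):
    """Piston-flow estimate: the bomb-tritium peak has moved peak_depth_m
    below the surface since the 1963 thermonuclear fallout maximum; the
    water column it carried is depth * theta, and dividing by the elapsed
    time gives the mean annual recharge in mm/yr."""
    return peak_depth_m * 1000.0 * volumetric_water_content / years_since_1963

# e.g. peak at 12 m, assumed theta = 0.06, ~28 years after 1963
print(round(recharge_rate_mm_per_yr(12, 0.06, 28), 1))   # ~25.7 mm/yr
```

The dispersion-equation convolution in the record refines this piston-flow figure by also fitting the width of the tritium peak.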

  14. Quantitative Estimation of Coastal Changes along Selected Locations of Karnataka, India: A GIS and Remote Sensing Approach

    Digital Repository Service at National Institute of Oceanography (India)

    Vinayaraj, P.; Johnson, G.; Dora, G.U.; Philip, C.S.; SanilKumar, V.; Gowthaman, R.

    Qualitative and quantitative studies on changes of coastal geomorphology and shoreline of Karnataka, India have been carried out using toposheets of Survey of India and satellite imageries (IRS-P6 and IRS-1D). Changes during 30 years period...

  15. Towards a Quantitative Use of Satellite Remote Sensing in Crop Growth Models for Large Scale Agricultural Production Estimate (Invited)

    Science.gov (United States)

    Defourny, P.

    2013-12-01

    such as the Green Area Index (GAI), fAPAR and fcover, usually retrieved from MODIS, MERIS or SPOT-Vegetation, describe the quality of green vegetation development. The GLOBAM (Belgium) and EU FP-7 MOCCCASIN (Russia) projects improved the standard products and were demonstrated at large scale. The GAI retrieved from MODIS time series using a purity index criterion successfully depicted the inter-annual variability. Furthermore, the quantitative assimilation of these GAI time series into a crop growth model improved the yield estimate over years. These results showed that GAI assimilation works best at the district or provincial level. In the context of the GEO Ag., the Joint Experiment of Crop Assessment and Monitoring (JECAM) was designed to enable the global agricultural monitoring community to compare such methods and results over a variety of regional cropping systems. For a network of test sites around the world, satellite and field measurements are currently collected and will be made available for collaborative effort. This experiment should facilitate international standards for data products and reporting, eventually supporting the development of a global system of systems for agricultural crop assessment and monitoring.

  16. Next-Generation Satellite Precipitation Products for Understanding Global and Regional Water Variability

    Science.gov (United States)

    Hou, Arthur Y.

    2011-01-01

    A major challenge in understanding the space-time variability of continental water fluxes is the lack of accurate precipitation estimates over complex terrains. While satellite precipitation observations can be used to complement ground-based data to obtain improved estimates, space-based and ground-based estimates come with their own sets of uncertainties, which must be understood and characterized. Quantitative estimation of uncertainties in these products also provides a necessary foundation for merging satellite and ground-based precipitation measurements within a rigorous statistical framework. Global Precipitation Measurement (GPM) is an international satellite mission that will provide next-generation global precipitation data products for research and applications. It consists of a constellation of microwave sensors provided by NASA, JAXA, CNES, ISRO, EUMETSAT, DOD, NOAA, NPP, and JPSS. At the heart of the mission is the GPM Core Observatory provided by NASA and JAXA to be launched in 2013. The GPM Core, which will carry the first space-borne dual-frequency radar and a state-of-the-art multi-frequency radiometer, is designed to set new reference standards for precipitation measurements from space, which can then be used to unify and refine precipitation retrievals from all constellation sensors. The next-generation constellation-based satellite precipitation estimates will be characterized by intercalibrated radiometric measurements and physical-based retrievals using a common observation-derived hydrometeor database. 
For pre-launch algorithm development and post-launch product evaluation, NASA supports an extensive ground validation (GV) program in cooperation with domestic and international partners to improve (1) physics of remote-sensing algorithms through a series of focused field campaigns, (2) characterization of uncertainties in satellite and ground-based precipitation products over selected GV testbeds, and (3) modeling of atmospheric processes and

  17. Comparison of region-of-influence methods for estimating high quantiles of precipitation in a dense dataset in the Czech Republic

    Czech Academy of Sciences Publication Activity Database

    Gaál, Ladislav; Kyselý, Jan

    2009-01-01

    Roč. 13, č. 11 (2009), s. 2203-2219 ISSN 1027-5606 R&D Projects: GA AV ČR KJB300420801 Institutional research plan: CEZ:AV0Z30420517 Keywords: heavy precipitation * extreme value analysis * region-of-influence method * central Europe Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 2.462, year: 2009 http://www.hydrol-earth-syst-sci.net/13/2203/2009/

  18. Probability estimates of heavy precipitation events in a flood-prone central-European region with enhanced influence of Mediterranean cyclones

    Czech Academy of Sciences Publication Activity Database

    Kyselý, Jan; Picek, J.

    2007-01-01

    Roč. 12, - (2007), s. 43-50 ISSN 1680-7340 R&D Projects: GA AV ČR KJB300420601 Institutional research plan: CEZ:AV0Z30420517 Keywords: extreme precipitation event * regional frequency analysis * Generalized Extreme Value distribution * Generalized Logistic distribution * central Europe * Czech Republic Subject RIV: DG - Atmosphere Sciences, Meteorology www.adv-geosci.net/12/43/2007/

  19. The effects of dominance, regular inbreeding and sampling design on Q(ST), an estimator of population differentiation for quantitative traits.

    Science.gov (United States)

    Goudet, Jérôme; Büchi, Lucie

    2006-02-01

    To test whether quantitative traits are under directional or homogenizing selection, it is common practice to compare population differentiation estimates at molecular markers (F(ST)) and quantitative traits (Q(ST)). If the trait is neutral and its determinism is additive, then theory predicts that Q(ST) = F(ST), while Q(ST) > F(ST) is predicted under directional selection for different local optima, and Q(ST) < F(ST) under homogenizing selection. We compared different sampling designs and find that it is always best to sample many populations (>20) with few families (five) rather than few populations with many families. Provided that estimates of Q(ST) are derived from individuals originating from many populations, we conclude that the pattern Q(ST) > F(ST), and hence the inference of directional selection for different local optima, is robust to the effect of nonadditive gene actions.
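Under the neutral additive model this record assumes, Q(ST) has a simple closed form in terms of variance components. A minimal sketch (the factor of 2 on the within-population term assumes random mating, i.e. no inbreeding; the record's point is precisely that dominance and inbreeding perturb this):

```python
def q_st(var_between, var_within):
    """Q_ST for a neutral additive trait: the between-population additive
    variance over the total, with the within-population term doubled
    because only half the additive variance is expressed between
    randomly mating populations."""
    return var_between / (var_between + 2.0 * var_within)

# equal variance components give the neutral-additive baseline of 1/3
print(q_st(1.0, 1.0))
```

Comparing this estimate against F(ST) from molecular markers is the test described in the abstract: Q_ST above F_ST suggests directional selection toward different local optima.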

  20. Rapid non-destructive quantitative estimation of urania/ thoria in mixed thorium uranium di-oxide pellets by high-resolution gamma-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Shriwastwa, B.B.; Kumar, Anil; Raghunath, B.; Nair, M.R.; Abani, M.C.; Ramachandran, R.; Majumdar, S.; Ghosh, J.K

    2001-06-01

    A non-destructive technique using high-resolution gamma-ray spectrometry has been standardised for quantitative estimation of uranium/thorium in mixed (ThO{sub 2}-UO{sub 2}) fuel pellets of varying composition. Four gamma energies were selected; two each from the uranium and thorium series and the time of counting has been optimised. This technique can be used for rapid estimation of U/Th percentage in a large number of mixed fuel pellets from a production campaign.

  1. Visually estimated ejection fraction by two dimensional and triplane echocardiography is closely correlated with quantitative ejection fraction by real-time three dimensional echocardiography

    Directory of Open Access Journals (Sweden)

    Manouras Aristomenis

    2009-08-01

    Full Text Available Abstract Background Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpson's (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, the feasibility of visual assessment (eyeballing) is superior. There are to date only sparse data comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE) using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Methods Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. Results There was an excellent correlation between eyeballing EF by 2D and TP vs 3DE (r = 0.91 and 0.95, respectively) without any significant bias (-0.5 ± 3.7% and -0.2 ± 2.9%, respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Conclusion Visual estimation of LVEF using both 2D and TP by an experienced reader correlates well with quantitative EF determined by RT3DE. There is an apparent trend towards a smaller variability using TP in comparison to 2D; this was, however, not statistically significant.

  2. Visually estimated ejection fraction by two dimensional and triplane echocardiography is closely correlated with quantitative ejection fraction by real-time three dimensional echocardiography.

    Science.gov (United States)

    Shahgaldi, Kambiz; Gudmundsson, Petri; Manouras, Aristomenis; Brodin, Lars-Ake; Winter, Reidar

    2009-08-25

    Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpson's (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, the feasibility of visual assessment (eyeballing) is superior. There are to date only sparse data comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE) using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. There was an excellent correlation between eyeballing EF by 2D and TP vs 3DE (r = 0.91 and 0.95, respectively) without any significant bias (-0.5 +/- 3.7% and -0.2 +/- 2.9%, respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Visual estimation of LVEF using both 2D and TP by an experienced reader correlates well with quantitative EF determined by RT3DE. There is an apparent trend towards a smaller variability using TP in comparison to 2D; this was, however, not statistically significant.
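The bias and correlation figures reported here are ordinary paired-difference and Pearson statistics. A sketch with made-up EF readings (the data are illustrative, not from the study):

```python
import math

def bias_and_correlation(method_a, method_b):
    """Mean difference (bias) +/- SD of the differences, and Pearson r,
    between two sets of paired EF readings: the summary statistics
    reported when comparing eyeballing EF against a reference method."""
    n = len(method_a)
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    mean_a = sum(method_a) / n
    mean_b = sum(method_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(method_a, method_b))
    r = cov / math.sqrt(sum((a - mean_a) ** 2 for a in method_a)
                        * sum((b - mean_b) ** 2 for b in method_b))
    return bias, sd, r

# hypothetical paired EF readings (%) from two methods
eyeball = [45, 52, 60, 38, 55]
rt3de = [44, 54, 59, 40, 56]
print(bias_and_correlation(eyeball, rt3de))
```

A bias near zero with a small SD, plus a high r, is the pattern the abstract describes for eyeballing versus RT3DE.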

  3. Climate Prediction Center(CPC)Daily GOES Precipitation Index (GPI)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — GOES Precipitation Index (GPI) is a precipitation estimation algorithm. The GPI technique estimates tropical rainfall using cloud-top temperature as the sole...

  4. Validation of Satellite Precipitation (trmm 3B43) in Ecuadorian Coastal Plains, Andean Highlands and Amazonian Rainforest

    Science.gov (United States)

    Ballari, D.; Castro, E.; Campozano, L.

    2016-06-01

    Precipitation monitoring is of utmost importance for water resource management. However, in regions of complex terrain such as Ecuador, the high spatio-temporal precipitation variability and the scarcity of rain gauges make it difficult to obtain accurate estimations of precipitation. Remotely sensed precipitation estimates, such as the Multi-satellite Precipitation Analysis TRMM, can cope with this problem after a validation process, which must be representative in space and time. In this work we validate monthly estimates from TRMM 3B43 satellite precipitation (0.25° x 0.25° resolution), by using ground data from 14 rain gauges in Ecuador. The stations are located in the 3 most differentiated regions of the country: the Pacific coastal plains, the Andean highlands, and the Amazon rainforest. Time series, between 1998 - 2010, of imagery and rain gauges were compared using statistical error metrics such as bias, root mean square error, and Pearson correlation; and with detection indexes such as probability of detection, equitable threat score, false alarm rate and frequency bias index. The results showed that precipitation seasonality is well represented and TRMM 3B43 acceptably estimates the monthly precipitation in the three regions of the country. According to both statistical error metrics and detection indexes, the coastal and Amazon regions are better estimated quantitatively than the Andean highlands. Additionally, it was found that there are better estimations for light precipitation rates. The present validation of TRMM 3B43 provides important results to support further studies on calibration and bias correction of precipitation in ungauged watershed basins.
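Two of the detection indexes named in this record, the equitable threat score and the frequency bias index, follow directly from the 2x2 contingency counts of satellite versus gauge rain detection. A minimal sketch (the counts below are illustrative):

```python
def ets_and_fbi(hits, false_alarms, misses, correct_negatives):
    """Equitable threat score (ETS) and frequency bias index (FBI) from a
    2x2 contingency table. ETS discounts hits expected by chance; FBI > 1
    means the satellite detects rain more often than the gauges do."""
    total = hits + false_alarms + misses + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / total
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    fbi = (hits + false_alarms) / (hits + misses)
    return ets, fbi

# illustrative counts over a month of satellite/gauge comparisons
print(ets_and_fbi(40, 10, 10, 40))
```

ETS = 1 is a perfect score and ETS = 0 is no better than chance, which is why it is preferred over the plain threat score when climatological rain frequency differs between regions.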

  5. Quantitative estimation of the influence of external vibrations on the measurement error of a coriolis mass-flow meter

    NARCIS (Netherlands)

    van de Ridder, Bert; Hakvoort, Wouter; van Dijk, Johannes; Lötters, Joost Conrad; de Boer, Andries; Dimitrovova, Z.; de Almeida, J.R.

    2013-01-01

    In this paper the quantitative influence of external vibrations on the measurement value of a Coriolis Mass-Flow Meter for low flows is investigated, with the eventual goal to reduce the influence of vibrations. Model results are compared with experimental results to improve the knowledge on how

  6. Quantitative coronary angiography in the estimation of the functional significance of coronary stenosis: correlations with dobutamine-atropine stress test

    NARCIS (Netherlands)

    J.M.P. Baptista da Silva (José); M. Arnese (Mariarosaria); J.R.T.C. Roelandt (Jos); P.M. Fioretti (Paolo); D.T.J. Keane (David); J. Escaned (Javier); C. di Mario (Carlo); P.W.J.C. Serruys (Patrick); H. Boersma (Eric)

    1994-01-01

    textabstractOBJECTIVES. The purpose of this study was to determine the predictive value of quantitative coronary angiography in the assessment of the functional significance of coronary stenosis as judged from the development of left ventricular wall motion abnormalities during dobutamine-atropine

  7. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    Science.gov (United States)

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in

  8. Understanding SMAP-L4 soil moisture estimation skill and their dependence with topography, precipitation and vegetation type using Mesonet and Micronet networks.

    Science.gov (United States)

    Moreno, H. A.; Basara, J. B.; Thompson, E.; Bertrand, D.; Johnston, C. S.

    2017-12-01

    Satellite soil moisture estimates can benefit from a land data assimilation model, such as the Goddard Earth Observing System (GEOS-5) land data assimilation system (LDAS), to improve the representation of fine-scale dynamics and variability. This work presents advances in understanding the predictive skill of the L4-SM product across different land-cover types, topography, and precipitation totals, using a dense network of multi-level soil moisture sensors (the Mesonet and Micronet) in Oklahoma. A total of 130 soil moisture stations are used across different precipitation gradients (arid vs. wet), land cover (e.g., forest, shrubland, grasses, crops), elevation (low, mid, and high), and slope to assess the improvements of the L4_SM product relative to the raw SMAP L-band brightness temperatures. The comparisons are conducted between July 2015 and July 2016 at the daily time scale. Results show that the highest L4-SM overestimations occur in pastures and cultivated crops, during the rainy season, and at higher elevations (over 800 meters asl). The smallest errors occur in low-elevation, low-rainfall, and developed lands. Soil moisture biases in forested areas lie between those of pastures (largest biases) and low-intensity/developed lands (smallest biases). Fine-scale assessment of L4-SM should help the GEOS-5 and LDAS teams refine model parameters in light of the observed differences and improve assimilation techniques with respect to land cover, topography, and precipitation regime. Additionally, regional decision makers gain a framework with which to weigh the utility of this product for water resources applications.

  9. Investigation of Asphaltene Precipitation at Elevated Temperature

    DEFF Research Database (Denmark)

    Andersen, Simon Ivar; Lindeloff, Niels; Stenby, Erling Halfdan

    1998-01-01

    In order to obtain quantitative data on the asphaltene precipitation induced by the addition of n-alkane (heptane) at temperatures above the normal boiling point of the precipitant, a high temperature/high pressure filtration apparatus has been constructed. Oil and alkane are mixed...

  10. Atmospheric balance of the humidity and estimate of the precipitation recycled in Colombia according to the re-analysis NCEP/NCAR

    International Nuclear Information System (INIS)

    Cuartas, Adriana; Poveda, German

    2002-01-01

    The magnitudes of the incoming and outgoing moisture fluxes, and the amount of precipitable water at different levels of the atmospheric column over Colombia, are estimated. The water balance of the Colombian atmosphere is quantified, and the regions and atmospheric levels of moisture inflow and outflow are identified. The hypothesis that, in the long term, the net atmospheric moisture inflow must equal the long-term average net runoff is verified. In addition, the percentage of recycled precipitation over Colombian territory is estimated, and its variability during the two phases of ENSO is analyzed. The calculations are made with data from the Reanalysis climate project developed by the National Center for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR), with the collaboration of the National Oceanic and Atmospheric Administration (NOAA)/National Environmental Satellite of the U.S.A. Monthly data for the 41 years 1958-1998 were used. The hydrological information was obtained from the project Balances Hidrologicos de Colombia (1999), carried out by the Posgrado de Recursos Hidraulicos de la Universidad Nacional with the support of COLCIENCIAS and the Unidad de Planeacion Minero Energetica (UPME). The results show that the average net moisture inflow to the atmosphere over Colombia is 5716 mm/year, with great variability between the two phases of ENSO. The largest moisture advection toward Colombia occurs at the lower pressure levels (between 1000 and 850 hPa) and comes from all directions, mainly from the easterly and westerly trade winds. The largest moisture transport toward Colombia occurs in the DJF and MAM trimesters, with average values of 505.1 and 606.6 mm/year, respectively. 
It was also observed that the hypothesis that, in the long term, the net atmospheric flux equals the net terrestrial runoff holds reasonably well for

  11. Assessing the importance of spatio-temporal RCM resolution when estimating sub-daily extreme precipitation under current and future climate conditions

    DEFF Research Database (Denmark)

    Sunyer Pinya, Maria Antonia; Luchner, J.; Onof, C.

    2017-01-01

    extreme precipitation over Denmark generated by the regional climate model (RCM) HIRHAM-ECEARTH at different spatial resolutions (8, 12, 25 and 50km), three RCM from the RiskChange project at 8km resolution and three RCMs from ENSEMBLES at 25km resolution at temporal aggregations from 1 to 48h...... are more skewed than the observational dataset, which leads to an overestimation by the higher spatial resolution simulations. Nevertheless, in general, under current conditions RCM simulations at high spatial resolution represent extreme events and high-order moments better. The changes projected...

  12. Use of Thermal Data to Estimate Infiltration in Pagany Wash Associated with the winter of 1997-1998 El Nino Precipitation, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    LeCain, G.D.; Lu, N.; Kurzmack, M.

    2000-01-01

    Temperature and air-pressure monitoring in a vertical borehole located in Pagany Wash, a normally dry stream-carved channel northeast of Yucca Mountain, Nevada, indicated that the annual temperature wave was measurable to a depth of 11.1 m. Temperature depressions were measured at depths of 3.1, 6.1, 9.2, and 11.1 m below ground surface. The temperature depressions were interpreted to be the result of infiltration associated with the 1997-1998 El Nino precipitation. A pressure differential of approximately 2 kilopascals between stations located 11.1 and 24.5 m below ground surface was interpreted to be the result of compressed air ahead of the wetting front. The pressure differences between stations indicated that the wetting front migrated deeper than 35.2 m and that the Yucca Mountain Tuff retarded the downward movement of the wetting front. An analytical method indicated that the infiltration flux through the Pagany Wash alluvium due to the 1997-1998 El Nino precipitation was approximately 940 mm. A one-dimensional numerical model indicated that the infiltration flux was approximately 1000 mm. Sensitivity analysis indicated that the potential temperature decrease due to conduction was minimal and that cooler surface temperatures could not account for the measured subsurface temperature depressions.

  13. A quantitative magnetic resonance histology atlas of postnatal rat brain development with regional estimates of growth and variability.

    Science.gov (United States)

    Calabrese, Evan; Badea, Alexandra; Watson, Charles; Johnson, G Allan

    2013-05-01

    There has been growing interest in the role of postnatal brain development in the etiology of several neurologic diseases. The rat has long been recognized as a powerful model system for studying neuropathology and the safety of pharmacologic treatments. However, the complex spatiotemporal changes that occur during rat neurodevelopment remain to be elucidated. This work establishes the first magnetic resonance histology (MRH) atlas of the developing rat brain, with an emphasis on quantitation. The atlas comprises five specimens at each of nine time points, imaged with eight distinct MR contrasts and segmented into 26 developmentally defined brain regions. The atlas was used to establish a timeline of morphometric changes and variability throughout neurodevelopment and represents a quantitative database of rat neurodevelopment for characterizing rat models of human neurologic disease. Published by Elsevier Inc.

  14. Quantitative estimation of viable myocardium in the infarcted zone by infarct-redistribution map from images of exercise thallium-201 emission computed tomography

    International Nuclear Information System (INIS)

    Sekiai, Yasuhiro

    1988-01-01

    To evaluate quantitatively the viable myocardium in the infarcted zone, we devised the infarct-redistribution map, which is produced from images of exercise thallium-201 emission computed tomography performed on 10 healthy subjects and 20 patients with myocardial infarction. The map displays a left ventricle in which the infarcted areas with and without redistribution, the redistribution area without infarction, and the normally perfused area are shown separately on the same screen; the non-redistribution infarct lesion appears surrounded by the redistribution area. Indices of infarct and redistribution extent (defect score, % defect, redistribution ratio (RR), and redistribution index (RI)) were derived from the map and used for quantitative analysis of the redistribution area and as the basis for a comparative discussion of regional wall motion of the left ventricle. The quantitative indices of defect score, % defect, RR, and RI were consistent with the visual assessment of planar images in detecting the extent of redistribution. Furthermore, defect score and % defect had an inverse linear relationship with % shortening (r = -0.573, p < 0.05 and r = -0.536, p < 0.05, respectively), and RI had a good linear relationship with % shortening (r = 0.669, p < 0.01). We conclude that the infarct-redistribution map accurately reflects myocardial viability and may therefore be useful for quantitative estimation of viable myocardium in the infarcted zone. (author)

  15. Spatio-Temporal Analysis of the Accuracy of Tropical Multisatellite Precipitation Analysis 3B42 Precipitation Data in Mid-High Latitudes of China

    Science.gov (United States)

    Cai, Yancong; Jin, Changjie; Wang, Anzhi; Guan, Dexin; Wu, Jiabing; Yuan, Fenghui; Xu, Leilei

    2015-01-01

    Satellite-based precipitation data have contributed greatly to quantitative precipitation forecasting and provide a potential alternative source of precipitation data, allowing researchers to better understand patterns of precipitation over ungauged basins. However, the absence of calibration satellite data creates considerable uncertainties for the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 product over high-latitude areas beyond the TRMM satellite's latitude band (38°N-38°S). This study statistically assesses TMPA V7 data over the region beyond 40°N using data obtained from numerous weather stations in 1998-2012. Comparative analysis at three timescales (daily, monthly, and annual) indicates that adoption of a monthly adjustment significantly improved correlation at the larger timescales, increasing it from 0.63 to 0.95; TMPA data always exhibit a slight overestimation, which is most serious at the daily scale (the absolute bias is 103.54%). Moreover, the performance of TMPA data varies across seasons. Generally, TMPA data perform best in summer but worst in winter, which is likely associated with the effects of snow/ice-covered surfaces and shortcomings of the precipitation retrieval algorithms. Temporal and spatial analysis of accuracy indices suggests that the performance of TMPA data has gradually improved and has benefited from upgrades; the data are more reliable in humid areas than in arid regions. Special attention should be paid to its application in arid areas and in winter, where accuracy scores are poor. It is also clear that calibration can significantly improve precipitation estimates: the overestimation by TMPA in the TRMM-covered area is about a third of that in the non-TRMM area for monthly and annual precipitation. The systematic evaluation of TMPA over mid-high latitudes provides a broader understanding of satellite-based precipitation estimates, and these data are

  16. Spatio-temporal analysis of the accuracy of tropical multisatellite precipitation analysis 3B42 precipitation data in mid-high latitudes of China.

    Directory of Open Access Journals (Sweden)

    Yancong Cai

    Full Text Available Satellite-based precipitation data have contributed greatly to quantitative precipitation forecasting and provide a potential alternative source of precipitation data, allowing researchers to better understand patterns of precipitation over ungauged basins. However, the absence of calibration satellite data creates considerable uncertainties for the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 product over high-latitude areas beyond the TRMM satellite's latitude band (38°N-38°S). This study statistically assesses TMPA V7 data over the region beyond 40°N using data obtained from numerous weather stations in 1998-2012. Comparative analysis at three timescales (daily, monthly, and annual) indicates that adoption of a monthly adjustment significantly improved correlation at the larger timescales, increasing it from 0.63 to 0.95; TMPA data always exhibit a slight overestimation, which is most serious at the daily scale (the absolute bias is 103.54%). Moreover, the performance of TMPA data varies across seasons. Generally, TMPA data perform best in summer but worst in winter, which is likely associated with the effects of snow/ice-covered surfaces and shortcomings of the precipitation retrieval algorithms. Temporal and spatial analysis of accuracy indices suggests that the performance of TMPA data has gradually improved and has benefited from upgrades; the data are more reliable in humid areas than in arid regions. Special attention should be paid to its application in arid areas and in winter, where accuracy scores are poor. It is also clear that calibration can significantly improve precipitation estimates: the overestimation by TMPA in the TRMM-covered area is about a third of that in the non-TRMM area for monthly and annual precipitation. The systematic evaluation of TMPA over mid-high latitudes provides a broader understanding of satellite-based precipitation estimates, and these
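
    The reported jump in correlation from 0.63 (daily) to 0.95 (monthly) is typical when satellite errors include event-timing misses that cancel under temporal aggregation. A synthetic sketch of the effect (the data, the one-day timing error, and the 10% wet bias are all invented for illustration):

    ```python
    import numpy as np

    def relative_bias_percent(sat, gauge):
        """Relative bias (%) of satellite totals against gauge totals."""
        sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
        return 100.0 * (sat.sum() - gauge.sum()) / gauge.sum()

    def aggregate(daily, days_per_period=30):
        """Sum daily values into coarser (here, pseudo-monthly) totals."""
        daily = np.asarray(daily, float)
        n = len(daily) // days_per_period * days_per_period
        return daily[:n].reshape(-1, days_per_period).sum(axis=1)

    # Synthetic "satellite" series: 10% too wet, and every event reported one
    # day late. Day-level correlation is poor, monthly totals agree closely.
    rng = np.random.default_rng(0)
    gauge_daily = rng.gamma(0.4, 6.0, size=720)   # ~2 years of daily rain, mm
    sat_daily = 1.1 * np.roll(gauge_daily, 1)     # late and biased
    r_daily = np.corrcoef(sat_daily, gauge_daily)[0, 1]
    r_monthly = np.corrcoef(aggregate(sat_daily), aggregate(gauge_daily))[0, 1]
    bias_pct = relative_bias_percent(sat_daily, gauge_daily)
    ```

    Here `r_daily` is near zero while `r_monthly` is close to one, yet the relative bias (10%) is identical at both scales: aggregation removes random and timing errors but not systematic over- or underestimation.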

  17. An approach to estimate the freshwater contribution from glacial melt and precipitation in East Greenland shelf waters using colored dissolved organic matter (CDOM)

    DEFF Research Database (Denmark)

    Stedmon, Colin; Granskog, Mats A.; Dodd, Paul A.

    2015-01-01

    Changes in the supply and storage of freshwater in the Arctic Ocean and its subsequent export to the North Atlantic can potentially influence ocean circulation and climate. In order to understand how the Arctic freshwater budget is changing and the potential impacts, it is important to develop......, and precipitation) and sea ice melt. We develop this approach further and investigate the use of an additional tracer, colored dissolved organic matter (CDOM), which is largely specific to freshwater originating from Arctic rivers. A robust relationship between the freshwater contribution from meteoric water...... processes (riverine input and sea ice formation), while previously, these waters were thought to be derived from open sea processes (cooling and sea ice formation) in the northern Barents and Kara Seas. In Greenlandic coastal waters the meteoric water contribution is influenced by Greenland ice sheet...

  18. The theory, direction, and magnitude of ecosystem fire probability as constrained by precipitation and temperature.

    Science.gov (United States)

    Guyette, Richard; Stambaugh, Michael C; Dey, Daniel; Muzika, Rose Marie

    2017-01-01

    The effects of climate on wildland fire confront society across a range of different ecosystems. Water and temperature affect combustion dynamics, whether in carbon-fueled engines or in ecosystems, but through different chemical, physical, and biological processes. We use an ecosystem combustion equation developed from the physical chemistry of atmospheric variables to estimate and simulate fire probability and mean fire interval (MFI). Calibrating ecosystem fire probability with basic combustion chemistry and physics offers a quantitative method to address wildland fire alongside the well-studied forcing factors such as topography, ignition, and vegetation. We develop a graphic analysis tool for estimating climate-forced fire probability from temperature and precipitation, based on an empirical assessment of combustion theory and fire prediction in ecosystems. Climate-affected fire probability for any period, past or future, is estimated from given temperature and precipitation. A graphic analysis of climate-driven wildland fire dynamics supports a dialectic in the hydrologic processes that affect ecosystem combustion: 1) the water needed by plants to produce carbon bonds (fuel), and 2) the inhibition of successful reactant collisions by water molecules (humidity and fuel moisture). These two postulates enable a scheme that classifies ecosystems into three or more climate categories using their position relative to change points defined by precipitation in the combustion dynamics equations. The three classifications of combustion dynamics in ecosystem fire probability are: 1) precipitation insensitive, 2) precipitation unstable, and 3) precipitation sensitive. All three classifications interact in different ways with variable levels of temperature.

  19. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    Science.gov (United States)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. 
We suggest that communities should first be sampled thoroughly using appropriate taxon sampling
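
    The incidence-based estimators favored above have closed-form expressions: Jack1 and the bias-corrected Chao2, for instance, depend only on observed richness, the number of sample plots, and the counts of "unique" and "duplicate" species. A minimal sketch (not the authors' own code):

    ```python
    import numpy as np

    def jack1_chao2(incidence):
        """First-order jackknife (Jack1) and bias-corrected Chao2 richness
        estimates from a plots-by-species presence/absence matrix."""
        inc = (np.asarray(incidence) > 0).astype(int)
        m = inc.shape[0]                # number of sample plots
        freq = inc.sum(axis=0)          # plots in which each species occurs
        s_obs = int(np.sum(freq > 0))   # observed richness
        q1 = int(np.sum(freq == 1))     # "uniques": species found in 1 plot
        q2 = int(np.sum(freq == 2))     # "duplicates": species found in 2
        jack1 = s_obs + q1 * (m - 1) / m
        chao2 = s_obs + (m - 1) / m * q1 * (q1 - 1) / (2 * (q2 + 1))
        return jack1, chao2
    ```

    Both estimators add to the observed richness a correction driven by the rare (infrequently detected) species, which is why they respond to patchiness and sampling effort in the way the abstract describes.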

  20. Quantitative analysis chemistry

    International Nuclear Information System (INIS)

    Ko, Wansuk; Lee, Choongyoung; Jun, Kwangsik; Hwang, Taeksung

    1995-02-01

    This book covers quantitative analytical chemistry. It is divided into ten chapters, which deal with basic concepts and the meaning of analytical chemistry, along with SI units; chemical equilibrium; basic preparations for quantitative analysis; an introduction to volumetric analysis; acid-base titration, with an outline and experimental examples; chelate titration; oxidation-reduction titration, with an introduction, titration curves, and diazotization titration; precipitation titration; electrometric titration; and quantitative analysis.

  1. The influence of the design matrix on treatment effect estimates in the quantitative analyses of single-subject experimental design research.

    Science.gov (United States)

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Beretvas, S Natasha; Van den Noortgate, Wim

    2014-09-01

    The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but still a lot of questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, the multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data. © The Author(s) 2014.

  2. Probabilistic quantitative microbial risk assessment model of norovirus from wastewater irrigated vegetables in Ghana using genome copies and fecal indicator ratio conversion for estimating exposure dose

    DEFF Research Database (Denmark)

    Owusu-Ansah, Emmanuel de-Graft Johnson; Sampson, Angelina; Amponsah, Samuel K.

    2017-01-01

    physical and environmental factors that might influence the reliability of using indicator organisms in microbial risk assessment. The challenges facing analytical studies on virus enumeration (genome copies or particles) have contributed to the already existing lack of data in QMRA modelling. This study......The need to replace the commonly applied fecal indicator conversion ratio (an assumption of 1:10⁻⁵ virus to fecal indicator organism) in Quantitative Microbial Risk Assessment (QMRA) with models based on quantitative data on the virus of interest has gained prominence due to the different...... attempts to fit a QMRA model to genome copies of norovirus data. The model estimates the risk of norovirus infection from the intake of vegetables irrigated with wastewater from different sources. The results were compared to the results of a corresponding model using the fecal indicator conversion ratio...
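
    A probabilistic QMRA model of this kind is typically structured as Monte Carlo sampling of an exposure dose fed through a dose-response function; the approximate beta-Poisson form is often used for enteric viruses. The sketch below uses invented distributions and parameter values, not those fitted in the study:

    ```python
    import numpy as np

    def beta_poisson(dose, alpha=0.04, beta=0.055):
        """Approximate beta-Poisson dose-response: P(infection) for a dose.
        The alpha/beta values are illustrative placeholders only."""
        return 1.0 - (1.0 + np.asarray(dose, float) / beta) ** (-alpha)

    rng = np.random.default_rng(42)
    n = 100_000

    # Hypothetical exposure model (all values assumed for illustration):
    # norovirus genome copies per gram of irrigated vegetable, lognormally
    # distributed, times grams consumed per serving.
    copies_per_g = rng.lognormal(mean=0.0, sigma=1.5, size=n)
    grams_eaten = rng.triangular(10.0, 12.0, 15.0, size=n)
    dose = copies_per_g * grams_eaten

    p_inf = beta_poisson(dose)
    mean_risk = p_inf.mean()          # mean per-event infection risk
    risk_95 = np.percentile(p_inf, 95)  # upper-tail risk estimate
    ```

    Propagating the full dose distribution rather than a fixed indicator-conversion ratio is what makes the risk estimate sensitive to the measured genome-copy data, which is the study's central point.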

  3. Average Estimates of Water-Budget Components Based on Hydrograph Separation and PRISM Precipitation for Gaged Basins in the Appalachian Plateaus Region, 1900-2011

    Data.gov (United States)

    Department of the Interior — As part of the U.S. Geological Survey’s Groundwater Resources Program study of the Appalachian Plateaus aquifers, estimates of annual water-budget components were...

  4. Annual Estimates of Water-Budget Components Based on Hydrograph Separation and PRISM Precipitation for Gaged Basins in the Appalachian Plateaus Region, 1900-2011

    Data.gov (United States)

    Department of the Interior — As part of the U.S. Geological Survey’s Groundwater Resources Program study of the Appalachian Plateaus aquifers, estimates of annual water-budget components were...

  5. Stereological estimation of nuclear volume and other quantitative histopathological parameters in the prognostic evaluation of supraglottic laryngeal squamous cell carcinoma

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Bennedbaek, O; Pilgaard, J

    1989-01-01

    The aim of this study was to investigate various approaches to the grading of malignancy in pre-treatment biopsies from patients with supraglottic laryngeal squamous cell carcinoma. The prospects of objective malignancy grading based on stereological estimation of the volume-weighted mean nuclear...... volume, nuclear Vv, and nuclear volume fraction, Vv(nuc/tis), along with morphometrical 2-dimensional estimation of nuclear density index, NI, and mitotic activity index, MI, were investigated and compared with the current morphological, multifactorial grading system. The reproducibility among two...... observers of the latter was poor in the material which consisted of 35 biopsy specimens. Unbiased estimates of nuclear Vv were on the average 385 microns3 (CV = 0.44), with more than 90% of the associated variance attributable to differences in nuclear Vv among individual lesions. Nuclear Vv was positively...

  6. Technique for Determination of Rational Boundaries in Combining Construction and Installation Processes Based on Quantitative Estimation of Technological Connections

    Science.gov (United States)

    Gusev, E. V.; Mukhametzyanov, Z. R.; Razyapov, R. V.

    2017-11-01

    The shortcomings of existing methods for determining how to combine technologically interlinked construction processes and activities are considered under modern conditions for the construction of various facilities. The necessity of identifying common parameters that characterize the nature of interaction among all technologically related construction and installation processes and activities is shown. The technologies of construction and installation processes for buildings and structures are studied with the goal of determining a common parameter for evaluating the relationship between technologically interconnected processes and construction works. The result of this research is a quantitative estimate of the interaction of construction and installation processes and activities: the minimum technologically necessary volume of the preceding process that allows one to plan and organize the execution of a subsequent, technologically interconnected process. This quantitative estimate is used as the basis for calculating the optimal range of combination of processes and activities. The calculation method is based on graph theory. The authors applied a generic characterization parameter to reveal the technological links between construction and installation processes, and the proposed technique has adaptive properties that are key to its wide use in forming organizational decisions. The article describes the practical significance of the developed technique.

  7. A quantitative real time polymerase chain reaction approach for estimating processed animal proteins in feed: preliminary data

    Directory of Open Access Journals (Sweden)

    Maria Cesarina Abete

    2013-04-01

    Full Text Available Lifting of the ban on the use of processed animal proteins (PAPs) from non-ruminants in non-ruminant feed is under consideration, provided that intraspecies recycling is avoided. Discrimination of species will be performed through the polymerase chain reaction (PCR), which is at present a merely qualitative method. Nevertheless, quantification of PAPs in feed is needed. The aim of this study was to approach the quantitative determination of PAPs in feed through the real-time PCR (RT-PCR) technique; three different protocols drawn from the literature were tested. Three different kinds of matrices were examined: pure animal meals (bovine, chicken, and pork); one feed sample, certified by the European reference laboratory on animal proteins in feed (EURL AP), spiked with 0.1% bovine meal; and genomic DNA from bovine, chicken, and pork muscle. The limit of detection (LOD) of each of the three protocols was established. All three protocols failed in the quantification process, most likely due to the uncertain copy numbers of the analytical targets chosen. This preliminary study will allow us to address further investigations, with the purpose of developing a quantitative RT-PCR method.
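
    Quantitative real-time PCR normally estimates copy number from a standard curve relating threshold cycle (Ct) to log10 copies, with the curve's slope giving the amplification efficiency; uncertain target copy numbers, as reported above, break the link between that curve and mass fraction. A minimal sketch of the standard-curve arithmetic (all numeric values illustrative):

    ```python
    import numpy as np

    def fit_standard_curve(log10_copies, ct):
        """Linear fit Ct = slope * log10(copies) + intercept.
        Returns slope, intercept and efficiency E = 10^(-1/slope) - 1
        (E = 1.0 means perfect doubling per cycle)."""
        slope, intercept = np.polyfit(log10_copies, ct, 1)
        efficiency = 10.0 ** (-1.0 / slope) - 1.0
        return slope, intercept, efficiency

    def copies_from_ct(ct, slope, intercept):
        """Invert the standard curve to estimate copy number from a Ct value."""
        return 10.0 ** ((ct - intercept) / slope)
    ```

    A slope near -3.32 corresponds to ~100% efficiency; converting the resulting copy numbers into a percentage of PAPs in feed additionally requires a known copies-per-unit-mass factor, which is exactly the uncertain quantity the abstract identifies.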

  8. A Concurrent Mixed Methods Approach to Examining the Quantitative and Qualitative Meaningfulness of Absolute Magnitude Estimation Scales in Survey Research

    Science.gov (United States)

    Koskey, Kristin L. K.; Stewart, Victoria C.

    2014-01-01

    This small "n" observational study used a concurrent mixed methods approach to address a void in the literature with regard to the qualitative meaningfulness of the data yielded by absolute magnitude estimation scaling (MES) used to rate subjective stimuli. We investigated whether respondents' scales progressed from less to more and…

  9. A study on deep geological environment for the radwaste disposal - Estimation of roughness for the quantitative analysis of fracture transmissivity

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Yul; Kim, J. Y.; Kim, Y. S.; Hyun, H. J. [Korea Institute of Geology, Mining and Materials, Taejon (Korea)

    2000-03-01

    Estimation of fracture roughness, as one of the basic hydraulic fracture parameters, is very important in assessing groundwater flow described using discrete fracture network modeling. Manual estimation of the roughness of each fracture surface in drill cores is a tedious, time-consuming task and often causes ambiguities of roughness interpretation, partly due to the subjective judgement of observers and partly due to the measuring procedure itself. Recently, however, the high reliability of Televiewer data for fracture discrimination has suggested developing a relationship between the traditional roughness method, based on linear profiles, and a method based on the ellipsoidal profiles of Televiewer images. Hence, the aim of this work is to develop an automatic evaluation algorithm for measuring roughness from Televiewer images. A highly reliable software package named 'RAF' has been developed and implemented to a practically useful extent. In the development procedure, various problems were considered, such as the examination of a new (ellipsoidal) baseline for measuring the unevenness of fractures, the elimination of overlapping fracture signatures and noise, wavelet estimation according to fracture type, and the digitalization of roughness. With these considerations in mind, the newly devised algorithm for the estimation of roughness curves showed great potential not only for avoiding ambiguities of roughness interpretation but also for roughness classification. 12 refs., 23 figs. (Author)

  10. Evaluation of quantitative imaging methods for organ activity and residence time estimation using a population of phantoms having realistic variations in anatomy and uptake

    International Nuclear Information System (INIS)

    He Bin; Du Yong; Segars, W. Paul; Wahl, Richard L.; Sgouros, George; Jacene, Heather; Frey, Eric C.

    2009-01-01

    Estimating organ residence times is an essential part of patient-specific dosimetry for radioimmunotherapy (RIT). Quantitative imaging methods for RIT are often evaluated using a single physical or simulated phantom but are intended to be applied clinically where there is variability in patient anatomy, biodistribution, and biokinetics. To provide a more relevant evaluation, the authors have thus developed a population of phantoms with realistic variations in these factors and applied it to the evaluation of quantitative imaging methods both to find the best method and to demonstrate the effects of these variations. Using whole body scans and SPECT/CT images, organ shapes and time-activity curves of 111In ibritumomab tiuxetan were measured in dosimetrically important organs in seven patients undergoing a high dose therapy regimen. Based on these measurements, we created a 3D NURBS-based cardiac-torso (NCAT)-based phantom population. SPECT and planar data at realistic count levels were then simulated using previously validated Monte Carlo simulation tools. The projections from the population were used to evaluate the accuracy and variation in accuracy of residence time estimation methods that used a time series of SPECT and planar scans. Quantitative SPECT (QSPECT) reconstruction methods were used that compensated for attenuation, scatter, and the collimator-detector response. Planar images were processed with a conventional (CPlanar) method that used geometric mean attenuation and triple-energy window scatter compensation and a quantitative planar (QPlanar) processing method that used model-based compensation for image degrading effects. Residence times were estimated from activity estimates made at each of five time points. The authors also evaluated hybrid methods that used CPlanar or QPlanar time-activity curves rescaled to the activity estimated from a single QSPECT image. 
The methods were evaluated in terms of mean relative error and standard deviation of the
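    The residence-time step described above reduces to integrating an organ's time-activity curve over the imaging time points and normalizing by the administered activity. A minimal sketch with hypothetical 111In time-activity values at five time points:

```python
import numpy as np

# Hypothetical 111In time-activity data: five imaging time points (hours)
# and organ activity expressed as a fraction of the administered activity.
t = np.array([1.0, 24.0, 72.0, 120.0, 168.0])   # h
A = np.array([0.20, 0.17, 0.11, 0.07, 0.04])    # fraction of A0

# Residence time = integral of the time-activity curve divided by the
# administered activity; A is already normalized, so tau is in hours.
tau = np.sum((A[:-1] + A[1:]) / 2 * np.diff(t))  # trapezoidal rule
print(round(tau, 1))                             # → 17.9
```

In practice the curve is also extrapolated beyond the last scan (for example by physical decay) before integrating; the trapezoid over the sampled window shown here understates the full integral.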

  11. Bayesian estimation and use of high-throughput remote sensing indices for quantitative genetic analyses of leaf growth.

    Science.gov (United States)

    Baker, Robert L; Leong, Wen Fung; An, Nan; Brock, Marcus T; Rubin, Matthew J; Welch, Stephen; Weinig, Cynthia

    2018-02-01

    We develop Bayesian function-valued trait models that mathematically isolate genetic mechanisms underlying leaf growth trajectories by factoring out genotype-specific differences in photosynthesis. Remote sensing data can be used instead of leaf-level physiological measurements. Characterizing the genetic basis of traits that vary during ontogeny and affect plant performance is a major goal in evolutionary biology and agronomy. Describing genetic programs that specifically regulate morphological traits can be complicated by genotypic differences in physiological traits. We describe the growth trajectories of leaves using novel Bayesian function-valued trait (FVT) modeling approaches in Brassica rapa recombinant inbred lines raised in heterogeneous field settings. While frequentist approaches estimate parameter values by treating each experimental replicate discretely, Bayesian models can utilize information in the global dataset, potentially leading to more robust trait estimation. We illustrate this principle by estimating growth asymptotes in the face of missing data and comparing heritabilities of growth trajectory parameters estimated by Bayesian and frequentist approaches. Using pseudo-Bayes factors, we compare the performance of an initial Bayesian logistic growth model and a model that incorporates carbon assimilation (Amax) as a cofactor, thus statistically accounting for genotypic differences in carbon resources. We further evaluate two remotely sensed spectroradiometric indices, photochemical reflectance (pri2) and MERIS Terrestrial Chlorophyll Index (mtci), as covariates in lieu of Amax, because these two indices were genetically correlated with Amax across years and treatments yet allow much higher throughput compared to direct leaf-level gas-exchange measurements. For leaf lengths in uncrowded settings, including Amax improves model fit over the initial model. The mtci and pri2 indices also outperform direct Amax measurements. Of particular
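    As a toy illustration of the growth-trajectory parameters being estimated here (not the paper's Bayesian FVT machinery), a logistic leaf-growth curve with a known asymptote linearizes under a logit transform, so the rate and inflection time follow from a one-dimensional least-squares fit. All values below are synthetic:

```python
import numpy as np

# Logistic leaf-growth trajectory: L(t) = K / (1 + exp(-r (t - t0)))
K_true, r_true, t0_true = 40.0, 0.25, 12.0   # asymptote (mm), rate, inflection
t = np.arange(2, 25, 2, dtype=float)         # days after leaf initiation
L = K_true / (1.0 + np.exp(-r_true * (t - t0_true)))

# With the asymptote K known (or profiled), the logistic linearizes:
# log(L / (K - L)) = r * t - r * t0, so r and t0 fall out of a line fit.
y = np.log(L / (K_true - L))
r_hat, c_hat = np.polyfit(t, y, 1)
t0_hat = -c_hat / r_hat
print(round(r_hat, 3), round(t0_hat, 2))     # → 0.25 12.0
```

The asymptote K is exactly the quantity that is hard to pin down from incomplete trajectories, which is where the Bayesian pooling across the global dataset described above earns its keep.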

  12. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    OpenAIRE

    Frank M. You; Qijian Song; Gaofeng Jia; Yanzhao Cheng; Scott Duguid; Helen Booker; Sylvie Cloutier

    2016-01-01

    The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic ...

  13. Hawaii Clean Energy Initiative (HCEI) Scenario Analysis: Quantitative Estimates Used to Facilitate Working Group Discussions (2008-2010)

    Energy Technology Data Exchange (ETDEWEB)

    Braccio, R.; Finch, P.; Frazier, R.

    2012-03-01

    This report provides details on the Hawaii Clean Energy Initiative (HCEI) Scenario Analysis, which was conducted to identify potential policy options and evaluate their impact on reaching the 70% HCEI goal, to present possible pathways for attaining the goal with currently available technology, with an eye to initiatives under way in Hawaii, and to provide an 'order-of-magnitude' cost estimate and a jump-start to action that would be adjusted as understanding of the technologies and markets improves.

  14. Apparent polyploidization after gamma irradiation: pitfalls in the use of quantitative polymerase chain reaction (qPCR) for the estimation of mitochondrial and nuclear DNA gene copy numbers.

    Science.gov (United States)

    Kam, Winnie W Y; Lake, Vanessa; Banos, Connie; Davies, Justin; Banati, Richard

    2013-05-30

    Quantitative polymerase chain reaction (qPCR) has been widely used to quantify changes in gene copy numbers after radiation exposure. Here, we show that gamma irradiation of cells and cell-free DNA samples at doses from 10 to 100 Gy significantly affects the measured qPCR yield, owing to radiation-induced fragmentation of the DNA template, and therefore introduces errors into the estimation of gene copy numbers. The radiation-induced DNA fragmentation, and thus the measured qPCR yield, varies with temperature, not only in living cells but also in isolated DNA irradiated under cell-free conditions. In summary, the variability in measured qPCR yield from irradiated samples introduces a significant error into the estimation of both mitochondrial and nuclear gene copy numbers and may give spurious evidence for polyploidization.
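    The pitfall described above can be made concrete with the usual efficiency-based qPCR quantification: a template that amplifies a fraction of a cycle later looks like a multi-fold loss of copies. A sketch with hypothetical quantification cycles (Cq):

```python
# Relative copy number from qPCR quantification cycles (Cq): template
# fragmentation by irradiation delays Cq and deflates the estimate.
# Illustrative numbers; E is the amplification efficiency (2 = perfect
# doubling per cycle).
E = 1.95
cq_control = 24.0
cq_irradiated = 25.3   # later Cq: fewer intact amplifiable templates

fold = E ** (cq_irradiated - cq_control)
print(round(fold, 2))  # apparent fold-loss of copies after irradiation
```

Here a 1.3-cycle delay caused purely by fragmentation masquerades as a >2-fold copy-number change, which is exactly how a spurious "polyploidization" signal can arise.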

  15. Quantitative estimation of cholinesterase-specific drug metabolism of carbamate inhibitors provided by the analysis of the area under the inhibition-time curve.

    Science.gov (United States)

    Zhou, Huimin; Xiao, Qiaoling; Tan, Wen; Zhan, Yiyi; Pistolozzi, Marco

    2017-09-10

    Several molecules containing carbamate groups are metabolized by cholinesterases. This metabolism includes a time-dependent catalytic step that temporarily inhibits the enzymes. In this paper we demonstrate that analysis of the area under the inhibition-versus-time curve (AUIC) can be used to obtain a quantitative estimate of the amount of carbamate metabolized by the enzyme. (R)-bambuterol monocarbamate and plasma butyrylcholinesterase were used as the model carbamate-cholinesterase system. The inhibition of different concentrations of the enzyme was monitored for 5 h after incubation with different concentrations of carbamate, and the resulting AUICs were analyzed. The approach allows the amount of carbamate metabolized to be estimated for cholinesterases confined in a selected compartment (e.g. in vitro solutions, tissues or body fluids), either in vitro or in vivo. Copyright © 2017 Elsevier B.V. All rights reserved.
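    One simplified reading of the AUIC analysis (an assumption of this sketch, not the paper's exact equations) is that each decarbamylation event consumes one carbamate molecule, so the amount metabolized scales as E_tot · k_r · AUIC. All values below are hypothetical:

```python
import numpy as np

# Hypothetical inhibition-time course: fraction of plasma BChE inhibited
# during a 5 h incubation with a carbamate, sampled every 6 minutes.
t = np.linspace(0.0, 5.0, 51)          # h
inhibited = 0.9 * np.exp(-0.2 * t)     # decaying inhibited fraction

# AUIC by the trapezoidal rule (units: h, since inhibition is a fraction).
auic = np.sum((inhibited[:-1] + inhibited[1:]) / 2 * np.diff(t))

# If each decarbamylation event consumes one carbamate molecule, the
# metabolized amount is E_tot * k_r * AUIC (assumed parameter values).
E_tot = 50.0   # nM enzyme
k_r = 0.5      # 1/h decarbamylation rate
metabolized = E_tot * k_r * auic       # nM carbamate consumed
print(round(metabolized, 1))
```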

  16. Toward Quantitative Estimation of the Effect of Aerosol Particles in the Global Climate Model and Cloud Resolving Model

    Science.gov (United States)

    Eskes, H.; Boersma, F.; Dirksen, R.; van der A, R.; Veefkind, P.; Levelt, P.; Brinksma, E.; van Roozendael, M.; de Smedt, I.; Gleason, J.

    2005-05-01

    Based on measurements of GOME on ESA ERS-2, SCIAMACHY on ESA-ENVISAT, and Ozone Monitoring Instrument (OMI) on the NASA EOS-Aura satellite there is now a unique 11-year dataset of global tropospheric nitrogen dioxide measurements from space. The retrieval approach consists of two steps. The first step is an application of the DOAS (Differential Optical Absorption Spectroscopy) approach which delivers the total absorption optical thickness along the light path (the slant column). For GOME and SCIAMACHY this is based on the DOAS implementation developed by BIRA/IASB. For OMI the DOAS implementation was developed in a collaboration between KNMI and NASA. The second retrieval step, developed at KNMI, estimates the tropospheric vertical column of NO2 based on the slant column, cloud fraction and cloud top height retrieval, stratospheric column estimates derived from a data assimilation approach and vertical profile estimates from space-time collocated profiles from the TM chemistry-transport model. The second step was applied with only minor modifications to all three instruments to generate a uniform 11-year data set. In our talk we will address the following topics: - A short summary of the retrieval approach and results - Comparisons with other retrievals - Comparisons with global and regional-scale models - OMI-SCIAMACHY and SCIAMACHY-GOME comparisons - Validation with independent measurements - Trend studies of NO2 for the past 11 years

  17. A new computational scheme on quantitative inner pipe boundary identification based on the estimation of effective thermal conductivity

    International Nuclear Information System (INIS)

    Fan Chunli; Sun Fengrui; Yang Li

    2008-01-01

    In this paper, the irregular configuration of the inner pipe boundary is identified based on estimation of the circumferential distribution of the effective thermal conductivity of the pipe wall. To simulate true temperature measurements in the numerical examples, the finite element method is used to calculate the temperature distribution at the outer pipe surface for the irregularly shaped inner pipe boundary to be determined. Based on this simulated temperature distribution, the inverse identification is conducted by employing the modified one-dimensional correction method, along with the finite volume method, to estimate the circumferential distribution of the effective thermal conductivity of the pipe wall. The inner pipe boundary shape is then calculated from the conductivity estimate. A series of numerical experiments with different temperature measurement errors and different pipe-wall thermal conductivities has verified the effectiveness of the method, which proves to be a simple, fast and accurate approach to this inverse heat conduction problem.

  18. Quantitative estimation of renal function with dynamic contrast-enhanced MRI using a modified two-compartment model.

    Directory of Open Access Journals (Sweden)

    Bin Chen

    Full Text Available To establish a simple two-compartment model for estimating glomerular filtration rate (GFR) and renal plasma flow (RPF) by dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). A total of eight New Zealand white rabbits were included in DCE-MRI. The two-compartment model was modified with the impulse residue function in this study. First, the reliability of the GFR measurement of the proposed model was compared with that of other published models in Monte Carlo simulations at different noise levels. Then, functional parameters were estimated in six healthy rabbits to test the feasibility of the new model. Moreover, to investigate the validity of its GFR estimation, two rabbits underwent an acute ischemia surgical procedure in one kidney before DCE-MRI, and pixel-wise measurements were implemented to detect cortical GFR alterations between the normal and abnormal kidneys. The proposed model showed the lowest variability of GFR and RPF measurements in the comparison. Mean GFR was 3.03±1.1 ml/min and mean RPF was 2.64±0.5 ml/g/min in normal animals, in good agreement with published values. Moreover, a large GFR decline was found in the dysfunctional kidneys compared with the contralateral controls. These results demonstrate that measurement of renal kinetic parameters based on the proposed model is feasible and that it can discriminate GFR changes between healthy and diseased kidneys.

  19. Satellite-Based Precipitation Datasets

    Science.gov (United States)

    Munchak, S. J.; Huffman, G. J.

    2017-12-01

    Of the possible sources of precipitation data, those based on satellites provide the greatest spatial coverage. There is a wide selection of datasets, algorithms, and versions from which to choose, which can be confusing to non-specialists wishing to use the data. The International Precipitation Working Group (IPWG) maintains tables of the major publicly available, long-term, quasi-global precipitation data sets (http://www.isac.cnr.it/ ipwg/data/datasets.html), and this talk briefly reviews the various categories. As examples, NASA provides two sets of quasi-global precipitation data sets: the older Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) and current Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (GPM) mission (IMERG). Both provide near-real-time and post-real-time products that are uniformly gridded in space and time. The TMPA products are 3-hourly 0.25°x0.25° on the latitude band 50°N-S for about 16 years, while the IMERG products are half-hourly 0.1°x0.1° on 60°N-S for over 3 years (with plans to go to 16+ years in Spring 2018). In addition to the precipitation estimates, each data set provides fields of other variables, such as the satellite sensor providing estimates and estimated random error. The discussion concludes with advice about determining suitability for use, the necessity of being clear about product names and versions, and the need for continued support for satellite- and surface-based observation.

  20. Probabilistic quantitative microbial risk assessment model of norovirus from wastewater irrigated vegetables in Ghana using genome copies and fecal indicator ratio conversion for estimating exposure dose.

    Science.gov (United States)

    Owusu-Ansah, Emmanuel de-Graft Johnson; Sampson, Angelina; Amponsah, Samuel K; Abaidoo, Robert C; Dalsgaard, Anders; Hald, Tine

    2017-12-01

    The need to replace the commonly applied fecal indicator conversion ratio (an assumed 1:10⁻⁵ ratio of virus to fecal indicator organism) in Quantitative Microbial Risk Assessment (QMRA) with models based on quantitative data on the virus of interest has gained prominence, because different physical and environmental factors might influence the reliability of using indicator organisms in microbial risk assessment. The challenges facing analytical studies on virus enumeration (genome copies or particles) have contributed to the already existing lack of data in QMRA modelling. This study attempts to fit a QMRA model to genome-copies norovirus data. The model estimates the risk of norovirus infection from the intake of vegetables irrigated with wastewater from different sources. The results were compared to those of a corresponding model using the fecal indicator conversion ratio to estimate the norovirus count. In all scenarios of using different water sources, the application of the fecal indicator conversion ratio underestimated the norovirus disease burden, measured in Disability Adjusted Life Years (DALYs), when compared with the results using the genome-copies norovirus data; in some cases the difference was >2 orders of magnitude. All scenarios using genome copies met the 10⁻⁴ DALY per person per year threshold for consumption of vegetables irrigated with wastewater, although these results are considered highly conservative risk estimates. The fecal indicator conversion ratio model for stream-water and drain-water sources of wastewater achieved the 10⁻⁶ DALY per person per year threshold, which tends to indicate an underestimation of health risk compared with using genome copies to estimate the dose. Copyright © 2017 Elsevier B.V. All rights reserved.
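    The genome-copies exposure route described above can be sketched as a small Monte Carlo model: sample genome copies in irrigation water, convert to a dose per serving, and push the dose through a dose-response curve. The distribution and the approximate beta-Poisson parameters below are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                                 # Monte Carlo iterations

# Hypothetical norovirus exposure from wastewater-irrigated lettuce:
# genome copies per mL of irrigation water (lognormal), times the
# volume of water retained on a serving (mL).
copies_per_ml = rng.lognormal(mean=2.0, sigma=1.0, size=n)
dose = copies_per_ml * 0.11                 # mL retained per serving

# Approximate beta-Poisson dose-response: P(inf) = 1 - (1 + d/beta)^-alpha
alpha, beta = 0.04, 0.055                   # illustrative parameters
p_inf = 1.0 - (1.0 + dose / beta) ** (-alpha)
print(round(p_inf.mean(), 3))               # mean probability of infection
```

A DALY burden would then follow by multiplying the infection probability by the conditional probability of illness and a DALY-per-case weight, which is where the 10⁻⁴ and 10⁻⁶ thresholds above enter.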

  1. Identification and quantitative grade estimation of Uranium mineralization based on gross-count gamma ray log at Lemajung sector West Kalimantan

    International Nuclear Information System (INIS)

    Adi Gunawan Muhammad

    2014-01-01

    Lemajung sector is one of the uranium-potential sectors in the Kalan area, West Kalimantan. Uranium mineralization is found in metasiltstone and schistose metapelite, with a general east-west direction of mineralization tilted ±70° to the north, parallel to the schistosity pattern (S1). A drilling evaluation was carried out in 2013 in the Lemajung sector at borehole R-05 (LEML-40), which was logged with gross-count gamma ray. The purpose of this activity is to determine the grade of uranium mineralization in the rocks quantitatively and to characterize the geological conditions around the drilling area. The methodology involves determining the value of the k-factor, geological mapping around the drill hole, and determination of the thickness and estimated grade of uranium mineralization from the gross-count gamma ray log. From the quantitative grade estimation using the gross-count gamma ray log, the highest %eU3O8 in hole R-05 (LEML-40) reaches 0.7493 ≈ 6354 ppm eU, found in the depth interval from 30.1 to 34.96 m. Uranium mineralization is present as fracture filling (veins) or as tectonic breccia matrix filling in metasiltstone, with thicknesses from 0.10 to 2.40 m, associated with sulphide (pyrite) and characterized by a high U/Th ratio. (author)
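    Gross-count grade estimation of this kind reduces to scaling the net area under the gamma-ray anomaly by a calibration constant (the k-factor) and dividing by the mineralized thickness. A sketch with hypothetical log values and an assumed k-factor:

```python
import numpy as np

# Hypothetical gross-count gamma-ray log across a mineralized interval.
depth = np.arange(30.0, 35.0, 0.1)          # m
background = 150.0                          # counts per second (cps)
counts = background + 9000.0 * np.exp(-((depth - 32.5) / 0.8) ** 2)

# Grade-thickness product GT (%eU3O8 * m) = k-factor * net anomaly area;
# the k-factor would come from calibration in model holes (value assumed).
k_factor = 2.0e-5                           # %eU3O8 per (cps * m)
net_area = np.sum(counts - background) * 0.1  # cps * m (0.1 m sampling)
thickness = 2.4                             # m, assumed mineralized interval
grade = k_factor * net_area / thickness     # average %eU3O8 over interval
print(round(grade, 3))
```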

  2. Comparison of quantitative estimation of intracerebral hemorrhage and infarct volumes after thromboembolism in an embolic stroke model

    DEFF Research Database (Denmark)

    Eriksen, Nina; Rasmussen, Rune Skovgaard; Overgaard, Karsten

    2014-01-01

    . Group 1 was treated with saline, and group 2 was treated with 20 mg/kg recombinant tissue plasminogen activator to promote intracerebral hemorrhages. Stereology, semiautomated computer estimation, and manual erythrocyte counting were used to test the precision and efficiency of determining the size...... measurements, the stereological method was the most efficient and advantageous. CONCLUSIONS: We found that stereology was the superior method for quantification of hemorrhagic volume, especially for rodent petechial bleeding, which is otherwise difficult to measure. Our results suggest the possibility...

  3. Liquid chromatography/tandem mass spectrometry method for quantitative estimation of solutol HS15 and its applications

    OpenAIRE

    Bhaskar, V. Vijaya; Middha, Anil; Srivastava, Pratima; Rajagopal, Sriram

    2015-01-01

    A rapid, sensitive and selective pseudo-MRM (pMRM)-based method for the determination of solutol HS15 (SHS15) in rat plasma was developed using liquid chromatography/tandem mass spectrometry (LC-MS/MS). The most abundant ions, corresponding to SHS15 free polyethylene glycol (PEG) oligomers at m/z 481, 525, 569, 613, 657, 701, 745, 789, 833, 877, 921 and 965, were selected for pMRM in electrospray ionization mode. The purity of the lipophilic and hydrophilic components of SHS15 was estimated using ...

  4. Quantitative estimation of electro-osmosis force on charged particles inside a borosilicate resistive-pulse sensor.

    Science.gov (United States)

    Ghobadi, Mostafa; Yuqian Zhang; Rana, Ankit; Esfahani, Ehsan T; Esfandiari, Leyla

    2016-08-01

    Nano- and micron-scale pore sensors have been widely used for biomolecular sensing because they are sensitive, label-free and potentially cost-effective. Electrophoresis and electroosmosis are the major forces that determine the sensor's performance. In this work, we developed a mathematical model, based on experimental and simulation results, for negatively charged particles passing through a 2 μm diameter solid-state borosilicate pore under a constant applied electric field. The model estimates the ratio of the electroosmotic force to the electrophoretic force on the particles to be 77.5%.
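    In the thin-double-layer (Smoluchowski) limit, the electroosmotic and electrophoretic velocities both scale as εζE/η, so their ratio collapses to the ratio of the wall and particle zeta potentials. The zeta values below are illustrative, chosen only to reproduce the 77.5% figure reported above:

```python
# Ratio of electroosmotic to electrophoretic transport in a charged pore.
# In the Smoluchowski limit both velocities scale as eps * zeta * E / eta,
# so the ratio reduces to zeta_wall / zeta_particle (values hypothetical).
zeta_wall = -0.062      # V, borosilicate pore surface
zeta_particle = -0.080  # V, negatively charged particle
ratio = zeta_wall / zeta_particle
print(f"{ratio:.1%}")   # → 77.5%
```

Because both zeta potentials are negative, electroosmotic flow opposes the particle's electrophoretic motion here, which is why the ratio matters for translocation speed in resistive-pulse sensing.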

  5. Modeling number of bacteria per food unit in comparison to bacterial concentration in quantitative risk assessment: impact on risk estimates.

    Science.gov (United States)

    Pouillot, Régis; Chen, Yuhuan; Hoelzer, Karin

    2015-02-01

    When developing quantitative risk assessment models, a fundamental consideration for risk assessors is to decide whether to evaluate changes in bacterial levels in terms of concentrations or in terms of bacterial numbers. Although modeling bacteria in terms of integer numbers may be regarded as a more intuitive and rigorous choice, modeling bacterial concentrations is more popular as it is generally less mathematically complex. We tested three different modeling approaches in a simulation study. The first approach considered bacterial concentrations; the second considered the number of bacteria in contaminated units, and the third considered the expected number of bacteria in contaminated units. Simulation results indicate that modeling concentrations tends to overestimate risk compared to modeling the number of bacteria. A sensitivity analysis using a regression tree suggests that processes which include drastic scenarios consisting of combinations of large bacterial inactivation followed by large bacterial growth frequently lead to a >10-fold overestimation of the average risk when modeling concentrations as opposed to bacterial numbers. Alternatively, the approach of modeling the expected number of bacteria in positive units generates results similar to the second method and is easier to use, thus potentially representing a promising compromise. Published by Elsevier Ltd.
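    The overestimation mechanism described above is easy to reproduce: treating a large inactivation as a continuous scaling leaves a small fractional concentration that later "regrows", whereas integer cell counts mostly hit zero and stay there. A sketch with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000                               # simulated food units
r = 0.1                                   # illustrative dose-response parameter

# Initial contamination: mean 10 CFU per food unit.
conc = np.full(n, 10.0)                   # concentration model (CFU/unit)
count = rng.poisson(10.0, size=n)         # integer-number model

# Drastic scenario: 4-log inactivation followed by 3-log growth.
conc_final = conc * 1e-4 * 1e3            # fractional cells "survive"
surv = rng.binomial(count, 1e-4)          # each cell survives independently
count_final = surv * 1e3                  # only real survivors regrow

risk_conc = np.mean(1.0 - np.exp(-r * conc_final))
risk_count = np.mean(1.0 - np.exp(-r * count_final))
print(risk_conc / risk_count)             # concentration model overestimates
```

Under these (invented) settings the concentration model yields roughly a hundredfold higher mean risk, consistent with the >10-fold overestimation the abstract reports for drastic inactivation-then-growth scenarios.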

  6. SU-G-IeP1-06: Estimating Relative Tissue Density From Quantitative MR Images: A Novel Perspective for MRI-Only Heterogeneity Corrected Dose Calculation

    International Nuclear Information System (INIS)

    Soliman, A; Hashemi, M; Safigholi, H; Tchistiakova, E; Song, W

    2016-01-01

    Purpose: To explore the feasibility of extracting relative density from quantitative MRI measurements and to estimate a correlation between the extracted measures and CT Hounsfield units. Methods: MRI can separate water and fat signals, producing a separate image for each component. By performing appropriate corrections on the separated images, quantitative measurements of water and fat mass density can be estimated. This work tests this hypothesis at 1.5 T. Peanut oil was used as the fat representative and agar as the water representative. Gadolinium Chloride III and Sodium Chloride were added to the agar solution to adjust the relaxation times and the medium conductivity, respectively. Peanut oil was added to the agar solution at different percentages: 0%, 3%, 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% and 100%. The phantom was scanned on a 1.5 T GE Optima 450W with the body coil using a multi-gradient echo sequence. Water/fat separation was performed while correcting for main field (B0) inhomogeneity and T2* relaxation; B1+ inhomogeneities were ignored. The phantom was subsequently scanned on a Philips Brilliance CT Big Bore. The MR-corrected fat signal from all vials was normalized to the 100% fat signal. CT Hounsfield values were then compared to those obtained from the normalized MR-corrected fat values as well as to the phantom preparation for validation. Results: Good agreement was found between CT HU and the MR-extracted fat values (R² = 0.98). CT HU also showed excellent agreement with the prepared fat fractions (R² = 0.99). Vials with 70%, 80% and 90% fat showed inhomogeneous distributions; their results were nevertheless included for completeness. Conclusion: Quantitative MRI water/fat imaging can potentially be used to extract relative tissue density. Further in-vivo validation is required.
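    The reported agreement amounts to a linear regression of CT numbers on fat fraction. A sketch with hypothetical calibration values (the study's actual measurements are not reproduced here):

```python
import numpy as np

# Hypothetical calibration: prepared fat fraction (%) in agar/peanut-oil
# vials vs. measured CT numbers (HU); water sits near 0 HU, fat near -100.
fat_pct = np.array([0, 3, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100.0])
hu = np.array([2, -2, -5, -11, -22, -29, -41, -52, -58, -71, -79, -92, -98.0])

# Least-squares line and coefficient of determination R^2.
slope, intercept = np.polyfit(fat_pct, hu, 1)
pred = slope * fat_pct + intercept
r2 = 1.0 - np.sum((hu - pred) ** 2) / np.sum((hu - np.mean(hu)) ** 2)
print(round(r2, 3))
```

A high R² on such a phantom is what licenses mapping MR-derived fat fractions onto CT-like density values for MRI-only dose calculation.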

  7. SU-G-IeP1-06: Estimating Relative Tissue Density From Quantitative MR Images: A Novel Perspective for MRI-Only Heterogeneity Corrected Dose Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Soliman, A; Hashemi, M; Safigholi, H [Sunnybrook Research Institute, Toronto, ON (Canada); Sunnybrook Health Sciences Centre, Toronto, ON (Canada); Tchistiakova, E [Sunnybrook Health Sciences Centre, Toronto, ON (Canada); University of Toronto, Toronto, ON (Canada); Song, W [Sunnybrook Research Institute, Toronto, ON (Canada); Sunnybrook Health Sciences Centre, Toronto, ON (Canada); University of Toronto, Toronto, ON (Canada)

    2016-06-15

    Purpose: To explore the feasibility of extracting relative density from quantitative MRI measurements and to estimate a correlation between the extracted measures and CT Hounsfield units. Methods: MRI can separate water and fat signals, producing a separate image for each component. By performing appropriate corrections on the separated images, quantitative measurements of water and fat mass density can be estimated. This work tests this hypothesis at 1.5 T. Peanut oil was used as the fat representative and agar as the water representative. Gadolinium Chloride III and Sodium Chloride were added to the agar solution to adjust the relaxation times and the medium conductivity, respectively. Peanut oil was added to the agar solution at different percentages: 0%, 3%, 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% and 100%. The phantom was scanned on a 1.5 T GE Optima 450W with the body coil using a multi-gradient echo sequence. Water/fat separation was performed while correcting for main field (B0) inhomogeneity and T2* relaxation; B1+ inhomogeneities were ignored. The phantom was subsequently scanned on a Philips Brilliance CT Big Bore. The MR-corrected fat signal from all vials was normalized to the 100% fat signal. CT Hounsfield values were then compared to those obtained from the normalized MR-corrected fat values as well as to the phantom preparation for validation. Results: Good agreement was found between CT HU and the MR-extracted fat values (R² = 0.98). CT HU also showed excellent agreement with the prepared fat fractions (R² = 0.99). Vials with 70%, 80% and 90% fat showed inhomogeneous distributions; their results were nevertheless included for completeness. Conclusion: Quantitative MRI water/fat imaging can potentially be used to extract relative tissue density. Further in-vivo validation is required.

  8. Retrospective Analog Year Analyses Using NASA Satellite Precipitation and Soil Moisture Data to Improve USDA's World Agricultural Supply and Demand Estimates

    Science.gov (United States)

    Teng, William; Shannon, Harlan; Mladenova, Iliana; Fang, Fan

    2010-01-01

    A primary goal of the U.S. Department of Agriculture (USDA) is to expand markets for U.S. agricultural products and support global economic development. The USDA World Agricultural Outlook Board (WAOB) supports this goal by coordinating monthly World Agricultural Supply and Demand Estimates (WASDE) for the U.S. and major foreign producing countries. Because weather has a significant impact on crop progress, conditions, and production, WAOB prepares frequent agricultural weather assessments, in a GIS-based, Global Agricultural Decision Support Environment (GLADSE). The main goal of this project, thus, is to improve WAOB's estimates by integrating NASA remote sensing soil moisture observations and research results into GLADSE (See diagram below). Soil moisture is currently a primary data gap at WAOB.

  9. Quantitative estimates of coral reef substrate and species type derived objectively from photographic images taken at twenty-eight sites in the Hawaiian islands, 2002-2004 (NODC Accession 0002313)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of CRAMP surveys taken in 2002-2004 and includes quantitative estimates of substrate and species type. From the data percent coverage of a...

  10. An attempt and significance of using scandium (Sc) indication for quantitative estimation of soil ingested by pastured cattle

    International Nuclear Information System (INIS)

    Koyama, Takeo; Sudo, Madoka; Miyamoto, Susumu; Kikuchi, Takeaki; Takahashi, Masayoshi; Kuma, Tadashi.

    1985-01-01

    Pastured beef cattle constantly ingest soil together with grass, and the dried grass and silage used in winter also contain some soil. Sc occurs in soil in much greater amounts than in grass, is not absorbed in the digestive tract, and its content can be determined accurately by activation analysis. In view of this, a technique is devised that uses Sc as an indicator for estimating the amount of soil ingested by cattle, and this new method is found to be better than the conventional one using Ti as the indicator. Dung was collected from the same cattle at the end of the pastured and housed periods; the samples were dried, ground, activated and analysed. On the basis of this analysis, the amount of soil ingested at the end of the pastured and housed periods is estimated at 106 ± 120 and 129 ± 171 g/day, respectively, which broadly agrees with previously reported values. An evaluation of the amounts of Se and Zn taken in by cattle from soil is also carried out. (Nogami, K.)
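    The Sc balance reduces to dividing the fecal Sc output (minus the small feed contribution) by the Sc concentration of soil, relying on Sc passing through the gut unabsorbed. All numbers below are hypothetical:

```python
# Soil-ingestion estimate from a scandium balance: Sc is essentially
# unabsorbed in the gut and far more abundant in soil than in grass,
# so nearly all fecal Sc traces back to ingested soil.
sc_soil = 12.0        # ug Sc per g soil (from activation analysis)
sc_feed = 0.02        # ug Sc per g dry feed (small grass contribution)
feces_dry = 2200.0    # g dry feces excreted per day
sc_feces = 0.60       # ug Sc per g dry feces
feed_intake = 9000.0  # g dry feed ingested per day

fecal_sc = feces_dry * sc_feces                 # total Sc excreted, ug/day
feed_sc = feed_intake * sc_feed                 # Sc contributed by feed
soil_ingested = (fecal_sc - feed_sc) / sc_soil  # g soil per day
print(round(soil_ingested))                     # → 95
```

With these invented inputs the estimate lands near 95 g/day, the same order as the 106 ± 120 g/day reported above.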

  11. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Directory of Open Access Journals (Sweden)

    Philip J Kellman

    Full Text Available Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. 
The results indicate the plausibility of using objective image metrics to predict expert performance.

  12. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Science.gov (United States)

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance.

  13. Quantitative Estimation of Above Ground Crop Biomass using Ground-based, Airborne and Spaceborne Low Frequency Polarimetric Synthetic Aperture Radar

    Science.gov (United States)

    Koyama, C.; Watanabe, M.; Shimada, M.

    2016-12-01

Estimation of crop biomass is one of the important challenges in environmental remote sensing related to agricultural as well as hydrological and meteorological applications. Usually passive optical data (photographs, spectral data) operating in the visible and near-infrared bands is used for such purposes. The virtue of optical remote sensing for yield estimation, however, is rather limited, as the visible light can only provide information about the chemical characteristics of the canopy surface. Low frequency microwave signals with wavelengths longer than 20 cm have the potential to penetrate through the canopy and provide information about the whole vertical structure of vegetation, from the top of the canopy down to the very soil surface. This phenomenon has been well known and exploited to detect targets under vegetation in the military radar application known as FOPEN (foliage penetration). With the availability of polarimetric interferometric SAR data, the use of PolInSAR techniques to retrieve vertical vegetation structures has become an attractive tool. However, PolInSAR is still highly experimental and suitable data is not yet widely available. In this study we focus on the use of operational dual-polarization L-band (1.27 GHz) SAR, which has been available worldwide since the launch of Japan's Advanced Land Observing Satellite (ALOS, 2006-2011). Since 2014, ALOS-2 has continued to deliver this kind of partial polarimetric data for the entire land surface. In addition to these spaceborne data sets we use airborne L-band SAR data acquired by the Japanese Pi-SAR-L2 as well as ultra-wideband (UWB) ground based SAR data operating in the frequency range from 1-4 GHz. By exploiting the complex dual-polarization [C2] covariance matrix information, the scattering contributions from the canopy can be well separated from the ground reflections, allowing for the establishment of semi-empirical relationships between measured radar reflectivity and the amount of fresh-weight above-ground biomass.
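As a minimal sketch of the dual-polarization formalism mentioned above: the C2 covariance matrix is the multilook average of the outer product of the scattering vector k = [S_HH, S_HV] with its conjugate transpose. The complex sample values below are synthetic stand-ins, not calibrated ALOS-2 measurements.

```python
# Sketch: forming the dual-pol covariance matrix C2 = <k k^H> from
# scattering vectors k = (S_HH, S_HV). Sample values are synthetic.

def c2_matrix(samples):
    """Multilook 2x2 covariance matrix from (s_hh, s_hv) complex pairs."""
    n = len(samples)
    c = [[0j, 0j], [0j, 0j]]
    for s_hh, s_hv in samples:
        k = (s_hh, s_hv)
        for i in range(2):
            for j in range(2):
                c[i][j] += k[i] * k[j].conjugate() / n
    return c

looks = [(0.8 + 0.1j, 0.20 + 0.05j),
         (0.7 - 0.2j, 0.25 - 0.10j),
         (0.9 + 0.0j, 0.15 + 0.10j)]
c2 = c2_matrix(looks)
# Diagonal terms are real backscatter powers; the HV/HH power ratio is one
# simple indicator of canopy (volume) versus ground scattering:
print(c2[1][1].real / c2[0][0].real)
```

The full canopy/ground separation in the study is a semi-empirical decomposition; this sketch only shows the matrix on which such decompositions operate.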

  14. Estimation of pulmonary artery pressure in patients with primary pulmonary hypertension by quantitative analysis of magnetic resonance images.

    Science.gov (United States)

    Murray, T I; Boxt, L M; Katz, J; Reagan, K; Barst, R J

    1994-01-01

The use of magnetic resonance (MR) images for estimating mean pulmonary artery pressure (PAP) was tested by comparing main pulmonary artery (MPA) and middescending thoracic aorta (AO) caliber in 12 patients with primary pulmonary hypertension (PPH) with measurements made in eight other patients who were observed for diseases other than heart disease (controls). The ratio MPA/AO and the ratios of vessel caliber normalized to body surface area (MPAI and AOI, respectively) were computed. The PAP was obtained in all PPH patients and compared with caliber measurements. The PPH MPA (3.6 +/- 0.8 cm) was significantly larger than the control MPA (2.9 +/- 0.3 cm, p = 0.02); the PPH MPAI (2.8 +/- 0.7 cm/M2) was significantly greater than the control MPAI (1.7 +/- 0.2 cm/M2, p < 0.0001). Control AO (2.2 +/- 0.3 cm) was significantly greater than PPH AO (1.6 +/- 0.4 cm, p < 0.0001); there was no significant difference between control AOI (1.3 +/- 0.2 cm/M2) and PPH AOI (1.2 +/- 0.2 cm/M2, p = 0.25). The PPH MPA/AO (2.3 +/- 0.6) was significantly greater than the control MPA/AO (1.3 +/- 0.1, p < 0.0001); overlap between MPA in the two groups was eliminated by indexing values to AO caliber (MPA/AO). Among PPH patients there was a strong correlation between PAP and MPA/AO (PAP = 24 x MPA/AO + 3.7, r = 0.7, p < 0.01). Increased MPA/AO denotes the presence of pulmonary hypertension and may be used to estimate PAP.
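The reported regression can be applied directly; a small sketch, using the coefficients from the abstract (PAP = 24 x MPA/AO + 3.7) and hypothetical caliber measurements near the reported PPH group means:

```python
# Sketch: applying the reported MPA/AO regression to estimate mean PAP.
# Coefficients come from the abstract; the input measurements are
# hypothetical example values, not patient data.

def estimate_pap(mpa_cm: float, ao_cm: float) -> float:
    """Estimate mean pulmonary artery pressure (mmHg) from MR calibers."""
    mpa_ao = mpa_cm / ao_cm          # caliber ratio, dimensionless
    return 24.0 * mpa_ao + 3.7       # regression from the study (r = 0.7)

# Hypothetical PPH-like measurements (MPA = 3.6 cm, AO = 1.6 cm):
pap = estimate_pap(3.6, 1.6)
print(round(pap, 1))  # 57.7
```

Note that with r = 0.7 the regression explains roughly half the variance, so such an estimate is indicative rather than a substitute for catheterization.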

  15. Secondary dentine as a sole parameter for age estimation: Comparison and reliability of qualitative and quantitative methods among North Western adult Indians

    Directory of Open Access Journals (Sweden)

    Jasbir Arora

    2016-06-01

Full Text Available The indestructible nature of teeth against most environmental abuses makes them useful in disaster victim identification (DVI). The present study was undertaken to examine the reliability of Gustafson's qualitative method and Kedici's quantitative method of measuring secondary dentine for age estimation among North Western adult Indians. 196 (M = 85; F = 111) single-rooted teeth were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Ground sections were prepared and the amount of secondary dentine formed was scored qualitatively according to Gustafson's (0–3) scoring system (method 1) and quantitatively following Kedici's micrometric measurement method (method 2). Out of 196 teeth, 180 samples (M = 80; F = 100) were found to be suitable for measuring secondary dentine following Kedici's method. Absolute mean error of age was calculated by both methodologies. Results clearly showed that in the pooled data, method 1 gave an error of ±10.4 years whereas method 2 exhibited an error of approximately ±13 years. A statistically significant difference was noted in the absolute mean error of age between the two methods of measuring secondary dentine for age estimation. Further, it was also revealed that teeth extracted for periodontal reasons severely decreased the accuracy of Kedici's method; however, the disease had no effect on age estimated by Gustafson's method. No significant gender differences were noted in the absolute mean error of age by either method, which suggests that there is no need to separate data on the basis of gender.

  16. Assessing Hourly Precipitation Forecast Skill with the Fractions Skill Score

    Science.gov (United States)

    Zhao, Bin; Zhang, Bo

    2018-02-01

Statistical methods for category (yes/no) forecasts, such as the Threat Score, are typically used in the verification of precipitation forecasts. However, these standard methods are affected by the so-called "double-penalty" problem caused by slight displacements in either space or time with respect to the observations. Spatial techniques have recently been developed to help solve this problem. The fractions skill score (FSS), a neighborhood spatial verification method, directly compares the fractional coverage of events in windows surrounding the observations and forecasts. We applied the FSS to hourly precipitation verification by taking hourly forecast products from the GRAPES (Global/Regional Assimilation Prediction System) regional model and quantitative precipitation estimation products from the National Meteorological Information Center of China during July and August 2016, and investigated the difference between these results and those obtained with the traditional category score. We found that the model spin-up period affected the assessment of stability. Systematic errors had an insignificant role in the fraction Brier score and could be ignored. The dispersion of observations followed a diurnal cycle and the standard deviation of the forecast had a similar pattern to the reference maximum of the fraction Brier score. The correlation coefficient between the forecasts and the observations behaved similarly to the FSS; that is, the FSS may be a useful index for indicating correlation. Compared with the traditional skill score, the FSS has obvious advantages in distinguishing differences in precipitation time series, especially in the assessment of heavy rainfall.
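The FSS described above can be sketched in a few lines: compute the fraction of "rain" cells in a neighborhood window around every grid cell for both fields, then compare the two fraction fields with a Brier-type MSE against a no-skill reference. The toy grids and window size below are invented for illustration.

```python
# Minimal sketch of the fractions skill score (FSS) for binary rain/no-rain
# fields on a regular grid (1 = precipitation above threshold).

def fractions(field, n):
    """Fraction of rain cells in a (2n+1)x(2n+1) window around each cell."""
    rows, cols = len(field), len(field[0])
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            cells = [field[a][b]
                     for a in range(max(0, i - n), min(rows, i + n + 1))
                     for b in range(max(0, j - n), min(cols, j + n + 1))]
            row.append(sum(cells) / len(cells))
        out.append(row)
    return out

def fss(obs, fcst, n):
    fo = [v for row in fractions(obs, n) for v in row]
    ff = [v for row in fractions(fcst, n) for v in row]
    mse = sum((a - b) ** 2 for a, b in zip(fo, ff)) / len(fo)
    mse_ref = sum(a * a + b * b for a, b in zip(fo, ff)) / len(fo)
    return 1.0 - mse / mse_ref if mse_ref > 0 else 1.0

# Forecast displaced one column relative to the observations:
obs  = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
fcst = [[0, 0, 1], [0, 0, 1], [0, 0, 1]]
print(fss(obs, fcst, 0))  # pointwise comparison: double penalty, no skill
print(fss(obs, fcst, 1))  # 1-cell neighborhood: much higher skill
```

The displaced-rain-band example shows exactly the double-penalty effect the abstract describes: the pointwise score collapses to zero while the neighborhood score remains high.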

  17. Quantitative testing of the methodology for genome size estimation in plants using flow cytometry: a case study of the Primulina genus

    Directory of Open Access Journals (Sweden)

Jing Wang

    2015-05-01

Full Text Available Flow cytometry (FCM) is a commonly used method for estimating genome size in many organisms. The use of flow cytometry in plants is influenced by endogenous fluorescence inhibitors, which may cause an inaccurate estimation of genome size and thus falsify the relationship between genome size and phenotypic traits/ecological performance. Quantitative optimization of FCM methodology minimizes such errors, yet there are few studies detailing this methodology. We selected the genus Primulina, one of the most representative and diverse genera of the Old World Gesneriaceae, to evaluate the effect of methodology on determining genome size. Our results showed that buffer choice significantly affected genome size estimation in six out of the eight species examined and altered the 2C-value (DNA content) by as much as 21.4%. The staining duration and propidium iodide (PI) concentration slightly affected the 2C-value. Our experiments showed better histogram quality when the samples were stained for 40 minutes at a PI concentration of 100 µg ml-1. The quality of the estimates was not improved by one-day incubation in the dark at 4 °C or by centrifugation. Thus, our study determined an optimum protocol for genome size measurement in Primulina: LB01 buffer supplemented with 100 µg ml-1 PI and stained for 40 minutes. This protocol also demonstrated high universality in other Gesneriaceae genera. We report the genome size of nine Gesneriaceae species for the first time. The results showed substantial genome size variation both within and among the species, with the 2C-value ranging between 1.62 and 2.71 pg. Our study highlights the necessity of optimizing the FCM methodology prior to obtaining reliable genome size estimates in a given taxon.

  18. Liquid chromatography/tandem mass spectrometry method for quantitative estimation of solutol HS15 and its applications

    Directory of Open Access Journals (Sweden)

    V. Vijaya Bhaskar

    2015-04-01

Full Text Available A rapid, sensitive and selective pseudoMRM (pMRM)-based method for the determination of solutol HS15 (SHS15) in rat plasma was developed using liquid chromatography/tandem mass spectrometry (LC–MS/MS). The most abundant ions corresponding to SHS15 free polyethyleneglycol (PEG) oligomers at m/z 481, 525, 569, 613, 657, 701, 745, 789, 833, 877, 921 and 965 were selected for pMRM in electrospray mode of ionization. Purity of the lipophilic and hydrophilic components of SHS15 was estimated using an evaporative light scattering detector (ELSD). Plasma concentrations of SHS15 were measured after oral administration at a 2.50 g/kg dose and intravenous administration at a 1.00 g/kg dose in male Sprague Dawley rats. SHS15 has poor oral bioavailability of 13.74% in rats. Differences in pharmacokinetics of oligomers were studied. A novel proposal was conveyed to the scientific community, whereby a formulation excipient could be analyzed as a qualifier in the analysis of new chemical entities (NCEs) to address the spiky plasma concentration profiles. Keywords: SHS15, LC–MS/MS, Spiky profiles, Validation

  19. Liquid chromatography/tandem mass spectrometry method for quantitative estimation of solutol HS15 and its applications.

    Science.gov (United States)

    Bhaskar, V Vijaya; Middha, Anil; Srivastava, Pratima; Rajagopal, Sriram

    2015-04-01

A rapid, sensitive and selective pseudoMRM (pMRM)-based method for the determination of solutol HS15 (SHS15) in rat plasma was developed using liquid chromatography/tandem mass spectrometry (LC-MS/MS). The most abundant ions corresponding to SHS15 free polyethyleneglycol (PEG) oligomers at m/z 481, 525, 569, 613, 657, 701, 745, 789, 833, 877, 921 and 965 were selected for pMRM in electrospray mode of ionization. Purity of the lipophilic and hydrophilic components of SHS15 was estimated using an evaporative light scattering detector (ELSD). Plasma concentrations of SHS15 were measured after oral administration at a 2.50 g/kg dose and intravenous administration at a 1.00 g/kg dose in male Sprague Dawley rats. SHS15 has poor oral bioavailability of 13.74% in rats. Differences in pharmacokinetics of oligomers were studied. A novel proposal was conveyed to the scientific community, where formulation excipient could be analyzed as a qualifier in the analysis of new chemical entities (NCEs) to address the spiky plasma concentration profiles.

  20. Quantitative estimation of the cost of parasitic castration in a Helisoma anceps population using a matrix population model.

    Science.gov (United States)

    Negovetich, N J; Esch, G W

    2008-10-01

    Larval trematodes frequently castrate their snail intermediate hosts. When castrated, the snails do not contribute offspring to the population, yet they persist and compete with the uninfected individuals for the available food resources. Parasitic castration should reduce the population growth rate lambda, but the magnitude of this decrease is unknown. The present study attempted to quantify the cost of parasitic castration at the level of the population by mathematically modeling the population of the planorbid snail Helisoma anceps in Charlie's Pond, North Carolina. Analysis of the model identified the life-history trait that most affects lambda, and the degree to which parasitic castration can lower lambda. A period matrix product model was constructed with estimates of fecundity, survival, growth rates, and infection probabilities calculated in a previous study. Elasticity analysis was performed by increasing the values of the life-history traits by 10% and recording the percentage change in lambda. Parasitic castration resulted in a 40% decrease in lambda of H. anceps. Analysis of the model suggests that decreasing the size at maturity was more effective at reducing the cost of castration than increasing survival or growth rates of the snails. The current matrix model was the first to mathematically describe a snail population, and the predictions of the model are in agreement with published research.
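The core quantity in a matrix population model like the one above is the population growth rate lambda, the dominant eigenvalue of the stage projection matrix, and the perturbation analysis consists of nudging a life-history entry and recording the change in lambda. A small sketch with power iteration; the 3-stage matrix entries are hypothetical, not the Helisoma anceps values.

```python
# Sketch: population growth rate (lambda) as the dominant eigenvalue of a
# stage-structured projection matrix, plus a simple +10% perturbation
# analysis. Matrix entries are hypothetical illustration values.

def dominant_eigenvalue(mat, iters=200):
    """Power iteration: dominant eigenvalue of a non-negative square matrix."""
    n = len(mat)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Hypothetical 3-stage matrix: fecundities in the top row,
# survival/growth transitions below.
A = [[0.0, 1.2, 3.0],
     [0.4, 0.2, 0.0],
     [0.0, 0.5, 0.3]]
lam = dominant_eigenvalue(A)

# Perturb stage-3 fecundity by +10% and record the relative change in lambda:
B = [row[:] for row in A]
B[0][2] *= 1.1
print(round(lam, 3), round(100 * (dominant_eigenvalue(B) - lam) / lam, 2))
```

Castration enters such a model by zeroing the fecundity of infected classes, which is how a cost like the reported 40% drop in lambda can be quantified.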

  1. Quantitative volcanic susceptibility analysis of Lanzarote and Chinijo Islands based on kernel density estimation via a linear diffusion process

    Science.gov (United States)

    Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.

    2016-06-01

Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphical and volcano-structural data, as well as on the geomorphological analysis of the bathymetric data of the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; main faults have been taken into account only in those cases where they could modify the near-surface movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures.
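The idea behind the diffusion-based estimator is that smoothing point data (vents, fissures) by linear diffusion for time t is equivalent to a Gaussian kernel density estimate with bandwidth h = sqrt(2t). A fixed-bandwidth Gaussian KDE sketch of vent density, which is a simplification of the adaptive diffusion estimator used in the study; the vent coordinates (in km) are invented:

```python
import math

# Sketch: Gaussian KDE of vent density. Diffusing the point data for time t
# equals a Gaussian KDE with bandwidth h = sqrt(2t), which motivates
# diffusion-based susceptibility mapping. Vent coordinates are hypothetical.

def kde2d(points, x, y, h):
    """Gaussian KDE (bandwidth h) evaluated at (x, y)."""
    norm = 1.0 / (2.0 * math.pi * h * h * len(points))
    return norm * sum(
        math.exp(-((x - px) ** 2 + (y - py) ** 2) / (2.0 * h * h))
        for px, py in points)

vents = [(1.0, 1.0), (1.5, 1.2), (2.0, 0.8), (5.0, 5.0)]
# Susceptibility is higher near the vent cluster than near the isolated vent:
print(kde2d(vents, 1.5, 1.0, h=1.0) > kde2d(vents, 5.0, 5.0, h=1.0))  # True
```

Evaluating such a density over a grid and normalizing it yields the spatial probability map that a susceptibility assessment is built on.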

  2. GPM, DPR Level 2A Ka Precipitation V03

    Data.gov (United States)

    National Aeronautics and Space Administration — The 2AKa algorithm provides precipitation estimates from the Ka radar of the Dual-Frequency Precipitation Radar on the core GPM spacecraft. The product contains two...

  3. GPM, DPR Level 2A Ku Precipitation V03

    Data.gov (United States)

    National Aeronautics and Space Administration — The 2AKu algorithm provides precipitation estimates from the Ku radar of the Dual-Frequency Precipitation Radar on the core GPM spacecraft. The product contains one...

  4. Contrast-enhanced 3T MR perfusion of musculoskeletal tumours. T1 value heterogeneity assessment and evaluation of the influence of T1 estimation methods on quantitative parameters

    Energy Technology Data Exchange (ETDEWEB)

    Gondim Teixeira, Pedro Augusto; Leplat, Christophe; Verbizier, Jacques de; Blum, Alain [Hopital Central, CHRU-Nancy, Service d' Imagerie Guilloz, Nancy (France); Chen, Bailiang; Beaumont, Marine [Universite de Lorraine, Laboratoire IADI, UMR S 947, Nancy (France); Badr, Sammy; Cotten, Anne [CHRU Lille Centre de Consultations et d' Imagerie de l' Appareil Locomoteur, Department of Radiology and Musculoskeletal Imaging, Lille (France)

    2017-12-15

To evaluate intra-tumour and striated muscle T1 value heterogeneity and the influence of different methods of T1 estimation on the variability of quantitative perfusion parameters. Eighty-two patients with a histologically confirmed musculoskeletal tumour were prospectively included in this study and, with ethics committee approval, underwent contrast-enhanced MR perfusion and T1 mapping. T1 value variations in viable tumour areas and in normal-appearing striated muscle were assessed. In 20 cases, normal muscle perfusion parameters were calculated using three different methods: signal based, and gadolinium concentration based on fixed and on variable T1 values. Tumour and normal muscle T1 values were significantly different (p = 0.0008). T1 value heterogeneity was higher in tumours than in normal muscle (variation of 19.8% versus 13%). The T1 estimation method had a considerable influence on the variability of perfusion parameters. Fixed T1 values yielded higher coefficients of variation than variable T1 values (mean 109.6 ± 41.8% and 58.3 ± 14.1%, respectively). Area under the curve was the least variable parameter (36%). T1 values in musculoskeletal tumours are significantly different from, and more heterogeneous than, those of normal muscle. Patient-specific T1 estimation is needed for direct inter-patient comparison of perfusion parameters. (orig.)

  5. A hybrid method for the estimation of ground motion in sedimentary basins: Quantitative modelling for Mexico City

    International Nuclear Information System (INIS)

    Faeh, D.; Suhadolc, P.; Mueller, S.; Panza, G.F.

    1994-04-01

To estimate the ground motion in two-dimensional, laterally heterogeneous, anelastic media, a hybrid technique has been developed which combines modal summation and the finite difference method. In the calculation of the local wavefield due to a seismic event, both for small and large epicentral distances, it is possible to take into account the source, path and local soil effects. As a practical application we have simulated the ground motion in Mexico City caused by the Michoacan earthquake of September 19, 1985. By studying the one-dimensional response of the two sedimentary layers present in Mexico City, it is possible to explain the difference in amplitudes observed between records for receivers inside and outside the lake-bed zone. These simple models show that the sedimentary cover produces the concentration of high-frequency waves (0.2-0.5 Hz) on the horizontal components of motion. The large-amplitude coda of ground motion observed inside the lake-bed zone, and the spectral ratios between signals observed inside and outside the lake-bed zone, can only be explained by two-dimensional models of the sedimentary basin. In such models, the ground motion is mainly controlled by the response of the uppermost clay layer. The synthetic signals explain the major characteristics (relative amplitudes, spectral ratios, and frequency content) of the observed ground motion. The large-amplitude coda of the ground motion observed in the lake-bed zone can be explained as resonance effects and the excitation of local surface waves in the laterally heterogeneous clay layer. Also, for the 1985 Michoacan event, the energy contributions of the three subevents are important to explain the observed durations. (author). 39 refs, 15 figs, 1 tab

  6. Quantitative estimation of the pathways followed in the conversion to glycogen of glucose administered to the fasted rat

    International Nuclear Information System (INIS)

    Scofield, R.F.; Kosugi, K.; Schumann, W.C.; Kumaran, K.; Landau, B.R.

    1985-01-01

When [6-³H,6-¹⁴C]glucose was given in glucose loads to fasted rats, the average ³H/¹⁴C ratios in the glycogens deposited in their livers, relative to that in the glucoses administered, were 0.85 and 0.88. When [3-³H,3-¹⁴C]lactate was given in trace quantity along with unlabeled glucose loads, the average ³H/¹⁴C ratio in the glycogens deposited was 0.08. This indicates that a major fraction of the carbons of the glucose loads was converted to liver glycogen without first being converted to lactate. When [3-³H,6-¹⁴C]glucose was given in glucose loads, the ³H/¹⁴C ratios in the glycogens deposited averaged 0.44. This indicates that a significant amount of H bound to C-3, but not C-6, of glucose is removed within the liver in the conversion of the carbons of the glucose to glycogen. This can occur in the pentose cycle and by cycling of glucose-6-P via triose phosphates. The contributions of these pathways were estimated by giving glucose loads labeled with [1-¹⁴C]glucose, [2-¹⁴C]glucose, [5-¹⁴C]glucose, and [6-¹⁴C]glucose and degrading the glucoses obtained by hydrolyzing the glycogens that deposited. Between 4 and 9% of the glucose utilized by the liver was utilized in the pentose cycle. While these are relatively small percentages, a major portion of the difference between the ratios obtained with [3-³H]glucose and with [6-³H]glucose is attributable to metabolism in the pentose cycle.

  7. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    Science.gov (United States)

    He, Bin; Frey, Eric C.

    2010-06-01

Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were
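The misregistration experiment described above can be illustrated with a much-simplified 1D analogue: shift an activity profile relative to a fixed VOI and record the percent error in the summed activity. The Gaussian "organ" profile, VOI bounds and whole-voxel shifts below are illustrative stand-ins for the 3D NCAT setup with sub-voxel shifts.

```python
import math

# Simplified 1D sketch of the VOI-misregistration experiment: shift the
# activity profile against a fixed VOI and record the percent activity error.
# The profile and VOI bounds are hypothetical, not the NCAT phantom.

profile = [math.exp(-((i - 50) ** 2) / (2.0 * 8.0 ** 2)) for i in range(100)]
voi = range(38, 63)                       # VOI roughly covering the organ
true_activity = sum(profile[i] for i in voi)

for shift in (-2, -1, 0, 1, 2):           # whole-voxel shifts
    # circular shift of the profile by `shift` voxels
    shifted = profile[-shift:] + profile[:-shift] if shift else profile
    est = sum(shifted[i] for i in voi)
    err = 100.0 * (est - true_activity) / true_activity
    print(shift, round(err, 2))
```

Even this toy version reproduces the qualitative behavior studied in the paper: error grows with the magnitude of the shift as organ activity leaks out of the fixed VOI.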

  8. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    Energy Technology Data Exchange (ETDEWEB)

    He Bin [Division of Nuclear Medicine, Department of Radiology, New York Presbyterian Hospital-Weill Medical College of Cornell University, New York, NY 10021 (United States); Frey, Eric C, E-mail: bih2006@med.cornell.ed, E-mail: efrey1@jhmi.ed [Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutions, Baltimore, MD 21287-0859 (United States)

    2010-06-21

Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed {sup 111}In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations

  9. Assessment of the Latest GPM-Era High-Resolution Satellite Precipitation Products by Comparison with Observation Gauge Data over the Chinese Mainland

    Directory of Open Access Journals (Sweden)

    Shaowei Ning

    2016-10-01

Full Text Available The Global Precipitation Measurement (GPM) Core Observatory that was launched on 27 February 2014 ushered in a new era for estimating precipitation from satellites. Based on their high spatial–temporal resolution and near-global coverage, satellite-based precipitation products have been applied in many research fields. The goal of this study was to quantitatively compare two of the latest GPM-era satellite precipitation products (GPM IMERG and GSMaP-Gauge Ver. 6) with a network of 840 precipitation gauges over the Chinese mainland. Direct comparisons of satellite-based precipitation products with rain gauge observations over a 20 month period from April 2014 to November 2015 at 0.1° and daily/monthly resolutions showed the following results: Both of the products were capable of capturing the overall spatial pattern of the 20 month mean daily precipitation, which was characterized by a decreasing trend from the southeast to the northwest. GPM IMERG overestimated precipitation by approximately 0.09 mm/day while GSMaP-Gauge Ver. 6 underestimated precipitation by approximately 0.04 mm/day. The two satellite-based precipitation products performed better over wet southern regions than over dry northern regions. They also showed better performance in summer than in winter. In terms of mean error, root mean square error, correlation coefficient, and probability of detection, GSMaP-Gauge was better able to estimate precipitation and had more stable quality results than GPM IMERG on both daily and monthly scales. GPM IMERG was more sensitive to conditions of no rain or light rainfall and demonstrated good capability of capturing the behavior of extreme precipitation events. Overall, the results revealed some limitations of these two latest satellite-based precipitation products when used over the Chinese mainland, helping to characterize some of the error features in these datasets for potential users.
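The four comparison statistics named above (mean error, RMSE, correlation coefficient, probability of detection) are standard and easy to sketch; the daily gauge/satellite series and the 0.1 mm/day rain threshold below are made-up illustration values.

```python
import math

# Sketch of gauge-versus-satellite comparison statistics: mean error (ME),
# root mean square error (RMSE), correlation coefficient (CC), and
# probability of detection (POD). Series values are hypothetical (mm/day).

def compare(gauge, sat, rain_threshold=0.1):
    n = len(gauge)
    me = sum(s - g for g, s in zip(gauge, sat)) / n
    rmse = math.sqrt(sum((s - g) ** 2 for g, s in zip(gauge, sat)) / n)
    mg, ms = sum(gauge) / n, sum(sat) / n
    cov = sum((g - mg) * (s - ms) for g, s in zip(gauge, sat))
    var = math.sqrt(sum((g - mg) ** 2 for g in gauge) *
                    sum((s - ms) ** 2 for s in sat))
    cc = cov / var if var else 0.0
    hits = sum(1 for g, s in zip(gauge, sat)
               if g >= rain_threshold and s >= rain_threshold)
    obs_events = sum(1 for g in gauge if g >= rain_threshold)
    pod = hits / obs_events if obs_events else 0.0
    return me, rmse, cc, pod

gauge = [0.0, 2.1, 5.4, 0.0, 12.3, 0.4, 0.0]
sat   = [0.2, 1.8, 6.0, 0.0, 10.5, 0.0, 0.1]
print(compare(gauge, sat))
```

A negative ME corresponds to the kind of net underestimation reported for GSMaP-Gauge, while POD captures how reliably observed rain days are detected.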

  10. Quantitative Estimation of Yeast on Maxillary Denture in Patients with Denture Stomatitis and the Effect of Chlorhexidine Gluconate in Reduction of Yeast

    Directory of Open Access Journals (Sweden)

    Jaykumar R Gade

    2011-01-01

Full Text Available Denture stomatitis is a condition associated with the wearing of a denture. Predisposing factors leading to denture stomatitis include poor oral hygiene, an ill-fitting denture and relief areas. Around 30 patients with denture stomatitis were advised to rinse with chlorhexidine gluconate mouthwash for 14 days and were directed to immerse the upper denture in the chlorhexidine solution for 8 hours. The samples were collected by scraping the maxillary denture in saline at three intervals: prior to, at the end of 24 hours of, and after 14 days of treatment. They were then inoculated, and quantitative estimation of the yeast growth on Sabouraud's dextrose agar plates was done. It was observed that after a period of 14 days, there was a reduction in the growth of yeast and also an improvement in the clinical picture of the oral mucosa.

  11. QUANTITATIVE ESTIMATION OF SOIL EROSION IN THE DRĂGAN RIVER WATERSHED WITH THE U.S.L.E. TYPE ROMSEM MODEL

    Directory of Open Access Journals (Sweden)

    Csaba HORVÁTH

    2008-05-01

    Full Text Available Sediment delivered from water erosion causes substantial waterway damage and water quality degradation. A number of factors such as drainage area size, basin slope, climate, and land use/land cover may affect sediment delivery processes. The goal of this study is to define a computationally effective, suitable soil erosion model in the Drăgan river watershed for future sedimentation studies. A Geographic Information System (GIS) is used to determine the Universal Soil Loss Equation (U.S.L.E.) model values of the studied water basin. The methods and approaches used in this study are expected to be applicable in future research and to watersheds in other regions.
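The U.S.L.E. model underlying ROMSEM estimates mean annual soil loss as a product of empirical factors. A minimal sketch follows; the factor values in the test are illustrative placeholders, not results from the Drăgan watershed study.

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: A = R * K * LS * C * P.

    A  -- mean annual soil loss (t/ha/yr)
    R  -- rainfall erosivity factor
    K  -- soil erodibility factor
    LS -- combined slope length and steepness factor
    C  -- cover-management factor
    P  -- support practice factor
    """
    return R * K * LS * C * P
```

In a GIS workflow each factor is a raster layer, and the multiplication is applied cell by cell to map erosion across the watershed.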

  12. Identification and Quantification of Uncertainties Related to Using Distributed X-band Radar Estimated Precipitation as input in Urban Drainage Models

    DEFF Research Database (Denmark)

    Pedersen, Lisbeth

    The Local Area Weather Radar (LAWR) is a small-scale weather radar providing distributed measurements of rainfall, primarily for use as input in hydrological applications. As with any other weather radar, the LAWR measurement of rainfall is an indirect measurement, since it does not measure the rainfall... are quantified using statistical methods. Furthermore, the present calibration method is reviewed, and a new extended calibration method has been developed and tested, resulting in improved rainfall estimates. As part of the calibration analysis, a number of elements affecting the LAWR performance were identified... in connection with boundary assignment, besides a generally improved understanding of the benefits and pitfalls of using distributed rainfall data as input to models. In connection with the use of LAWR data in an urban drainage context, the potential for using LAWR data for extreme rainfall statistics has been studied...

  13. Ecosystem services - from assessments of estimations to quantitative, validated, high-resolution, continental-scale mapping via airborne LIDAR

    Science.gov (United States)

    Zlinszky, András; Pfeifer, Norbert

    2016-04-01

    service potential" which is the ability of the local ecosystem to deliver various functions (water retention, carbon storage etc.), but can't quantify how much of these are actually used by humans or what the estimated monetary value is. Due to its ability to measure both terrain relief and vegetation structure in high resolution, airborne LIDAR supports direct quantification of the properties of an ecosystem that lead to it delivering a given service (such as biomass, water retention, micro-climate regulation or habitat diversity). In addition, its