WorldWideScience

Sample records for activity quantitative estimates

  1. Principles of Quantitative Estimation of the Chaperone-Like Activity

    2002-01-01

    Molecular chaperones are able to interact with unfolded states of a protein molecule, preventing their aggregation and facilitating folding of the polypeptide chain into the native structure. An understanding of the mechanism of protein aggregation is required to estimate the efficiency of action of chaperones in test systems based on the suppression of aggregation of protein substrates. The kinetic regimes of protein aggregation are discussed. Analysis of protein aggregation kinetics shows that, after the lag phase, aggregation follows, as a rule, first-order kinetics. Methods for the quantitative characterization of the ability of chaperones to prevent aggregation of protein substrates have been elaborated.
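
The first-order kinetics mentioned above can be illustrated with a short fit. The sketch below (Python) uses a hypothetical turbidity trace and the generic parameterization A(t) = A_lim(1 - exp(-k(t - t0))) after a lag time t0; it is only an illustration of that kinetic form, not the authors' own test system.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, a_lim, k, t0):
    """First-order aggregation after a lag phase t0: A(t) = A_lim*(1 - exp(-k*(t - t0)))."""
    return np.where(t < t0, 0.0, a_lim * (1.0 - np.exp(-k * np.clip(t - t0, 0.0, None))))

# Hypothetical light-scattering (turbidity) readings versus time (min)
t = np.linspace(0.0, 60.0, 31)
rng = np.random.default_rng(0)
a_obs = first_order(t, 1.0, 0.12, 5.0) + rng.normal(0.0, 0.02, t.size)

(a_lim, k, t0), _ = curve_fit(first_order, t, a_obs, p0=[1.0, 0.1, 3.0])
print(f"A_lim = {a_lim:.2f}, k = {k:.3f} 1/min, lag = {t0:.1f} min")
```

In a chaperone test system, the same fit with and without added chaperone would be compared through the fitted rate constant and limiting amplitude.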

  2. Health Impacts of Increased Physical Activity from Changes in Transportation Infrastructure: Quantitative Estimates for Three Communities.

    Mansfield, Theodore J; MacDonald Gibson, Jacqueline

    2015-01-01

    Recently, two quantitative tools have emerged for predicting the health impacts of projects that change population physical activity: the Health Economic Assessment Tool (HEAT) and Dynamic Modeling for Health Impact Assessment (DYNAMO-HIA). HEAT has been used to support health impact assessments of transportation infrastructure projects, but DYNAMO-HIA has not been previously employed for this purpose nor have the two tools been compared. To demonstrate the use of DYNAMO-HIA for supporting health impact assessments of transportation infrastructure projects, we employed the model in three communities (urban, suburban, and rural) in North Carolina. We also compared DYNAMO-HIA and HEAT predictions in the urban community. Using DYNAMO-HIA, we estimated benefit-cost ratios of 20.2 (95% C.I.: 8.7-30.6), 0.6 (0.3-0.9), and 4.7 (2.1-7.1) for the urban, suburban, and rural projects, respectively. For a 40-year time period, the HEAT predictions of deaths avoided by the urban infrastructure project were three times as high as DYNAMO-HIA's predictions due to HEAT's inability to account for changing population health characteristics over time. Quantitative health impact assessment coupled with economic valuation is a powerful tool for integrating health considerations into transportation decision-making. However, to avoid overestimating benefits, such quantitative HIAs should use dynamic, rather than static, approaches. PMID:26504832

  3. Acute toxicity estimation by calculation--Tubifex assay and quantitative structure-activity relationships.

    Tichý, Milon; Rucki, Marian; Hanzlíková, Iveta; Roth, Zdenek

    2008-11-01

    A quantitative structure-activity relationship (QSAR) model dependent on log P(n-octanol/water), or log P(OW), was developed for the acute toxicity index EC50, the median effective concentration measured as inhibition of movement of the oligochaete Tubifex tubifex after 3 min exposure, EC50(Tt) (mol/L): log EC50(Tt) = -0.809 (±0.035) log P(OW) - 0.495 (±0.060); n = 82, r = 0.931, r2 = 0.867, residual standard deviation of the estimate 0.315. The learning series for the QSAR model with the oligochaete contained alkanols, alkenols, and alkynols; saturated and unsaturated aldehydes; aniline and chlorinated anilines; phenol and chlorinated phenols; and esters. Three cross-validation procedures proved the robustness and stability of the QSAR model with respect to the chemical structure of the compounds within the learning series. Predictive ability was described by q2 = 0.801 (cross-validated r2; predicted variation estimated with cross-validation) in LSO (leave-a-structurally-related-series-out) cross-validation. PMID:18522479
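
The reported regression can be applied directly to predict an EC50 from a known logP(OW); the compound and its logP value in the sketch below are hypothetical.

```python
def predicted_ec50_tt(log_p_ow: float) -> float:
    """Predicted Tubifex tubifex 3-min EC50 (mol/L) from the reported QSAR:
    log EC50(Tt) = -0.809*logP(OW) - 0.495."""
    return 10 ** (-0.809 * log_p_ow - 0.495)

# Example: a hypothetical compound with logP(octanol/water) = 2.0
print(f"EC50 ~ {predicted_ec50_tt(2.0):.2e} mol/L")
```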

  4. Estimation of effluent quality parameters from an activated sludge system using quantitative image analysis

    Mesquita, D. P.; A.L. Amaral; Ferreira, Eugénio C.

    2016-01-01

    The efficiency of an activated sludge system is generally evaluated by determining several key parameters related to organic matter removal, nitrification and/or denitrification processes. Off-line methods for the determination of these parameters are commonly labor- and time-consuming and environmentally harmful. In contrast, quantitative image analysis (QIA) has been recognized as a prompt method for assessing activated sludge contents and structure. In the present study an activated ...

  5. Quantitative comparison of activity calculation methods for the selection of most reliable radionuclide inventory estimation

    An accurate radionuclide inventory of radioactive waste is important for reliable waste management. However, estimation of radionuclide concentrations in drummed radioactive waste is difficult and unreliable because of the difficulty of direct detection, high cost, and radiation exposure of sampling personnel. In order to overcome these difficulties, scaling factors (SFs) have been used to assess the activities of radionuclides that cannot be directly analyzed. A radionuclide assay system has been operated at the KORI site since 1996, and the consolidated scaling factor method has played a dominant role in the determination of radionuclide concentrations. However, some problems still remain, such as uncertainty in the estimated scaling factor values, inaccuracy of the analyzed sample values, and disparity between the actual and ideal correlation pairs, among others. Therefore, the accuracy of the scaling factor values needs to be improved. The scope of this paper is focused on improving the accuracy and representativeness of the calculated scaling factor values based on statistical techniques. For the selection of a reliable activity determination method, the accuracy of the estimated SF values for each activity determination method is compared. From this comparison, it is recommended that the SF determination method be changed from the arithmetic mean to the geometric mean for more reliable estimation of radionuclide activity. The arithmetic mean and geometric mean methods are compared based on the data set of the KORI system
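
A minimal sketch of the arithmetic-versus-geometric-mean comparison discussed above; the scaling-factor values are hypothetical and simply mimic the log-normal scatter typical of SF data (ratios of a hard-to-measure nuclide to a key nuclide such as Co-60).

```python
import numpy as np

# Hypothetical scaling factors measured on individual waste samples; SF data
# typically scatter log-normally over an order of magnitude or more.
sf_samples = np.array([0.02, 0.05, 0.08, 0.2, 0.6, 1.5])

arithmetic_sf = sf_samples.mean()                  # pulled up by the largest values
geometric_sf = np.exp(np.log(sf_samples).mean())   # central tendency on a log scale

print(f"arithmetic mean SF = {arithmetic_sf:.3f}")
print(f"geometric  mean SF = {geometric_sf:.3f}")
```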

  6. A quantitative structure-activity approach for lipophilicity estimation of antitumor complexes of different metals using microemulsion electrokinetic chromatography.

    Foteeva, Lidia S; Trofimov, Denis A; Kuznetsova, Olga V; Kowol, Christian R; Arion, Vladimir B; Keppler, Bernhard K; Timerbaev, Andrei R

    2011-06-01

    Microemulsion electrokinetic chromatography (MEEKC) offers a valuable tool for the rapid and highly productive determination of lipophilicity for metal-based anticancer agents. In this investigation, the MEEKC technique was applied for estimation of n-octanol-water partition coefficient (logP(oct)) of a series of antiproliferative complexes of gallium(III) and iron(III) with (4)N-substituted α-N-heterocyclic thiosemicarbazones. Analysis of relationships between the experimental logP(oct) and the retention factors of compounds showed their satisfactory consistency in the case of single metal sets, as well as for both metals. Since none of available calculation programs allows for evaluating the contribution of central metal ion into logP(oct) (i.e. ΔlogP(oct)) of complexes of different metals, this parameter was measured experimentally, by the standard 'shake-flask' method. Extension of the logP(oct) programs by adding ΔlogP(oct) data resulted in good lipophilicity predictions for the complexes of gallium(III) and iron(III) included in one regression set. Comparison of metal-thiosemicarbazonates under examination in terms of logP(oct) vs. antiproliferative activities (i.e. 50% inhibitory concentration in cancer cells) provided evidence that their cytotoxic potency is associated with the ability to cross the lipid bilayer of the cell-membrane via passive diffusion. PMID:21382684

  7. Quantitative estimation of phenolic and flavonoid content and antioxidant activity of various extracts of different parts of Plumbago zeylanica Linn

    Sharma Ira

    2014-06-01

    Plumbago zeylanica (Chitrak) is used as a medicinal plant in India. The root of the plant and its constituents are credited with potential therapeutic properties, including anti-atherogenic, cardiotonic, hepatoprotective and neuroprotective properties. In recent times, interest has focused on phytochemicals as new sources of natural antioxidants. Therefore, in the present study methanolic crude extracts of stem, leaves and roots of Plumbago zeylanica were screened for their antioxidant property, phenolic content and flavonoid content. Free radical scavenging activity (antioxidant activity) was evaluated using 1,1-diphenyl-2-picrylhydrazyl (DPPH) and was measured as decolorizing activity following the trapping of the unpaired electron of DPPH. The root extract revealed significant antioxidant activity compared to the standard flavonoid (quercetin). IC50 values for antioxidant activity by the DPPH assay were found to be 72.3 μg/ml, 32.433 μg/ml and 24.6 μg/ml in stem, leaf and root extracts, respectively. The phytochemical investigation showed the presence of flavonoids and phenolics. The total phenolic and total flavonoid content was found to be maximum in leaf extracts (28.25±0.001 mg GAE/g and 2.41±0.021 CE/g, respectively). A linear correlation between total phenolic or flavonoid content and antioxidant activity was found (correlation coefficient R2 = 0.9989 and R2 = 0.9559, respectively). The findings indicate promising antioxidant activity of crude extracts of the plant, which needs further exploration for effective use in both modern and traditional systems of medicine.
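
For reference, the DPPH numbers quoted above (percent scavenging at each extract concentration, summarized as an IC50) are usually computed as in the sketch below; the absorbance readings and concentrations are hypothetical.

```python
import numpy as np

def percent_inhibition(a_control: float, a_sample: np.ndarray) -> np.ndarray:
    """DPPH scavenging (%) = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical DPPH absorbances at 517 nm for an extract dilution series (ug/mL)
conc = np.array([10.0, 25.0, 50.0, 100.0])
a_sample = np.array([0.62, 0.48, 0.29, 0.12])
a_control = 0.70

inh = percent_inhibition(a_control, a_sample)
# IC50: concentration giving 50% inhibition, here by linear interpolation
ic50 = np.interp(50.0, inh, conc)
print(f"IC50 ~ {ic50:.1f} ug/mL")
```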

  8. Thermal diffusivity estimation with quantitative pulsed phase thermography

    Ospina-Borras, J. E.; Florez-Ospina, Juan F.; Benitez-Restrepo, H. D.; Maldague, X.

    2015-05-01

    Quantitative pulsed phase thermography (PPT) has so far been used only to estimate defect parameters such as depth and thermal resistance. Here, we propose a thermal-quadrupole-based method that extends quantitative pulsed phase thermography. This approach estimates thermal diffusivity by solving an inverse problem based on non-linear least-squares estimation. The approach is tested with pulsed thermography data acquired from a composite sample. We compare our results with another technique established in the time domain. The proposed quantitative analysis with PPT provides estimates of thermal diffusivity close to those obtained with the time-domain approach. The estimation requires only a priori knowledge of the sample thickness.

  9. River Forecasting Center Quantitative Precipitation Estimate Archive

    U.S. Geological Survey, Department of the Interior — Radar-indicated, rain gage-verified and corrected hourly precipitation estimates on a corrected ~4 km HRAP grid. This archive contains hourly estimates of...

  10. Mapping quantitative trait Loci using generalized estimating equations.

    Lange, C.; Whittaker, J C

    2001-01-01

    A number of statistical methods are now available to map quantitative trait loci (QTL) relative to markers. However, no existing methodology can simultaneously map QTL for multiple nonnormal traits. In this article we rectify this deficiency by developing a QTL-mapping approach based on generalized estimating equations (GEE). Simulation experiments are used to illustrate the application of the GEE-based approach.

  11. Quantitative estimation of diacetylmorphine by preparative TLC and UV spectroscopy

    A simple and efficient method for the quantitative estimation of diacetylmorphine in narcotic products is described. Comparative TLC of narcotic specimens with standards showed the presence of morphine, monoacetylmorphine, diacetylmorphine, papaverine and noscapine. Resolution of the mixtures was achieved by preparative TLC. Bands corresponding to diacetylmorphine were scraped and eluted, the UV absorption of the extracts was measured, and the contents were quantified. (author)

  12. Correcting for bias in estimation of quantitative trait loci effects

    Ron Micha

    2005-09-01

    Estimates of quantitative trait loci (QTL) effects derived from complete genome scans are biased if no assumptions are made about the distribution of QTL effects. Bias should be reduced if estimates are derived by maximum likelihood, with the QTL effects sampled from a known distribution. The parameters of the distributions of QTL effects for nine economic traits in dairy cattle were estimated from a daughter design analysis of the Israeli Holstein population including 490 marker-by-sire contrasts. A separate gamma distribution was derived for each trait. Estimates for both the α and β parameters and their SE decreased as a function of heritability. The maximum likelihood estimates derived for the individual QTL effects using the gamma distributions for each trait were regressed relative to the least squares estimates, but the regression factor decreased as a function of the least squares estimate. On simulated data, the mean of the least squares estimates for effects with nominal 1% significance was more than twice the simulated value, while the mean of the maximum likelihood estimates was slightly lower than the mean of the simulated values. The coefficient of determination for the maximum likelihood estimates was fivefold the corresponding value for the least squares estimates.

  13. Uncertainty estimations for quantitative in vivo MRI T1 mapping

    Polders, Daniel L.; Leemans, Alexander; Luijten, Peter R.; Hoogduin, Hans

    2012-11-01

    Mapping the longitudinal relaxation time (T1) of brain tissue is of great interest for both clinical research and MRI sequence development. For an unambiguous interpretation of in vivo variations in T1 images, it is important to understand the degree of variability that is associated with the quantitative T1 parameter. This paper presents a general framework for estimating the uncertainty in quantitative T1 mapping by combining a slice-shifted multi-slice inversion recovery EPI technique with the statistical wild-bootstrap approach. Both simulations and experimental analyses were performed to validate this novel approach and to evaluate the estimated T1 uncertainty in several brain regions across four healthy volunteers. By estimating the T1 uncertainty, it is shown that the variation in T1 within anatomic regions for similar tissue types is larger than the uncertainty in the measurement. This indicates that heterogeneity of the inspected tissue and/or partial volume effects can be the main determinants for the observed variability in the estimated T1 values. The proposed approach to estimate T1 and its uncertainty without the need for repeated measurements may also prove to be useful for calculating effect sizes that are deemed significant when comparing group differences.
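
The wild-bootstrap idea used in the paper (resampling fit residuals instead of repeating measurements) can be sketched for a single-voxel inversion-recovery T1 fit. The signal model, inversion times, and noise level below are simplified placeholders, not the slice-shifted multi-slice EPI protocol of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti, a, t1):
    """Magnitude inversion-recovery signal: |A * (1 - 2*exp(-TI/T1))|."""
    return np.abs(a * (1.0 - 2.0 * np.exp(-ti / t1)))

ti = np.array([100.0, 300.0, 600.0, 1000.0, 1600.0, 2500.0, 4000.0])   # ms (hypothetical)
rng = np.random.default_rng(1)
data = ir_signal(ti, 1.0, 1200.0) + rng.normal(0.0, 0.02, ti.size)     # one synthetic voxel

popt, _ = curve_fit(ir_signal, ti, data, p0=[1.0, 1000.0])
resid = data - ir_signal(ti, *popt)

# Wild bootstrap: flip the sign of each residual at random and refit
t1_boot = []
for _ in range(200):
    signs = rng.choice([-1.0, 1.0], size=ti.size)
    p_b, _ = curve_fit(ir_signal, ti, ir_signal(ti, *popt) + signs * resid, p0=popt)
    t1_boot.append(p_b[1])

print(f"T1 = {popt[1]:.0f} ms, wild-bootstrap SD = {np.std(t1_boot):.0f} ms")
```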

  14. Study on the performance evaluation of quantitative precipitation estimation and quantitative precipitation forecast

    Yang, H.; Chang, K.; Suk, M.; Cha, J.; Choi, Y.

    2011-12-01

    Rainfall estimation and short-term (several hours) quantitative precipitation forecasting based on meteorological radar data are intensely studied topics. The Korean Peninsula has a narrow land area and complex, mountainous topography, so rainfall systems often change rapidly. Quantitative precipitation estimation (QPE) and quantitative precipitation forecasts (QPF) are crucial information for severe weather and water management. We have conducted a performance evaluation of the QPE/QPF of the Korea Meteorological Administration (KMA), which is the first step toward optimizing the QPE/QPF system in South Korea. The real-time adjusted RAR (Radar-AWS-Rainrate) system gives better agreement with the observed rain rate than the fixed Z-R relation, and additional bias correction of RAR yields slightly better results. A correlation coefficient of R2 = 0.84 is obtained between the daily accumulated observed and RAR-estimated rainfall. The RAR will be available for hydrological applications such as the water budget. The VSRF (Very Short Range Forecast) shows better performance than MAPLE (McGill Algorithm for Precipitation Nowcasting by Lagrangian Extrapolation) within 40 minutes, but MAPLE performs better than VSRF after 40 minutes. For hourly forecasts, MAPLE shows better performance than VSRF. QPE and QPF are considered most meaningful for nowcasting (1-2 hours), apart from model forecasts. Forecasts longer than 3 hours from a meteorological model are especially meaningful for applications such as water management.

  15. Handling uncertainty in quantitative estimates in integrated resource planning

    Tonn, B.E. [Oak Ridge National Lab., TN (United States); Wagner, C.G. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Mathematics

    1995-01-01

    This report addresses uncertainty in Integrated Resource Planning (IRP). IRP is a planning and decisionmaking process employed by utilities, usually at the behest of Public Utility Commissions (PUCs), to develop plans to ensure that utilities have resources necessary to meet consumer demand at reasonable cost. IRP has been used to assist utilities in developing plans that include not only traditional electricity supply options but also demand-side management (DSM) options. Uncertainty is a major issue for IRP. Future values for numerous important variables (e.g., future fuel prices, future electricity demand, stringency of future environmental regulations) cannot ever be known with certainty. Many economically significant decisions are so unique that statistically-based probabilities cannot even be calculated. The entire utility strategic planning process, including IRP, encompasses different types of decisions that are made with different time horizons and at different points in time. Because of fundamental pressures for change in the industry, including competition in generation, gone is the time when utilities could easily predict increases in demand, enjoy long lead times to bring on new capacity, and bank on steady profits. The purpose of this report is to address in detail one aspect of uncertainty in IRP: Dealing with Uncertainty in Quantitative Estimates, such as the future demand for electricity or the cost to produce a mega-watt (MW) of power. A theme which runs throughout the report is that every effort must be made to honestly represent what is known about a variable that can be used to estimate its value, what cannot be known, and what is not known due to operational constraints. Applying this philosophy to the representation of uncertainty in quantitative estimates, it is argued that imprecise probabilities are superior to classical probabilities for IRP.

  16. Quantitative estimation of Nipah virus replication kinetics in vitro

    Hassan Sharifah

    2006-06-01

    Abstract Background Nipah virus is a zoonotic virus isolated from an outbreak in Malaysia in 1998. The virus causes infections in humans, pigs, and several other domestic animals. It has also been isolated from fruit bats. The pathogenesis of Nipah virus infection is still not well described. In the present study, Nipah virus replication kinetics were estimated from infection of African green monkey kidney cells (Vero) using a one-step SYBR® Green I-based quantitative real-time reverse transcriptase-polymerase chain reaction (qRT-PCR) assay. Results The qRT-PCR had a dynamic range of at least seven orders of magnitude and can detect Nipah virus from as low as one PFU/μL. Following initiation of infection, it was estimated that Nipah virus RNA doubles every ~40 minutes and attained a peak intracellular virus RNA level of ~8.4 log PFU/μL at about 32 hours post-infection (PI). Significant extracellular Nipah virus RNA release occurred only after 8 hours PI, and the level peaked at ~7.9 log PFU/μL at 64 hours PI. The estimated rate of Nipah virus RNA release into the cell culture medium was ~0.07 log PFU/μL per hour, and less than 10% of the released Nipah virus RNA was infectious. Conclusion The SYBR® Green I-based qRT-PCR assay enabled quantitative assessment of Nipah virus RNA synthesis in Vero cells. A low rate of Nipah virus extracellular RNA release and low infectious virus yield, together with extensive syncytial formation during the infection, support a cell-to-cell spread mechanism for Nipah virus infection.
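
The reported ~40 min doubling time implies simple exponential growth of intracellular viral RNA during the growth phase; the small calculation below (with an assumed starting level of 1 log PFU/μL) shows that climbing to the reported ~8.4 log PFU/μL peak takes on the order of 16 hours of uninterrupted exponential growth.

```python
import math

# Assumed starting level of 1 log PFU/uL (hypothetical); reported peak ~8.4 log PFU/uL,
# reported doubling time ~40 min.
doublings = (8.4 - 1.0) / math.log10(2.0)   # number of doublings needed
hours = doublings * 40.0 / 60.0
print(f"~{doublings:.0f} doublings, i.e. ~{hours:.0f} h of exponential growth")
```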

  17. Quantitative Compactness Estimates for Hamilton-Jacobi Equations

    Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.

    2016-02-01

    We study quantitative compactness estimates in $W^{1,1}_{\mathrm{loc}}$ for the map $S_t$, $t > 0$, that associates with given initial data $u_0 \in \mathrm{Lip}(\mathbb{R}^N)$ the corresponding solution $S_t u_0$ of a Hamilton-Jacobi equation $u_t + H(\nabla_x u) = 0$, $t \ge 0$, $x \in \mathbb{R}^N$, with a uniformly convex Hamiltonian $H = H(p)$. We provide upper and lower estimates of order $1/\varepsilon^N$ on the Kolmogorov $\varepsilon$-entropy in $W^{1,1}$ of the image through the map $S_t$ of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws, XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.

  18. Uncertainty Model For Quantitative Precipitation Estimation Using Weather Radars

    Ernesto Gómez Vargas

    2016-06-01

    This paper introduces an uncertainty model for quantitative precipitation estimation using weather radars. The model considers key aspects associated with radar calibration, attenuation, and the trade-off between accuracy and radar coverage. An S-band radar case study is presented to illustrate particular fractional-uncertainty calculations obtained to adjust typical radar-calibration elements such as antenna, transmitter, receiver, and other general elements included in the radar equation. This paper is based on the "Guide to the Expression of Uncertainty in Measurement", and the results show that the fractional uncertainty calculated by the model was 40% for reflectivity and 30% for precipitation using the Marshall-Palmer Z-R relationship.
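
A minimal numerical sketch of the fractional-uncertainty bookkeeping the paper describes: hypothetical radar-equation contributions combined in quadrature for the reflectivity, then propagated through the Marshall-Palmer relationship Z = 200 R^1.6 (for Z in linear units). It ignores the uncertainty of the Z-R coefficients themselves, so it is illustrative only.

```python
import math

# Hypothetical fractional uncertainties of individual radar-equation terms
components = {"antenna gain": 0.15, "transmitter power": 0.10,
              "receiver calibration": 0.20, "attenuation correction": 0.25}

# GUM-style combination in quadrature for the reflectivity Z (linear units)
u_z = math.sqrt(sum(u ** 2 for u in components.values()))

# Marshall-Palmer: Z = 200 * R**1.6, so R ~ Z**(1/1.6) and the contribution of Z
# to the fractional uncertainty of the rain rate R is roughly u_z / 1.6
u_r_from_z = u_z / 1.6
print(f"u(Z)/Z ~ {u_z:.2f}, contribution to u(R)/R ~ {u_r_from_z:.2f}")
```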

  19. Environmental impact of soil remediation activities: evaluation of quantitative and qualitative tools

    Cappuyns, Valérie

    2012-01-01

    When evaluating remediation technologies for contaminated soil and groundwater, the beneficial effects of the remediation, namely cleaner soil and groundwater, are mostly emphasized without consideration of the environmental impact of the remediation activities themselves. Nevertheless, different qualitative, semi-quantitative and quantitative methods to estimate the environmental impact of soil remediation activities are available. Within the framework of contaminated site management, an envi...

  20. Quantitative Model for Estimating Soil Erosion Rates Using 137Cs

    YANGHAO; GHANGQING; et al.

    1998-01-01

    A quantitative model was developed to relate the amount of 137Cs loss from the soil profile to the rate of soil erosion. In the mass balance model, the depth distribution pattern of 137Cs in the soil profile, the radioactive decay of 137Cs, the sampling year and the difference in 137Cs fallout amount among years were taken into consideration. By introducing typical depth distribution functions of 137Cs into the model, detailed equations of the model were obtained for different soils. The model shows that the rate of soil erosion is mainly controlled by the depth distribution pattern of 137Cs, the year of sampling, and the percentage reduction in total 137Cs. The relationship between the rate of soil loss and 137Cs depletion is neither linear nor logarithmic. The depth distribution pattern of 137Cs is a major factor for estimating the rate of soil loss, and the soil erosion rate is directly related to the fraction of the 137Cs content near the soil surface. The influences of the radioactive decay of 137Cs, the sampling year and the 137Cs input fraction are comparatively small.
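
One common way to turn a measured percentage reduction in total 137Cs into an erosion rate, in line with the abstract's emphasis on the depth-distribution pattern, is the exponential profile-distribution model sketched below; the profile shape factor h0, the averaging period, and the depletion value are hypothetical inputs, and this is not necessarily the authors' own formulation.

```python
import math

def erosion_rate_from_cs137(x_percent, h0_kg_m2=4.0, years=40.0):
    """Erosion rate (t ha^-1 yr^-1) for an exponential 137Cs depth profile in which the
    fraction of the inventory contained above mass depth x (kg/m^2) is 1 - exp(-x/h0).
    x_percent is the percentage reduction in total 137Cs relative to a reference site;
    h0 and the averaging period are hypothetical inputs here."""
    x = x_percent / 100.0
    eroded_mass_depth = -h0_kg_m2 * math.log(1.0 - x)    # total soil removed, kg/m^2
    return 10.0 * eroded_mass_depth / years               # 1 kg/m^2 = 10 t/ha

print(f"30% 137Cs depletion -> ~{erosion_rate_from_cs137(30.0):.2f} t/ha/yr")
```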

  1. Novel whole brain segmentation and volume estimation using quantitative MRI

    West, J. [Linkoeping University, Radiation Physics, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); SyntheticMR AB, Linkoeping (Sweden); Warntjes, J.B.M. [Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); SyntheticMR AB, Linkoeping (Sweden); Linkoeping University and Department of Clinical Physiology UHL, County Council of Oestergoetland, Clinical Physiology, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Lundberg, P. [Linkoeping University, Center for Medical Imaging Science and Visualization (CMIV), Linkoeping (Sweden); Linkoeping University and Department of Radiation Physics UHL, County Council of Oestergoetland, Radiation Physics, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden); Linkoeping University and Department of Radiology UHL, County Council of Oestergoetland, Radiology, Department of Medical and Health Sciences, Faculty of Health Sciences, Linkoeping (Sweden)

    2012-05-15

    Brain segmentation and volume estimation of grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF) are important for many neurological applications. Volumetric changes are observed in multiple sclerosis (MS), Alzheimer's disease and dementia, and in normal aging. A novel method is presented to segment brain tissue based on quantitative magnetic resonance imaging (qMRI) of the longitudinal relaxation rate R1, the transverse relaxation rate R2 and the proton density, PD. Previously reported qMRI values for WM, GM and CSF were used to define tissues, and a Bloch simulation was performed to investigate R1, R2 and PD for tissue mixtures in the presence of noise. Based on the simulations, a lookup grid was constructed to relate tissue partial volume to the R1-R2-PD space. The method was validated in 10 healthy subjects. MRI data were acquired using six resolutions and three geometries. Repeatability for different resolutions was 3.2% for WM, 3.2% for GM, 1.0% for CSF and 2.2% for total brain volume. Repeatability for different geometries was 8.5% for WM, 9.4% for GM, 2.4% for CSF and 2.4% for total brain volume. We propose a new robust qMRI-based approach which we demonstrate in a patient with MS. (orig.)

  2. Quantitative analysis on tectonic deformation of active rupture zones

    JIANG Zai-sen; NIU An-fu; WANG Min; LI Kai-wu; FANG Ying; ZHANG Xi; ZHANG Xiao-liang

    2005-01-01

    Based on regional GPS data of high spatial resolution, we present a method of quantitative analysis of the tectonic deformation of active rupture zones in order to predict the location of forthcoming major earthquakes. Firstly, we divide the main fault area into certain deformation units, then derive the geometric deformation and relative dislocation parameters of each unit, and finally estimate quantitatively the slip and strain rates in each segment of the rupture zone. Furthermore, by comparing the consistency of deformation in all segments of the whole rupture zone, we can determine the possible anomalous segments as well as their properties and amplitudes. In analyzing the eastern boundary of the Sichuan-Yunnan block with GPS velocity data for the period 1991-2001, we have discovered that the Mianning-Ningnan-Dongchuan segment of the Zemuhe-Xiaojiang fault zone is relatively locked and the left-lateral shear strain rate there is higher.

  3. Quantitative estimates of the volatility of ambient organic aerosol

    C. D. Cappa

    2010-06-01

    Measurements of the sensitivity of organic aerosol (OA, and its components) mass to changes in temperature were recently reported by Huffman et al. (2009) using a tandem thermodenuder-aerosol mass spectrometer (TD-AMS) system in Mexico City and the Los Angeles area. Here, we use these measurements to derive quantitative estimates of aerosol volatility within the framework of absorptive partitioning theory using a kinetic model of aerosol evaporation in the TD. OA volatility distributions (or "basis-sets") are determined using several assumptions as to the enthalpy of vaporization (ΔHvap). We present two definitions of "non-volatile OA," one being a global and one a local definition. Based on these definitions, our analysis indicates that a substantial fraction of the organic aerosol is comprised of non-volatile components that will not evaporate under any atmospheric conditions; on the order of 50–80% when the most realistic ΔHvap assumptions are considered. The sensitivity of the total OA mass to dilution and ambient changes in temperature has been assessed for the various ΔHvap assumptions. The temperature sensitivity is relatively independent of the particular ΔHvap assumptions, whereas dilution sensitivity is found to be greatest for the low (ΔHvap = 50 kJ/mol) and lowest for the high (ΔHvap = 150 kJ/mol) assumptions. This difference arises from the high ΔHvap assumptions yielding volatility distributions with a greater fraction of non-volatile material than the low ΔHvap assumptions. If the observations are fit using a 1 or 2-component model the sensitivity of the OA to dilution is unrealistically high. An empirical method introduced by Faulhaber et al. (2009) has also been used to independently estimate a volatility distribution for the ambient OA and is found to give results consistent with the

  4. Quantitative estimates of the volatility of ambient organic aerosol

    C. D. Cappa

    2010-01-01

    Measurements of the sensitivity of organic aerosol (OA, and its components) mass to changes in temperature were recently reported by Huffman et al. (2009) using a tandem thermodenuder-aerosol mass spectrometer (TD-AMS) system in Mexico City and the Los Angeles area. Here, we use these measurements to derive quantitative estimates of aerosol volatility within the framework of absorptive partitioning theory using a kinetic model of aerosol evaporation in the TD. OA volatility distributions (or "basis-sets") are determined using several assumptions as to the enthalpy of vaporization (ΔHvap). We present two definitions of "non-volatile OA," one being a global and one a local definition. Based on these definitions, our analysis indicates that a substantial fraction of the organic aerosol is comprised of non-volatile components that will not evaporate under any atmospheric conditions, on the order of 50–80% when the most realistic ΔHvap assumptions are considered. The sensitivity of the total OA mass to dilution and ambient changes in temperature has been assessed for the various ΔHvap assumptions. The temperature sensitivity is relatively independent of the particular ΔHvap assumptions, whereas dilution sensitivity is found to be greatest for the low (ΔHvap = 50 kJ/mol) and lowest for the high (ΔHvap = 150 kJ/mol) assumptions. This difference arises from the high ΔHvap assumptions yielding volatility distributions with a greater fraction of non-volatile material than the low ΔHvap assumptions. If the observations are fit using a 1 or 2-component model the sensitivity of the OA to dilution is unrealistically high. An empirical method introduced by Faulhaber et al. (2009) has also been used to independently estimate a volatility distribution for the ambient OA and is found to give results consistent with the high and variable ΔHvap assumptions. Our
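
The absorptive-partitioning / volatility basis-set framework referred to above reduces, for each volatility bin, to the expressions sketched below; the C* bins, mass fractions, ΔHvap, and OA loading used here are hypothetical.

```python
import numpy as np

def particle_fraction(c_star, c_oa):
    """Particle-phase fraction of a volatility bin with saturation concentration C*
    (ug/m^3) at organic-aerosol loading C_OA (ug/m^3): F = 1 / (1 + C*/C_OA)."""
    return 1.0 / (1.0 + c_star / c_oa)

def c_star_at_temperature(c_star_298, delta_h_vap_kj_mol, t_kelvin):
    """Clausius-Clapeyron adjustment of C* from 298 K to temperature T."""
    r = 8.314e-3  # kJ mol^-1 K^-1
    return c_star_298 * np.exp(-delta_h_vap_kj_mol / r * (1.0 / t_kelvin - 1.0 / 298.0))

# Hypothetical basis set: C* bins (ug/m^3 at 298 K) and their mass fractions
c_star_bins = np.array([0.01, 1.0, 100.0])
mass_frac = np.array([0.6, 0.3, 0.1])      # mostly low-volatility material
c_oa = 10.0                                # ug/m^3, assumed OA loading

f_298 = particle_fraction(c_star_bins, c_oa)
f_283 = particle_fraction(c_star_at_temperature(c_star_bins, 100.0, 283.0), c_oa)
print(f"condensed mass fraction at 298 K: {np.sum(mass_frac * f_298):.2f}")
print(f"condensed mass fraction at 283 K: {np.sum(mass_frac * f_283):.2f}")
```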

  5. Quantitative estimation of seafloor features from photographs and their application to nodule mining

    Sharma, R.

    Methods developed for quantitative estimation of seafloor features from seabed photographs and their application for estimation of nodule sizes, coverage, abundance, burial, sediment thickness, extent of rock exposure, density of benthic organisms...

  6. Real Time River Forecasting Center Quantitative Precipitation Estimate

    U.S. Geological Survey, Department of the Interior — Radar-indicated, rain gage-verified and corrected hourly precipitation estimates on a corrected ~4 km HRAP grid. This archive contains hourly estimates of...

  7. Comparison of the scanning linear estimator (SLE) and ROI methods for quantitative SPECT imaging.

    Könik, Arda; Kupinski, Meredith; Pretorius, P Hendrik; King, Michael A; Barrett, Harrison H

    2015-08-21

    In quantitative emission tomography, tumor activity is typically estimated from calculations on a region of interest (ROI) identified in the reconstructed slices. In these calculations, unpredictable bias arising from the null functions of the imaging system affects ROI estimates. The magnitude of this bias depends upon the tumor size and location. In prior work it has been shown that the scanning linear estimator (SLE), which operates on the raw projection data, is an unbiased estimator of activity when the size and location of the tumor are known. In this work, we performed analytic simulation of SPECT imaging with a parallel-hole medium-energy collimator. Distance-dependent system spatial resolution and non-uniform attenuation were included in the imaging simulation. We compared the task of activity estimation by the ROI and SLE methods for a range of tumor sizes (diameter: 1-3 cm) and activities (contrast ratio: 1-10) added to uniform and non-uniform liver backgrounds. Using the correct value for the tumor shape and location is an idealized approximation to how task estimation would occur clinically. Thus we determined how perturbing this idealized prior knowledge impacted the performance of both techniques. To implement the SLE for the non-uniform background, we used a novel iterative algorithm for pre-whitening stationary noise within a compact region. Estimation task performance was compared using the ensemble mean-squared error (EMSE) as the criterion. The SLE method performed substantially better than the ROI method (i.e. EMSE(SLE) was 23-174 times lower) when the background is uniform and tumor location and size are known accurately. The variance of the SLE increased when a non-uniform liver texture was introduced but the EMSE(SLE) continued to be 5-20 times lower than the ROI method. In summary, SLE outperformed ROI under almost all conditions that we tested. PMID:26247228

  8. Transient stochastic downscaling of quantitative precipitation estimates for hydrological applications

    Nogueira, M.; Barros, A. P.

    2015-10-01

    Rainfall fields are heavily thresholded and highly intermittent, resulting in large areas of zero values. This deforms their stochastic spatial scale-invariant behavior, introducing scaling breaks and curvature in the spatial scale spectrum. To address this problem, spatial scaling analysis was performed inside continuous rainfall features (CRFs) delineated via cluster analysis. The results show that CRFs from single realizations of hourly rainfall display ubiquitous multifractal behavior that holds over a wide range of scales (from ≈1 km up to hundreds of km). The results further show that the aggregate scaling behavior of rainfall fields is intrinsically transient, with the scaling parameters explicitly dependent on the atmospheric environment. These findings provide a framework for robust stochastic downscaling, bridging the gap between the spatial scales of observed and simulated rainfall fields and the high-resolution requirements of hydrometeorological and hydrological studies. Here, a fractal downscaling algorithm adapted to CRFs is presented and applied to generate stochastically downscaled hourly rainfall products from radar-derived Stage IV (∼4 km grid resolution) quantitative precipitation estimates (QPE) over the Integrated Precipitation and Hydrology Experiment (IPHEx) domain in the southeast USA. The methodology can produce large ensembles of statistically robust high-resolution fields without additional data or any calibration requirements, conserving the coarse-resolution information and generating coherent small-scale variability and field statistics, hence adding value to the original fields. Moreover, it is computationally inexpensive, enabling fast production of high-resolution rainfall realizations with latency adequate for forecasting applications. When the transient nature of the scaling behavior is considered, the results show a better ability to reproduce the statistical structure of observed rainfall compared to using fixed scaling parameters

  9. Supporting dune management by quantitative estimation of evapotranspiration

    Samson, R; Provoost, Sam; Willaert, L.; Lemeur, R.

    2005-01-01

    Research was conducted in the nature reserve De Westhoek (B) in order to estimate the hydrological impact of shrub removal in favour of the recolonisation and development of herbaceous vegetation types in the dune slacks. Dune slacks are one of the most rare ecotopes in Europe. Therefore, the evapotranspiration of herbaceous and shrub vegetation types was estimated based on experimentally obtained data and modelling. Analysis of the experimentally obtained stomatal resistance values revealed ...

  10. Bayesian Shrinkage Estimation of Quantitative Trait Loci Parameters

    Wang, Hui; Zhang, Yuan-Ming; Li, Xinmin; Masinde, Godfred L.; Mohan, Subburaman; Baylink, David J.; Xu, Shizhong

    2005-01-01

    Mapping multiple QTL is a typical problem of variable selection in an oversaturated model because the potential number of QTL can be substantially larger than the sample size. Currently, model selection is still the most effective approach to mapping multiple QTL, although further research is needed. An alternative approach to analyzing an oversaturated model is the shrinkage estimation in which all candidate variables are included in the model but their estimated effects are forced to shrink...

  11. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less

  12. Improved dose–volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose–volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator–detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less
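
Computationally, the cumulative DVHs discussed above are just a survival curve of the voxel dose values inside a volume of interest; a minimal sketch with a random placeholder dose array follows.

```python
import numpy as np

def cumulative_dvh(dose_voxels, dose_levels):
    """Fraction of the volume of interest receiving at least each dose level."""
    dose_voxels = np.asarray(dose_voxels).ravel()
    return np.array([(dose_voxels >= d).mean() for d in dose_levels])

rng = np.random.default_rng(0)
dose = rng.gamma(shape=4.0, scale=2.0, size=10_000)   # placeholder voxel doses (Gy)
levels = np.linspace(0.0, 30.0, 61)
dvh = cumulative_dvh(dose, levels)
print(f"V(>= 10 Gy) = {dvh[np.searchsorted(levels, 10.0)]:.1%}")
```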

  13. A quantitative framework for estimating water resources in India

    Shankar, D.; Kotamraju, V.; Shetye, S.R.

    over the west coast owing to the inability of the model grid to resolve the steep Sahyadris, compare well with the all-India precipitation estimated for the non-hilly subdivisions of the India Meteorological Department based on rain...

  14. Quantitative estimation of structure homogeneity of mechanically alloyed dispersion-strengthened composite materials

    A method for the quantitative estimation of the microstructure homogeneity of mechanically alloyed dispersion-strengthened composite materials is proposed. A variation coefficient of 10% is adopted as the indicator of a satisfactory degree of microstructure homogeneity.
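
The 10% criterion is straightforward to apply once a local microstructural measure is available (for example, counts of strengthening particles per field of view); the counts below are hypothetical.

```python
import numpy as np

# Hypothetical counts of strengthening particles in several fields of view
counts = np.array([118, 124, 121, 130, 115, 127])

cv = counts.std(ddof=1) / counts.mean()   # coefficient of variation
verdict = "satisfactory homogeneity" if cv <= 0.10 else "inhomogeneous"
print(f"variation coefficient = {cv:.1%} -> {verdict}")
```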

  15. The APEX Quantitative Proteomics Tool: Generating protein quantitation estimates from LC-MS/MS proteomics results

    Saeed Alexander I

    2008-12-01

    Abstract Background Mass spectrometry (MS)-based label-free protein quantitation has mainly focused on analysis of ion peak heights and peptide spectral counts. Most analyses of tandem mass spectrometry (MS/MS) data begin with an enzymatic digestion of a complex protein mixture to generate smaller peptides that can be separated and identified by an MS/MS instrument. Peptide spectral counting techniques attempt to quantify protein abundance by counting the number of detected tryptic peptides and their corresponding MS spectra. However, spectral counting is confounded by the fact that peptide physicochemical properties severely affect MS detection, resulting in each peptide having a different detection probability. Lu et al. (2007) described a modified spectral counting technique, Absolute Protein Expression (APEX), which improves on basic spectral counting methods by including a correction factor for each protein (called the Oi value) that accounts for variable peptide detection by MS techniques. The technique uses machine learning classification to derive peptide detection probabilities that are used to predict the number of tryptic peptides expected to be detected for one molecule of a particular protein (Oi). This predicted spectral count is compared to the protein's observed MS total spectral count during APEX computation of protein abundances. Results The APEX Quantitative Proteomics Tool, introduced here, is a free open source Java application that supports the APEX protein quantitation technique. The APEX tool uses data from standard tandem mass spectrometry proteomics experiments and provides computational support for APEX protein abundance quantitation through a set of graphical user interfaces that partition the parameter controls for the various processing tasks. The tool also provides a Z-score analysis for identification of significant differential protein expression, a utility to assess APEX classifier performance via cross validation, and a
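
The APEX correction described above amounts to weighting each protein's observed spectral count by its predicted detectability Oi before normalizing; the schematic version below uses made-up counts and Oi values and omits the classifier-derived Oi prediction and significance testing that the tool itself provides.

```python
def apex_abundances(spectral_counts, o_values, total_concentration=1.0):
    """APEX-style abundance: (count_i / O_i), normalized over all proteins and scaled
    by an assumed total concentration (the constant C in Lu et al. 2007)."""
    weighted = {p: spectral_counts[p] / o_values[p] for p in spectral_counts}
    norm = sum(weighted.values())
    return {p: total_concentration * w / norm for p, w in weighted.items()}

# Hypothetical observed MS/MS spectral counts and predicted Oi values
counts = {"protA": 120, "protB": 30, "protC": 30}
oi = {"protA": 12.0, "protB": 2.0, "protC": 6.0}
print(apex_abundances(counts, oi))
```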

  16. A Quantitative Assay for Aggrecanase Activity

    Will, Horst; Dettloff, Matthias; Bendzkô, Peter; Sveshnikov, Peter

    2005-01-01

    Aggrecanase activities of ADAMTS (a disintegrin and metalloproteinase with thrombospondin motifs) proteinases were measured with a recombinant aggrecan fragment and two monoclonal antibodies. Recombinant human aggrecan interglobular domain was first incubated in the presence of ADAMTS enzymes. The aggrecan peptide with the N-terminal sequence ARGSVIL released upon hydrolysis was then quantified in an enzyme-linked immunosorbent assay (ELISA) with an anti-neoepitope antibody specific for the N...

  17. Quantitation of radiopharmaceutical distribution for use in dose estimates with positron emission tomography

    Current PET systems provide a means of obtaining quantitative radiopharmaceutical distributions, which can be accurate for tissue volumes on the order of 1 cc. Properly calibrated PET systems can non-invasively measure amounts of positron emitter in all parts of the body, allowing dose estimations from data obtained with human subjects rather than from activity distributions in test animals. Since these are usually rodents, species differences can be large enough to make this type of estimation irrelevant. Before testing in man, it can also be cost-effective to measure activity distributions in non-human primates with PET, since it would be unnecessary to kill animals to obtain data. Typical measurements for developing dosimetry for new positron-emitting radiopharmaceuticals would start with a series of rectilinear scans with the PET system as a function of time, initially on non-human primates. If organs are clearly delineated in rectilinear scans of non-human primates, estimates of human doses can probably be made from those data. Because of species differences and the small size of organs, tomographic scanning may not provide significant additional information. Studies could then proceed in man. As above, rectilinear scans as a function of time would define cross-sections to be examined by tomography. Details of the studies would depend upon the sophistication of the dosimetry calculations. If calculations assume uniform whole-organ distribution, rectilinear scans should provide adequate isotope concentrations and effective half-lives. If more detailed calculations are to be attempted, distributions can be localized to volumes on the order of 1 cc with tomography. 11 references, 6 figures
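
The rectilinear-scan time series described above are typically reduced, per organ, to an effective half-life and a cumulated activity; the two standard relations are sketched below with hypothetical uptake and clearance numbers (the physical half-life shown is that of F-18).

```python
import math

def effective_half_life(t_phys_h, t_bio_h):
    """1/T_eff = 1/T_phys + 1/T_bio."""
    return 1.0 / (1.0 / t_phys_h + 1.0 / t_bio_h)

def cumulated_activity(a0_mbq, t_eff_h):
    """Integral of A0*exp(-ln2 * t / T_eff) from 0 to infinity, in MBq*h."""
    return a0_mbq * t_eff_h / math.log(2.0)

# Hypothetical organ uptake of 50 MBq of an F-18 tracer (physical T1/2 = 109.8 min)
t_eff = effective_half_life(109.8 / 60.0, t_bio_h=6.0)
print(f"T_eff = {t_eff:.2f} h, cumulated activity = {cumulated_activity(50.0, t_eff):.0f} MBq*h")
```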

  18. Quantitative Analysis of VIIRS DNB Nightlight Point Source for Light Power Estimation and Stability Monitoring

    Changyong Cao

    2014-12-01

    The high sensitivity and advanced onboard calibration of the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) enable accurate measurements of low-light radiances, which leads to enhanced quantitative applications at night. The finer spatial resolution of the DNB also allows users to examine socioeconomic activities at urban scales. Given the growing interest in the use of DNB data, there is a pressing need for better understanding of the calibration stability and absolute accuracy of the DNB at low radiances. The low-light calibration accuracy was previously estimated at a moderate 15% using extended sources, while the long-term stability has yet to be characterized. There are also several science-related questions to be answered, for example: how the Earth's atmosphere and surface variability contribute to the stability of the DNB-measured radiances; how to separate them from instrument calibration stability; whether or not SI (International System of Units) traceable active light sources can be designed and installed at selected sites to monitor the calibration stability, radiometric and geolocation accuracy, and point spread functions of the DNB; and furthermore, whether or not such active light sources can be used for detecting environmental changes, such as aerosols. This paper explores the quantitative analysis of nightlight point sources, such as those from fishing vessels, bridges, and cities, using fundamental radiometry and radiative transfer, which would be useful for a number of applications including search and rescue in severe weather events, as well as calibration/validation of the DNB. Time series of the bridge light data are used to assess the stability of the light measurements and the calibration of the VIIRS DNB. It was found that the light radiant power computed from the VIIRS DNB data matched relatively well with independent assessments based on the in situ light installations, although estimates have to be

  19. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization-sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  20. Quantitative analysis of estimated scattering coefficient and phase retardation for ovarian tissue characterization

    Yang, Yi; Wang, Tianheng; Wang, Xiaohong; Sanders, Melinda; Brewer, Molly; Zhu, Quing

    2012-01-01

    In this report, optical scattering coefficient and phase retardation quantitatively estimated from polarization-sensitive OCT (PSOCT) were used for ovarian tissue characterization. A total of 33 ex vivo ovaries (normal: n = 26, malignant: n = 7) obtained from 18 patients were investigated. A specificity of 100% and a sensitivity of 86% were achieved by using estimated scattering coefficient alone; and a specificity of 100% and a sensitivity of 43% were obtained by using phase retardation alon...
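
Scattering-coefficient estimation from OCT depth profiles is commonly done by a log-linear fit of a single-scattering exponential decay to the averaged A-scan; the sketch below shows that generic approach with synthetic data, and is not necessarily the exact fitting procedure used in the report.

```python
import numpy as np

def estimate_mu_s(depth_mm, intensity, fit_range=(0.1, 0.6)):
    """Scattering coefficient (1/mm) from a log-linear fit of the single-scattering
    model I(z) ~ I0 * exp(-2 * mu_s * z) over the chosen depth range."""
    z = np.asarray(depth_mm)
    mask = (z >= fit_range[0]) & (z <= fit_range[1])
    slope, _ = np.polyfit(z[mask], np.log(np.asarray(intensity)[mask]), 1)
    return -slope / 2.0

# Synthetic averaged A-scan with mu_s = 3 /mm plus multiplicative noise
z = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(2)
i_scan = np.exp(-2.0 * 3.0 * z) * (1.0 + rng.normal(0.0, 0.05, z.size))
print(f"estimated mu_s ~ {estimate_mu_s(z, i_scan):.2f} /mm")
```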

  1. Preliminary Study on the Feasibility of Performing Quantitative Precipitation Estimation Using X-band Radar

    Figueras i Ventura, J.; C. Z. van de Beek; H. W. J. Russchenberg; R. Uijlenhoet

    2009-01-01

    IRCTR has built an experimental X-band Doppler polarimetric weather radar system aimed at obtaining high temporal and spatial resolution measurements of precipitation, with particular interest in light rain and drizzle. In this paper, a first analysis of the feasibility of obtaining accurate quantitative precipitation estimates from the radar data, performed using a high-density network of rain gauges, is presented.

  2. Logarithmic quantitation model using serum ferritin to estimate iron overload in secondary haemochromatosis.

    Güngör, T; Rohrbach, E.; Solem, E; Kaltwasser, J P; Kornhuber, B

    1996-01-01

    Nineteen children and adolescents receiving repeated transfusions and subcutaneous desferrioxamine treatment were investigated in an attempt to quantitate iron overload non-invasively. Before patients were started on desferrioxamine, individual relationships between transfused iron, gastrointestinally estimated absorbed iron, and increasing serum ferritin concentrations were correlated over 12 to 36 months. Patients with inflammation, increased liver enzymes, or haemolysis were excluded from an...

  3. Quantitative Risk reduction estimation Tool For Control Systems, Suggested Approach and Research Needs

    Miles McQueen; Wayne Boyer; Mark Flynn; Sam Alessi

    2006-03-01

    For the past year we have applied a variety of risk assessment technologies to evaluate the risk to critical infrastructure from cyber attacks on control systems. More recently, we identified the need for a stand-alone control system risk reduction estimation tool to provide owners and operators of control systems with a more usable, reliable, and credible method for managing the risks from cyber attack. Risk is defined as the probability of a successful attack times the value of the resulting loss, typically measured in lives and dollars. Qualitative and ad hoc techniques for measuring risk do not provide sufficient support for cost-benefit analyses associated with cyber security mitigation actions. To address the need for better quantitative risk reduction models we surveyed previous quantitative risk assessment research; evaluated currently available tools; developed new quantitative techniques [17] [18]; implemented a prototype analysis tool to demonstrate how such a tool might be used; used the prototype to test a variety of underlying risk calculational engines (e.g. attack tree, attack graph); and identified technical and research needs. We concluded that significant gaps still exist and difficult research problems remain for quantitatively assessing the risk to control system components and networks, but that a usable quantitative risk reduction estimation tool is not beyond reach.
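
With risk defined as above (probability of a successful attack times the value of the resulting loss), a risk-reduction estimate is simple bookkeeping; the probabilities and losses below are invented, and the sketch does not reflect the prototype's attack-tree or attack-graph engines.

```python
def risk(p_success: float, loss_usd: float) -> float:
    """Risk = probability of a successful attack x value of the resulting loss."""
    return p_success * loss_usd

# Hypothetical annual probabilities of a successful attack before/after a mitigation
baseline = risk(p_success=0.10, loss_usd=5_000_000)
mitigated = risk(p_success=0.02, loss_usd=5_000_000)
print(f"estimated risk reduction: ${baseline - mitigated:,.0f} per year "
      f"(to be weighed against the mitigation's annual cost)")
```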

  4. Estimating bioerosion rate on fossil corals: a quantitative approach from Oligocene reefs (NW Italy)

    Silvestri, Giulia

    2010-05-01

    Bioerosion of coral reefs, especially when related to the activity of macroborers, is considered to be one of the major processes influencing framework development in present-day reefs. Macroboring communities affecting both living and dead corals are widely distributed also in the fossil record, and their role is supposed to be analogously important in determining flourishing vs. demise of coral bioconstructions. Nevertheless, many aspects concerning environmental factors controlling the incidence of bioerosion, shifts in the composition of macroboring communities, and estimation of bioerosion rate in different contexts are still poorly documented and understood. This study presents an attempt to quantify the bioerosion rate on reef limestones characteristic of some Oligocene outcrops of the Tertiary Piedmont Basin (NW Italy) and deposited under terrigenous sedimentation within prodelta and delta fan systems. Branching coral rubble-dominated facies have been recognized as prevailing in this context. Depositional patterns, textures, and the generally low incidence of taphonomic features, such as fragmentation and abrasion, suggest relatively quiet waters where coral remains were deposited almost in situ. Thus taphonomic signatures occurring on corals can be reliably used to reconstruct environmental parameters affecting these particular branching coral assemblages during their life and to compare them with those typical of classical clear-water reefs. Bioerosion is sparsely distributed within the coral facies and consists of a limited suite of traces, mostly attributed to clionid sponges and polychaete and sipunculid worms. The incidence of boring bivalves seems to be generally lower. Together with semi-quantitative analysis of bioerosion rate along vertical logs and horizontal levels, two quantitative methods have been assessed and compared. These consist of the elaboration of high-resolution scanned thin sections through software for image analysis (Photoshop CS3) and point

  5. The accuracy of absorbed dose estimates in tumours determined by Quantitative SPECT: A Monte Carlo study

    Background. Dosimetry in radionuclide therapy estimates delivered absorbed doses to tumours and ensures that absorbed dose levels to normal organs are below tolerance levels. One procedure is to determine time-activity curves in volumes-of-interest from which the absorbed dose is estimated using SPECT with appropriate corrections for attenuation, scatter and collimator response. From corrected SPECT images the absorbed energy can be calculated by (a) assuming the kinetic energy is deposited in the same voxel where the particles were emitted, (b) convolving with point-dose kernels or (c) using full Monte Carlo (MC) methods. A question arises as to which dosimetry method is optimal given the limitations in reconstruction and quantification procedures. Methods. Dosimetry methods (a) and (c) were evaluated by comparing dose-rate volume histograms (DrVHs) from simulated SPECT of 111In, 177Lu, 131I and Bremsstrahlung from 90Y against true dose-rate images. The study used a voxel-based phantom with different tumours in the liver. SPECT reconstruction was made using an iterative OSEM method and MC dosimetry was performed using a charged-particle EGS4 program that was also used to determine true absorbed dose-rate distributions for the same phantom geometry but without camera limitations. Results. The DrVHs obtained from SPECT differed from the true DrVHs mainly due to limited spatial resolution. MC dosimetry had a marginal effect because the SPECT spatial resolution is of the same order as the energy distribution caused by the electron track ranges. For 131I, full MC dosimetry made a difference due to the additional contribution from high-energy photons. SPECT-based DrVHs differ significantly from true DrVHs unless the tumours are considerably larger than the spatial resolution. Conclusion. It is important to understand the limitations in quantitative SPECT images and the reasons for apparent heterogeneities since these have an impact on dose-volume histograms. A MC-based dosimetry calculation from
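
    As a rough illustration of the difference between dosimetry options (a) and (b) described above, the hedged Python sketch below deposits energy locally in each voxel and then smears the same map with a stand-in Gaussian point-dose kernel; the kernel width, the S-value and the phantom are placeholders, not the EGS4-based setup of the study.

```python
# Sketch contrasting dosimetry options (a) and (b) above on a toy 3-D activity map:
# (a) deposit all kinetic energy in the emitting voxel, (b) convolve with a point-dose kernel.
# The Gaussian kernel and all numbers are placeholders, not a validated dose kernel.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
activity = rng.random((32, 32, 32))            # cumulated activity per voxel (arbitrary units)
s_value = 2.5e-4                               # assumed mean absorbed dose per decay (arbitrary units)

dose_local  = activity * s_value                      # method (a): local energy deposition
dose_kernel = gaussian_filter(dose_local, sigma=1.5)  # method (b): smear with a point-dose kernel

# Dose comparison inside a spherical "tumour" region of interest
x, y, z = np.ogrid[:32, :32, :32]
roi = (x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2 < 6 ** 2
print("mean ROI dose, local :", dose_local[roi].mean())
print("mean ROI dose, kernel:", dose_kernel[roi].mean())
```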

  6. Detection and parameter estimation for quantitative trait loci using regression models and multiple markers

    Da, Yang; VanRaden, Paul; Schook, Lawrence

    2000-01-01

    A strategy of multi-step minimal conditional regression analysis has been developed to provide statistical testing and parameter estimation for a quantitative trait locus (QTL) that are unaffected by linked QTLs. The estimation of marker-QTL recombination frequency needs to consider only three cases: 1) the chromosome has only one QTL, 2) one side of the target QTL has one or more QTLs, and 3) either side of the target QTL has one or more QTLs. Ana...

  7. Estimation of Qualitative and Quantitative Parameters of Water in Pendra Block of District Bilaspur

    Mohd Irfan Bakshi; Manish Uphadhyay

    2015-01-01

    The subject matter contains the estimation of qualitative as well as quantitative parameters of water in Pendra block, including temperature, pH, turbidity, alkalinity, hardness, chlorides, sulphates, dissolved oxygen, biochemical oxygen demand, chemical oxygen demand, total solid content and total dissolved solids from different sampling sites. The samples showed variation at different times, which might be due to the large amount of pollutants present in the water.

  8. The quantitative estimation of the vulnerability of brick and concrete building impacted by debris flow

    Zhang, J.; Guo, Z. X.; Wang, D; Qian, H.

    2015-01-01

    There is little historical data about the vulnerability of damaged elements in debris flow disasters in China. Therefore, it is difficult to estimate the vulnerability to debris flow quantitatively. This paper was devoted to research on the vulnerability of brick and concrete buildings impacted by debris flow, which widely exist in affected areas. Under two assumptions, several prototype walls of brick and concrete were constructed to simulate the damaged...

  9. The quantitative estimation of the vulnerability of brick and concrete wall impacted by an experimental boulder

    Zhang, J.; Guo, Z. X.; Wang, D; Qian, H.

    2016-01-01

    There is little historical data about the vulnerability of damaged elements due to debris flow events in China. Therefore, it is difficult to quantitatively estimate the vulnerability of elements affected by debris flows. This paper is devoted to research on the vulnerability of brick and concrete walls impacted by debris flows. An experimental boulder (an iron sphere) was applied as a substitute for debris flow since it can produce a similar impulse load shape on elements as ...

  10. The quantitative estimation of the vulnerability of brick and concrete building impacted by debris flow

    Zhang, J.; Guo, Z. X.; Wang, D.; Qian, H.

    2015-08-01

    There is little historical data about the vulnerability of damaged elements in debris flow disasters in China. Therefore, it is difficult to estimate the vulnerability to debris flow quantitatively. This paper was devoted to research on the vulnerability of brick and concrete buildings impacted by debris flow, which widely exist in affected areas. Under two assumptions, several prototype walls of brick and concrete were constructed to simulate the structures damaged in debris flow, while iron spheres were taken as the substitute for debris flow. The failure criterion of brick and concrete buildings was proposed with reference to the structural standards (brick and concrete) and the damage pattern in debris flow. The quantitative estimation of the vulnerability of brick and concrete buildings was finally established based on fuzzy mathematics and the proposed failure criterion. The results show that the maximum impact bending moment is the best fit for the disaster-causing factor in the vulnerability curve and formula. The experiments in this paper are preliminary research on the vulnerability of elements impacted by debris flow. The method and conclusions will be useful for the quantitative estimation of vulnerability in debris flow and can also be referred to in research on other types of vulnerable elements.

  11. Activity estimation in radioimmunotherapy using magnetic nanoparticles

    Rajabi, Hossein; Johari Daha, Fariba

    2015-01-01

    Objective Estimation of the activity accumulated in tumor and organs is very important in predicting the response to radiopharmaceutical treatment. In this study, we synthesized 177Lutetium (177Lu)-trastuzumab-iron oxide nanoparticles as a double radiopharmaceutical agent for treatment and better estimation of organ activity in a new way by magnetic resonance imaging (MRI). Methods 177Lu-trastuzumab-iron oxide nanoparticles were synthesized and all the quality control tests such as labeling yield, nanoparticle size determination, stability in buffer and blood serum up to 4 d, immunoreactivity and biodistribution in normal mice were determined. In mice bearing breast tumor, liver and tumor activities were calculated with three methods: single photon emission computed tomography (SPECT), MRI and organ extraction, which were compared with each other. Results The good results of the quality control tests (labeling yield: 61%±2%, mean nanoparticle hydrodynamic size: 41±15 nm, stability in buffer: 86%±5%, stability in blood serum: 80%±3%, immunoreactivity: 80%±2%) indicated that 177Lu-trastuzumab-iron oxide nanoparticles could be used as a double radiopharmaceutical agent in mice bearing tumor. Results showed that 177Lu-trastuzumab-iron oxide nanoparticles with MRI had the ability to measure organ activities more accurately than SPECT. Conclusions Co-conjugating a radiopharmaceutical to MRI contrast agents such as iron oxide nanoparticles may be a good way to achieve better dosimetry in nuclear medicine treatment. PMID:25937783

  12. ESTIMATION OF COMPETITIVE ACTIVITY IN SYNCHRONIZED SWIMMING

    Shul'ga L.M.

    2013-01-01

    Full Text Available The aim is to develop an approach to the technical complexity estimation of free routine compositions in synchronized swimming. Free routine compositions of the strongest swimmers at the European and World Championships during the period under study (2008-2011) were analyzed. Thirty-two qualified athletes of different ages took part in the research. The structural options of the free programs and the location of combination saturation in those programs were determined. The distribution of complicated elements across the minutes of the free routine composition performance was established, and an approach to the technical complexity estimation of free routine compositions (solo) was developed for use in the training and competitive activity of qualified athletes in synchronized swimming. The total time of breath-holding makes up 40% of the time of the whole free routine composition.

  13. Unbalance Quantitative Structure Activity Relationship Problem Reduction in Drug Design

    D. Pugazhenthi

    2009-01-01

    Full Text Available Problem statement: Activities of drug molecules can be predicted by Quantitative Structure Activity Relationship (QSAR) models, which overcome the disadvantages of high cost and long cycle times associated with traditional experimental methods. Given that the number of drug molecules with positive activity is rather smaller than the number with negative activity, it is important to predict molecular activities considering such an unbalanced situation. Approach: Asymmetric bagging and feature selection were introduced into the problem and Asymmetric Bagging of Support Vector Machines (AB-SVM) was proposed for predicting drug activities to treat the unbalanced problem. At the same time, features extracted from the structures of drug molecules affect the prediction accuracy of QSAR models. A hybrid algorithm named SPRAG was therefore proposed, which applies an embedded feature selection method to remove redundant and irrelevant features for AB-SVM. Results: Numerical experimental results on a data set of molecular activities showed that AB-SVM improved the AUC and sensitivity values of molecular activities and that SPRAG with feature selection further helped to improve prediction ability. Conclusion: Asymmetric bagging can help to improve the prediction accuracy of drug molecule activities, which can be further improved by performing feature selection to select relevant features from the drug data.
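
    The following Python sketch illustrates the general asymmetric-bagging idea referred to above: every bag keeps all minority-class actives and bootstraps an equal-sized subset of inactives, one SVM is trained per bag, and decision values are aggregated. The synthetic data and settings are assumptions for illustration, not the AB-SVM/SPRAG implementation of the paper.

```python
# Minimal sketch of asymmetric bagging for an unbalanced QSAR-style data set (synthetic data):
# each bag keeps all actives and bootstraps an equal-sized subset of inactives,
# an SVM is trained per bag, and decision values are averaged across bags.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=50, weights=[0.9, 0.1], random_state=0)
actives, inactives = np.where(y == 1)[0], np.where(y == 0)[0]

rng = np.random.default_rng(0)
models = []
for _ in range(25):                                   # number of bags
    bag = np.concatenate([actives, rng.choice(inactives, size=len(actives), replace=True)])
    clf = SVC(kernel="rbf")
    clf.fit(X[bag], y[bag])
    models.append(clf)

def predict(X_new):
    # Aggregate decision values over all bagged SVMs and threshold at zero
    votes = np.mean([m.decision_function(X_new) for m in models], axis=0)
    return (votes > 0).astype(int)

print(predict(X[:5]), y[:5])
```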

  14. Quantitative genetic activity graphical profiles for use in chemical evaluation

    Waters, M.D. [Environmental Protection Agency, Washington, DC (United States); Stack, H.F.; Garrett, N.E.; Jackson, M.A. [Environmental Health Research and Testing, Inc., Research Triangle Park, NC (United States)

    1990-12-31

    A graphic approach, termed a Genetic Activity Profile (GAP), was developed to display a matrix of data on the genetic and related effects of selected chemical agents. The profiles provide a visual overview of the quantitative (doses) and qualitative (test results) data for each chemical. Either the lowest effective dose or the highest ineffective dose is recorded for each agent and bioassay. Up to 200 different test systems are represented across the GAP. Bioassay systems are organized according to the phylogeny of the test organisms and the end points of genetic activity. The methodology for producing and evaluating genetic activity profiles was developed in collaboration with the International Agency for Research on Cancer (IARC). Data on individual chemicals were compiled by IARC and by the US Environmental Protection Agency (EPA). Data are available on 343 compounds selected from volumes 1-53 of the IARC Monographs and on 115 compounds identified as Superfund Priority Substances. Software to display the GAPs on an IBM-compatible personal computer is available from the authors. Structurally similar compounds frequently display qualitatively and quantitatively similar profiles of genetic activity. Through examination of the patterns of GAPs of pairs and groups of chemicals, it is possible to make more informed decisions regarding the selection of test batteries to be used in the evaluation of chemical analogs. GAPs provide useful data for the development of weight-of-evidence hazard ranking schemes. Also, some knowledge of the potential genetic activity of complex environmental mixtures may be gained from an assessment of the genetic activity profiles of component chemicals. The fundamental techniques and computer programs devised for the GAP database may be used to develop similar databases in other disciplines. 36 refs., 2 figs.

  15. Parameter Estimation in Active Plate Structures

    Araujo, A. L.; Lopes, H. M. R.; Vaz, M. A. P.;

    2006-01-01

    In this paper two non-destructive methods for elastic and piezoelectric parameter estimation in active plate structures with surface bonded piezoelectric patches are presented. These methods rely on experimental undamped natural frequencies of free vibration. The first solves the inverse problem...... through gradient based optimization techniques, while the second is based on a metamodel of the inverse problem, using artificial neural networks. A numerical higher order finite element laminated plate model is used in both methods and results are compared and discussed through a simulated and an...

  16. The new approach of polarimetric attenuation correction for improving radar quantitative precipitation estimation(QPE)

    Gu, Ji-Young; Suk, Mi-Kyung; Nam, Kyung-Yeub; Ko, Jeong-Seok; Ryzhkov, Alexander

    2016-04-01

    To obtain high-quality radar quantitative precipitation estimation data, reliable radar calibration and efficient attenuation correction are very important. Because microwave radiation at shorter wavelengths experiences strong attenuation in precipitation, accounting for this attenuation is essential for shorter-wavelength radars. In this study, the performance of different attenuation/differential attenuation correction schemes at C band is tested for two strong rain events which occurred in central Oklahoma. In addition, a new attenuation correction scheme (a combination of the self-consistency and hot-spot concept methodologies) that separates the relative contributions of strong convective cells and the rest of the storm to the path-integrated total and differential attenuation is among the algorithms explored. Quantitative use of weather radar measurements, such as rainfall estimation, relies on reliable attenuation correction. We examined the impact of attenuation correction on estimates of rainfall in heavy rain events by cross-checking with S-band radar measurements, which are much less affected by attenuation, and compared the storm rain totals obtained from the corrected Z and KDP with rain gages in these cases. This new approach can be utilized at shorter-wavelength radars efficiently. Therefore, it is very useful to the Weather Radar Center of the Korea Meteorological Administration, which is preparing an X-band research dual-pol radar network.
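
    For orientation, the hedged sketch below shows the simplest linear differential-phase attenuation correction that self-consistency schemes refine, adding alpha*PhiDP back to Z and beta*PhiDP back to ZDR; the coefficient values and the ray data are assumed placeholders, not the combined self-consistency/hot-spot algorithm tested in this study.

```python
# Sketch of a simple linear Phi_DP-based attenuation correction: path-integrated attenuation
# is taken proportional to accumulated differential phase along the ray.
# The coefficients alpha and beta are rough placeholders for C band, not tuned values.
import numpy as np

alpha, beta = 0.08, 0.02          # dB per degree of Phi_DP for Z_H and Z_DR (assumed)

def correct(z_h_dbz, z_dr_db, phi_dp_deg):
    """Add back the two-way attenuation estimated from accumulated differential phase."""
    z_h_corr  = z_h_dbz + alpha * phi_dp_deg
    z_dr_corr = z_dr_db + beta * phi_dp_deg
    return z_h_corr, z_dr_corr

phi_dp = np.linspace(0.0, 60.0, 7)                 # accumulated Phi_DP along a ray (deg)
z_h    = np.array([45, 44, 40, 35, 30, 28, 25.0])  # attenuated reflectivity (dBZ)
z_dr   = np.array([2.0, 1.8, 1.2, 0.8, 0.5, 0.4, 0.3])
print(correct(z_h, z_dr, phi_dp))
```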

  17. Quantitative Retention-Activity Relationship Studies by Liposome Electrokinetic Chromatography to Predict Skin Permeability

    XIAN De-Ling; HUANG Ke-Long; LIU Su-Qin; XIAO Jing-Yi

    2008-01-01

    Liposome electrokinetic chromatography (LEKC) provides a simple and facile approach to studying drug-membrane interactions using liposomes as a pseudostationary phase. This study evaluated the potential of LEKC for high-throughput skin permeability profiling as an in vitro technique. A quantitative retention-activity relationship (QRAR) model for the estimation of skin permeability was proposed. For the 16 structurally diverse chemicals, lg k correlated well with permeability values (R2=0.886). The predictive ability of the model was evaluated by cross-validation. The result was compared to traditional quantitative structure-activity relationship (QSAR) models using some molecular descriptors and physicochemical parameters. Interestingly, a single LEKC retention parameter was capable of describing the skin permeability, while three variables were needed in QSAR to achieve a similar correlation (R2=0.704). The QRAR models developed in this paper may be a useful method for screening new chemicals in the early stages of development and selection.

  18. Diagnosis and quantitative estimation of pulmonary congestion or edema by pulmonary CT numbers

    Pulmonary computed tomography (CT) was performed in 25 patients with left heart failure and 10 healthy persons to diagnose pulmonary congestion or edema associated with left heart failure. In an analysis of histograms of pulmonary CT numbers obtained from CT scans, CT numbers indicating pulmonary edema were defined as -650 to -750 H.U. This allowed pulmonary edema to be quantitatively estimated early, when abnormal findings were not yet available on chest X-ray film or pulmonary circulation studies. Histograms of CT numbers could be displayed as colors on CT scans. (Namekawa, K.)
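
    The histogram criterion described above can be expressed in a few lines; the sketch below (synthetic lung voxels, not patient data) counts the fraction of CT numbers falling in the -750 to -650 H.U. band taken to indicate pulmonary edema.

```python
# Sketch of the histogram criterion described above: count the fraction of lung voxels
# whose CT number falls in the band defined for pulmonary edema (-750 to -650 HU).
# The synthetic lung volume here is a placeholder for segmented lung voxels.
import numpy as np

rng = np.random.default_rng(1)
lung_hu = rng.normal(loc=-820, scale=80, size=200_000)   # CT numbers of lung voxels (HU)

edema_mask = (lung_hu >= -750) & (lung_hu <= -650)
edema_fraction = edema_mask.mean()
print(f"voxels in edema band: {100 * edema_fraction:.1f} %")
```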

  19. Analytical performance of refractometry in quantitative estimation of isotopic concentration of heavy water in nuclear reactor

    The method of refractometry has been investigated for the quantitative estimation of the isotopic concentration of D2O (heavy water) in a simulated water sample. The viability of refractometry as an analytical technique for rapid and non-invasive determination of D2O concentration in water samples has been demonstrated. The temperature of the samples was precisely controlled to eliminate the effect of temperature fluctuations on refractive index measurement. Calibration performance of this technique exhibited a reasonable analytical response over a wide range (1-100%) of D2O concentration. (author)

  20. Quantitative Analysis of VIIRS DNB Nightlight Point Source for Light Power Estimation and Stability Monitoring

    Changyong Cao; Yan Bai

    2014-01-01

    The high sensitivity and advanced onboard calibration of the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) enable accurate measurements of low light radiances, which lead to enhanced quantitative applications at night. The finer spatial resolution of the DNB also allows users to examine socioeconomic activities at urban scales. Given the growing interest in the use of DNB data, there is a pressing need for better understanding of the calibration stability and absol...

  1. Quantitative estimation of model parameters for post irradiation recovery of cells

    Kim, Jin Kyu; Roh, Chang Hyun; Ryu, Tae Ho [KAERI, Daejeon (Korea, Republic of); Komarova, Ludmila N.; Patin, Vladislav G. [Medical Radiological Research Center, Russia (Korea, Republic of)

    2012-10-15

    It is well known that cell recovery from radiation-induced DNA damage determines the ultimate biological effects produced by ionizing radiation. Thus, impairment of the ability of cells to recover from radiation damage would be of great relevance in cancer treatment. It is generally agreed that their efficacy is expressed as a slower recovery rate and a lesser extent of recovery. These effects may have different causes: damage to the recovery processes, an increase in the portion of irreversible damage, or both. Therefore, it would be of interest to estimate quantitatively the role of each of these causes. The purpose of this study is to suggest a mathematical model pertinent to post-irradiation recovery and to estimate the model parameters describing the post-irradiation recovery of cells exposed to chemicals and ionizing radiation.
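
    One plausible way to parameterize such recovery, shown purely as an assumption and not necessarily the model proposed by the authors, is an exponential decay of damage toward an irreversible plateau; the Python sketch below fits that form to synthetic data to recover a recovery rate and an irreversible fraction.

```python
# Sketch of fitting one plausible recovery model (an assumption, not the authors' model):
# damage decays exponentially toward an irreversible plateau,
#   D(t) = D_irr + (D0 - D_irr) * exp(-t / tau).
# The "measured" data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, d0, d_irr, tau):
    return d_irr + (d0 - d_irr) * np.exp(-t / tau)

t = np.array([0, 1, 2, 4, 6, 8, 12.0])                    # hours after irradiation
d = np.array([1.0, 0.78, 0.62, 0.45, 0.38, 0.34, 0.31])   # normalized damage (synthetic)

(d0, d_irr, tau), _ = curve_fit(recovery, t, d, p0=[1.0, 0.3, 3.0])
print(f"recovery rate 1/tau = {1/tau:.2f} 1/h, irreversible fraction = {d_irr/d0:.2f}")
```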

  2. Quantitative Cyber Risk Reduction Estimation Methodology for a Small Scada Control System

    Miles A. McQueen; Wayne F. Boyer; Mark A. Flynn; George A. Beitel

    2006-01-01

    We propose a new methodology for obtaining a quick quantitative measurement of the risk reduction achieved when a control system is modified with the intent to improve cyber security defense against external attackers. The proposed methodology employs a directed graph called a compromise graph, where the nodes represent stages of a potential attack and the edges represent the expected time-to-compromise for differing attacker skill levels. Time-to-compromise is modeled as a function of known vulnerabilities and attacker skill level. The methodology was used to calculate risk reduction estimates for a specific SCADA system and for a specific set of control system security remedial actions. Despite an 86% reduction in the total number of vulnerabilities, the estimated time-to-compromise increased by only about 3 to 30%, depending on target and attacker skill level.
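
    A minimal sketch of the compromise-graph idea follows: nodes are attack stages, edge weights are expected time-to-compromise for a given attacker skill level, and the change in the fastest path from entry point to target gives the estimated benefit of remedial actions. The graph, node names, and all times are illustrative placeholders, not the SCADA system analyzed in the paper.

```python
# Sketch of a compromise graph: nodes are attack stages, edge weights are expected
# time-to-compromise (days) for an assumed attacker skill level; the effect of remedial
# actions is read from the change in the fastest path. Numbers are illustrative only.
import networkx as nx

def fastest_compromise(ttc_edges):
    g = nx.DiGraph()
    g.add_weighted_edges_from(ttc_edges, weight="ttc")
    return nx.shortest_path_length(g, "internet", "plc", weight="ttc")

before = [("internet", "dmz", 2.0), ("dmz", "hmi", 3.0), ("hmi", "plc", 1.0),
          ("internet", "vpn", 5.0), ("vpn", "plc", 4.0)]
after  = [("internet", "dmz", 2.0), ("dmz", "hmi", 15.0), ("hmi", "plc", 1.0),
          ("internet", "vpn", 20.0), ("vpn", "plc", 4.0)]   # after remedial actions

t0, t1 = fastest_compromise(before), fastest_compromise(after)
print(f"expected time-to-compromise: {t0:.0f} -> {t1:.0f} days ({100*(t1-t0)/t0:.0f}% increase)")
```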

  3. Estimation of financial loss ratio for E-insurance:a quantitative model

    钟元生; 陈德人; 施敏华

    2002-01-01

    In view of the risks of E-commerce and the response of the insurance industry to them, this paper addresses one important aspect of insurance, namely the estimation of the financial loss ratio, which is one of the most difficult problems facing the E-insurance industry. This paper proposes a quantitative model for estimating the E-insurance financial loss ratio. The model is based on gross income per enterprise and the CSI/FBI computer crime and security survey. The analysis results presented are reasonable and valuable for both the insurer and the insured and thus can be accepted by both of them. We must point out that, according to our assumptions, the financial loss ratio varied very little, 0.233% in 1999 and 0.236% in 2000, although there was much variation in the main data of the CSI/FBI survey.

  4. On the use of radar-based quantitative precipitation estimates for precipitation frequency analysis

    Eldardiry, Hisham; Habib, Emad; Zhang, Yu

    2015-12-01

    The high spatio-temporal resolution of radar-based multi-sensor Quantitative Precipitation Estimates (QPEs) makes them a potential complement to gauge records for engineering design purposes, such as precipitation frequency analysis. The current study investigates three fundamental issues that arise when radar-based QPE products are used in frequency analysis: (a) the effect of sample size due to the typically short records of radar products; (b) the effect of uncertainties present in radar-rainfall estimation algorithms; and (c) the effect of the frequency estimation approach adopted. The study uses a 13-year dataset of hourly, 4 × 4 km2 radar-based QPE products over a domain that covers Louisiana, USA. Data-based investigations, as well as synthetic simulations, are performed to quantify the uncertainties associated with the radar-based derived frequencies, and to gain insight into the relative contributions of short record lengths and those from conditional biases in the radar product. Three regional estimation procedures were tested and the results indicate the sensitivity of the radar frequency estimates to the selection of the estimation approach and the impact on the uncertainties of the derived extreme quantiles. The simulation experiments revealed that the relatively short radar records explained the majority of the uncertainty associated with the radar-based quantiles; however, they did not account for any tangible contribution to the systematic underestimation observed between radar- and gauge-based frequency estimates. This underestimation was mostly attributable to the conditional bias inherent in the radar product. Addressing such key outstanding problems in radar-rainfall products is necessary before they can be fully and reliably used for frequency analysis applications.
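
    As a generic illustration of the at-site step of such a frequency analysis (not the three regional procedures tested in the study), the sketch below fits a GEV distribution to synthetic annual maxima from a short radar record and reads off return levels.

```python
# Sketch of one common frequency-analysis step: fit a GEV distribution to annual maximum
# hourly rainfall at a pixel and read off return-level quantiles.
# The annual maxima below are synthetic placeholders for a short (13-year) radar record.
import numpy as np
from scipy.stats import genextreme

annual_max_mm = np.array([38, 45, 51, 33, 62, 48, 70, 41, 55, 47, 66, 39, 58.0])  # 13 "years"

shape, loc, scale = genextreme.fit(annual_max_mm)
for T in (2, 10, 25, 100):
    q = genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T:>3}-yr return level: {q:5.1f} mm/h")
```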

  5. Estimation of qualitative and quantitative characteristics interrelation, having an impact on amount of tourists in hospitality industry

    Tatyana P. Levchenko

    2011-01-01

    Full Text Available The article considers methods for estimating the interrelation of qualitative and quantitative characteristics that have an impact on the number of tourists in the hospitality industry, and offers the latest technologies for calculating the given indicators.

  6. Quantifying the Extent of Emphysema : Factors Associated with Radiologists' Estimations and Quantitative Indices of Emphysema Severity Using the ECLIPSE Cohort

    Gietema, Hester A.; Mueller, Nestor L.; Fauerbach, Paola V. Nasute; Sharma, Sanjay; Edwards, Lisa D.; Camp, Pat G.; Coxson, Harvey O.

    2011-01-01

    Rationale and Objectives: This study investigated what factors radiologists take into account when estimating emphysema severity and assessed quantitative computed tomography (CT) measurements of low attenuation areas. Materials and Methods: CT scans and spirometry were obtained on 1519 chronic obst

  7. A method to quantitatively estimate rainfall rate above water surface based on different spectra shapes generated by rai

    LIU Zhenwen; YANG Yanming; WEN Hongtao; NIU Fuqiang; XU Xiaomei

    2011-01-01

    A method combining nonlinear least-squares regression to quantitatively estimate rainfall rate over a water surface from the different spectrum shapes generated by rainfall in certain frequency bands is presented. About 2000 min of spectrum data generat

  8. Optimisation of information influences on problems of consequences of Chernobyl accident and quantitative criteria for estimation of information actions

    The consequences of the Chernobyl NPP accident are still very important for Belarus. About 2 million Belarusians live in districts polluted by Chernobyl radionuclides. Modern approaches to the solution of post-Chernobyl problems in Belarus assume more active use of information and educational actions to build a new radiological culture. This will allow the internal radiation dose to be reduced without spending a lot of money and other resources. Experience of information work with the population affected by Chernobyl from 1986 to 2004 has shown that information and educational influences do not always reach the final aim: application of the received knowledge on radiation safety in practice and a change in lifestyle. Taking into account limited funds and facilities, information work should be optimized. The optimization can be achieved on the basis of quantitative estimations of the effectiveness of information actions. Two parameters can be used for this quantitative estimation: 1) the increase in knowledge of the population and experts on radiation safety, calculated by a new method based on applied information theory (the Mathematical Theory of Communication by Claude E. Shannon), and 2) the reduction of the internal radiation dose, calculated on the basis of measurements with a human irradiation counter (HIC) before and after an information or educational influence. (author)

  9. The quantitative estimation of the vulnerability of brick and concrete wall impacted by an experimental boulder

    Zhang, J.; Guo, Z. X.; Wang, D.; Qian, H.

    2016-02-01

    There is little historical data about the vulnerability of damaged elements due to debris flow events in China. Therefore, it is difficult to quantitatively estimate the vulnerability of elements affected by debris flows. This paper is devoted to research on the vulnerability of brick and concrete walls impacted by debris flows. An experimental boulder (an iron sphere) was applied as a substitute for debris flow since it can produce an impulse load on elements similar in shape to that of a debris flow. Several walls made of brick and concrete were constructed in prototype dimensions to physically simulate the structures damaged in debris flows. The maximum impact force was measured, and the damage conditions of the elements (including cracks and displacements) were collected, described and compared. The failure criterion of brick and concrete walls was proposed with reference to the structural characteristics as well as the damage pattern caused by debris flows. The quantitative estimation of the vulnerability of brick and concrete walls was finally established based on fuzzy mathematics and the proposed failure criterion. Momentum, maximum impact force and maximum impact bending moment were compared as candidates for the disaster intensity index. The results show that the maximum impact bending moment is the most suitable disaster intensity index for establishing the vulnerability curve and formula.
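
    The sketch below illustrates, under assumed values, how a disaster intensity index such as the maximum impact bending moment can be mapped to a vulnerability degree in [0, 1] through a membership-style curve; the logistic form and its parameters are placeholders, not the fitted curve reported by the authors.

```python
# Sketch of mapping a disaster intensity index (maximum impact bending moment) to a
# vulnerability degree in [0, 1] through an assumed S-shaped membership-style curve.
# The curve form and the threshold/steepness values are placeholders, not fitted results.
import numpy as np

def vulnerability(m_knm, m50=40.0, k=0.15):
    """Assumed logistic vulnerability curve: 0 = no damage, 1 = complete failure."""
    return 1.0 / (1.0 + np.exp(-k * (m_knm - m50)))

moments = np.array([10.0, 25.0, 40.0, 55.0, 80.0])   # max impact bending moment (kN*m)
print(np.round(vulnerability(moments), 2))
```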

  10. Quantitative estimation of carbonation and chloride penetration in reinforced concrete by laser-induced breakdown spectroscopy

    Eto, Shuzo; Matsuo, Toyofumi; Matsumura, Takuro; Fujii, Takashi; Tanaka, Masayoshi Y.

    2014-11-01

    The penetration profile of chlorine in a reinforced concrete (RC) specimen was determined by laser-induced breakdown spectroscopy (LIBS). The concrete core was prepared from RC beams with cracking damage induced by bending load and salt water spraying. LIBS was performed using a specimen that was obtained by splitting the concrete core, and the line scan of laser pulses gave the two-dimensional emission intensity profiles of 100 × 80 mm2 within one hour. The two-dimensional profile of the emission intensity suggests that the presence of the crack had less effect on the emission intensity when the measurement interval was larger than the crack width. The chlorine emission spectrum was measured without using the buffer gas, which is usually used for chlorine measurement, by collinear double-pulse LIBS. The apparent diffusion coefficient, which is one of the most important parameters for chloride penetration in concrete, was estimated using the depth profile of chlorine emission intensity and Fick's law. The carbonation depth was estimated on the basis of the relationship between carbon and calcium emission intensities. When the carbon emission intensity was statistically higher than the calcium emission intensity at the measurement point, we determined that the point was carbonated. The estimation results were consistent with the spraying test results using phenolphthalein solution. These results suggest that the quantitative estimation by LIBS of carbonation depth and chloride penetration can be performed simultaneously.
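
    A hedged sketch of the diffusion-coefficient step described above: the chlorine depth profile is fitted with the erfc solution of Fick's second law, C(x, t) = Cs erfc(x / (2 sqrt(D t))). The depths, intensities and exposure time below are synthetic placeholders, not the measured LIBS data.

```python
# Sketch of estimating an apparent diffusion coefficient from a chlorine depth profile
# using the erfc solution of Fick's second law, C(x, t) = Cs * erfc(x / (2 * sqrt(D * t))).
# Depths, intensities and exposure time are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

t_exposure = 180 * 24 * 3600.0                        # exposure time (s), assumed

def fick_profile(x_mm, cs, d_m2_s):
    x = x_mm * 1e-3                                   # mm -> m
    return cs * erfc(x / (2.0 * np.sqrt(d_m2_s * t_exposure)))

depth_mm  = np.array([0, 5, 10, 15, 20, 30, 40.0])
intensity = np.array([1.0, 0.82, 0.58, 0.36, 0.21, 0.07, 0.02])   # normalized Cl emission

(cs, d_app), _ = curve_fit(fick_profile, depth_mm, intensity, p0=[1.0, 1e-11])
print(f"apparent diffusion coefficient: {d_app:.2e} m^2/s")
```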

  11. Quantitative pre-surgical lung function estimation with SPECT/CT

    Full text: Objectives: To develop methodology to predict lobar lung function based on SPECT/CT ventilation and perfusion (V/Q) scanning in candidates for lobectomy for lung cancer. Methods: This combines two development areas from our group: quantitative SPECT based on CT-derived corrections for scattering and attenuation of photons, and SPECT V/Q scanning with lobar segmentation from CT. Eight patients underwent baseline pulmonary function testing (PFT) including spirometry, measurement of DLCO and cardio-pulmonary exercise testing. A SPECT/CT V/Q scan was acquired at baseline. Using in-house software each lobe was anatomically defined using CT to provide lobar ROIs which could be applied to the SPECT data. From these, the individual lobar contribution to overall function was calculated from counts within the lobe, and post-operative FEV1, DLCO and VO2 peak were predicted. This was compared with the quantitative planar scan method using 3 rectangular ROIs over each lung. Results: Post-operative FEV1 most closely matched that predicted by the planar quantification method, with SPECT V/Q over-estimating the loss of function by 8% (range -7% to +23%). However, post-operative DLCO and VO2 peak were both accurately predicted by SPECT V/Q (average error of 0 and 2% respectively) compared with planar. Conclusions: More accurate anatomical definition of lobar anatomy provides better estimates of post-operative loss of function for DLCO and VO2 peak than traditional planar methods. SPECT/CT provides the tools for accurate anatomical definition of the surgical target as well as being useful in producing quantitative 3D functional images for ventilation and perfusion.
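
    The counts-based prediction described above amounts to scaling baseline function by the fraction of total SPECT counts contributed by the lobe to be resected; the sketch below uses illustrative counts and baseline values only.

```python
# Sketch of counts-based lobar prediction: the resected lobe contributes its share of total
# SPECT counts, and post-operative function is scaled accordingly.
# Counts and baseline values are illustrative placeholders, not patient data.
lobe_counts = {"RUL": 180_000, "RML": 90_000, "RLL": 210_000, "LUL": 240_000, "LLL": 200_000}
resected = "RLL"

fraction_lost = lobe_counts[resected] / sum(lobe_counts.values())
baseline = {"FEV1_L": 2.4, "DLCO": 18.0, "VO2peak": 21.0}
predicted = {k: round(v * (1.0 - fraction_lost), 2) for k, v in baseline.items()}
print(f"{resected} contributes {100*fraction_lost:.0f}% of counts; predicted post-op: {predicted}")
```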

  12. Estimating effects of a single gene and polygenes on quantitative traits from a diallel design.

    Lou, Xiang-Yang; Yang, Mark C K

    2006-01-01

    A genetic model is developed with additive and dominance effects of a single gene and polygenes as well as general and specific reciprocal effects for the progeny from a diallel mating design. The methods of ANOVA, minimum norm quadratic unbiased estimation (MINQUE), restricted maximum likelihood estimation (REML), and maximum likelihood estimation (ML) are suggested for estimating variance components, and the methods of generalized least squares (GLS) and ordinary least squares (OLS) for fixed effects, while best linear unbiased prediction, linear unbiased prediction (LUP), and adjusted unbiased prediction are suggested for analyzing random effects. Monte Carlo simulations were conducted to evaluate the unbiasedness and efficiency of statistical methods involving two diallel designs with commonly used sample sizes, 6 and 8 parents, with no and missing crosses, respectively. Simulation results show that GLS and OLS are almost equally efficient for estimation of fixed effects, while MINQUE (1) and REML are better estimators of the variance components and LUP is most practical method for prediction of random effects. Data from a Drosophila melanogaster experiment (Gilbert 1985a, Theor appl Genet 69:625-629) were used as a working example to demonstrate the statistical analysis. The new methodology is also applicable to screening candidate gene(s) and to other mating designs with multiple parents, such as nested (NC Design I) and factorial (NC Design II) designs. Moreover, this methodology can serve as a guide to develop new methods for detecting indiscernible major genes and mapping quantitative trait loci based on mixture distribution theory. The computer program for the methods suggested in this article is freely available from the authors. PMID:17028974

  13. Application of short-wave infrared (SWIR) spectroscopy in quantitative estimation of clay mineral contents

    Clay minerals are significant constituents of soil which are necessary for life. This paper studied three types of clay minerals, kaolinite, illite, and montmorillonite, for they are not only the most common soil-forming materials, but also important indicators of soil expansion and shrinkage potential. These clay minerals show diagnostic absorption bands resulting from vibrations of hydroxyl groups and structural water molecules in the SWIR wavelength region. The short-wave infrared reflectance spectra of the soil were obtained from a Portable Near Infrared Spectrometer (PNIS, spectrum range: 1300∼2500 nm, interval: 2 nm). Due to its simplicity, speed, and non-destructive nature, SWIR spectroscopy has been widely used in geological prospecting, chemical engineering and many other fields. The aim of this study was to use multiple linear regression (MLR) and partial least squares (PLS) regression to establish optimal quantitative estimation models of the kaolinite, illite and montmorillonite contents from soil reflectance spectra. Here, the soil reflectance spectra mainly refer to the spectral reflectivity of soil (SRS) corresponding to the absorption-band positions (AP) of the kaolinite, illite, and montmorillonite representative spectra from the USGS spectral library, the SRS corresponding to the AP of the soil spectra, and the soil overall spectrum reflectance values. The optimal estimation models of the three clay mineral contents showed that the retrieval accuracy was satisfactory (kaolinite content: a Root Mean Square Error of Calibration (RMSEC) of 1.671 with a coefficient of determination (R2) of 0.791; illite content: a RMSEC of 1.126 with a R2 of 0.616; montmorillonite content: a RMSEC of 1.814 with a R2 of 0.707). Thus, the reflectance spectra of soil obtained from the PNIS could be used for quantitative estimation of kaolinite, illite and montmorillonite contents in soil
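
    As an illustration of the PLS option mentioned above, the sketch below regresses a clay-mineral content on SWIR reflectance spectra and reports a cross-validated R2; the spectra and contents are random placeholders standing in for PNIS measurements.

```python
# Sketch of PLS regression of a clay-mineral content on SWIR reflectance spectra.
# The spectra and kaolinite contents are random placeholders, not PNIS measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((60, 600))                      # 60 samples x 600 SWIR bands (1300-2500 nm)
y = X[:, 300:310].mean(axis=1) * 20 + rng.normal(0, 0.5, 60)   # synthetic kaolinite content (%)

pls = PLSRegression(n_components=5)
r2 = cross_val_score(pls, X, y, cv=5, scoring="r2").mean()
print(f"cross-validated R^2: {r2:.2f}")
```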

  14. A Novel Method of Quantitative Anterior Chamber Depth Estimation Using Temporal Perpendicular Digital Photography

    Zamir, Ehud; Kong, George Y.X.; Kowalski, Tanya; Coote, Michael; Ang, Ghee Soon

    2016-01-01

    Purpose We hypothesize that: (1) Anterior chamber depth (ACD) is correlated with the relative anteroposterior position of the pupillary image, as viewed from the temporal side. (2) Such a correlation may be used as a simple quantitative tool for estimation of ACD. Methods Two hundred sixty-six phakic eyes had lateral digital photographs taken from the temporal side, perpendicular to the visual axis, and underwent optical biometry (Nidek AL scanner). The relative anteroposterior position of the pupillary image was expressed using the ratio between: (1) lateral photographic temporal limbus to pupil distance (“E”) and (2) lateral photographic temporal limbus to cornea distance (“Z”). In the first chronological half of patients (Correlation Series), E:Z ratio (EZR) was correlated with optical biometric ACD. The correlation equation was then used to predict ACD in the second half of patients (Prediction Series) and compared to their biometric ACD for agreement analysis. Results A strong linear correlation was found between EZR and ACD, R = −0.91, R2 = 0.81. Bland-Altman analysis showed good agreement between predicted ACD using this method and the optical biometric ACD. The mean error was −0.013 mm (range −0.377 to 0.336 mm), standard deviation 0.166 mm. The 95% limits of agreement were ±0.33 mm. Conclusions Lateral digital photography and EZR calculation is a novel method to quantitatively estimate ACD, requiring minimal equipment and training. Translational Relevance EZ ratio may be employed in screening for angle closure glaucoma. It may also be helpful in outpatient medical clinic settings, where doctors need to judge the safety of topical or systemic pupil-dilating medications versus their risk of triggering acute angle closure glaucoma. Similarly, non ophthalmologists may use it to estimate the likelihood of acute angle closure glaucoma in emergency presentations. PMID:27540496
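
    The two steps described above (fit ACD against the E:Z ratio in a correlation series, then predict and check agreement in a prediction series) can be sketched as follows; the data are synthetic, built only to mimic the reported strong negative correlation, and are not the study's measurements.

```python
# Sketch of the two-step procedure: (1) fit a linear model of biometric ACD on the E:Z ratio
# in a "correlation series", (2) predict ACD for new eyes and summarize agreement.
# All data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
ezr = rng.uniform(0.2, 0.8, 133)
acd = 4.2 - 2.0 * ezr + rng.normal(0, 0.15, 133)      # "measured" ACD (mm), synthetic

slope, intercept = np.polyfit(ezr, acd, 1)            # correlation-series fit

ezr_new  = rng.uniform(0.2, 0.8, 133)                 # prediction series
acd_true = 4.2 - 2.0 * ezr_new + rng.normal(0, 0.15, 133)
acd_pred = slope * ezr_new + intercept

err = acd_pred - acd_true                             # Bland-Altman style agreement summary
print(f"mean error {err.mean():+.3f} mm, 95% limits of agreement +/- {1.96*err.std():.2f} mm")
```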

  15. Quantitative ultrasound characterization of locally advanced breast cancer by estimation of its scatterer properties

    Tadayyon, Hadi [Physical Sciences, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Medical Biophysics, Faculty of Medicine, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Sadeghi-Naini, Ali; Czarnota, Gregory, E-mail: Gregory.Czarnota@sunnybrook.ca [Physical Sciences, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Medical Biophysics, Faculty of Medicine, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada); Department of Radiation Oncology, Faculty of Medicine, University of Toronto, Toronto, Ontario M5T 1P5 (Canada); Wirtzfeld, Lauren [Department of Physics, Ryerson University, Toronto, Ontario M5B 2K3 (Canada); Wright, Frances C. [Division of Surgical Oncology, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5 (Canada)

    2014-01-15

    Purpose: Tumor grading is an important part of breast cancer diagnosis and currently requires biopsy as its standard. Here, the authors investigate quantitative ultrasound parameters in locally advanced breast cancers that can potentially separate tumors from normal breast tissue and differentiate tumor grades. Methods: Ultrasound images and radiofrequency data from 42 locally advanced breast cancer patients were acquired and analyzed. Parameters related to the linear regression of the power spectrum—midband fit, slope, and 0-MHz-intercept—were determined from breast tumors and normal breast tissues. Mean scatterer spacing was estimated from the spectral autocorrelation, and the effective scatterer diameter and effective acoustic concentration were estimated from the Gaussian form factor. Parametric maps of each quantitative ultrasound parameter were constructed from the gated radiofrequency segments in tumor and normal tissue regions of interest. In addition to the mean values of the parametric maps, higher order statistical features, computed from gray-level co-occurrence matrices were also determined and used for characterization. Finally, linear and quadratic discriminant analyses were performed using combinations of quantitative ultrasound parameters to classify breast tissues. Results: Quantitative ultrasound parameters were found to be statistically different between tumor and normal tissue (p < 0.05). The combination of effective acoustic concentration and mean scatterer spacing could separate tumor from normal tissue with 82% accuracy, while the addition of effective scatterer diameter to the combination did not provide significant improvement (83% accuracy). Furthermore, the two advanced parameters, including effective scatterer diameter and mean scatterer spacing, were found to be statistically differentiating among grade I, II, and III tumors (p = 0.014 for scatterer spacing, p = 0.035 for effective scatterer diameter). The separation of the tumor

  16. Quantitative ultrasound characterization of locally advanced breast cancer by estimation of its scatterer properties

    Purpose: Tumor grading is an important part of breast cancer diagnosis and currently requires biopsy as its standard. Here, the authors investigate quantitative ultrasound parameters in locally advanced breast cancers that can potentially separate tumors from normal breast tissue and differentiate tumor grades. Methods: Ultrasound images and radiofrequency data from 42 locally advanced breast cancer patients were acquired and analyzed. Parameters related to the linear regression of the power spectrum—midband fit, slope, and 0-MHz-intercept—were determined from breast tumors and normal breast tissues. Mean scatterer spacing was estimated from the spectral autocorrelation, and the effective scatterer diameter and effective acoustic concentration were estimated from the Gaussian form factor. Parametric maps of each quantitative ultrasound parameter were constructed from the gated radiofrequency segments in tumor and normal tissue regions of interest. In addition to the mean values of the parametric maps, higher order statistical features, computed from gray-level co-occurrence matrices were also determined and used for characterization. Finally, linear and quadratic discriminant analyses were performed using combinations of quantitative ultrasound parameters to classify breast tissues. Results: Quantitative ultrasound parameters were found to be statistically different between tumor and normal tissue (p < 0.05). The combination of effective acoustic concentration and mean scatterer spacing could separate tumor from normal tissue with 82% accuracy, while the addition of effective scatterer diameter to the combination did not provide significant improvement (83% accuracy). Furthermore, the two advanced parameters, including effective scatterer diameter and mean scatterer spacing, were found to be statistically differentiating among grade I, II, and III tumors (p = 0.014 for scatterer spacing, p = 0.035 for effective scatterer diameter). The separation of the tumor

  17. Estimating the number of integrations in transformed plants by quantitative real-time PCR

    Vaira Anna Maria

    2002-10-01

    Full Text Available Abstract Background When generating transformed plants, a first step in their characterization is to obtain, for each new line, an estimate of how many copies of the transgene have been integrated in the plant genome because this can deeply influence the level of transgene expression and the ease of stabilizing expression in following generations. This task is normally achieved by Southern analysis, a procedure that requires relatively large amounts of plant material and is both costly and labour-intensive. Moreover, in the presence of rearranged copies the estimates are not correct. New approaches to the problem could be of great help for plant biotechnologists. Results By using a quantitative real-time PCR method that requires limited preliminary optimisation steps, we achieved statistically significant estimates of 1, 2 and 3 copies of a transgene in the primary transformants. Furthermore, by estimating the copy number of both the gene of interest and the selectable marker gene, we show that rearrangements of the T-DNA are not the exception, and probably happen more often than usually recognised. Conclusions We have developed a rapid and reliable method to estimate the number of integrated copies following genetic transformation. Unlike other similar procedures, this method is not dependent on identical amplification efficiency between the PCR systems used and does not need preliminary information on a calibrator. Its flexibility makes it appropriate in those situations where an accurate optimisation of all reaction components is impossible or impractical. Finally, the quality of the information produced is higher than what can be obtained by Southern blot analysis.

  18. Using extended genealogy to estimate components of heritability for 23 quantitative and dichotomous traits.

    Noah Zaitlen

    2013-05-01

    Full Text Available Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays.

  19. Spectral Feature Analysis for Quantitative Estimation of Cyanobacteria Chlorophyll-A

    Lin, Yi; Ye, Zhanglin; Zhang, Yugan; Yu, Jie

    2016-06-01

    In recent years, lake eutrophication has caused large cyanobacteria blooms, which have not only brought serious ecological disasters but have also restricted the sustainable development of the regional economy in our country. Chlorophyll-a is a very important environmental factor for monitoring water quality, especially for lake eutrophication. Remote sensing techniques have been widely utilized for estimating the concentration of chlorophyll-a with different kinds of vegetation indices and for monitoring its distribution in lakes, rivers or along coastlines. For each vegetation index, its quantitative estimation accuracy for different satellite data might change since there might be a discrepancy in spectral resolution and channel center between different satellites. The purpose of this paper is to analyze the spectral features of chlorophyll-a with hyperspectral data (totally 651 bands) and use the result to choose the optimal band combination for different satellites. The analysis method developed in this study could be useful for recognizing and monitoring cyanobacteria blooms automatically and accurately. In our experiment, the reflectance (from 350 nm to 1000 nm) of wild cyanobacteria at different concentrations (from 0 to 1362.11 ug/L) and the corresponding chlorophyll-a concentration were measured simultaneously. Two kinds of hyperspectral vegetation indices were applied in this study: simple ratio (SR) and narrow-band normalized difference vegetation index (NDVI), both of which consist of any two bands in the entire 651 narrow bands. Then multivariate statistical analysis was used to construct the linear, power and exponential models. After analyzing the correlation between chlorophyll-a and single-band reflectance, SR and NDVI respectively, the optimal spectral index for quantitative estimation of cyanobacteria chlorophyll-a, as well as the corresponding central wavelength and band width, were extracted. Results show that: under the condition of water disturbance, SR and NDVI are both suitable for quantitative
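
    The index construction and regression described above can be sketched as follows; the band choice near 675/700 nm and the synthetic reflectances are assumptions for illustration, not the optimal band combination identified in the study.

```python
# Sketch of building the two hyperspectral indices described above from a pair of bands and
# regressing chlorophyll-a on them. Band choices and data are placeholders, not the optimal pair.
import numpy as np

rng = np.random.default_rng(0)
chl   = rng.uniform(5, 1300, 80)                            # chlorophyll-a (ug/L), synthetic
r_700 = 0.02 + 1e-4 * chl + rng.normal(0, 0.002, 80)        # reflectance near 700 nm
r_675 = 0.02 + 2e-5 * chl + rng.normal(0, 0.002, 80)        # reflectance near 675 nm

sr   = r_700 / r_675                                        # simple ratio
ndvi = (r_700 - r_675) / (r_700 + r_675)                    # narrow-band NDVI

for name, idx in (("SR", sr), ("NDVI", ndvi)):
    slope, intercept = np.polyfit(idx, chl, 1)              # linear model; power/exponential analogous
    r = np.corrcoef(idx, chl)[0, 1]
    print(f"{name}: chl = {slope:.1f} * index + {intercept:.1f}, r = {r:.2f}")
```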

  20. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques

    Koeppe, Robert Allen

    Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations
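
    A minimal sketch of the blood-tissue exchange underlying such flow estimates follows (an assumed one-tissue formulation, not necessarily the exact model used here): tissue activity is the arterial input convolved with a single-exponential impulse response, and K1, k2 and their ratio (the partition coefficient) are recovered by least squares from a synthetic curve.

```python
# Sketch of a simple blood-tissue exchange model: tissue activity is the arterial input
# convolved with a single-exponential impulse response,
#   C_t(t) = K1 * integral C_a(s) * exp(-k2 * (t - s)) ds,  partition coefficient = K1 / k2.
# The input function and "measured" curve are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

t  = np.linspace(0, 10, 120)                     # minutes
ca = t * np.exp(-t)                              # synthetic arterial input function

def tissue_curve(t, k1, k2):
    dt = t[1] - t[0]
    irf = np.exp(-k2 * t)                        # single-exponential impulse response
    return k1 * np.convolve(ca, irf)[: len(t)] * dt

ct_meas = tissue_curve(t, 0.6, 0.7) + np.random.default_rng(0).normal(0, 0.002, len(t))
(k1, k2), _ = curve_fit(tissue_curve, t, ct_meas, p0=[0.5, 0.5])
print(f"K1 = {k1:.2f} ml/min/ml, k2 = {k2:.2f} 1/min, partition coefficient = {k1/k2:.2f}")
```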

  1. The overall impact of testing on medical student learning: quantitative estimation of consequential validity.

    Kreiter, Clarence D; Green, Joseph; Lenoch, Susan; Saiki, Takuya

    2013-10-01

    Given medical education's longstanding emphasis on assessment, it seems prudent to evaluate whether our current research and development focus on testing makes sense. Since any intervention within medical education must ultimately be evaluated based upon its impact on student learning, this report seeks to provide a quantitative accounting of the learning gains attained through educational assessments. To approach this question, we estimate achieved learning within a medical school environment that optimally utilizes educational assessments. We compare this estimate to learning that might be expected in a medical school that employs no educational assessments. Effect sizes are used to estimate testing's total impact on learning by summarizing three effects; the direct effect, the indirect effect, and the selection effect. The literature is far from complete, but the available evidence strongly suggests that each of these effects is large and the net cumulative impact on learning in medical education is over two standard deviations. While additional evidence is required, the current literature shows that testing within medical education makes a strong positive contribution to learning. PMID:22886140

  2. Quantitative phase imaging technologies to assess neuronal activity (Conference Presentation)

    Thouvenin, Olivier; Fink, Mathias; Boccara, Claude

    2016-03-01

    Active neurons tend to have a different dynamical behavior compared to resting ones. Non-exhaustively, vesicular transport towards the synapses is increased, while axonal growth becomes slower. Previous studies also reported small phase variations occurring simultaneously with the action potential. Such changes exhibit time scales ranging from milliseconds to several seconds on spatial scales smaller than the optical diffraction limit. Therefore, QPI systems are of particular interest for measuring neuronal activity without labels. Here, we report the development of two new QPI systems that should enable the detection of such activity. Both systems can acquire full-field phase images with sub-nanometer sensitivity at a few hundred frames per second. The first setup is a synchronous combination of Full Field Optical Coherence Tomography (FF-OCT) and fluorescence wide-field imaging. The latter modality enables the measurement of neurons' electrical activity using calcium indicators. In cultures, FF-OCT exhibits features similar to Digital Holographic Microscopy (DHM), apart from the complex computational reconstruction. However, FF-OCT is of particular interest for measuring phase variations in tissues. The second setup is based on a Quantitative Differential Interference Contrast setup mounted in an epi-illumination configuration with spectrally incoherent illumination. Such a common-path interferometer exhibits very good mechanical stability, and thus enables the measurement of phase images over hours. Additionally, such a setup can measure not only a height change, but also an optical index change for both polarizations. Hence, one can measure a phase change and a birefringence change simultaneously.

  3. A novel HPTLC method for quantitative estimation of biomarkers in polyherbal formulation

    Zeeshan Ahmed Sheikh; Sadia Shakeel; Somia Gul; Aqib Zahoor; Saleha Suleman Khan; Faisal Haider Zaidi; Khan Usmanghani

    2015-01-01

    Objective: To explore the quantitative estimation of the biomarkers gallic acid and berberine in the polyherbal formulation Entoban syrup. Methods: High performance thin layer chromatography was performed to evaluate the presence of gallic acid and berberine employing toluene:ethyl acetate:formic acid:methanol 12:9:4:0.5 (v/v/v/v) and ethanol:water:formic acid 90:9:1 (v/v/v) as mobile phases, respectively. Results: The Rf values (0.58) for gallic acid and (0.76) for berberine in both the sample and the reference standard were found comparable under UV light at 273 nm and 366 nm, respectively. The high performance thin layer chromatography method developed for quantitation was simple, accurate and specific. Conclusions: The present standardization provides a specific and accurate tool to develop qualifications for the identity, transparency and reproducibility of biomarkers in Entoban syrup.

  4. Quantitative estimation of naproxen in tablets using ibuprofen sodium as hydrotropic agent

    Maheshwari R

    2009-01-01

    Full Text Available In the present investigation, 0.5 M ibuprofen sodium solution was used as a hydrotropic solubilizing agent for naproxen, a poorly water-soluble drug, giving a more than 350-fold enhancement in the solubility of naproxen compared to its solubility in distilled water. Therefore, this hydrotropic solution was employed to extract the drug from its tablet dosage form for quantitative estimation by titrimetry. Naproxen has been successfully analyzed in tablets. The results of the analysis obtained by the proposed method compared well with those obtained by the corresponding British Pharmacopoeia method involving the use of methanol. The proposed method was also validated by recovery studies. The presence of ibuprofen sodium and common excipients did not interfere with the analysis. The proposed method is new, simple, economical, safe, rapid, accurate, reproducible and environment-friendly.

  5. Parameter estimation using the genetic algorithm and its impact on quantitative precipitation forecast

    Y. H. Lee

    2006-12-01

    In this study, optimal parameter estimation is performed for both physical and computational parameters in a mesoscale meteorological model, and the impact on quantitative precipitation forecasting (QPF) is assessed for a heavy rainfall case that occurred over the Korean Peninsula in June 2005. Experiments are carried out using the PSU/NCAR MM5 model and a genetic algorithm (GA) for two parameters: the reduction rate of the convective available potential energy in the Kain-Fritsch (KF) cumulus parameterization scheme, and the Asselin filter parameter for numerical stability. The fitness function is defined based on a QPF skill score. Each optimized parameter significantly improves the QPF skill, and the improvement is maximized when the two optimized parameters are used simultaneously. Our results indicate that optimization of computational as well as physical parameters, and their adequate application, is essential for improving model performance.
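
    The abstract does not give the GA configuration, so the following is only a minimal sketch of such a parameter search, assuming a real-coded GA and a user-supplied routine (the hypothetical run_forecast_skill) that runs the model with candidate parameters and returns a QPF skill score to be maximized.

```python
# Minimal real-coded genetic algorithm for tuning two model parameters
# (e.g., the KF CAPE reduction rate and the Asselin filter coefficient).
# run_forecast_skill() is a hypothetical placeholder, not part of MM5.
import random

BOUNDS = [(0.0, 1.0),    # assumed search range for the KF CAPE reduction rate
          (0.0, 0.25)]   # assumed search range for the Asselin filter parameter

def run_forecast_skill(params):
    """Placeholder: run the forecast with `params` and return a skill score."""
    raise NotImplementedError

def clip(x, lo, hi):
    return max(lo, min(hi, x))

def genetic_search(pop_size=20, generations=15, sigma=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(((run_forecast_skill(p), p) for p in pop), reverse=True)
        parents = [p for _, p in scored[:pop_size // 2]]      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()                                   # blend crossover
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]
            child = [clip(c + rng.gauss(0.0, sigma), lo, hi)   # Gaussian mutation
                     for c, (lo, hi) in zip(child, BOUNDS)]
            children.append(child)
        pop = parents + children
    return max((run_forecast_skill(p), p) for p in pop)        # (best score, params)
```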

  6. Antioxidant and Quantitative Estimation of Phenolic and Flavonoids of Three Halophytic Plants Growing in Libya

    Hamdoon A

    2013-09-01

    Halophytic plants are particularly susceptible to oxidative stress and damage because of the high salt and mineral contents inside these plants; their defence against this oxidative stress is reflected in a high content of phenolics, particularly flavonoids. Mesembryanthemum crystallinum, Limoniastrum guyonianum and Anabasis articulata, three halophytic plants growing on the Mediterranean coast of Libya and in most North African countries, were taken as examples for estimating phenolic and flavonoid contents and for antioxidant evaluation, in order to understand the effect of the plants' habitat on the production of these by-products. Our present work suggests that there are strong relations between the qualitative and quantitative constituents of these halophytic plants, which grow near to each other in the same environment.

  7. A quantitative method for estimation of volume changes in arachnoid foveae with age.

    Duray, Stephen M; Martel, Stacie S

    2006-03-01

    Age-related changes of arachnoid foveae have been described, but objective, quantitative analyses are lacking. A new quantitative method is presented for estimating the change in total volume of arachnoid foveae with age. The pilot sample consisted of nine skulls from the Palmer Anatomy Laboratory. Arachnoid foveae were filled with sand, which was then extracted using a vacuum pump. Mass was determined with an analytical balance and converted to volume. A reliability analysis was performed using intraclass correlation coefficients (ICC), and the method was found to be highly reliable (intraobserver ICC = 0.9935, interobserver ICC = 0.9878). The relationship between total volume and age was then examined in a sample of 63 males of accurately known age from the Hamann-Todd collection. Linear regression analysis revealed no statistically significant relationship between total volume and age, or between foveae frequency and age (alpha = 0.05). Development of arachnoid foveae may be influenced by health factors, which could limit their usefulness in age estimation. PMID:16566755

  8. Robust quantitative parameter estimation by advanced CMP measurements for vadose zone hydrological studies

    Koyama, C.; Wang, H.; Khuut, T.; Kawai, T.; Sato, M.

    2015-12-01

    Soil moisture plays a crucial role in understanding vadose zone hydrological processes. In the last two decades, ground penetrating radar (GPR) has been widely discussed as a nondestructive measurement technique for soil moisture. In particular, the common mid-point (CMP) technique, which has been used in both seismic and GPR surveys to investigate vertical velocity profiles, has very high potential for quantitative observations from the root zone down to the groundwater aquifer. However, its use is still rather limited, and algorithms for robust quantitative parameter estimation are lacking. In this study we develop an advanced processing scheme for operational soil moisture retrieval at various depths. Using improved signal processing, together with a combined semblance and non-normalized cross-correlation sum stacking approach and the Dix formula, the interval velocities for multiple soil layers are obtained from the RMS velocities, allowing more accurate estimation of the permittivity at the reflecting point. The presence of a water-saturated layer, such as a groundwater aquifer, can be easily identified by its RMS velocity because of the high contrast with the unsaturated zone. A new semi-automated measurement technique reduces the acquisition time for a full CMP gather with 1 cm intervals along a 10 m profile to under 2 minutes. The method is tested and validated under laboratory conditions in a sand pit as well as on agricultural fields and beach sand in the Sendai city area. Comparison between CMP estimates and TDR measurements yields very good agreement, with an RMSE of 1.5 Vol.-%. The accuracy of depth estimation is validated with errors smaller than 2%. Finally, we demonstrate application of the method at a test site in semi-arid Mongolia, namely the Orkhon River catchment in Bulgan, using commercial 100 MHz and 500 MHz RAMAC GPR antennas. The results demonstrate the suitability of the proposed method for
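
    As a sketch of the velocity-to-moisture chain described above, the snippet below converts picked RMS (stacking) velocities from a CMP gather into interval velocities with the Dix equation, then into permittivity and volumetric water content. The Topp et al. (1980) petrophysical relation and the example picks are assumptions; the paper does not state which relation it uses.

```python
# RMS velocities -> interval velocities (Dix) -> permittivity -> soil moisture.
import numpy as np

C = 0.2998  # speed of light in m/ns

def dix_interval_velocities(t, v_rms):
    """t: zero-offset two-way times (ns); v_rms: stacking velocities (m/ns)."""
    t, v_rms = np.asarray(t, float), np.asarray(v_rms, float)
    num = t[1:] * v_rms[1:] ** 2 - t[:-1] * v_rms[:-1] ** 2
    v_int = np.sqrt(num / (t[1:] - t[:-1]))
    return np.concatenate(([v_rms[0]], v_int))   # first layer: v_int equals v_rms

def permittivity(v_int):
    return (C / v_int) ** 2

def topp_water_content(eps):
    # Topp et al. (1980) empirical relation (assumed here, not from the paper)
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps ** 2 + 4.3e-6 * eps ** 3

# Example with made-up picks from a CMP gather:
t_twt = [20.0, 45.0, 80.0]          # ns
v_stack = [0.12, 0.10, 0.09]        # m/ns
theta = topp_water_content(permittivity(dix_interval_velocities(t_twt, v_stack)))
print(np.round(theta, 3))
```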

  9. Quantitative estimation of carbonation and chloride penetration in reinforced concrete by laser-induced breakdown spectroscopy

    The penetration profile of chlorine in a reinforced concrete (RC) specimen was determined by laser-induced breakdown spectroscopy (LIBS). The concrete core was prepared from RC beams with cracking damage induced by bending load and salt water spraying. LIBS was performed using a specimen that was obtained by splitting the concrete core, and the line scan of laser pulses gave the two-dimensional emission intensity profiles of 100 × 80 mm² within one hour. The two-dimensional profile of the emission intensity suggests that the presence of the crack had less effect on the emission intensity when the measurement interval was larger than the crack width. The chlorine emission spectrum was measured without using the buffer gas, which is usually used for chlorine measurement, by collinear double-pulse LIBS. The apparent diffusion coefficient, which is one of the most important parameters for chloride penetration in concrete, was estimated using the depth profile of chlorine emission intensity and Fick's law. The carbonation depth was estimated on the basis of the relationship between carbon and calcium emission intensities. When the carbon emission intensity was statistically higher than the calcium emission intensity at the measurement point, we determined that the point was carbonated. The estimation results were consistent with the spraying test results using phenolphthalein solution. These results suggest that the quantitative estimation by LIBS of carbonation depth and chloride penetration can be performed simultaneously. - Highlights: • We estimated the carbonation depth and the apparent diffusion coefficients of chlorine and sodium in the reinforced concrete with cracking damage by LIBS. • Two-dimensional profile measurement of the emission intensity of each element was performed to visualize the chloride penetration and the carbonation in the reinforced concrete. • Apparent diffusion coefficients of chlorine and sodium can be estimated using Fick's law.
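
    The apparent diffusion coefficient mentioned above is commonly obtained by fitting the error-function solution of Fick's second law to the measured depth profile; the sketch below does this with synthetic data, assuming the depth-averaged LIBS chlorine intensity plays the role of concentration and that the exposure time is known.

```python
# Fit of Fick's second law (semi-infinite medium) to a chloride depth profile
# to recover the apparent diffusion coefficient. The profile here is synthetic;
# in practice the depth-averaged LIBS Cl emission intensity would replace c_obs.
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

T_EXPOSURE = 180 * 24 * 3600.0          # assumed exposure time: 180 days, in s

def fick_profile(x, c_s, d_app):
    """C(x,t) = C_s * erfc(x / (2*sqrt(D*t))) for a constant surface source."""
    return c_s * erfc(x / (2.0 * np.sqrt(d_app * T_EXPOSURE)))

depth = np.linspace(0.0, 0.05, 11)                                  # m
c_obs = fick_profile(depth, 1.0, 2.0e-12) \
        + 0.02 * np.random.default_rng(1).normal(size=depth.size)   # noisy data

(c_s_fit, d_fit), _ = curve_fit(fick_profile, depth, c_obs,
                                p0=(1.0, 1.0e-12), bounds=(0.0, np.inf))
print(f"apparent diffusion coefficient ~ {d_fit:.2e} m^2/s")
```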

  10. Quantitative estimation of carbonation and chloride penetration in reinforced concrete by laser-induced breakdown spectroscopy

    Eto, Shuzo, E-mail: eto@criepi.denken.or.jp [Central Research Institute of Electric Power Industry, 2-6-1 Nagasaka, Yokosuka, Kanagawa 240-0196 (Japan); Matsuo, Toyofumi; Matsumura, Takuro; Fujii, Takashi [Central Research Institute of Electric Power Industry, 2-6-1 Nagasaka, Yokosuka, Kanagawa 240-0196 (Japan); Tanaka, Masayoshi Y. [Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, 2-6-1 Nagasaka, Yokosuka, Kanagawa 240-0196 (Japan)

    2014-11-01

    The penetration profile of chlorine in a reinforced concrete (RC) specimen was determined by laser-induced breakdown spectroscopy (LIBS). The concrete core was prepared from RC beams with cracking damage induced by bending load and salt water spraying. LIBS was performed using a specimen that was obtained by splitting the concrete core, and the line scan of laser pulses gave the two-dimensional emission intensity profiles of 100 × 80 mm² within one hour. The two-dimensional profile of the emission intensity suggests that the presence of the crack had less effect on the emission intensity when the measurement interval was larger than the crack width. The chlorine emission spectrum was measured without using the buffer gas, which is usually used for chlorine measurement, by collinear double-pulse LIBS. The apparent diffusion coefficient, which is one of the most important parameters for chloride penetration in concrete, was estimated using the depth profile of chlorine emission intensity and Fick's law. The carbonation depth was estimated on the basis of the relationship between carbon and calcium emission intensities. When the carbon emission intensity was statistically higher than the calcium emission intensity at the measurement point, we determined that the point was carbonated. The estimation results were consistent with the spraying test results using phenolphthalein solution. These results suggest that the quantitative estimation by LIBS of carbonation depth and chloride penetration can be performed simultaneously. - Highlights: • We estimated the carbonation depth and the apparent diffusion coefficients of chlorine and sodium in the reinforced concrete with cracking damage by LIBS. • Two-dimensional profile measurement of the emission intensity of each element was performed to visualize the chloride penetration and the carbonation in the reinforced concrete. • Apparent diffusion coefficients of chlorine and sodium can be estimated using the Fick

  11. The quantitative precipitation estimation system for Dallas-Fort Worth (DFW) urban remote sensing network

    Chen, Haonan; Chandrasekar, V.

    2015-12-01

    The Dallas-Fort Worth (DFW) urban radar network consists of a combination of high resolution X band radars and a standard National Weather Service (NWS) Next-Generation Radar (NEXRAD) system operating at S band frequency. High spatiotemporal-resolution quantitative precipitation estimation (QPE) is one of the important applications of such a network. This paper presents a real-time QPE system developed by the Collaborative Adaptive Sensing of the Atmosphere (CASA) Engineering Research Center for the DFW urban region using both the high resolution X band radar network and the NWS S band radar observations. The specific dual-polarization radar rainfall algorithms at the different frequencies (i.e., S and X band) and the fusion methodology combining observations at different temporal resolutions are described. Radar and rain gauge observations from four rainfall events in 2013 that are characterized by different meteorological phenomena are used to compare the rainfall estimation products of the CASA DFW QPE system with conventional radar products from the national radar network provided by NWS. This high-resolution QPE system is used for urban flash flood mitigation when coupled with hydrological models.

  12. Application of quantitative structure-property relationship analysis to estimate the vapor pressure of pesticides.

    Goodarzi, Mohammad; Dos Santos Coelho, Leandro; Honarparvar, Bahareh; Ortiz, Erlinda V; Duchowicz, Pablo R

    2016-06-01

    The application of molecular descriptors to Quantitative Structure Property Relationships (QSPR) for the estimation of the vapor pressure (VP) of pesticides is of ongoing interest. In this study, QSPR models were developed using multiple linear regression (MLR) methods to predict the vapor pressure values of 162 pesticides. Several feature selection methods, namely the replacement method (RM), genetic algorithms (GA), stepwise regression (SR) and forward selection (FS), were used to select the most relevant molecular descriptors from a pool of variables. The optimum subset of molecular descriptors was then used to build a QSPR model to estimate the vapor pressures of the selected pesticides. The replacement method yielded the best predictive ability for vapor pressure and was the most reliable feature selection approach for these pesticides. The resulting MLR models had satisfactory predictive ability and will be important for predicting vapor pressure values for compounds with unknown values. This study may open new opportunities for designing and developing new pesticides. PMID:26890190
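
    As an illustration of the kind of descriptor selection named above, here is a minimal sketch of forward selection wrapped around an MLR model, scored by cross-validated R²; the descriptor matrix X and log-vapor-pressure vector y are assumed inputs, and this is not the authors' exact procedure.

```python
# Forward selection of molecular descriptors for an MLR-based QSPR model.
# X: (n_compounds x n_descriptors) array, y: log vapor pressures.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forward_select(X, y, max_features=6, cv=5):
    remaining, chosen, best_score = list(range(X.shape[1])), [], -np.inf
    for _ in range(max_features):
        trial = [(cross_val_score(LinearRegression(), X[:, chosen + [j]], y,
                                  scoring="r2", cv=cv).mean(), j)
                 for j in remaining]
        score, j_best = max(trial)
        if score <= best_score:               # stop when no descriptor helps
            break
        best_score, chosen = score, chosen + [j_best]
        remaining.remove(j_best)
    model = LinearRegression().fit(X[:, chosen], y)
    return chosen, model, best_score
```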

  13. Performance of refractometry in quantitative estimation of isotopic concentration of heavy water in nuclear reactor

    Highlights: ► Rapid analysis of heavy water samples, with precise temperature control. ► Entire composition range covered. ► Both variations in mole% and wt.% of D2O in the heavy water samples studied. ► Standard errors of calibration and prediction were estimated. - Abstract: The method of refractometry has been investigated for the quantitative estimation of the isotopic concentration of heavy water (D2O) in simulated water samples. The feasibility of refractometry as an analytical technique for rapid and non-invasive determination of D2O concentration in water samples has been amply demonstrated. The temperature of the samples was precisely controlled to eliminate the effect of temperature fluctuations on the refractive index measurement. The method exhibits a reasonable analytical response over the calibration range of 0–100% D2O. An accuracy of better than ±1% in the measurement of the isotopic purity of heavy water could be achieved over the entire range.

  14. Mereology of Quantitative Structure-Activity Relationships Models

    Guillermo Restrepo

    2015-12-01

    In continuing the research program initiated by Llored and Harré of exploring the part/whole (mereological) discourses of chemistry, we analyse Quantitative Structure-Activity Relationship (QSAR) studies, which are widespread approaches for modeling substances' properties. The study is carried out by analyzing a particular QSAR model, and it is found that different mereologies are needed: from those regarding bulk substances as wholes and molecular entities as parts, to mereologies where the wholes are molecules whose parts are atoms, structured subsets of atoms, nuclei and electronic densities. We suggest a relationship between successful QSAR models and a deep understanding of the mereologies used and the ways they are intertwined. We note that QSAR modelers prefer the substance-molecule mereology and then discuss how that preference is related to simplicity and computational capacity. Historical questions are also opened, e.g. how have the mereologies of substances changed over time, and why are they mostly oriented toward organic chemistry?

  15. Review of Quantitative Structure - Activity Relationships for Acute Mammalian Toxicity

    Iglika Lessigiarska

    2006-12-01

    This paper reviews Quantitative Structure-Activity Relationship (QSAR) models for acute mammalian toxicity published in the last decade. A number of QSAR models based on cytotoxicity data from mammalian cell lines are also included because of their possible use as a surrogate system for predicting acute toxicity to mammals. On the basis of the review, the following conclusions can be made: (i) a relatively small number of models for in vivo toxicity have been published in the literature. This is due to the nature of the endpoint: acute systemic toxicity is usually related to whole-body phenomena and is therefore very complex. The complexity of the mechanisms involved leads to difficulties in QSAR modelling; (ii) most QSAR models identify hydrophobicity as a parameter of high importance for the modelled toxicity. In addition, many models indicate the role of electronic and steric effects; (iii) most of the literature-based models are restricted to single chemical classes. Models based on more heterogeneous data sets are those incorporated in expert systems. In general, the QSAR models for mammalian toxicity identified in this review are considered useful for investigating the mechanisms of toxicity of defined chemical classes. However, for predictive purposes in the regulatory assessment of chemicals, most of the models require additional information to satisfy internationally agreed validation principles. In addition, the development of new models covering larger chemical domains would be useful for the regulatory assessment of chemicals.

  16. Improved quantitative visualization of hypervelocity flow through wavefront estimation based on shadow casting of sinusoidal gratings.

    Medhi, Biswajit; Hegde, Gopalakrishna M; Gorthi, Sai Siva; Reddy, Kalidevapura Jagannath; Roy, Debasish; Vasu, Ram Mohan

    2016-08-01

    A simple noninterferometric optical probe is developed to estimate the wavefront distortion suffered by a plane wave in its passage through density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a continuous-tone sinusoidal grating. Through a geometrical-optics (eikonal) approximation, a bilinear approximation to the distorted wavefront is related to the location-dependent shift (distortion) suffered by the grating, which can be read out space-continuously from the projected grating image. The processing of the grating shadow is done through an efficient Fourier fringe analysis scheme, with either a windowed or a global Fourier transform (WFT and FT). For comparison, wavefront slopes are also estimated from shadows of random-dot patterns, processed through cross correlation. The measured slopes are suitably unwrapped by using a discrete cosine transform (DCT)-based phase unwrapping procedure, and also through iterative procedures. The unwrapped phase information is used in an iterative scheme for a full quantitative recovery of the density distribution in the shock around the model through refraction tomographic inversion. Hypersonic flow field parameters around a missile-shaped body at a free-stream Mach number of ∼8 measured using this technique are compared with numerically estimated values. It is shown that, while processing a wavefront with a small space-bandwidth product (SBP), the FT inversion gave accurate results with computational efficiency; the computation-intensive WFT was needed for similar results when dealing with larger-SBP wavefronts. PMID:27505389

  17. Quantitative modeling of the ionospheric response to geomagnetic activity

    T. J. Fuller-Rowell

    Full Text Available A physical model of the coupled thermosphere and ionosphere has been used to determine the accuracy of model predictions of the ionospheric response to geomagnetic activity, and assess our understanding of the physical processes. The physical model is driven by empirical descriptions of the high-latitude electric field and auroral precipitation, as measures of the strength of the magnetospheric sources of energy and momentum to the upper atmosphere. Both sources are keyed to the time-dependent TIROS/NOAA auroral power index. The output of the model is the departure of the ionospheric F region from the normal climatological mean. A 50-day interval towards the end of 1997 has been simulated with the model for two cases. The first simulation uses only the electric fields and auroral forcing from the empirical models, and the second has an additional source of random electric field variability. In both cases, output from the physical model is compared with F-region data from ionosonde stations. Quantitative model/data comparisons have been performed to move beyond the conventional "visual" scientific assessment, in order to determine the value of the predictions for operational use. For this study, the ionosphere at two ionosonde stations has been studied in depth, one each from the northern and southern mid-latitudes. The model clearly captures the seasonal dependence in the ionospheric response to geomagnetic activity at mid-latitude, reproducing the tendency for decreased ion density in the summer hemisphere and increased densities in winter. In contrast to the "visual" success of the model, the detailed quantitative comparisons, which are necessary for space weather applications, are less impressive. The accuracy, or value, of the model has been quantified by evaluating the daily standard deviation, the root-mean-square error, and the correlation coefficient between the data and model predictions. The modeled quiet-time variability, or standard
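
    The quantitative accuracy measures listed above are straightforward to compute; the sketch below, with assumed array inputs, returns the standard deviations, RMSE and correlation coefficient for a pair of co-located model and observation time series (e.g., daily F-region departures at one ionosonde station).

```python
# Skill metrics used to quantify model value: standard deviations, RMSE and
# the linear correlation coefficient between observed and modeled departures.
import numpy as np

def skill_metrics(obs, model):
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    return {
        "sd_obs": obs.std(ddof=1),
        "sd_model": model.std(ddof=1),
        "rmse": np.sqrt(np.mean((model - obs) ** 2)),
        "corr": np.corrcoef(obs, model)[0, 1],
    }
```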

  18. Quantitative estimation of brain atrophy and function with PET and MRI two-dimensional projection images

    Saito, Reiko; Uemura, Koji; Uchiyama, Akihiko [Waseda Univ., Tokyo (Japan). School of Science and Engineering; Toyama, Hinako; Ishii, Kenji; Senda, Michio

    2001-05-01

    The purpose of this paper is to estimate the extent of atrophy and the decline in brain function objectively and quantitatively. Two-dimensional (2D) projection images of three-dimensional (3D) transaxial images of positron emission tomography (PET) and magnetic resonance imaging (MRI) were made by means of the Mollweide method, which preserves the area of the brain surface. A correlation image was generated between the 2D projection images of MRI and of cerebral blood flow (CBF) or 18F-fluorodeoxyglucose (FDG) PET, and the sulcus was extracted from the correlation image clustered by the K-means method. Furthermore, the extent of atrophy was evaluated from the extracted sulcus on the 2D-projection MRI, the cerebral cortical function such as blood flow or glucose metabolic rate was assessed in the cortex excluding the sulcus on the 2D-projection PET image, and the relationship between cerebral atrophy and function was then evaluated. This method was applied to two groups, young and aged normal subjects, and the relationship between age and the rate of atrophy or the cerebral blood flow was investigated. The method was also applied to FDG-PET and MRI studies in normal controls and in patients with corticobasal degeneration. The mean rate of atrophy in the aged group was found to be higher than that in the young group. The mean value and the variance of the cerebral blood flow for the young are greater than those of the aged. The sulci were similarly extracted using either CBF or FDG PET images. The proposed method using 2D projection images of MRI and PET is clinically useful for quantitative assessment of atrophic change and functional disorder of the cerebral cortex. (author)
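
    For reference, the sketch below implements the forward Mollweide (equal-area) mapping used to flatten the 3D surface into a 2D projection image; the longitude/latitude of a cortical surface point about the brain's center and the scale R are assumed inputs, and the iterative solution of the auxiliary angle is the standard one rather than taken from the paper.

```python
# Forward equal-area Mollweide projection: lon/lat in radians -> map (x, y).
import numpy as np

def mollweide(lon, lat, R=1.0, n_iter=50):
    lon, lat = np.asarray(lon, float), np.asarray(lat, float)
    theta = lat.copy()
    for _ in range(n_iter):                   # Newton solve of 2t + sin 2t = pi sin(lat)
        f = 2.0 * theta + np.sin(2.0 * theta) - np.pi * np.sin(lat)
        theta -= f / (2.0 + 2.0 * np.cos(2.0 * theta) + 1e-12)  # +eps avoids /0 at poles
    x = (2.0 * np.sqrt(2.0) / np.pi) * R * lon * np.cos(theta)
    y = np.sqrt(2.0) * R * np.sin(theta)
    return x, y

# Example: project a few surface points given as (lon, lat) in radians.
x, y = mollweide(np.array([0.0, 1.0, -2.0]), np.array([0.2, 0.8, -0.5]))
```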

  19. Quantitative estimation of brain atrophy and function with PET and MRI two-dimensional projection images

    The purpose of this paper is to estimate the extent of atrophy and the decline in brain function objectively and quantitatively. Two-dimensional (2D) projection images of three-dimensional (3D) transaxial images of positron emission tomography (PET) and magnetic resonance imaging (MRI) were made by means of the Mollweide method, which preserves the area of the brain surface. A correlation image was generated between the 2D projection images of MRI and of cerebral blood flow (CBF) or 18F-fluorodeoxyglucose (FDG) PET, and the sulcus was extracted from the correlation image clustered by the K-means method. Furthermore, the extent of atrophy was evaluated from the extracted sulcus on the 2D-projection MRI, the cerebral cortical function such as blood flow or glucose metabolic rate was assessed in the cortex excluding the sulcus on the 2D-projection PET image, and the relationship between cerebral atrophy and function was then evaluated. This method was applied to two groups, young and aged normal subjects, and the relationship between age and the rate of atrophy or the cerebral blood flow was investigated. The method was also applied to FDG-PET and MRI studies in normal controls and in patients with corticobasal degeneration. The mean rate of atrophy in the aged group was found to be higher than that in the young group. The mean value and the variance of the cerebral blood flow for the young are greater than those of the aged. The sulci were similarly extracted using either CBF or FDG PET images. The proposed method using 2D projection images of MRI and PET is clinically useful for quantitative assessment of atrophic change and functional disorder of the cerebral cortex. (author)

  20. Quantitative assay for TALEN activity at endogenous genomic loci

    Yu Hisano

    2013-02-01

    Artificially designed nucleases such as zinc-finger nucleases (ZFNs) and transcription activator-like effector nucleases (TALENs) can induce a targeted DNA double-strand break at a specific genomic locus, leading to frameshift-mediated gene disruption. However, assays for their activity at endogenous genomic loci remain limited. Herein, we describe a versatile modified lacZ assay to detect frameshifts in the nuclease target site. Short fragments of genomic DNA at the target or putative off-target loci were amplified from the genomic DNA of TALEN-treated or control embryos and inserted into the lacZα sequence for conventional blue-white selection. The frequency of frameshifts in the fragment can be estimated from the numbers of blue and white colonies. Insertions and/or deletions were easily determined by sequencing the plasmid DNAs recovered from the positive colonies. Our technique should be broadly applicable to artificial nucleases for genome editing in various types of model organisms.
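
    To make the colony-counting step concrete, here is a small sketch of estimating the frameshift frequency with a Wilson score interval to reflect the finite number of colonies scored; the counts are illustrative, and it assumes the construct is designed so that in-frame (unmodified) inserts give blue colonies and frameshifted inserts give white ones.

```python
# Frameshift frequency from blue/white colony counts, with a 95% Wilson interval.
from math import sqrt

def frameshift_frequency(white, blue, z=1.96):
    n = white + blue
    p = white / n                                  # white = frameshifted inserts (assumed)
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, (center - half, center + half)

p, ci = frameshift_frequency(white=37, blue=63)    # illustrative counts
print(f"frameshift frequency ~ {p:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```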

  1. Improving high-resolution quantitative precipitation estimation via fusion of multiple radar-based precipitation products

    Rafieeinasab, Arezoo; Norouzi, Amir; Seo, Dong-Jun; Nelson, Brian

    2015-12-01

    For monitoring and prediction of water-related hazards in urban areas such as flash flooding, high-resolution hydrologic and hydraulic modeling is necessary. Because of the large sensitivity and scale dependence of rainfall-runoff models to errors in quantitative precipitation estimates (QPE), it is very important that the accuracy of QPE be improved in high-resolution hydrologic modeling to the greatest extent possible. With the availability of multiple radar-based precipitation products in many areas, one may now consider fusing them to produce more accurate high-resolution QPE for a wide spectrum of applications. In this work, we formulate and comparatively evaluate four relatively simple procedures for such fusion based on Fisher estimation and its conditional bias-penalized variant: Direct Estimation (DE), Bias Correction (BC), Reduced-Dimension Bias Correction (RBC) and Simple Estimation (SE). They are applied to fuse the Multisensor Precipitation Estimator (MPE) and radar-only Next Generation QPE (Q2) products at the 15-min 1-km resolution (Experiment 1), and the MPE and Collaborative Adaptive Sensing of the Atmosphere (CASA) QPE products at the 15-min 500-m resolution (Experiment 2). The resulting fused estimates are evaluated using the 15-min rain gauge observations from the City of Grand Prairie in the Dallas-Fort Worth Metroplex (DFW) in north Texas. The main criterion used for evaluation is that the fused QPE improves over the ingredient QPEs at their native spatial resolutions, and that, at the higher resolution, the fused QPE improves not only over the ingredient higher-resolution QPE but also over the ingredient lower-resolution QPE trivially disaggregated using the ingredient high-resolution QPE. All four procedures assume that the ingredient QPEs are unbiased, which is not likely to hold true in reality even if real-time bias correction is in operation. To test robustness under more realistic conditions, the fusion procedures were evaluated with and
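
    The fusion procedures above are built on Fisher (minimum-variance) estimation; as a highly simplified sketch of that idea, the snippet below combines two co-gridded QPE fields pixel-wise with inverse-error-variance weights. The error variances are assumed known from prior gauge comparisons, and bias handling and resolution differences, central to the paper's DE/BC/RBC/SE variants, are omitted.

```python
# Pixel-wise minimum-variance (inverse-variance weighted) fusion of two
# unbiased QPE fields on a common grid.
import numpy as np

def fuse_qpe(qpe_a, qpe_b, var_a, var_b):
    w_a = var_b / (var_a + var_b)                  # weight inverse to error variance
    fused = w_a * qpe_a + (1.0 - w_a) * qpe_b
    fused_var = (var_a * var_b) / (var_a + var_b)  # variance of the fused estimate
    return fused, fused_var

# toy 2x2 grids (mm per 15 min); variances are assumed, for illustration only
mpe = np.array([[1.0, 2.5], [0.0, 4.0]])
q2  = np.array([[1.4, 2.0], [0.2, 5.0]])
fused, _ = fuse_qpe(mpe, q2, var_a=0.6, var_b=0.3)
print(fused)
```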

  2. Improving Satellite Quantitative Precipitation Estimation Using GOES-Retrieved Cloud Optical Depth

    Stenz, Ronald; Dong, Xiquan; Xi, Baike; Feng, Zhe; Kuligowski, Robert J.

    2016-02-01

    To address significant gaps in ground-based radar coverage and rain gauge networks in the U.S., geostationary satellite quantitative precipitation estimates (QPEs) such as the Self-Calibrating Multivariate Precipitation Retrievals (SCaMPR) can be used to fill in both the spatial and temporal gaps of ground-based measurements. Additionally, with the launch of GOES-R, the temporal resolution of satellite QPEs may be comparable to that of Weather Service Radar-1988 Doppler (WSR-88D) volume scans as GOES images will be available every five minutes. However, while satellite QPEs have strengths in spatial coverage and temporal resolution, they face limitations particularly during convective events. Deep Convective Systems (DCSs) have large cloud shields with similar brightness temperatures (BTs) over nearly the entire system, but widely varying precipitation rates beneath these clouds. Geostationary satellite QPEs relying on the indirect relationship between BTs and precipitation rates often suffer from large errors because anvil regions (little/no precipitation) cannot be distinguished from rain-cores (heavy precipitation) using only BTs. However, a combination of BTs and optical depth (τ) has been found to reduce overestimates of precipitation in anvil regions (Stenz et al. 2014). A new rain mask algorithm incorporating both τ and BTs has been developed, and its application to the existing SCaMPR algorithm was evaluated. The performance of the modified SCaMPR was evaluated using traditional skill scores and a more detailed analysis of performance in individual DCS components by utilizing the Feng et al. (2012) classification algorithm. SCaMPR estimates with the new rain mask applied benefited from significantly reduced overestimates of precipitation in anvil regions and overall improvements in skill scores.

  3. Estimation of activity in waste packages

    Nine nuclear facilities in Canada and the United States were surveyed by telephone to determine their current methods for assaying the radionuclide content of packages of solid low-level radioactive wastes. The international literature was also surveyed to determine current and proposed methods for estimating the radionuclide content of waste packages, and a bibliography of relevant reports and articles has been prepared. Two assay methods are reviewed: assigning a gross Curie content based on an external dose measurement, and estimating the Curie content of specific radionuclides based upon external dose measurements combined with waste stream characterization or gamma spectrum analysis.

  4. Estimation of kinetic and transport parameters by quantitative evaluation of EIS and XPS data

    The relatively large number of adjustable parameters often precludes the unambiguous interpretation of electrochemical impedance spectra in terms of a unique kinetic model. In the present paper, the possibilities offered by a combination of in situ electrochemical impedance spectroscopic data and ex situ surface analytical information to improve the credibility of the estimates of the kinetic and transport parameters are discussed. Two electrode systems in which passive oxide films are formed (stainless steel in simulated pressurised water reactor coolant, and tungsten in sulphate-fluoride solutions) are used as representative examples to demonstrate the different approaches taken to analyse the experimental data in terms of the Mixed-Conduction Model. Ways to extract information on the rate-limiting steps of the process of passive film formation, growth and restructuring by quantitative comparison of the model equations to electrochemical impedance and X-ray photoelectron spectroscopic data are described, and the significance of the obtained parameters for the kinetics of the overall process of metal and alloy dissolution in the passive state is discussed.

  5. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-01-01

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1–3.9 ppm or 3–9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990–2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations. PMID:27329411

  6. Estimation of the patient monitor alarm rate for a quantitative analysis of new alarm settings.

    de Waele, Stijn; Nielsen, Larry; Frassica, Joseph

    2014-01-01

    In many critical care units, default patient monitor alarm settings are not fine-tuned to the vital signs of the patient population. As a consequence there are many alarms. A large fraction of the alarms are not clinically actionable, thus contributing to alarm fatigue. Recent attention to this phenomenon has resulted in attempts in many institutions to decrease the overall alarm load of clinicians by altering the trigger thresholds for monitored parameters. Typically, new alarm settings are defined based on clinical knowledge and patient population norms and tried empirically on new patients without quantitative knowledge about the potential impact of these new settings. We introduce alarm regeneration as a method to estimate the alarm rate of new alarm settings using recorded patient monitor data. This method enables evaluation of several alarm setting scenarios prior to using these settings in the clinical setting. An expression for the alarm rate variance is derived for the calculation of statistical confidence intervals on the results. PMID:25571296
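
    As a minimal sketch of the alarm-regeneration idea (not the authors' implementation), the snippet below replays a recorded, uniformly sampled vital-sign trace against a candidate high limit and counts upward threshold crossings as alarm events, yielding the alarm rate that setting would have produced.

```python
# Replay recorded monitor data against a candidate threshold and estimate the
# alarm rate (alarms per hour) that the new setting would have generated.
import numpy as np

def regenerated_alarm_rate(values, threshold, sample_period_s, high=True):
    v = np.asarray(values, float)
    breach = v > threshold if high else v < threshold
    onsets = np.count_nonzero(breach[1:] & ~breach[:-1]) + int(breach[0])
    hours = len(v) * sample_period_s / 3600.0
    return onsets / hours

# synthetic 1 Hz heart-rate record for 24 h, illustrative only
hr = np.random.default_rng(0).normal(88, 10, size=24 * 3600)
for thr in (110, 120, 130):
    print(thr, round(regenerated_alarm_rate(hr, thr, 1.0), 1), "alarms/h")
```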

  7. QUANTITATIVE ESTIMATION OF SATVA EXTRACTED FROM DIFFERENT STEM SIZES OF GUDUCHI (TINOSPORA CORDIFOLIA (WILLD. MIERS

    Sharma Rohit

    2012-02-01

    Tinospora cordifolia (Willd.) Miers, known as Guduchi in Sanskrit, has been an important drug of the Ayurvedic system of medicine since ancient times. The plant is useful in a wide range of diseases such as Jwara (fever), Kamala (jaundice) and Prameha (diabetes). Guduchi Satva, the starchy material of the stem, is a well-known and potent single-drug formulation of Guduchi. The species of the plant, the size of the stem, the collection time, and the maturity or immaturity of the plant may affect the yield of Guduchi Satva. Keeping these points in view, an attempt has been made to estimate the quantitative variation in Guduchi Satva using three different stem sizes. The results of this study revealed that the yield of Guduchi Satva was higher for medium-sized stems (1.6-2.0 cm) than for thin (1.0-1.5 cm) or thick (2.1-2.5 cm) stems. These findings can be considered in further pharmaceutical validation of Guduchi Satva.

  8. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade.

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-01-01

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1-3.9 ppm or 3-9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990-2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations. PMID:27329411

  9. Quantitative Estimation of the Climatic Effects of Carbon Transferred by International Trade

    Wei, Ting; Dong, Wenjie; Moore, John; Yan, Qing; Song, Yi; Yang, Zhiyong; Yuan, Wenping; Chou, Jieming; Cui, Xuefeng; Yan, Xiaodong; Wei, Zhigang; Guo, Yan; Yang, Shili; Tian, Di; Lin, Pengfei; Yang, Song; Wen, Zhiping; Lin, Hui; Chen, Min; Feng, Guolin; Jiang, Yundi; Zhu, Xian; Chen, Juan; Wei, Xin; Shi, Wen; Zhang, Zhiguo; Dong, Juan; Li, Yexin; Chen, Deliang

    2016-06-01

    Carbon transfer via international trade affects the spatial pattern of global carbon emissions by redistributing emissions related to production of goods and services. It has potential impacts on attribution of the responsibility of various countries for climate change and formulation of carbon-reduction policies. However, the effect of carbon transfer on climate change has not been quantified. Here, we present a quantitative estimate of climatic impacts of carbon transfer based on a simple CO2 Impulse Response Function and three Earth System Models. The results suggest that carbon transfer leads to a migration of CO2 by 0.1–3.9 ppm or 3–9% of the rise in the global atmospheric concentrations from developed countries to developing countries during 1990–2005 and potentially reduces the effectiveness of the Kyoto Protocol by up to 5.3%. However, the induced atmospheric CO2 concentration and climate changes (e.g., in temperature, ocean heat content, and sea-ice) are very small and lie within observed interannual variability. Given continuous growth of transferred carbon emissions and their proportion in global total carbon emissions, the climatic effect of traded carbon is likely to become more significant in the future, highlighting the need to consider carbon transfer in future climate negotiations.

  10. Quantitative estimation of CO2 leakage from geological storage : analytical models, numerical models, and data needs

    Geological storage of carbon dioxide (CO2) is becoming one of the most promising options for carbon mitigation. Because of the large number of existing wells, geological storage of CO2 in mature sedimentary basins in North America requires special consideration: these wells represent potential leakage pathways for the stored CO2 and must therefore be analyzed in the context of an overall environmental risk assessment. This paper examined the development of large-scale modeling tools to quantify potential CO2 leakage along existing wells. It presented an overview of the problem, including specific analyses that quantified spatial statistics of well locations in a mature basin. Modeling options and their relationship to uncertainty analysis were also presented. The study focused particularly on new analytical solutions for injection and leakage. It was concluded that new semi-analytical models for injection and leakage provide simple computational tools for quantitative estimation of leakage. Although they are more restrictive than general numerical models, they provide extreme efficiency while capturing the essential features of the flow processes. 20 refs., 1 tab., 6 figs

  11. Quantitative Simulations of MST Visual Receptive Field Properties Using a Template Model of Heading Estimation

    Stone, Leland S.; Perrone, J. A.

    1997-01-01

    We previously developed a template model of primate visual self-motion processing that proposes a specific set of projections from MT-like local motion sensors onto output units to estimate heading and relative depth from optic flow. At the time, we showed that the model output units have emergent properties similar to those of MSTd neurons, although there was little physiological evidence to test the model more directly. We have now systematically examined the properties of the model using stimulus paradigms used by others in recent single-unit studies of MST: 1) 2-D bell-shaped heading tuning. Most MSTd neurons and model output units show bell-shaped heading tuning. Furthermore, we found that most model output units and the finely sampled example neuron in the Duffy-Wurtz study are well fit by a 2D Gaussian (sigma ≈ 35°, r ≈ 0.9). The bandwidth of model and real units can explain why Lappe et al. found apparently sigmoidal tuning using a restricted range of stimuli (±40°). 2) Spiral tuning and invariance. Graziano et al. found that many MST neurons appear tuned to a specific combination of rotation and expansion (spiral flow) and that this tuning changes little for shifts in stimulus placement of about 10°. Simulations of model output units under the same conditions quantitatively replicate this result. We conclude that a template architecture may underlie MT inputs to MST.
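
    The 2D Gaussian fit quoted above is a standard nonlinear least-squares problem; the sketch below fits such a tuning surface to synthetic responses sampled on a heading grid, with az, el and resp standing in for the measured heading angles and unit responses.

```python
# Fit a 2-D Gaussian heading-tuning surface (response vs. azimuth/elevation).
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, az0, el0, sigma, baseline):
    az, el = xy
    return baseline + amp * np.exp(-((az - az0) ** 2 + (el - el0) ** 2)
                                   / (2.0 * sigma ** 2))

# synthetic example responses on a coarse heading grid (degrees)
az, el = np.meshgrid(np.arange(-80, 81, 20), np.arange(-40, 41, 20))
rng = np.random.default_rng(2)
resp = gauss2d((az, el), 40.0, 10.0, -5.0, 35.0, 5.0) + rng.normal(0, 2, az.shape)

p, _ = curve_fit(gauss2d, (az.ravel(), el.ravel()), resp.ravel(),
                 p0=(30.0, 0.0, 0.0, 30.0, 0.0))
print(f"fitted tuning width sigma ~ {p[3]:.1f} deg")
```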

  12. Quantitative estimation of landslide risk from rapid debris slides on natural slopes in the Nilgiri hills, India

    Jaiswal, P.; C. J. van. Westen; Jetten, V.

    2011-01-01

    A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms. An inventory of 1084 landslides was prepare...

  13. Revised activation estimates for silicon carbide

    Heinisch, H.L. [Pacific Northwest National Lab., Richland, WA (United States); Cheng, E.T.; Mann, F.M.

    1996-10-01

    Recent progress in nuclear data development for fusion energy systems includes a reevaluation of neutron activation cross sections for silicon and aluminum. Activation calculations using the newly compiled Fusion Evaluated Nuclear Data Library result in calculated levels of {sup 26}Al in irradiated silicon that are about an order of magnitude lower than the earlier calculated values. Thus, according to the latest internationally accepted nuclear data, SiC is much more attractive as a low activation material, even in first wall applications.

  14. Revised activation estimates for silicon carbide

    Recent progress in nuclear data development for fusion energy systems includes a reevaluation of neutron activation cross sections for silicon and aluminum. Activation calculations using the newly compiled Fusion Evaluated Nuclear Data Library result in calculated levels of 26Al in irradiated silicon that are about an order of magnitude lower than the earlier calculated values. Thus, according to the latest internationally accepted nuclear data, SiC is much more attractive as a low activation material, even in first wall applications

  15. ESTIMATION OF COMPETITIVE ACTIVITY IN SYNCHRONIZED SWIMMING

    Shul'ga L.M.; Rudkovskaya T.I.

    2013-01-01

    The aim is to develop an approach to estimating the technical complexity of free routine compositions in synchronized swimming. The free routine compositions of the strongest swimmers at the European and World Championships during the period under study (2008-2011) were analyzed. Thirty-two qualified athletes of different ages took part in the research. The options for constructing the free program and the placement of combination saturation within those programs were determined. Also established were complicated ...

  16. Comparative Application of PLS and PCR Methods to Simultaneous Quantitative Estimation and Simultaneous Dissolution Test of Zidovudine - Lamivudine Tablets.

    Üstündağ, Özgür; Dinç, Erdal; Özdemir, Nurten; Tilkan, M Günseli

    2015-01-01

    In the development of new drug products and generic drug products, the simultaneous in-vitro dissolution behavior of oral dosage formulations is the most important indicator for the quantitative estimation of the efficiency and biopharmaceutical characteristics of drug substances. This compels scientists in the field to develop powerful analytical methods that give more reliable, precise and accurate results in the quantitative analysis and dissolution testing of drug formulations. In this context, two chemometric tools, partial least squares (PLS) and principal component regression (PCR), were developed for the simultaneous quantitative estimation and dissolution testing of zidovudine (ZID) and lamivudine (LAM) in a tablet dosage form. The results obtained in this study strongly encourage their use for the quality control, routine analysis and dissolution testing of marketed tablets containing the ZID and LAM drugs. PMID:26085428
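
    A minimal sketch of the two chemometric calibrations named above, assuming a measured absorbance matrix spectra (samples x wavelengths) and a matrix conc of known ZID/LAM concentrations for the calibration set; the number of components and any preprocessing would of course need to be optimized as in the paper.

```python
# PLS and PCR calibrations for simultaneous two-component quantitation.
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def build_models(spectra, conc, n_components=4):
    pls = PLSRegression(n_components=n_components).fit(spectra, conc)
    pcr = make_pipeline(PCA(n_components=n_components),
                        LinearRegression()).fit(spectra, conc)
    return pls, pcr

# prediction on dissolution-test spectra would then be, e.g.:
# zid_lam = pls.predict(test_spectra)
```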

  17. An operational weather radar-based Quantitative Precipitation Estimation and its application in catchment water resources modeling

    He, Xin; Vejen, Flemming; Stisen, Simon;

    2011-01-01

    The Danish Meteorological Institute operates a radar network consisting of five C-band Doppler radars. Quantitative precipitation estimation (QPE) using radar data is performed on a daily basis. Radar QPE is considered to have the potential to significantly improve the spatial representation of ...

  18. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    Frank M. You

    2016-04-01

    The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic correlation of quantitative traits, the two conventional genetic parameters used for breeding selection. We propose a method of estimating the error variance of unreplicated genotypes that uses replicated controls, and then of estimating the genetic parameters. Using the Delta method, we also derived formulas for estimating the sampling variances of the genetic parameters. Computer simulations indicated that the proposed method for estimating genetic parameters and their sampling variances was feasible and the reliability of the estimates was positively associated with the level of heritability of the trait. A case study of estimating the genetic parameters of three quantitative traits, iodine value, oil content, and linolenic acid content, in a biparental recombinant inbred line population of flax with 243 individuals, was conducted using our statistical models. A joint analysis of data over multiple years and sites was suggested for genetic parameter estimation. A pipeline module using SAS and Perl was developed to facilitate data analysis and appended to the previously developed MAD data analysis pipeline (http://probes.pw.usda.gov/bioinformatics_tools/MADPipeline/index.html).
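
    The core idea, estimating the error variance from the replicated controls and subtracting it from the total variance of the unreplicated lines, can be sketched as follows; this is a simplification of the paper's models (no multi-environment joint analysis, no sampling-variance formulas), and the adjusted plot values are assumed inputs.

```python
# Heritability from a MAD2-like trial: pooled within-control variance is used
# as the error variance, and the remainder of the test-line variance as genetic.
import numpy as np

def heritability_mad2(test_values, control_values_by_entry):
    """test_values: adjusted phenotypes of unreplicated lines (one per line).
    control_values_by_entry: list of arrays, replicated values per control entry."""
    ss = sum(((np.asarray(v, float) - np.mean(v)) ** 2).sum()
             for v in control_values_by_entry)
    df = sum(len(v) - 1 for v in control_values_by_entry)
    v_error = ss / df                              # pooled error variance
    v_total = np.var(test_values, ddof=1)
    v_genetic = max(v_total - v_error, 0.0)        # truncate at zero if negative
    return v_genetic / (v_genetic + v_error)

# illustrative synthetic data: 243 test lines, 5 controls with 12 replicates each
h2 = heritability_mad2(
    test_values=np.random.default_rng(3).normal(50, 8, size=243),
    control_values_by_entry=[np.random.default_rng(i).normal(50, 4, size=12)
                             for i in range(5)])
print(round(h2, 2))
```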

  19. Estimation of genetic parameters and their sampling variances for quantitative traits in the type 2 modified augmented design

    Frank M. You; Qijian Song; Gaofeng Jia; Yanzhao Cheng; Scott Duguid; Helen Booker; Sylvie Cloutier

    2016-01-01

    The type 2 modified augmented design (MAD2) is an efficient unreplicated experimental design used for evaluating large numbers of lines in plant breeding and for assessing genetic variation in a population. Statistical methods and data adjustment for soil heterogeneity have been previously described for this design. In the absence of replicated test genotypes in MAD2, their total variance cannot be partitioned into genetic and error components as required to estimate heritability and genetic correlation of quantitative traits, the two conventional genetic parameters used for breeding selection. We propose a method of estimating the error variance of unreplicated genotypes that uses replicated controls, and then of estimating the genetic parameters. Using the Delta method, we also derived formulas for estimating the sampling variances of the genetic parameters. Computer simulations indicated that the proposed method for estimating genetic parameters and their sampling variances was feasible and the reliability of the estimates was positively associated with the level of heritability of the trait. A case study of estimating the genetic parameters of three quantitative traits, iodine value, oil content, and linolenic acid content, in a biparental recombinant inbred line population of flax with 243 individuals, was conducted using our statistical models. A joint analysis of data over multiple years and sites was suggested for genetic parameter estimation. A pipeline module using SAS and Perl was developed to facilitate data analysis and appended to the previously developed MAD data analysis pipeline (http://probes.pw.usda.gov/bioinformatics_tools/MADPipeline/index.html).

  20. An Improved Virial Estimate of Solar Active Region Energy

    Wheatland, M. S.; Metcalf, T. R.

    2005-01-01

    The MHD virial theorem may be used to estimate the magnetic energy of active regions based on vector magnetic fields measured at the photosphere or chromosphere. However, the virial estimate depends on the measured vector magnetic field being force-free. Departure from force-freeness leads to an unknown systematic error in the virial energy estimate, and an origin dependence of the result. We present a method for estimating the systematic error by assuming that magnetic forces are confined to...

  1. Evaluation of radar-gauge merging methods for quantitative precipitation estimates

    E. Goudenhoofdt

    2008-10-01

    Accurate quantitative precipitation estimates are of crucial importance for hydrological studies and applications. When spatial precipitation fields are required, rain gauge measurements are often combined with weather radar observations. In this paper, we evaluate several radar-gauge merging methods with various degrees of complexity: from mean field bias correction to geostatistical merging techniques. The study area is the Walloon region of Belgium, which is mostly located in the Meuse catchment. Observations from a C-band Doppler radar and a dense rain gauge network are used to retrieve daily rainfall accumulations over this area. The relative performance of the different merging methods is assessed through a comparison against daily measurements from an independent gauge network. A 3-year verification is performed using several statistical quality parameters. It appears that the geostatistical merging methods perform best, with the mean absolute error decreasing by 40% with respect to the original data. A mean field bias correction still achieves a reduction of 25%. A seasonal analysis shows that the benefit of using radar observations is particularly significant during summer. The effect of the network density on the performance of the methods is also investigated. For this purpose, a simple approach to removing gauges from a network is proposed. The analysis reveals that the sensitivity is relatively high for the geostatistical methods but rather small for the simple methods. The geostatistical methods give the best results for all network densities except for a very low density of 1 gauge per 500 km², where a range-dependent adjustment complemented with a static local bias correction performs best.
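
    The simplest of the merging methods compared above, the mean field bias correction, can be sketched in a few lines: the daily radar field is rescaled by the ratio of summed gauge accumulations to summed co-located radar accumulations. The array inputs and the rain threshold below are assumptions for illustration.

```python
# Mean field bias (MFB) correction of a daily radar accumulation field.
import numpy as np

def mean_field_bias_correction(radar_field, radar_at_gauges, gauge_values,
                               min_rain=0.1):
    """Scale the whole radar field by sum(gauges) / sum(radar at gauge pixels)."""
    r = np.asarray(radar_at_gauges, float)
    g = np.asarray(gauge_values, float)
    mask = (r > min_rain) & (g > min_rain)        # use rainy gauge-radar pairs only
    bias = g[mask].sum() / r[mask].sum()
    return radar_field * bias, bias
```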

  2. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volume of interests (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a

  3. Quantitative Structure-activity Relationship Study on the Antioxidant Activity of Carotenoids

    SUN Yu-Jing; PANG Jie; YE Xing-Qian; Lü Yuan; LI Jun

    2009-01-01

    Carotenoids are a family of effective active-oxygen scavengers, which can reduce the risk of chronic diseases such as cardiovascular disease, cataract and cancer. A quantitative structure-activity relationship (QSAR) equation between carotenoid structure and antioxidant activity was established using the quantum chemical AM1 method, molecular mechanics (MM+) and stepwise regression analysis, and the model was evaluated by the leave-one-out approach. The results showed that the significant molecular descriptors related to the antioxidant activity of carotenoids were the energy difference (EHL) between the lowest unoccupied molecular orbital (LUMO) and the highest occupied molecular orbital (HOMO) and the ionization energy (Eiso). The model showed good predictive ability (Q2 > 0.5).
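
    Leave-one-out cross-validation of such a regression model reduces to refitting with each compound held out in turn and computing the predictive Q²; a minimal sketch follows, with X assumed to hold the two descriptors (EHL and Eiso) and y the antioxidant activities.

```python
# Leave-one-out Q2 for a linear QSAR model with two descriptors.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def loo_q2(X, y):
    X, y = np.asarray(X, float), np.asarray(y, float)
    preds = np.empty_like(y)
    for train, test in LeaveOneOut().split(X):
        preds[test] = LinearRegression().fit(X[train], y[train]).predict(X[test])
    press = np.sum((y - preds) ** 2)              # predictive residual sum of squares
    tss = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / tss                      # Q2 > 0.5 indicates a predictive model
```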

  4. Quantitative estimation of pulegone in Mentha longifolia growing in Saudi Arabia. Is it safe to use?

    Alam, Prawez; Saleh, Mahmoud Fayez; Abdel-Kader, Maged Saad

    2016-03-01

    Our TLC study of the volatile oil isolated from Mentha longifolia showed a major UV-active spot with a higher Rf value than menthol. Since the components of the oil from the same plant differ quantitatively with environmental conditions, the major spot was isolated using different chromatographic techniques and identified by spectroscopic means as pulegone. The presence of pulegone in M. longifolia, a plant widely used in Saudi Arabia, has raised considerable debate because of its known toxicity. The Scientific Committee on Food, Health & Consumer Protection Directorate General, European Commission set a limit for the presence of pulegone in foodstuffs and beverages. In this paper we attempted to determine the exact amount of pulegone in different extracts, the volatile oil, as well as tea flavoured with M. longifolia (Habak) by validated densitometric HPTLC methods using normal phase (Method I) and reverse phase (Method II) TLC plates. The study indicated that the way Habak is used in Saudi Arabia results in pulegone levels well below the allowed limit. PMID:27087088

  5. Quantitative evaluation of tricuspid regurgitation by digital simulation of cardiac time-activity curves

    To estimate tricuspid regurgitation (TR) quantitatively, a computer-based curve fitting method has been employed. Transport in the right cardiac chambers after intravenous bolus injection of macro-aggregated albumin labeled with technetium-99m (99mTc-MAA) was recorded in the anterior view with a gamma camera system. Disturbance of the dilution curves by activity from the left heart can be avoided by using 99mTc-MAA injection. To follow the radioisotope activity during transport, time-activity curves are recorded for the superior vena cava, right atrium, and right ventricle. Parametric differential equations, obtained from compartmental analysis, describe these curves mathematically. The rate of regurgitation is determined by comparison, using an iterative process, between the original and simulated curves. The whole process is performed automatically by computer. The calculated regurgitation value correlated well with the value from the analog simulation. The method clearly separated those with TR from those without TR. This digital simulation for estimating parameters using a compartmental model is a feasible tool for detecting and quantifying TR. (orig.)
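    To make the compartmental idea concrete, the sketch below fits a toy two-chamber model with a regurgitant fraction to synthetic time-activity curves. The model structure, rate constants, and data are all hypothetical; the original work used its own parametric equations and fitting procedure.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def chamber_model(y, t, k_ra, k_rv, regurg):
    """Two-compartment right-heart model: RA -> RV, with a fraction
    'regurg' of RV outflow returning to the RA (tricuspid regurgitation)."""
    a_ra, a_rv = y
    d_ra = -k_ra * a_ra + regurg * k_rv * a_rv
    d_rv = k_ra * a_ra - k_rv * a_rv
    return [d_ra, d_rv]

def simulate(params, t):
    k_ra, k_rv, regurg = params
    return odeint(chamber_model, [1.0, 0.0], t, args=(k_ra, k_rv, regurg))

def residuals(params, t, measured):
    return (simulate(params, t) - measured).ravel()

# Synthetic "measured" curves generated with a known 30% regurgitation
t = np.linspace(0, 30, 60)
measured = simulate([0.8, 0.5, 0.3], t)

fit = least_squares(residuals, x0=[0.5, 0.5, 0.1],
                    bounds=([0.01, 0.01, 0.0], [5.0, 5.0, 0.9]),
                    args=(t, measured))
print("estimated regurgitant fraction:", fit.x[2])
```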

  6. Alteration of apparent viscosity of irradiated pepper - a tool for semi-quantitative estimation of irradiation dose

    The feasibility of using apparent viscosity (ηa) as a method for detecting the occurrence of previous irradiation of pepper was studied. Apparent viscosity of heat-treated suspensions of white and black pepper, nonirradiated or irradiated with different doses of ionising radiation (γ), was measured under different 'shear rates'. Results of previous research were therefore expanded and their usefulness examined; low shear rate conditions were found to be preferable for the detection and semi-quantitative evaluation of irradiation doses. The experimental methodology for semi-quantitative estimation was developed and its scope and limitations are presented. (orig.)

  7. Validation of an ELISA for the quantitation of lanoteplase, a novel plasminogen activator.

    Stouffer, B; Habte, S; Vachharajani, N; Tay, L

    1999-11-01

    An ELISA was developed and validated for the quantitation of lanoteplase in human citrated plasma. The ELISA employed a monoclonal anti-lanoteplase antibody absorbed onto 96-well microtiter plates to capture lanoteplase in citrated human plasma samples containing PPACK, a protease inhibitor. The captured lanoteplase was detected using a biotinylated rabbit anti-lanoteplase polyclonal antibody. The standard curve range in human plasma for the ELISA was 7-100 ng/ml. Assessment of individual standard curve variability indicated reproducible responses with r2 values of > or = 0.985. The accuracy (% DEV) and precision (%RSD) estimates for the ELISA based on the predicted values from quality control (QC) samples were within 7.3% and 11%, respectively. Cross-reactivity with t-PA was determined to be less than 11% by ELISA. The stability of lanoteplase was established in human citrated PPACK plasma for 24 hours at 4 degrees C, for 2 months at -20 degrees C, for 22 months at -70 degrees C, for three weeks at room temperature, and through four freeze/thaw cycles. To quantify lanoteplase plasminogen activator (PA) activity, a commercially available chromogenic activity assay was also validated. This method and its application are described briefly here. The lanoteplase ELISA as well as the commercial activity method were successfully employed to evaluate the pharmacokinetic parameters of lanoteplase in support of clinical Phase II/III studies. PMID:10595857

  8. Quantitative estimation of compliance of human systemic veins by occlusion plethysmography with radionuclide

    Volume-pressure relationship and compliance of human systemic veins were estimated quantitatively and noninvasively using radionuclide. The effect of nitroglycerin (NTG) on these parameters was examined. Plethysmography with radionuclide (RN) was performed using the occlusion method on the forearm in 56 patients with various cardiac diseases after RN angiocardiography with 99mTc-RBC. The RN counts-venous pressure curve was constructed from (1) the changes in radioactivity from region of interest on the forearm that were considered to reflect the changes in the blood volume of the forearm, and (2) the changes in the pressure of the forearm vein (fv) due to venous occlusion. The specific compliance of the forearm veins (Csp.fv; (1/V)·(ΔV/ΔP)) was obtained graphically from this curve at each patient's venous pressure (Pv). Csp.fv was 0.044±0.012 mmHg-1 in class I (mean±SD; n=13), 0.033±0.007 mmHg-1 in class II (n=30), and 0.019±0.007 mmHg-1 in class III (n=13), of the previous NYHA classification of work tolerance. There were significant differences in Csp.fv among the three classes. The systemic venous blood volume (Vsv) was determined by subtracting the central blood volume, measured by RN-angiocardiography, from total blood volume, measured by the indicator dilution method utilizing 131I-human serum albumin. Systemic venous compliance (Csv) was calculated from Csv=Csp.fv·Vsv. The Csv was 127.2±24.8 ml·mmHg-1 (mean±SD) in class I, 101.1±24.1 ml·mmHg-1 in class II and 62.2±28.1 ml·mmHg-1 in class III. There were significant differences in Csv among the three classes. The class I Csv value was calculated to be 127.2±24.8 ml·mmHg-1 and the Csv/body weight was calculated to be 2.3±0.7 ml·mmHg-1·kg-1 of body weight. The administration of NTG increased Csv significantly in all cases. (J.P.N.)

  9. A quantitative framework for estimating risk of collision between marine mammals and boats

    Martin, Julien; Sabatier, Quentin; Gowan, Timothy A.; Giraud, Christophe; Gurarie, Eliezer; Calleson, Scott; Ortega-Ortiz, Joel G.; Deutsch, Charles J.; Rycyk, Athena; Koslovsky, Stacie M.

    2016-01-01

    Speed regulations of watercraft in protected areas are designed to reduce lethal collisions with wildlife but can have economic consequences. We present a quantitative framework for investigating the risk of deadly collisions between boats and wildlife.

  10. Estimating phytoplankton photosynthesis by active fluorescence

    Falkowski, P.G.; Kolber, Z.

    1992-10-01

    Photosynthesis can be described by target theory. At low photon flux densities, photosynthesis is a linear function of irradiance (I), the number of reaction centers (n), their effective absorption cross section (σ), and a quantum yield (φ). As photosynthesis becomes increasingly light saturated, an increasing fraction of reaction centers close. At light saturation the maximum photosynthetic rate is given as the product of the number of reaction centers (n) and their maximum electron transport rate (1/τ). Using active fluorometry it is possible to measure non-destructively and in real time the fraction of open or closed reaction centers under ambient irradiance conditions in situ, as well as σ and φ. τ can be readily calculated from knowledge of the light saturation parameter Ik (which can be deduced in situ by active fluorescence measurements) and σ. We built a pump and probe fluorometer, which is interfaced with a CTD. The instrument measures the fluorescence yield of a weak probe flash preceding (f0) and succeeding (fm) a saturating pump flash. Profiles of these fluorescence yields are used to derive the instantaneous rate of gross photosynthesis in natural phytoplankton communities without any incubation. Correlations with short-term simulated in situ radiocarbon measurements are extremely high. The average slope between photosynthesis derived from fluorescence and that measured by radiocarbon is 1.15 and corresponds to the average photosynthetic quotient. The intercept is about 15% of the maximum radiocarbon uptake and corresponds to the average net community respiration. Profiles of photosynthesis and sections showing the variability in its composite parameters reveal a significant effect of nutrient availability on biomass-specific rates of photosynthesis in the ocean.
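    The fluorescence yields mentioned above map onto a few standard photophysiological quantities. The sketch below computes the textbook maximum quantum yield Fv/Fm and the fraction of open reaction centers from pump-and-probe yields; it is a generic illustration with invented numbers, not the instrument's actual processing chain.

```python
def psii_parameters(f0, fm, f_ambient=None, fm_prime=None):
    """Basic pump-and-probe fluorescence quantities.

    f0 : probe-flash yield before the saturating pump flash (centers open)
    fm : probe-flash yield after the pump flash (centers closed)
    Fv/Fm = (fm - f0)/fm is the maximum photochemical quantum yield; with
    ambient-light yields (f_ambient, fm_prime), the fraction of open
    reaction centers is (fm_prime - f_ambient)/(fm_prime - f0).
    """
    fv_fm = (fm - f0) / fm
    open_fraction = None
    if f_ambient is not None and fm_prime is not None:
        open_fraction = (fm_prime - f_ambient) / (fm_prime - f0)
    return fv_fm, open_fraction

# Hypothetical yields in arbitrary fluorometer units
print(psii_parameters(f0=0.30, fm=0.95, f_ambient=0.55, fm_prime=0.85))
```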

  11. Estimations of internal dosimetry: practical calculations of incorporated activity

    The National Commission of Nuclear Safety and Safeguards (CNSNS) periodically carries out measurements of body activity on occupationally exposed personnel (POE) to verify that the received doses comply with the General Regulation of Radiological Safety. This work presents the estimates of incorporated activity obtained from the measurements carried out in the CNSNS laboratory, from which the internal dose is ultimately determined. Different methodologies were used to estimate the incorporated activity: estimation from isolated data, estimation from global data, and the best-estimate method; the last proved to be the most appropriate for determining the internal dose. (Author)

  12. Using Active Learning to Teach Concepts and Methods in Quantitative Biology.

    Waldrop, Lindsay D; Adolph, Stephen C; Diniz Behn, Cecilia G; Braley, Emily; Drew, Joshua A; Full, Robert J; Gross, Louis J; Jungck, John A; Kohler, Brynja; Prairie, Jennifer C; Shtylla, Blerta; Miller, Laura A

    2015-11-01

    This article provides a summary of the ideas discussed at the 2015 Annual Meeting of the Society for Integrative and Comparative Biology society-wide symposium on Leading Students and Faculty to Quantitative Biology through Active Learning. It also includes a brief review of the recent advancements in incorporating active learning approaches into quantitative biology classrooms. We begin with an overview of recent literature that shows that active learning can improve students' outcomes in Science, Technology, Engineering and Math Education disciplines. We then discuss how this approach can be particularly useful when teaching topics in quantitative biology. Next, we describe some of the recent initiatives to develop hands-on activities in quantitative biology at both the graduate and the undergraduate levels. Throughout the article we provide resources for educators who wish to integrate active learning and technology into their classrooms. PMID:26269460

  13. Quantitative estimation of the impact of precipitation and human activities on runoff change of the Huangfuchuan River Basin

    WANG Suiji; YAN Yunxia; YAN Ming; ZHAO Xiaokun

    2012-01-01

    The runoff of some rivers in the world, especially in arid and semi-arid areas, has decreased remarkably with global or regional climate change and enhanced human activities. The runoff decrease in the arid and semi-arid areas of northern China has brought severe problems for livelihoods and ecology. Revealing the variation characteristics and trends of runoff and their influencing factors has therefore become an important scientific issue for drainage basin management. The objective of this study was to analyze the variation trends of the runoff and to quantitatively assess the contributions of precipitation and human activities to the runoff change in the Huangfuchuan River Basin based on measured data for 1960-2008. Two inflection points (turning years), 1979 and 1998, for the accumulative runoff change, and one inflection point, 1979, for the accumulative precipitation change were identified using accumulative anomaly analysis. Linear relationships between year and accumulative runoff in 1960-1979, 1980-1997 and 1998-2008 and between year and accumulative precipitation in 1960-1979 and 1980-2008 were fitted. A new method, the slope change ratio of accumulative quantity (SCRAQ), was put forward and used in this study to calculate the contributions of different factors to the runoff change. Taking 1960-1979 as the base period, the contribution rates of precipitation and human activities to the decreased runoff were 36.43% and 63.57% in 1980-1997, and 16.81% and 83.19% in 1998-2008, respectively. These results will play an important role in drainage basin management. Moreover, the new SCRAQ method can be applied to the quantitative evaluation of runoff change and the impacts of different factors in river basins of arid and semi-arid areas.
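    A minimal sketch of the SCRAQ bookkeeping described above is given below. It only illustrates the slope-ratio calculation; the annual series are randomly generated placeholders, and the exact formulation in the paper may differ in detail.

```python
import numpy as np

def cumulative_slope(years, annual_values):
    """Linear slope of the cumulative series against calendar year."""
    return np.polyfit(years, np.cumsum(annual_values), 1)[0]

def scraq_contributions(s_runoff_base, s_runoff_change,
                        s_precip_base, s_precip_change):
    """Slope change ratio of accumulative quantity (SCRAQ).

    The relative slope change of cumulative precipitation divided by the
    relative slope change of cumulative runoff gives the precipitation
    contribution (in %); the remainder is attributed to human activities.
    """
    runoff_rel = (s_runoff_change - s_runoff_base) / s_runoff_base
    precip_rel = (s_precip_change - s_precip_base) / s_precip_base
    precip_pct = 100.0 * precip_rel / runoff_rel
    return precip_pct, 100.0 - precip_pct

# Hypothetical annual runoff (10^8 m^3) and precipitation (mm) series
years_base, years_change = np.arange(1960, 1980), np.arange(1980, 1998)
runoff_base = np.random.uniform(1.2, 1.8, years_base.size)
runoff_change = np.random.uniform(0.4, 0.8, years_change.size)
precip_base = np.random.uniform(380, 420, years_base.size)
precip_change = np.random.uniform(330, 370, years_change.size)

p, h = scraq_contributions(cumulative_slope(years_base, runoff_base),
                           cumulative_slope(years_change, runoff_change),
                           cumulative_slope(years_base, precip_base),
                           cumulative_slope(years_change, precip_change))
print(f"precipitation: {p:.1f}%, human activities: {h:.1f}%")
```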

  14. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, small absolute differences in breast percent density (1.04% and 3.84%) were observed between protocols. Quantitative density measurements from digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. (©) RSNA, 2016 Online supplemental material is available for this article. PMID:27002418

  15. Junction temperature estimation for an advanced active power cycling test

    Choi, Uimin; Blaabjerg, Frede; Jørgensen, S.

    A junction temperature estimation method using on-state VCE for an advanced active power cycling test is proposed. The concept of the advanced power cycling test is explained first. Afterwards the junction temperature estimation method using on-state VCE and current is presented. Further, the method to improve the accuracy of the...

  16. Activities on covariance estimation in Japanese Nuclear Data Committee

    Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-03-01

    Described are activities on covariance estimation in the Japanese Nuclear Data Committee. Covariances are obtained from measurements by using the least-squares methods. A simultaneous evaluation was performed to deduce covariances of fission cross sections of U and Pu isotopes. A code system, KALMAN, is used to estimate covariances of nuclear model calculations from uncertainties in model parameters. (author)

  17. NEXRAD quantitative precipitation estimates, data acquisition, and processing for the DuPage County, Illinois, streamflow-simulation modeling system

    Ortel, Terry W.; Spies, Ryan R.

    2015-01-01

    Next-Generation Radar (NEXRAD) has become an integral component in the estimation of precipitation (Kitzmiller and others, 2013). The high spatial and temporal resolution of NEXRAD has revolutionized the ability to estimate precipitation across vast regions, which is especially beneficial in areas without a dense rain-gage network. With the improved precipitation estimates, hydrologic models can produce reliable streamflow forecasts for areas across the United States. NEXRAD data from the National Weather Service (NWS) has been an invaluable tool used by the U.S. Geological Survey (USGS) for numerous projects and studies; NEXRAD data processing techniques similar to those discussed in this Fact Sheet have been developed within the USGS, including the NWS Quantitative Precipitation Estimates archive developed by Blodgett (2013).

  18. Utilization of quantitative structure-activity relationships (QSARs) in risk assessment: Alkylphenols

    Beck, B.D.; Toole, A.P.; Callahan, B.G.; Siddhanti, S.K. (Gradient Corporation, Cambridge, MA (United States))

    1991-12-01

    Alkylphenols are a class of environmentally pervasive compounds, found both in natural (e.g., crude oils) and in anthropogenic (e.g., wood tar, coal gasification waste) materials. Despite the frequent environmental occurrence of these chemicals, there is a limited toxicity database on alkylphenols. The authors have therefore developed a 'toxicity equivalence approach' for alkylphenols which is based on their ability to inhibit, in a specific manner, the enzyme cyclooxygenase. Enzyme-inhibiting ability for individual alkylphenols can be estimated based on the quantitative structure-activity relationship developed by Dewhirst (1980) and is a function of the free hydroxyl group, electron-donating ring substituents, and hydrophobic aromatic ring substituents. The authors evaluated the toxicological significance of cyclooxygenase inhibition by comparison of the inhibitory capacity of alkylphenols with the inhibitory capacity of acetylsalicylic acid, or aspirin, a compound whose low-level effects are due to cyclooxygenase inhibition. Since nearly complete absorption for alkylphenols and aspirin is predicted, based on estimates of hydrophobicity and fraction of charged molecules at gastrointestinal pHs, risks from alkylphenols can be expressed directly in terms of 'milligram aspirin equivalence,' without correction for absorption differences. They recommend this method for assessing risks of mixtures of alkylphenols, especially for those compounds with no chronic toxicity data. 38 references.

  19. 76 FR 27384 - Agency Information Collection Activity (Veteran Suicide Prevention Online Quantitative Surveys...

    2011-05-11

    ... AFFAIRS Agency Information Collection Activity (Veteran Suicide Prevention Online Quantitative Surveys...). Type of Review: New collection. Abstract: VA's top priority is the prevention of Veterans suicide. It... better understand Veterans and their families' awareness of VA's suicide prevention and mental...

  20. Heat and mass flux estimation of modern seafloor hydrothermal activity

    ZHAI Shikui; WANG Xingtao; YU Zenghui

    2006-01-01

    Research on the heat and mass fluxes produced by modern seafloor hydrothermal activity is important because it underpins ocean environment research as well as the study of the historical evolution of seawater properties. Heat flux estimates are currently based mainly on observations of hydrothermal smokers, low-temperature diffuse flow, and mid-ocean ridges, but these approaches have shortcomings: estimates based on smokers lack a concurrent conductive term, and the discrepancy between the half-space cooling model and the observation data is large. Three methods are therefore applied to re-estimate the heat flux of hydrothermal activity; the corresponding estimates are 97.359 GW from hydrothermal smokers and diffuse flow, 84.895 GW from hydrothermal plumes, and 4.11 TW from the exponential attenuation method put forward in this paper. Research on mass flux estimation is relatively rare, mainly because of insufficient field observation data. Mass fluxes of different elements are calculated for the first time using hydrothermal vent fluid data from the TAG hydrothermal area on the Mid-Atlantic Ridge. The differences among the estimates obtained by different methods reflect the current state of research on hydrothermal activity, and systematic in-situ observation will help to estimate precisely the contribution of hydrothermal activity to the ocean chemical environment, ocean circulation, and global climate.

  1. QSAR DataBank repository: open and linked qualitative and quantitative structure–activity relationship models

    Ruusmann, V; Sild, S; Maran, U

    2015-01-01

    Background Structure–activity relationship models have been used to gain insight into chemical and physical processes in biomedicine, toxicology, biotechnology, etc. for almost a century. They have been recognized as valuable tools in decision support workflows for qualitative and quantitative predictions. The main obstacle preventing broader adoption of quantitative structure–activity relationships [(Q)SARs] is that published models are still relatively difficult to discover, retrieve and re...

  2. Quantitative Structure-Activity Relationships and Docking Studies of Calcitonin Gene-Related Peptide Antagonists

    Jenssen, Håvard; Mehrabian, Mohadeseh; Kyani, Anahita

    2012-01-01

    A quantitative structure-activity relationship study of calcitonin gene-related peptide antagonists was performed using a panel of physicochemical descriptors. The computational studies evaluated different variable selection techniques and demonstrated shuffling stepwise multiple linear regression to be superior to genetic algorithm-multiple linear regression. ... The linear quantitative structure-activity relationship model revealed better statistical parameters of cross-validation in comparison with the non-linear support vector regression technique. Implementing only five peptide descriptors into this linear quantitative structure-activity relationship model...

  3. Validation and Estimation of Additive Genetic Variation Associated with DNA Tests for Quantitative Beef Cattle Traits

    The U.S. National Beef Cattle Evaluation Consortium (NBCEC) has been involved in the validation of commercial DNA tests for quantitative beef quality traits since their first appearance on the U.S. market in the early 2000s. The NBCEC Advisory Council initially requested that the NBCEC set up a syst...

  4. Preliminary Study on the Feasibility of Performing Quantitative Precipitation Estimation Using X-band Radar

    Figueras i Ventura, J.; Beek van de, C.Z.; Russchenberg, H.W.J.; Uijlenhoet, R.

    2009-01-01

    IRCTR has built an experimental X-band Doppler polarimetric weather radar system aimed at obtaining high temporal and spatial resolution measurements of precipitation, with particular interest in light rain and drizzle. In this paper a first analysis of the feasibility of obtaining accurate quantit

  5. QUANTITATIVE ESTIMATION OF DNA ISOLATED FROM VARIOUS PARTS OF ANNONA SQUAMOSA

    Soni Himesh

    2011-12-01

    Full Text Available Plants have been one of the important sources of medicines since the beginning of human civilization. There is a growing demand for plant-based medicines, health products, pharmaceuticals, food supplements, cosmetics, etc. Annona squamosa Linn. is a multipurpose tree with edible fruits and a source of medicinal and industrial products. Annona squamosa Linn. is used for its antioxidant, antidiabetic, hepatoprotective, cytotoxic, genotoxic, antitumour and antilice activities. It is reported to contain alkaloids, flavonoids, carbohydrates, fixed oils, tannins and phenolics. Genetic variation is essential for the long-term survival of a species and is a critical feature in conservation. For efficient conservation and management, the genetic composition of the species in different geographic locations needs to be assessed. Plants are attracting more attention among contemporary pharmacy scientists because some human diseases resulting from antibiotic resistance have gained worldwide concern. A number of methods are available and are being developed for the isolation of nucleic acids from plants. The different parts of Annona squamosa were studied for their nucleic acid content using spectrophotometric analysis. For measuring the DNA content of the leaves, fruits and stems of Annona squamosa, spectrophotometry offers several advantages: it is non-destructive and allows the sample to be recovered for further analysis or manipulation. Spectrophotometry uses the relationship between the absorption of ultraviolet light by DNA/RNA and its concentration in a sample. This article deals with modern approaches to develop a simple, efficient, reliable and cost-effective method for the isolation, separation and estimation of total genomic DNA from various parts of the same species.
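    The spectrophotometric estimate itself reduces to a standard absorbance relation. A minimal sketch is given below, assuming the usual conversion factor of roughly 50 ug/mL per A260 unit for double-stranded DNA; the absorbance readings and dilution factor are hypothetical.

```python
def dna_concentration(a260, a280, dilution_factor=1.0):
    """Estimate double-stranded DNA concentration and purity from UV absorbance.

    Uses the standard relation that an A260 of 1.0 corresponds to roughly
    50 ug/mL of double-stranded DNA; an A260/A280 ratio near 1.8 suggests
    a preparation largely free of protein contamination.
    """
    concentration = a260 * 50.0 * dilution_factor   # ug/mL
    purity = a260 / a280
    return concentration, purity

conc, purity = dna_concentration(a260=0.45, a280=0.25, dilution_factor=10.0)
print(f"~{conc:.0f} ug/mL, A260/A280 = {purity:.2f}")
```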

  6. Commercial Activities in Primary Schools: A Quantitative Study

    Raine, Gary

    2007-01-01

    The commercialisation of schools is a controversial issue, but very little is known about the actual situation in UK schools. The aim of this study was to investigate, with particular reference to health education and health promotion, commercial activities and their regulation in primary schools in the Yorkshire and Humber region of the UK. A…

  7. Improved activity estimation with MC-JOSEM versus TEW-JOSEM in 111In SPECT

    Ouyang, Jinsong; Fakhri, Georges El; Moore, Stephen C.

    2008-01-01

    We have previously developed a fast Monte Carlo (MC)-based joint ordered-subset expectation maximization (JOSEM) iterative reconstruction algorithm, MC-JOSEM. A phantom study was performed to compare quantitative imaging performance of MC-JOSEM with that of a triple-energy-window approach (TEW) in which estimated scatter was also included additively within JOSEM, TEW-JOSEM. We acquired high-count projections of a 5.5 cm3 sphere of 111In at different locations in the water-filled torso phantom; high-count projections were then obtained with 111In only in the liver or only in the soft-tissue background compartment, so that we could generate synthetic projections for spheres surrounded by various activity distributions. MC scatter estimates used by MC-JOSEM were computed once after five iterations of TEW-JOSEM. Images of different combinations of liver∕background and sphere∕background activity concentration ratios were reconstructed by both TEW-JOSEM and MC-JOSEM for 40 iterations. For activity estimation in the sphere, MC-JOSEM always produced better relative bias and relative standard deviation than TEW-JOSEM for each sphere location, iteration number, and activity combination. The average relative bias of activity estimates in the sphere for MC-JOSEM after 40 iterations was −6.9%, versus −15.8% for TEW-JOSEM, while the average relative standard deviation of the sphere activity estimates was 16.1% for MC-JOSEM, versus 27.4% for TEW-JOSEM. Additionally, the average relative bias of activity concentration estimates in the liver and the background for MC-JOSEM after 40 iterations was −3.9%, versus −12.2% for TEW-JOSEM, while the average relative standard deviation of these estimates was 2.5% for MC-JOSEM, versus 3.4% for TEW-JOSEM. MC-JOSEM is a promising approach for quantitative activity estimation in 111In SPECT. PMID:18561679
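    For context on the comparison method, the sketch below implements the standard triple-energy-window (TEW) scatter estimate that TEW-JOSEM feeds into the reconstruction. The window widths and pixel counts are hypothetical, and the authors' exact window settings for 111In may differ.

```python
import numpy as np

def tew_scatter_estimate(main_counts, lower_counts, upper_counts,
                         main_width, lower_width, upper_width):
    """Standard triple-energy-window (TEW) scatter estimate per pixel.

    The scatter in the main (photopeak) window is approximated by a
    trapezoid spanned by the count densities in the two narrow
    sub-windows on either side of the photopeak.
    """
    scatter = (lower_counts / lower_width + upper_counts / upper_width) * main_width / 2.0
    primary = np.clip(main_counts - scatter, 0.0, None)
    return scatter, primary

# Hypothetical pixel counts around one photopeak window
scatter, primary = tew_scatter_estimate(
    main_counts=np.array([1200.0, 900.0]),
    lower_counts=np.array([80.0, 60.0]),
    upper_counts=np.array([40.0, 30.0]),
    main_width=20.0, lower_width=3.0, upper_width=3.0)
print(primary)
```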

  8. [Quantitative estimation of area parameters (with representatives of genus rana as a case study)].

    Puzachenko, Iu G; Kuz'min, S L; Sandlerskiĭ, R B

    2011-01-01

    A quantitative method of species "point" area analysis is considered that provides the interpolation of species distribution to the whole territory on the basis of its relationships with climatic and relief variables. It is shown that application of standard statistical interpolation techniques is incorrect. The proposed approach is based on interpolating onto the whole territory the species-specific relations with environmental variables detected at single "points". The basic method for this task proves to be factor analysis. Within the scope of the study, we have considered methods for the quantitative representation of species relationships with climatic and relief variables. The efficiency of the analysis is demonstrated with an example of three species of brown frogs: Rana temporaria, R. arvalis and R. amurensis. PMID:22121573

  9. Standard approach on quantitative techniques to be used to estimate food waste levels. Project report FUSIONS

    Møller, Hanne; Hansen, Ole-Jørgen; Svanes, Erik; Hartikainen, Hanna; Silvennoinen, Kirsi; Gustavsson, Jenny; Östergren, Karin; Schneider, Felicitas; Soethoudt, Han; Canali, Massimo; Politano, Alessandro; Gaiani, Silvia; Redlingshofer, Barbara; Moates, Graham; Waldron, Keith

    2014-01-01

    The focus of FUSIONS is on promoting food waste prevention by optimising food use and waste prevention strategies. In order to reduce food waste it is necessary to quantify the waste and find the reasons why it occurs. The subject of this report is quantification of food waste all along the value chain from before the material is called food (primary production and processing) until final consumption (household and food service). This report presents the work in FUSIONS on “Quantitative techn...

  10. Quantitative Estimation of Andrographolide by Reverse Phase-High Liquid Chromatography Method from Andrographis Paniculata Nees.

    Dilip Bhaskar Jadhao

    2012-01-01

    Abstract: A reverse-phase high-performance liquid chromatographic method with UV array detection was established for the determination of andrographolide. Andrographolide was separated using an isocratic solvent system consisting of isopropyl alcohol, formic acid and water (70:10:20 v/v) at a flow rate of 1.0 ml/min and a detection wavelength of 223 nm. The method was validated for linearity, precision, accuracy, limit of detection (LOD), and limit of quantitation (LOQ). The linearity of the pr...

  11. Validation and measurement uncertainty estimation in food microbiology: differences between quantitative and qualitative methods

    Vesna Režić Dereani; Marijana Matek Sarić

    2010-01-01

    The aim of this research is to describe quality control procedures, procedures for validation, and measurement uncertainty (MU) determination as important elements of quality assurance in a food microbiology laboratory, for both qualitative and quantitative types of analysis. Accreditation is conducted according to the standard ISO 17025:2007, General requirements for the competence of testing and calibration laboratories, which guarantees compliance with standard operating procedures and the tec...

  12. Hyperspectral estimation of corn fraction of photosynthetically active radiation

    Fraction of absorbed photosynthetically active radiation (FPAR) is one of the important variables in many productivity and biomass estimation models. This study analyzed FPAR estimation from hyperspectral information, which can provide scientific support for improving FPAR estimation, remote sensing data validation, and other ecological models. Based on a field experiment with corn, this paper analyzed the correlations between FPAR and spectral reflectance or its first derivative, discussed the mechanism of FPAR estimation, and studied corn FPAR estimation using reflectance, the first derivative, NDVI and RVI. The reflectance of the visible bands showed much better correlations with FPAR than the near-infrared bands. The correlation curve between FPAR and the first derivative varied more frequently and strongly than that between FPAR and reflectance. Both reflectance and the first derivative regressed well against FPAR for the best single bands, with maximum R2 values of 0.791 and 0.882. Overall, the first derivative and vegetation indices were more effective than reflectance for estimating corn FPAR, and the stepwise regression on multi-band first derivatives gave the best fit, with an R2 of 0.944. The 375 nm purple band and the 950 nm near-infrared water-absorption band showed considerable potential for improving FPAR estimation precision. On the whole, vegetation indices and the first derivative spectra have good relationships with FPAR and can be used for FPAR estimation. Choosing the right bands and exploiting the hyperspectral data would be effective ways to improve FPAR estimation precision.
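    As a small illustration of the vegetation index route to FPAR, the sketch below computes NDVI from red and near-infrared reflectance and fits a linear FPAR-NDVI regression. The reflectance and FPAR values are invented and the band choice is generic, not the specific bands identified in the study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from reflectance bands."""
    return (nir - red) / (nir + red)

# Hypothetical corn canopy reflectances and field-measured FPAR
red  = np.array([0.08, 0.07, 0.06, 0.05, 0.04])
nir  = np.array([0.35, 0.40, 0.45, 0.50, 0.55])
fpar = np.array([0.45, 0.55, 0.63, 0.72, 0.80])

vi = ndvi(nir, red)
slope, intercept = np.polyfit(vi, fpar, 1)       # FPAR = a*NDVI + b
pred = slope * vi + intercept
r2 = 1 - np.sum((fpar - pred) ** 2) / np.sum((fpar - fpar.mean()) ** 2)
print(f"FPAR = {slope:.2f}*NDVI + {intercept:.2f}, R^2 = {r2:.3f}")
```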

  13. Toward a Quantitative Estimate of Future Heat Wave Mortality under Global Climate Change

    Peng, Roger D.; Tebaldi, Claudia; McDaniel, Larry; Bobb, Jennifer; Dominici, Francesca; Bell, Michelle D.

    2010-01-01

    Background: Climate change is anticipated to affect human health by changing the distribution of known risk factors. Heat waves have had debilitating effects on human mortality, and global climate models predict an increase in the frequency and severity of heat waves. The extent to which climate change will harm human health through changes in the distribution of heat waves and the sources of uncertainty in estimating these effects have not been studied extensively. Objectives: We estimated t...

  14. A STUDY ON VARIABLE QUANTITATIVE PRECIPITATION ESTIMATION USING DOPPLER RADAR DATA

    JI Chun-xiao; CHEN Lian-shou; XU Xiang-de; ZHAO Fang; WU Meng-chun

    2008-01-01

    With the pros and cons of the traditional optimization and probability pairing methods thoroughly considered, an improved optimal pairing window probability technique is developed using a dynamic relationship between the base reflectivity Z observed by radar and real time precipitation I by rain gauge. Then, the Doppler radar observations of base reflectivity for typhoons Haitang and Matsa in Wenzhou are employed to establish various Z-I relationships, which are subsequently used to estimate hourly precipitation of the two typhoons. Such estimations are calibrated by variational techniques. The results show that there exist significant differences in the Z-I relationships for the typhoons, leading to different typhoon precipitation efficiencies. The typhoon precipitation estimated by applying radar base reflectivity is capable of exhibiting clearly the spiral rain belts and mesoscale cells, and well matches the observed rainfall. Error statistical analyses indicate that the estimated typhoon precipitation is better with variational calibration than the one without. The variational calibration technique is able to maintain the characteristics of the distribution of radar-estimated typhoon precipitation, and to significantly reduce the error of the estimated precipitation in comparison with the observed rainfall.
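    The core of any such radar rainfall estimate is the inversion of a power-law Z-I relation. The sketch below shows only that step, with the classic Marshall-Palmer coefficients as placeholders; the study fits its own typhoon-specific Z-I relationships and then applies a variational calibration, which is not reproduced here.

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert a power-law Z-I (Z-R) relation, Z = a * R**b.

    dbz is radar base reflectivity in dBZ; Z is in mm^6/m^3 and R in mm/h.
    The default a, b are the classic Marshall-Palmer values, used here only
    as placeholders for fitted typhoon-specific coefficients.
    """
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)

hourly_dbz = np.array([25.0, 35.0, 45.0])
print(rain_rate_from_dbz(hourly_dbz))   # estimated rain rates in mm/h
```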

  15. Global estimation of burned area using MODIS active fire observations

    GIGLIO, L.; G. R. van der Werf; J. T. Randerson; Collatz, G. J.; Kasibhatla, P.

    2006-01-01

    We present a method for estimating monthly burned area globally at 1° spatial resolution using Terra MODIS data and ancillary vegetation cover information. Using regression trees constructed for 14 different global regions, MODIS active fire observations were calibrated to "true" burned area estimates derived from 500-m MODIS imagery based on the conventional assumption that burned area is proportional to counts of fire pixels. Unlike earlier methods, we...

  16. A quantitative analysis of contractility in active cytoskeletal protein networks.

    Bendix, Poul M; Koenderink, Gijsje H; Cuvelier, Damien; Dogic, Zvonimir; Koeleman, Bernard N; Brieher, William M; Field, Christine M; Mahadevan, L; Weitz, David A

    2008-04-15

    Cells actively produce contractile forces for a variety of processes including cytokinesis and motility. Contractility is known to rely on myosin II motors which convert chemical energy from ATP hydrolysis into forces on actin filaments. However, the basic physical principles of cell contractility remain poorly understood. We reconstitute contractility in a simplified model system of purified F-actin, muscle myosin II motors, and alpha-actinin cross-linkers. We show that contractility occurs above a threshold motor concentration and within a window of cross-linker concentrations. We also quantify the pore size of the bundled networks and find contractility to occur at a critical distance between the bundles. We propose a simple mechanism of contraction based on myosin filaments pulling neighboring bundles together into an aggregated structure. Observations of this reconstituted system in both bulk and low-dimensional geometries show that the contracting gels pull on and deform their surface with a contractile force of approximately 1 microN, or approximately 100 pN per F-actin bundle. Cytoplasmic extracts contracting in identical environments show a similar behavior and dependence on myosin as the reconstituted system. Our results suggest that cellular contractility can be sensitively regulated by tuning the (local) activity of molecular motors and the cross-linker density and binding affinity. PMID:18192374

  17. ESTIMATION OF ACTIVATED ENERGY OF DESORPTION OF n-HEXANE ON ACTIVATED CARBONS BY TPD TECHNIQUE

    2001-01-01

    In this paper, six kinds of activated carbons, namely Ag+-activated carbon, Cu2+-activated carbon, Fe3+-activated carbon, plain activated carbon, Ba2+-activated carbon and Ca2+-activated carbon, were prepared. A model for estimating the activation energy of desorption was established. Temperature-programmed desorption (TPD) experiments were conducted to measure the TPD curves of n-hexanol and then estimate the activation energy for desorption of n-hexanol on the activated carbons. Results showed that the activation energy for the desorption of n-hexanol on the Ag+-activated carbon, the Cu2+-activated carbon and the Fe3+-activated carbon was higher than that on the plain activated carbon, the Ca2+-activated carbon and the Ba2+-activated carbon.
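    One common way to extract a desorption activation energy from TPD data is a Kissinger-type analysis of how the peak temperature shifts with heating rate. The sketch below shows that analysis with invented heating rates and peak temperatures; the model actually established in the paper may differ.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def kissinger_activation_energy(heating_rates, peak_temps):
    """Estimate a desorption activation energy from TPD peak temperatures.

    Uses the Kissinger-type relation ln(beta / Tp^2) = -Ea/(R*Tp) + const,
    so a linear fit of ln(beta/Tp^2) against 1/Tp has slope -Ea/R.
    This is one standard analysis; the paper's own model may differ.
    """
    x = 1.0 / np.asarray(peak_temps)
    y = np.log(np.asarray(heating_rates) / np.asarray(peak_temps) ** 2)
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R   # J/mol

# Hypothetical TPD runs: heating rates (K/min) and peak temperatures (K)
ea = kissinger_activation_energy([5, 10, 20, 40], [420.0, 432.0, 445.0, 459.0])
print(f"Ea ~ {ea / 1000:.1f} kJ/mol")
```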

  18. ESTIMATION OF ACTIVATED ENERGY OF DESORPTION OF n-HEXANE ON ACTIVATED CARBONS BY TPD TECHNIQUE

    LI Zhong; WANG Hongjuan; et al.

    2001-01-01

    In this paper, six kinds of activated carbons, namely Ag+-activated carbon, Cu2+-activated carbon, Fe3+-activated carbon, activated carbon, Ba2+-activated carbon and Ca2+-activated carbon, were prepared. The model for estimating the activation energy of desorption was established. Temperature-programmed desorption (TPD) experiments were conducted to measure the TPD curves of n-hexanol and then estimate the activation energy for desorption of n-hexanol on the activated carbons. Results showed that the activation energy for the desorption of n-hexanol on the Ag+-activated carbon, the Cu2+-activated carbon and the Fe3+-activated carbon was higher than that on the activated carbon, the Ca2+-activated carbon and the Ba2+-activated carbon.

  19. Estimation of the abundance of an uncultured soil bacterial strain by a competitive quantitative PCR method.

    Lee, S. Y.; Bollinger, J; Bezdicek, D; Ogram, A

    1996-01-01

    Strain EA25 was identified in a clone library of bacterial 16S rRNA gene sequences that had been amplified from DNA extracted from soil collected in eastern Washington State. EA25 was subsequently shown to be related to members of the genera Planctomyces and Chlamydia and most closely related (93% similarity) to strain MC18, a strain identified in an Australian soil sample (W. Liesack and E. Stackebrandt, J. Bacteriol. 174:5072-5078, 1992). A competitive quantitative PCR method developed by Z...

  20. Real-Time Bidirectional Pyrophosphorolysis-Activated Polymerization for Quantitative Detection of Somatic Mutations

    Song, Najie; Zhong, Xueting; Li, Qingge

    2014-01-01

    Detection of somatic mutations for targeted therapy is increasingly used in clinical settings. However, due to the difficulties of detecting rare mutations in excess of wild-type DNA, current methods often lack high sensitivity, require multiple procedural steps, or fail to be quantitative. We developed real-time bidirectional pyrophosphorolysis-activated polymerization (real-time Bi-PAP) that allows quantitative detection of somatic mutations. We applied the method to quantify seven mutation...

  1. THE QUADRANTS METHOD TO ESTIMATE QUANTITATIVE VARIABLES IN MANAGEMENT PLANS IN THE AMAZON

    Gabriel da Silva Oliveira

    2015-12-01

    Full Text Available This work aimed to evaluate the accuracy of estimates of abundance, basal area and commercial volume per hectare obtained by the quadrants method applied to an area of 1,000 hectares of rain forest in the Amazon. Samples were simulated by random and systematic processes with different sample sizes, ranging from 100 to 200 sampling points. The quantities estimated from the samples were compared with the parametric values recorded in the census. In the analysis, we considered as the population all trees with diameter at breast height equal to or greater than 40 cm. The quadrants method did not reach the desired level of accuracy for the variables basal area and commercial volume, overestimating the values recorded in the census. However, the accuracy of the estimates of abundance, basal area and commercial volume was satisfactory for applying the method in forest inventories for management plans in the Amazon.

  2. Estimation of genetic parameters and detection of quantitative trait loci for metabolites in Danish Holstein milk

    Buitenhuis, Albert Johannes; Sundekilde, Ulrik; Poulsen, Nina Aagaard;

    2013-01-01

    nucleotide polymorphism (SNP) chip. Based on the SNP data, a genomic relationship matrix was calculated and used as a random factor in a model together with 2 fixed factors (herd and lactation stage) to estimate the heritability and breeding value for individual metabolites in the milk. Heritability was in...

  3. Estimating marginal properties of quantitative real-time PCR data using nonlinear mixed models

    Gerhard, Daniel; Bremer, Melanie; Ritz, Christian

    2014-01-01

    A unified modeling framework based on a set of nonlinear mixed models is proposed for flexible modeling of gene expression in real-time PCR experiments. Focus is on estimating the marginal or population-based derived parameters: cycle thresholds and ΔΔCt, but retaining the conditional mixed model...
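    For orientation, the ΔΔCt quantity targeted by these mixed models is, in its classical point-estimate form, a simple difference of differences. The sketch below computes it with invented Ct values and assumes roughly 100% amplification efficiency; the paper's mixed-model estimator is more elaborate.

```python
import numpy as np

def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Classical delta-delta-Ct point estimate of relative expression.

    delta_Ct = Ct(target) - Ct(reference) within each condition;
    delta_delta_Ct = delta_Ct(treated) - delta_Ct(control);
    fold change = 2**(-delta_delta_Ct), assuming ~100% PCR efficiency.
    """
    d_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    d_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    ddct = d_treated - d_control
    return 2.0 ** (-ddct)

# Hypothetical Ct values from technical replicates
print(ddct_fold_change([22.1, 22.3], [18.0, 18.1], [24.0, 24.2], [18.1, 18.0]))
```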

  4. Quantitative estimation of IgA in rats: a comparison of two methods

    Two methods for estimating IgA in body fluids of rats were devised and tested for their accuracy and reliability. Specifically purified monomeric and polymeric IgA was prepared so that results obtained could be properly assessed. The two systems tried were rocket immunoelectrophoresis, carried out after reduction of samples with dithiothreitol and using monomeric IgA as standard, and a radioimmunoassay utilising a double antibody precipitation method and polymeric IgA as standard. Rocket immunoelectrophoresis was found generally unsuitable for IgA estimation: the reduction methods employed were found to be unreliable or unsuitable for some samples, and the estimation of IgA in bile by this method was further complicated by the presence of IgA fragments. The radio-immunoassay system, however, could be used to estimate IgA to a lower limit of 1 μg/ml, and accurate results could be obtained almost irrespective of the molecular weights of the IgA in the sample. (Auth.)

  5. Quantitative estimation of the influence of external vibrations on the measurement error of a coriolis mass-flow meter

    Ridder, de, J.; Hakvoort, W.B.J.; van Dijk; Lötters, J.C.; Boer, de, J.W.; Dimitrovova, Z.; Almeida, De

    2013-01-01

    In this paper the quantitative influence of external vibrations on the measurement value of a Coriolis Mass-Flow Meter for low flows is investigated, with the eventual goal to reduce the influence of vibrations. Model results are compared with experimental results to improve the knowledge on how external vibrations affect the measurement error. A Coriolis Mass-Flow Meter (CMFM) is an active device based on the Coriolis force principle for direct mass-flow measurements, independent of fluid pr...

  6. New Descriptors of Amino Acids and Its Applications to Peptide Quantitative Structure-activity Relationship

    SHU Mao; HUO Dan-Qun; MEI Hua; LIANG Gui-Zhao; ZHANG Mei; LI Zhi-Liang

    2008-01-01

    A new set of descriptors, HSEHPCSV (component score vector of hydrophobic, steric, and electronic properties together with hydrogen bonding contributions), were derived from principal component analyses of 95 physicochemical variables of 20 natural amino acids separately according to the different kinds of properties described, namely, hydrophobic, steric, and electronic properties as well as hydrogen bonding contributions. HSEHPCSV scales were then employed to express the structures of angiotensin-converting enzyme inhibitors, bitter tasting thresholds and bactericidal 18 peptide, and to construct QSAR models based on partial least squares (PLS). The results obtained are as follows: multiple correlation coefficients (R2cum) of 0.846, 0.917 and 0.993, leave-one-out cross-validated Q2cum of 0.835, 0.865 and 0.899, and root-mean-square errors of estimation (RMSEE) of 0.396, 0.187 and 0.22, respectively. These satisfactory results showed that, as new amino acid scales, HSEHPCSV data may be a useful structural expression methodology for studies on peptide QSAR (quantitative structure-activity relationship) owing to advantages such as plentiful structural information, definite physical and chemical meaning, and easy interpretation.

  7. Prediction of Toxicity of Phenols and Anilines to Algae by Quantitative Structure-activity Relationship

    GUANG-HUA LU; CHAO WANG; XIAO-LING GUO

    2008-01-01

    Objective To measure the toxicity of phenol, aniline, and their derivatives to algae and to assess, model and predict the toxicity using the quantitative structure-activity relationship (QSAR) method. Methods Oxygen production was used as the response endpoint for assessing the toxic effects of chemicals on algal photosynthesis. The energy of the lowest unoccupied molecular orbital (ELUMO) and the energy of the highest occupied molecular orbital (EHOMO) were obtained from the ChemOffice 2004 program using the quantum chemical method MOPAC, and the frontier orbital energy gap (ΔE) was derived from them. Results The compounds exhibited a reasonably wide range of algal toxicity. The most toxic compound was α-naphthol, whereas the least toxic one was aniline. A two-descriptor model was derived from the algal toxicity and structural parameters: log 1/EC50 = 0.268 logKow - 1.006 ΔE + 11.769 (n = 20, r2 = 0.946). This model was stable and satisfactory for predicting toxicity. Conclusion Phenol, aniline, and their derivatives are polar narcotics. Their toxicity is greater than estimated from hydrophobicity alone, and adding the frontier orbital energy gap ΔE can significantly improve the prediction over logKow-dependent models.
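    Applying the two-descriptor model reported above is a one-line computation. The sketch below plugs hypothetical descriptor values into the fitted equation and converts the result back to an EC50 in mol/L; the descriptor values are invented, while the coefficients are those quoted in the abstract.

```python
def predict_log_inv_ec50(log_kow, delta_e):
    """Predicted algal toxicity from the two-descriptor QSAR reported above:
    log(1/EC50) = 0.268*logKow - 1.006*dE + 11.769, with EC50 in mol/L."""
    return 0.268 * log_kow - 1.006 * delta_e + 11.769

# Hypothetical descriptor values for a phenol derivative
log_inv_ec50 = predict_log_inv_ec50(log_kow=2.5, delta_e=9.0)
ec50_mol_per_l = 10.0 ** (-log_inv_ec50)
print(log_inv_ec50, ec50_mol_per_l)
```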

  8. Search for exoplanets with the radial-velocity technique: quantitative diagnostics of stellar activity

    Desort, Morgan; Galland, Franck; Udry, Stephane; Mayor, Michel

    2007-01-01

    Aims: Stellar activity may complicate the analysis of high-precision radial-velocity spectroscopic data when looking for exoplanets signatures. We aim at quantifying the impact of stellar spots on stars with various spectral types and rotational velocities and comparing the simulations with data obtained with the HARPS spectrograph. Methods: We have developed detailed simulations of stellar spots and estimated their effects on a number of observables commonly used in the analysis of radial-velocity data when looking for extrasolar planets, such as radial-velocity curves, cross-correlation functions, bisector spans and photometric curves. The computed stellar spectra are then analyzed in the same way as when searching for exoplanets. Results: 1) A first grid of simulation results is built for F-K type stars, with different stellar and spot properties. 2) It is shown quantitatively that star spots with typical sizes of 1% can mimic both radial-velocity curves and the bisector behavior of short-period giant plan...

  9. Quantitative structure-activity relationships for the toxicity of nitrobenzenes to Tetrahymena thermophila.

    Xu, Jing-Bo; Jing, Ti-Song; Pauli, W; Berger, S

    2002-01-01

    In this study IGC50 (50% inhibitory growth concentration) values of 26 nitrobenzenes were determined for the population growth endpoint of Tetrahymena thermophila. The toxicity order of the observed compounds was found to be as follows: dinitro compounds > mono-nitro compounds; dichloronitrobenzenes > monochloronitrobenzenes; and meta-substituted nitrobenzenes > ortho-/para-substituted nitrobenzenes (NT, NPh, NAnis), except for the dinitrobenzenes and nitroanilines (DNB, NAn). Quantitative structure-activity relationships (QSARs) were developed using the log of the inverse of the IGC50 (log IGC50⁻¹) in mol/L as the dependent variable and six molecular descriptors (logP, 1χV, I, κα, Σσ⁻ and ELUMO) as the independent variables. Through multiple regression analysis, one best equation was obtained: log IGC50⁻¹ = 2.93 + 0.830 Σσ⁻ + 0.350 I (n = 26, r = 0.923, r2 = 0.852, s = 0.265, F = 66.4). The equation was used to estimate IGC50 for seven analogues. PMID:12046656

  10. Estimating evaporative vapor generation from automobiles based on parking activities

    A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. Compared with existing methods, two improvements are applied in this new approach to reduce the uncertainties. First, evaporative vapor generation from diurnal parking events is usually calculated from an estimated average parking duration for the whole fleet, whereas in this study the vapor generation rate is calculated from the distribution of parking activities. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then combined with Wade–Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates better describe the temporal variations of vapor generation, and that the weighted vapor generation rate is 5–8% less than a calculation that does not consider parking activity. - Highlights: • Real parking distribution data are applied to estimate evaporative vapor generation. • Real hourly temperature data are applied to estimate the hourly incremental vapor generation rate. • Evaporative emissions for Florence are estimated based on the parking distribution and hourly rates.

  11. Human ECG signal parameters estimation during controlled physical activity

    Maciejewski, Marcin; Surtel, Wojciech; Dzida, Grzegorz

    2015-09-01

    ECG signal parameters are commonly used indicators of human health condition. In most cases the patient should remain stationary during the examination to decrease the influence of muscle artifacts. During physical activity, the noise level increases significantly. The ECG signals were acquired during controlled physical activity on a stationary bicycle and during rest. Afterwards, the signals were processed using a method based on Pan-Tompkins algorithms to estimate their parameters and to test the method.
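    The processing chain named above (Pan-Tompkins) is well documented; a compact version is sketched below: band-pass filtering, differentiation, squaring, moving-window integration, then simple thresholded peak picking. The filter settings, thresholds, and the synthetic test signal are illustrative choices, not the authors' parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pan_tompkins_preprocess(ecg, fs):
    """Classic Pan-Tompkins pre-processing chain for QRS enhancement:
    band-pass filter, differentiate, square, moving-window integrate."""
    b, a = butter(2, [5.0 / (fs / 2), 15.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    derivative = np.diff(filtered, prepend=filtered[0])
    squared = derivative ** 2
    window = int(0.15 * fs)                      # ~150 ms integration window
    return np.convolve(squared, np.ones(window) / window, mode="same")

def detect_r_peaks(integrated, fs, threshold_frac=0.5):
    """Simple peak picking on the integrated signal with a refractory period."""
    threshold = threshold_frac * integrated.max()
    refractory = int(0.2 * fs)                   # at least 200 ms between beats
    peaks, last = [], -refractory
    for i in range(1, len(integrated) - 1):
        if (integrated[i] > threshold and integrated[i] >= integrated[i - 1]
                and integrated[i] > integrated[i + 1] and i - last > refractory):
            peaks.append(i)
            last = i
    return np.array(peaks)

# Hypothetical synthetic ECG: 1 Hz "beats" on a noisy baseline at fs = 250 Hz
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = np.random.normal(0, 0.05, t.size)
ecg[(t % 1.0) < 0.02] += 1.0                     # crude R-wave spikes
r_peaks = detect_r_peaks(pan_tompkins_preprocess(ecg, fs), fs)
print("estimated heart rate:", 60 * len(r_peaks) / 10, "bpm")
```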

  12. Statistical Estimation of the Switching Activity in VLSI Circuits

    Farid N. Najm; Michael G. Xakellis

    1998-01-01

    Higher levels of integration have led to a generation of integrated circuits for which power dissipation and reliability are major design concerns. In CMOS circuits, both of these problems are directly related to the extent of circuit switching activity. The average number of transitions per second at a circuit node is a measure of switching activity that has been called the transition density. This paper presents a statistical simulation technique to estimate individual node transition densi...

  13. State space approach for joint estimation of activity and attenuation map from PET emission sinograms

    Liu Huafeng; You Hongshun; Shi Pengcheng

    2007-01-01

    Quantitative estimation of the radioactivity map has important clinical implications for better diagnosis and understanding of cancers. Although the attenuation map and the activity map are usually estimated sequentially, joint estimation can clearly be of great benefit when transmission data are missing. In this paper, we propose a novel scheme for simultaneously solving for the attenuation map and the activity distribution from emission sinograms. Our strategy builds on the measurement model of PET, with the attenuation parameters treated as random variables with known prior statistics. After conversion to a state space representation, extended Kalman filtering procedures are adopted to linearize the equations and to provide the joint estimates in an approximately optimal sense. Experiments have been performed on synthetic data to illustrate the abilities and benefits of the approach.

  14. [Quantitative estimation of CaO content in surface rocks using hyperspectral thermal infrared emissivity].

    Zhang, Li-Fu; Zhang, Xue-Wen; Huang, Zhao-Qiang; Yang, Hang; Zhang, Fei-Zhou

    2011-11-01

    The objective of the present paper is to study the quantitative relationship between CaO content and thermal infrared emissivity spectra. The surface spectral emissivity of 23 solid rock samples was measured in the field, and the first derivative of the spectral emissivity was also calculated. Multiple linear regression (MLR), principal component regression (PCR) and partial least squares regression (PLSR) models were built and the regression results were compared. The results show that there is a good relationship between CaO content and thermal emissivity spectral features; emissivities become lower as CaO content increases in the 10.3-13 μm region; and the first derivative spectra have better predictive ability than the original emissivity spectra. PMID:22242490

  15. Estimating activity of uranium ore and solid waste

    From radioactive equilibrium of natural radioactive decay series and content of several radioactive constituents, the formula for estimating activity of uranium ore and solid waste is derived and a case study on treatment engineering of decommissioning Linchang Uranium Mine is presented

  16. A quantitative method to estimate high gloss polished tool steel surfaces

    Rebeggiani, S.; Rosén, B.-G.; Sandberg, A.

    2011-08-01

    Visual estimations are today the most common way to assess the surface quality of moulds and dies; a method that is both subjective and, with today's high demands on surfaces, hardly usable to distinguish between the finest surface qualities. Instead, a method based on non-contact 3D surface texture analysis is suggested. Several types of tool steel samples, manually as well as machine polished, were analysed to study different types of surface defects such as pitting, orange peel and outwardly features. The classification of the defect structures serves as a catalogue where known defects are described. Suggestions of different levels of 'high surface quality', defined in numerical values adapted to high-gloss polished tool steel surfaces, are presented. The final goal is to develop a new manual that can work as a 'standard' for estimations of tool steel surfaces for steel producers, mould makers, polishers etc.

  18. A quantitative method to estimate high gloss polished tool steel surfaces

    Rebeggiani, S; Rosen, B-G [Halmstad University, The Functional Surfaces Research Group, Box 823, SE-301 18 HALMSTAD (Sweden); Sandberg, A, E-mail: sabina.rebeggiani@hh.se [Uddeholms AB, SE-683 85 Hagfors (Sweden)

    2011-08-19

    Visual estimation is today the most common way to assess the surface quality of moulds and dies; a method that is both subjective and, with today's high demands on surfaces, hardly usable to distinguish between the finest surface qualities. Instead, a method based on non-contact 3D surface texture analysis is suggested. Several types of tool steel samples, manually as well as machine polished, were analysed to study different types of surface defects such as pitting, orange peel and outwardly features. The classification of the defect structures serves as a catalogue in which known defects are described. Suggestions for different levels of 'high surface quality', defined in numerical values adapted to high gloss polished tool steel surfaces, are presented. The final goal is to develop a new manual that can work as a 'standard' for the assessment of tool steel surfaces for steel producers, mould makers, polishers etc.

  19. Quantitative Estimate of the Relation Between Rolling Resistance on Fuel Consumption of Class 8 Tractor Trailers Using Both New and Retreaded Tires (SAE Paper 2014-01-2425)

    Road tests of class 8 tractor trailers were conducted by the US Environmental Protection Agency on new and retreaded tires of varying rolling resistance in order to provide estimates of the quantitative relationship between rolling resistance and fuel consumption.

  20. Quantitative Measurement of Physical Activity in Acute Ischemic Stroke and Transient Ischemic Attack

    Strømmen, Anna Maria; Christensen, Thomas; Jensen, Kai

    2014-01-01

    BACKGROUND AND PURPOSE: The purpose of this study was to quantitatively measure and describe the amount and pattern of physical activity in patients within the first week after acute ischemic stroke and transient ischemic attack using accelerometers. METHODS: A total of 100 patients with acute...... ischemic stroke or transient ischemic attack admitted to our acute stroke unit wore Actical accelerometers attached to both wrists and ankles and the hip for ≤7 days. Patients were included within 72 hours of symptom onset. Accelerometer output was measured in activity counts (AC). Patients were tested...... feasibility of using accelerometers to quantitatively and continuously measure physical activity simultaneously from all 4 extremities and the hip in patients with acute ischemic stroke and transient ischemic attack. Our study provides quantitative evidence of physical inactivity in patients with acute...

  1. Study on Correlation and Quantitative Error Estimation Method Among the Splitting Shear Wave Identification Methods

    Liu Xiqiang; Zhou Huilan; Li Hong; Gai Dianguang

    2000-01-01

    Based on the propagation characteristics of shear waves in anisotropic layers, the correlation among several splitting shear-wave identification methods has been studied. This paper puts forward a method for estimating splitting shear-wave phases and their reliability, using the assumption that the variance of noise and of the useful signal data obeys a normal distribution. To check the validity of the new method, identification results and error estimates at the 95% confidence level, obtained by analyzing simulated signals, are given.

  2. Quantitative estimates of tropical temperature change in lowland Central America during the last 42 ka

    Grauel, Anna-Lena; Hodell, David A.; Bernasconi, Stefano M.

    2016-03-01

    Determining the magnitude of tropical temperature change during the last glacial period is a fundamental problem in paleoclimate research. Large discrepancies exist in estimates of tropical cooling inferred from marine and terrestrial archives. Here we present a reconstruction of temperature for the last 42 ka from a lake sediment core from Lake Petén Itzá, Guatemala, located at 17°N in lowland Central America. We compared three independent methods of glacial temperature reconstruction: pollen-based temperature estimates, tandem measurements of δ18O in biogenic carbonate and gypsum hydration water, and clumped isotope thermometry. Pollen provides a near-continuous record of temperature change for most of the glacial period but the occurrence of a no-analog pollen assemblage during cold, dry stadials renders temperature estimates unreliable for these intervals. In contrast, the gypsum hydration and clumped isotope methods are limited mainly to the stadial periods when gypsum and biogenic carbonate co-occur. The combination of palynological and geochemical methods leads to a continuous record of tropical temperature change in lowland Central America over the last 42 ka. Furthermore, the gypsum hydration water method and clumped isotope thermometry provide independent estimates of not only temperature, but also the δ18O of lake water that is dependent on the hydrologic balance between evaporation and precipitation over the lake surface and its catchment. The results show that average glacial temperature was cooler in lowland Central America by 5-10 °C relative to the Holocene. The coldest and driest times occurred during North Atlantic stadial events, particularly Heinrich stadials (HSs), when temperature decreased by up to 6 to 10 °C relative to today. This magnitude of cooling is much greater than estimates derived from Caribbean marine records and model simulations. The extreme dry and cold conditions during HSs in the lowland Central America were associated

  3. Long-term accounting for raindrop size distribution variations improves quantitative precipitation estimation by weather radar

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2016-04-01

    Weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources. The current study focuses on the impact of variations of the raindrop size distribution (DSD) on radar rainfall estimates. Such variations lead to errors in the estimated rainfall intensity (R) and specific attenuation (k) when fixed relations are used for the conversion of the observed reflectivity (Z) into R and k. For non-polarimetric radar, this error source has received relatively little attention compared to other error sources. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and a Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed in The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product compared to applying climatological Z-R and Z-k relations. This especially holds for situations where widespread stratiform precipitation is observed. The best results are obtained when the DSD parameters are optimized. However, the optimized Z-R and Z-k relations show an unrealistic variability that arises from uncorrected error sources. As such, the optimization approach does not result in a realistic DSD shape but instead also accounts for uncorrected error sources, resulting in the best radar rainfall adjustment. Therefore, to further improve the quality of precipitation estimates by weather radar, use should be made either of polarimetric radar or of an extended network of disdrometers.
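
    The record above is concerned with the fixed power-law relations used to convert reflectivity into rain rate. As a point of reference, the sketch below inverts a Z = a·R^b relation using commonly quoted default coefficients (assumed here for illustration, not the study's optimized values).

```python
import numpy as np

# Convert radar reflectivity (dBZ) to rain rate (mm/h) with a fixed Z-R
# power law Z = a * R**b; a=200, b=1.6 is a widely used default.

def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
    z_linear = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)  # dBZ -> mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)                        # invert the power law

print(rain_rate_from_reflectivity([20.0, 35.0, 50.0]))
```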

  4. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    Dongxu WANG; Mackie, T. Rockwell; Wolfgang A. Tomé

    2011-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a cond...

  5. Pharmacognostic Standardization and Quantitative Estimation of some Isolated Phytoconstituents from Croton oblongifolius Roxb.

    Mandal L

    2011-01-01

    Croton oblongifolius Roxb. (Euphorbiaceae) is a weed available all over the agricultural fields of West Bengal villages. Traditionally this plant is used as a wound-healing drug in the Bengal villages. As little scientific information is available, an attempt has been made to identify some important phytoconstituents and their amounts, which may serve as marker compounds. Simultaneously, a preliminary standardization of this plant was performed by pharmacognostic, morphological and microscopical investigations. Here flavonoids, terpenoids and tannins were isolated and estimated to identify the marker compounds, and the different standardization parameters and their results were recorded for future requirements.

  6. Estimation of the Accuracy of Method for Quantitative Determination of Volatile Compounds in Alcohol Products

    Charepitsa, S V; Zadreyko, Y V; Sytova, S N

    2016-01-01

    Results of the estimation of the precision of a gas chromatographic method for the determination of volatile compounds in alcohol-containing products (acetaldehyde, methyl acetate, ethyl acetate, methanol, isopropyl alcohol, propyl alcohol, isobutyl alcohol, butyl alcohol, isoamyl alcohol) are presented. To determine the accuracy, measurements were planned in accordance with ISO 5725 and carried out on a Crystal-5000 gas chromatograph. The standard deviations of repeatability and intermediate precision and their limits are derived from the experimental data obtained. The uncertainty of the measurements was calculated on the basis of an "empirical" method. The obtained accuracy values indicate that the developed method provides an expanded measurement uncertainty of 2 to 20%, depending on the analyzed compound and the measured concentration.
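
    The precision figures described above come from an ISO 5725 style nested design. As a rough sketch of that computation (an illustration, not the published procedure), repeatability and intermediate-precision standard deviations can be estimated from a days-by-replicates table of results:

```python
import numpy as np

# One-way ANOVA estimator: rows of `results_by_day` hold replicate results
# obtained on one day under repeatability conditions.

def precision_estimates(results_by_day):
    data = np.asarray(results_by_day, dtype=float)   # shape: (days, replicates)
    n = data.shape[1]
    s_r2 = np.mean(data.var(axis=1, ddof=1))                      # repeatability variance
    s_L2 = max(data.mean(axis=1).var(ddof=1) - s_r2 / n, 0.0)     # between-day variance
    return np.sqrt(s_r2), np.sqrt(s_r2 + s_L2)                    # (s_r, s_intermediate)
```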

  7. IMPROVED RP-HPLC METHOD FOR QUANTITATIVE ESTIMATION OF STEVIOSIDE IN STEVIA REBAUDIANA BERTONI BURM

    Shankar Katekhaye

    2011-01-01

    An RP-HPLC method with UV array detection was established for the determination of stevioside, a constituent extracted from the herb S. rebaudiana. Stevioside was separated using an isocratic solvent system consisting of methanol and 0.1% orthophosphoric acid (v/v) in water (70:30) at a flow rate of 1.0 ml/min and a detection wavelength of 219 nm. The method was validated for linearity, precision, accuracy, limit of detection (LOD), and limit of quantitation (LOQ). The linearity of the proposed method was obtained in the range of 5.0-75 μg/ml with a regression coefficient of 0.9999. Intraday and interday precision studies showed a relative standard deviation of less than 2.5%. The accuracy of the proposed method was determined by a recovery study conducted at 3 different levels. The average recovery was 97-99%. The LOD and LOQ were 0.02 and 0.05 µg/ml, respectively. The stevioside content of the dried leaf powder was within the ranges of 6.83-7.91% and 1.7-2.9% w/w, respectively. The proposed method is simple, sensitive, yet reproducible. It is therefore suitable for routine analysis of stevioside in S. rebaudiana Bertoni.
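
    The validation items listed above (linearity, LOD, LOQ, back-calculated content) follow standard calibration arithmetic. The sketch below illustrates one common way of computing them from calibration-standard data, assuming the usual 3.3σ/S and 10σ/S convention (not necessarily the authors' exact procedure):

```python
import numpy as np

def calibrate(concentrations, peak_areas):
    conc = np.asarray(concentrations, dtype=float)
    area = np.asarray(peak_areas, dtype=float)
    slope, intercept = np.polyfit(conc, area, 1)      # linear calibration curve
    residuals = area - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)                      # residual SD of the regression
    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    return slope, intercept, lod, loq

def quantify(peak_area, slope, intercept):
    return (peak_area - intercept) / slope             # back-calculate concentration
```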

  8. Quantitative assessment of target dependence of pion fluctuation in hadronic interactions – estimation through erraticity

    Dipak Ghosh; Argha Deb; Mitali Mondal; Arindam Mondal; Sitram Pal

    2012-12-01

    The event-to-event fluctuation pattern of pions produced by proton and pion beams is studied in terms of the newly defined erraticity measures χ(p, q), Σ_q' and μ_q' proposed by Cao and Hwa. The analysis reveals the erratic behaviour of the produced pions, signifying chaotic multiparticle production in high-energy hadron–nucleus interactions (π⁻–AgBr interactions at 350 GeV/c and p–AgBr interactions at 400 GeV/c). However, the chaoticity does not depend on whether the projectile is a proton or a pion. The results are compared with the results of the VENUS-generated data for the above interactions, which suggests that the VENUS event generator is unable to reproduce the event-to-event fluctuations of spatial patterns of final states. A comparative study of p–AgBr interactions and p–p collisions at 400 GeV/c from NA27, with the help of a quantitative parameter for the assessment of pion fluctuation, indicates conclusively that the particle production process is more chaotic for hadron–nucleus interactions than for hadron–hadron interactions.

  9. Raman spectroscopy of human skin: looking for a quantitative algorithm to reliably estimate human age.

    Pezzotti, Giuseppe; Boffelli, Marco; Miyamori, Daisuke; Uemura, Takeshi; Marunaka, Yoshinori; Zhu, Wenliang; Ikegaya, Hiroshi

    2015-06-01

    The possibility of examining soft tissues by Raman spectroscopy is challenged in an attempt to probe human age for the changes in biochemical composition of skin that accompany aging. We present a proof-of-concept report for explicating the biophysical links between vibrational characteristics and the specific compositional and chemical changes associated with aging. The actual existence of such links is then phenomenologically proved. In an attempt to foster the basics for a quantitative use of Raman spectroscopy in assessing aging from human skin samples, a precise spectral deconvolution is performed as a function of donors' ages on five cadaveric samples, which emphasizes the physical significance and the morphological modifications of the Raman bands. The outputs suggest the presence of spectral markers for age identification from skin samples. Some of them appeared as authentic "biological clocks" for the apparent exactness with which they are related to age. Our spectroscopic approach yields clear compositional information of protein folding and crystallization of lipid structures, which can lead to a precise identification of age from infants to adults. Once statistically validated, these parameters might be used to link vibrational aspects at the molecular scale for practical forensic purposes. PMID:26112367

  10. Quantitative Estimation of Temperature Variations in Plantar Angiosomes: A Study Case for Diabetic Foot

    H. Peregrina-Barreto

    2014-01-01

    Thermography is a useful tool since it provides information that may help in the diagnosis of several diseases in a noninvasive and fast way. In particular, thermography has been applied in the study of the diabetic foot. However, most of these studies report only qualitative information, making it difficult to measure significant parameters such as temperature variations. These variations are important in the analysis of the diabetic foot since they could bring knowledge, for instance, regarding ulceration risks. The early detection of ulceration risks is considered an important research topic in the medical field, as its objective is to avoid major complications that might lead to limb amputation. The absence of symptoms in the early phase of ulceration is the main obstacle to a timely diagnosis in subjects with neuropathy. Since the relation between temperature and ulceration risks is well established in the literature, a methodology that obtains quantitative temperature differences in the plantar area of the diabetic foot to detect ulceration risks is proposed in this work. The methodology is based on the angiosome concept and image processing.

  11. Theoretical framework for quantitatively estimating ultrasound beam intensities using infrared thermography.

    Myers, Matthew R; Giridhar, Dushyanth

    2011-06-01

    In the characterization of high-intensity focused ultrasound (HIFU) systems, it is desirable to know the intensity field within a tissue phantom. Infrared (IR) thermography is a potentially useful method for inferring this intensity field from the heating pattern within the phantom. However, IR measurements require an air layer between the phantom and the camera, making inferences about the thermal field in the absence of the air complicated. For example, convection currents can arise in the air layer and distort the measurements relative to the phantom-only situation. Quantitative predictions of intensity fields based upon IR temperature data are also complicated by axial and radial diffusion of heat. In this paper, mathematical expressions are derived for use with IR temperature data acquired at times long enough that noise is a relatively small fraction of the temperature trace, but short enough that convection currents have not yet developed. The relations were applied to simulated IR data sets derived from computed pressure and temperature fields. The simulation was performed in a finite-element geometry involving a HIFU transducer sonicating upward in a phantom toward an air interface, with an IR camera mounted atop an air layer, looking down at the heated interface. It was found that, when compared to the intensity field determined directly from acoustic propagation simulations, intensity profiles could be obtained from the simulated IR temperature data with an accuracy of better than 10%, at pre-focal, focal, and post-focal locations. PMID:21682428

  12. Quantitative Estimation of Andrographolide by Reverse Phase-High Liquid Chromatography Method from Andrographis Paniculata Nees.

    Dilip Bhaskar Jadhao

    2012-11-01

    A reverse phase high performance liquid chromatographic method with UV array detection was established for the determination of andrographolide. Andrographolide was separated using an isocratic solvent system consisting of isopropyl alcohol, formic acid and water (70:10:20 v/v) at a flow rate of 1.0 ml/min and a detection wavelength of 223 nm. The method was validated for linearity, precision, accuracy, limit of detection (LOD), and limit of quantitation (LOQ). The linearity of the proposed method was obtained in the range of 4.5-70 μg/ml for andrographolide with a regression coefficient of 0.9999. Intraday and interday precision studies showed a relative standard deviation of less than 2.5%. The accuracy of the proposed method was determined by a recovery study conducted at 3 different levels. The average recovery was 97-99%. The LOD and LOQ were 0.03 and 0.05 μg/ml for andrographolide, respectively. The andrographolide content of the dried leaf powder was within the range of 0.98-1.15% w/w. The proposed method is simple, sensitive, yet reproducible. It is therefore suitable for routine analysis of andrographolide in A. paniculata Nees.

  13. Estimation of thermal neutron flux from natZr activity

    Neutron transmutation doped (NTD) Ge thermistors are being developed as low-temperature thermometers (in the mK range) for the cryogenic tin bolometer of the India-based TIN detector (TIN.TIN). For this purpose, semiconductor-grade Ge wafers are irradiated with thermal neutrons at the Dhruva reactor, BARC, and the dopant concentration depends critically on the thermal neutron fluence. In order to obtain an independent estimate of the thermal neutron flux, natZr is used in one of the irradiations. The irradiated natZr samples have been studied in the TIFR Low background Experimental Setup (TiLES). The thermal neutron flux is estimated from the activity of 95Zr

  14. Quantitative falls risk estimation through multi-sensor assessment of standing balance

    Falls are the most common cause of injury and hospitalization and one of the principal causes of death and disability in older adults worldwide. Measures of postural stability have been associated with the incidence of falls in older adults. The aim of this study was to develop a model that accurately classifies fallers and non-fallers using novel multi-sensor quantitative balance metrics that can be easily deployed into a home or clinic setting. We compared the classification accuracy of our model with an established method for falls risk assessment, the Berg balance scale. Data were acquired using two sensor modalities, a pressure sensitive platform sensor and a body-worn inertial sensor mounted on the lower back, from 120 community dwelling older adults (65 with a history of falls, 55 without, mean age 73.7 ± 5.8 years, 63 female) while performing a number of standing balance tasks in a geriatric research clinic. Results obtained using a support vector machine yielded a mean classification accuracy of 71.52% (95% CI: 68.82–74.28) in classifying falls history, using a single model for all data points. Considering male and female participant data separately yielded classification accuracies of 72.80% (95% CI: 68.85–77.17) and 73.33% (95% CI: 69.88–76.81) respectively, leading to a mean classification accuracy of 73.07% in identifying participants with a history of falls. Results compare favourably to those obtained using the Berg balance scale (mean classification accuracy: 59.42% (95% CI: 56.96–61.88)). Results from the present study could lead to a robust method for assessing falls risk in both supervised and unsupervised environments. (paper)
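
    As a rough illustration of the classification step described above (not the authors' pipeline), the sketch below cross-validates an RBF support vector machine on a feature matrix of balance metrics; `features` and the binary `faller` labels are assumed inputs derived from the platform and inertial sensors.

```python
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def falls_classification_accuracy(features, faller, folds=10):
    # scale the balance metrics, then fit an RBF support vector classifier
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(model, features, faller, cv=folds)
    return scores.mean(), scores.std()
```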

  15. Quantitative estimation of aesthesiometric thresholds for assessing impaired tactile sensation in workers exposed to vibration.

    Bovenzi, M; Zadini, A

    1989-01-01

    To evaluate the usefulness of aesthesiometric threshold testing in the quantitative assessment of peripheral sensorineural disorders occurring in the hand-arm vibration syndrome, two point discrimination (TPD) and depth sense perception (DSP) thresholds were measured by means of two aesthesiometers in the fingertips of 65 forestry workers exposed to chain saw vibration and 91 healthy males unexposed to local vibration or neurotoxic chemicals. Among the healthy subjects, divided into three age groups, there was no difference in the mean values of TPD and DSP thresholds. Assuming 1.28 or 2 standard deviations above the mean to be the upper limits of normality, in the present study the threshold values for TPD were 2.5 and 3.13 mm, respectively. Using the same assumptions, the normal threshold values for DSP were 0.36 and 0.49 mm. Among the 65 chain saw operators the prevalence of peripheral sensory disturbances was 70.8%. On the basis of the aesthesiometric results obtained for the group of 46 chain sawyers affected with sensorineural symptoms and a control group of 46 manual workers, the specificity of the aesthesiometric testing method was found to range between 93.4 and 100%, while the sensitivity varied from 52.2 to 71.7%. In its predictive value aesthesiometry had a positive accuracy of 84.6-96.0% and a negative accuracy of 42.8-50.0%. Aesthesiometric testing was able to differentiate between normals and vibration workers with sensory disturbances on a group basis (P less than 0.001), but due to the high rate of false negatives among vibration exposed patients, it was unsuitable to confirm objectively sensorineural symptoms on an individual basis.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2777386
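
    The normality limits and diagnostic accuracy figures quoted above follow simple arithmetic that can be sketched as below (an illustration under the stated mean-plus-k-SD assumption, with arrays of control and patient measurements as assumed inputs rather than the study data):

```python
import numpy as np

def upper_limit_of_normality(controls, k=1.28):
    # threshold = control mean + k standard deviations (k = 1.28 or 2)
    return np.mean(controls) + k * np.std(controls, ddof=1)

def sensitivity_specificity(patients, controls, threshold):
    sensitivity = np.mean(np.asarray(patients) > threshold)    # abnormal result in affected subjects
    specificity = np.mean(np.asarray(controls) <= threshold)   # normal result in healthy subjects
    return sensitivity, specificity
```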

  16. Quantitative estimation of lithofacies from seismic data in a tertiary turbidite system in the North Sea

    Joerstad, A.K.; Avseth, P.Aa; Mukerji, T.; Mavko, G.; Granli, J.R.

    1998-12-31

    Deep water clastic systems and associated turbidite reservoirs are often characterized by very complex sand distributions, and reservoir description based on conventional seismic and well-log stratigraphic analysis may be very uncertain in these depositional environments. It has been shown that reservoirs in turbidite systems have been produced very inefficiently in conventional development. More than 70% of the mobile oil is commonly left behind because of the heterogeneous nature of these reservoirs. In this study, a turbidite system in the North Sea, with five available wells and 3-D seismic near and far offset stacks, is examined to establish most likely estimates of facies and pore fluid within the cube. 5 figs.

  17. Usefulness of the automatic quantitative estimation tool for cerebral blood flow. Clinical assessment of the application software tool AQCEL

    AQCEL enables automatic reconstruction of single-photon emission computed tomogram (SPECT) without image degradation and quantitative analysis of cerebral blood flow (CBF) after the input of simple parameters. We ascertained the usefulness and quality of images obtained by the application software AQCEL in clinical practice. Twelve patients underwent brain perfusion SPECT using technetium-99m ethyl cysteinate dimer at rest and after acetazolamide (ACZ) loading. Images reconstructed using AQCEL were compared with those reconstructed using conventional filtered back projection (FBP) method for qualitative estimation. Two experienced nuclear medicine physicians interpreted the image quality using the following visual scores: 0, same; 1, slightly superior; 2, superior. For quantitative estimation, the mean CBF values of the normal hemisphere of the 12 patients using ACZ calculated by the AQCEL method were compared with those calculated by the conventional method. The CBF values of the 24 regions of the 3-dimensional stereotaxic region of interest template (3DSRT) calculated by the AQCEL method at rest and after ACZ loading were compared to those calculated by the conventional method. No significant qualitative difference was observed between the AQCEL and conventional FBP methods in the rest study. The average score by the AQCEL method was 0.25±0.45 and that by the conventional method was 0.17±0.39 (P=0.34). There was a significant qualitative difference between the AQCEL and conventional methods in the ACZ loading study. The average score for AQCEL was 0.83±0.58 and that for the conventional method was 0.08±0.29 (P=0.003). During quantitative estimation using ACZ, the mean CBF values of 12 patients calculated by the AQCEL method were 3-8% higher than those calculated by the conventional method. The square of the correlation coefficient between these methods was 0.995. While comparing the 24 3DSRT regions of 12 patients, the squares of the correlation coefficient

  18. 76 FR 9637 - Proposed Information Collection (Veteran Suicide Prevention Online Quantitative Surveys) Activity...

    2011-02-18

    ... AFFAIRS Proposed Information Collection (Veteran Suicide Prevention Online Quantitative Surveys) Activity... outreach efforts on the prevention of suicide among Veterans and their families. DATES: Written comments...). Type of Review: New collection. Abstract: VA's top priority is the prevention of Veterans suicide....

  19. QUANTITATIVE ESTIMATION OF DNA ISOLATED FROM VARIOUS PARTS OF ANNONA SQUAMOSA

    Soni Himesh; Singhai A.K.; Sharma Sarvesh

    2011-01-01

    Plants have been one of the important sources of medicines since the beginning of human civilization. There is a growing demand for plant-based medicines, health products, pharmaceuticals, food supplements, cosmetics etc. Annona squamosa Linn is a multipurpose tree with edible fruits and is a source of medicinal and industrial products. Annona squamosa Linn is used as an antioxidant, antidiabetic, hepatoprotective, cytotoxic, genotoxic, antitumor and antilice agent. It i...

  20. Simultaneous quantitative analysis of 12 methoxyflavones with melanogenesis inhibitory activity from the rhizomes of Kaempferia parviflora.

    Ninomiya, Kiyofumi; Matsumoto, Taku; Chaipech, Saowanee; Miyake, Sohachiro; Katsuyama, Yushi; Tsuboyama, Akihiro; Pongpiriyadacha, Yutana; Hayakawa, Takao; Muraoka, Osamu; Morikawa, Toshio

    2016-04-01

    A methanol extract from the rhizomes of Kaempferia parviflora Wall. ex Baker (Zingiberaceae) has shown inhibitory effects against melanogenesis in theophylline-stimulated murine B16 melanoma 4A5 cells (IC50 = 9.6 μg/mL). Among 25 flavonoids and three acetophenones isolated previously (1-28), several constituents including 5-hydroxy-7,3',4'-trimethoxyflavone (6, IC50 = 8.8 μM), 5,7,3',4'-tetramethoxyflavone (7, 8.6 μM), 5,3'-dihydroxy-3,7,4'-trimethoxyflavone (12, 2.9 μM), and 5-hydroxy-3,7,3',4'-tetramethoxyflavone (13, 3.5 μM) showed inhibitory effects without notable cytotoxicity at the effective concentrations. Compounds 6, 7, 12, and 13 inhibited the expression of tyrosinase, tyrosine-related protein (TRP)-1, and TRP-2 mRNA, which could be the mechanism of their melanogenesis inhibitory activity. In addition, a quantitative analytical method for 12 methoxyflavones (1, 2, 4-11, 13, and 14) in the extract was developed using HPLC. The optimal conditions for separation and detection of these constituents were achieved on an ODS column (3 μm particle size, 2.1 mm i.d. × 100 mm) with MeOH-0.1% aqueous acetic acid solvent systems as the mobile phase, and the detection and quantitation limits of the method were estimated to be 0.08-0.66 ng and 0.22-2.00 ng, respectively. The relative standard deviation values of intra- and interday precision were lower than 0.95 and 1.08%, respectively, the overall mean recoveries of all flavonoids were 97.9-102.9%, and the correlation coefficients of all the calibration curves showed good linearity within the test ranges. For validation of the protocol, extracts of three kinds of the plant's rhizomes collected from different regions in Thailand (Loei, Phetchabun, and Chiang Mai provinces) were evaluated. The results indicated that the assay was reproducible, precise, and could be readily utilized for the quality evaluation of the plant materials. PMID:26711832

  1. A new quantitative approach for estimating bone cell connections from nano-CT images.

    Dong, Pei; Pacureanu, Alexandra; Zuluaga, Maria A; Olivier, Cécile; Frouin, Frédérique; Grimal, Quentin; Peyrin, Françoise

    2013-01-01

    Recent works have highlighted the crucial role of the osteocyte system in bone fragility. The number of canaliculi per osteocyte lacuna (Lc.NCa) is an important parameter that reflects the functionality of bone tissue, but it is rarely reported due to the limitations of current microscopy techniques and is only assessed from 2D histology sections. Previously, we showed that synchrotron radiation nanotomography (SR-nanoCT) is a promising technique for imaging the 3D lacunar-canalicular network. Here we present, for the first time, an automatic method to quantify the connectivity of bone cells in 3D. After segmentation, our method first separates and labels each lacuna in the network. Then, by creating a bounding surface around each lacuna, Lc.NCa is calculated through estimating 3D topological parameters. The proposed method was successfully applied to a 3D SR-nanoCT image of cortical femoral bone. Statistical results on 165 lacunae are reported, showing a mean of 51, which is consistent with the literature. PMID:24110532

  2. [Quantitative estimation of glycyrrhizic acid and liquiritin contents using in-situ canopy spectroscopy].

    Ding, Ling; Li, Hong-Yi; Zhang, Xue-Wen

    2014-07-01

    The present study is the first attempt to apply in situ hyperspectral data of the G. uralensis canopy in the visible-shortwave infrared region (Vis-SWIR) to the quantitative estimation of the glycyrrhizic acid (GA) and liquiritin (LQ) contents of Glycyrrhiza uralensis. After first-derivative preprocessing and feature band selection by the Wilks' lambda stepwise method, partial least squares (PLS) regression models, with high performance liquid chromatography (HPLC) as reference, were constructed to predict the GA and LQ contents, respectively. With the nine selected bands and the PLS regression model, the GA regression accuracy R2 is 0.953, the root mean square error of the calibration set (RMSEC) is 0.31, the prediction accuracy R2 is 0.875 and the root mean square error of the validation set (RMSEP) is 0.39; the LQ regression accuracy R2 is 0.932, RMSEC is 0.22, the prediction accuracy R2 is 0.883 and RMSEP is 0.27. The results showed that our methods provided acceptable results and demonstrated the feasibility of determining GA and LQ contents from remotely sensed data. It is recommended that an advanced study be conducted under field conditions using airborne and/or spaceborne hyperspectral sensors. PMID:25269311

  3. A generalized estimating equations approach to quantitative trait locus detection of non-normal traits

    Thomson Peter C

    2003-05-01

    To date, most statistical developments in QTL detection methodology have been directed at continuous traits with an underlying normal distribution. This paper presents a method for QTL analysis of non-normal traits using a generalized linear mixed model approach. Development of this method has been motivated by a backcross experiment involving two inbred lines of mice that was conducted in order to locate a QTL for litter size. A Poisson regression form is used to model litter size, with allowances made for under- as well as over-dispersion, as suggested by the experimental data. In addition to fixed parity effects, random animal effects have also been included in the model. However, the method is not fully parametric as the model is specified only in terms of means, variances and covariances, and not as a full probability model. Consequently, a generalized estimating equations (GEE) approach is used to fit the model. For statistical inferences, permutation tests and bootstrap procedures are used. This method is illustrated with simulated as well as experimental mouse data. Overall, the method is found to be quite reliable, and with modification, can be used for QTL detection for a range of other non-normally distributed traits.

  4. A generalized estimating equations approach to quantitative trait locus detection of non-normal traits.

    Thomson, Peter C

    2003-01-01

    To date, most statistical developments in QTL detection methodology have been directed at continuous traits with an underlying normal distribution. This paper presents a method for QTL analysis of non-normal traits using a generalized linear mixed model approach. Development of this method has been motivated by a backcross experiment involving two inbred lines of mice that was conducted in order to locate a QTL for litter size. A Poisson regression form is used to model litter size, with allowances made for under- as well as over-dispersion, as suggested by the experimental data. In addition to fixed parity effects, random animal effects have also been included in the model. However, the method is not fully parametric as the model is specified only in terms of means, variances and covariances, and not as a full probability model. Consequently, a generalized estimating equations (GEE) approach is used to fit the model. For statistical inferences, permutation tests and bootstrap procedures are used. This method is illustrated with simulated as well as experimental mouse data. Overall, the method is found to be quite reliable, and with modification, can be used for QTL detection for a range of other non-normally distributed traits. PMID:12729549
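
    The two records above describe fitting a Poisson model for litter size with GEE, including fixed parity effects and within-animal correlation. A minimal sketch of that kind of fit with statsmodels is given below (illustrative only; the column names, the marker covariate, and the exchangeable correlation structure are assumptions, not the paper's exact specification):

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_litter_size_gee(df):
    # Poisson GEE with exchangeable correlation among repeated records per animal
    model = smf.gee(
        "litter_size ~ marker + C(parity)",   # fixed marker and parity effects
        groups="animal",
        data=df,
        family=sm.families.Poisson(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    return model.fit()
```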

  5. Quantitative estimation of sediment erosion and accretion processes in a micro-tidal coast

    G.Udhaba DORA; V.Sanil KUMAR; P.VINAYARAJ; C.S.PHILIP; G.JOHNSON

    2014-01-01

    Spatio-temporal cross-shore profiles and textural characteristics are key parameters for understanding the dynamics of the inter-tidal sedimentary environment. This study describes the short-term dynamics of the inter-tidal sedimentary environment at beaches along a micro-tidal coast, and a correlation is estimated between cross-shore morphodynamics and the textural characteristics of surface sediments. The sedimentary environment is examined over a complete annual cycle using monthly collected cross-shore profiles and sediment samples. Devbag beach (northern side) and Ravindranath Tagore beach (southern side), at the Kali river mouth, Karwar, on the west coast of India, are characterized by extremely gentle to average slopes and are broadly composed of unimodal sands. The sedimentary environment is mainly composed of textures that are fine to medium sand, well to moderately sorted, fine to coarse skewed, and platykurtic to leptokurtic in nature. During the annual cycle a reversal pattern is observed between the two adjacent beaches: a slow rate of sediment accretion is observed at Devbag beach, while Ravindranath Tagore beach exhibits erosion. The beach dynamics, along with the propagation of south-west (SW) and south-west-west (SWW) waves towards the coast, exhibit a dominance of northward sediment transport with the existence of a northerly alongshore current. In addition, the study reveals that an eroded beach is not necessarily composed of coarse grains. The poor correlation among morpho-sedimentary characteristics shows that predicting grain characteristics from the beach profile, and vice versa, is unrealistic.

  6. Statistical estimations for predicting the detection limit of low activities

    When extremely low activities are measured, the statistics of the observed decay events may be insufficient for the justified application of statistical assessments based on the Gaussian distribution. Student's t-distribution and the theory of interval estimation are used as the basis for a statistical model for predicting the detection limit and the signal-to-noise ratio which could be reached under the conditions of the measurement. The derived statistical estimates are applicable in cases when a small number of decay events is expected to be recorded. The minimum detectable activity, characterizing the detection limit under the conditions of the measurement, is determined at the given confidence limits and at the assumed permissible relative statistical errors during the measurement of the sample and the background (within the available time limits). The derived statistical estimates can be used for comparing the possibilities offered by the different measuring methods applied for the determination of extremely low activities. These evaluations can also be used as a criterion for discussing the reliability of the measurement results. (author). 6 refs
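
    As a rough companion to the record above (not the paper's derivation), the sketch below computes a Student's t based detection threshold for a net count rate from a small set of repeated background measurements; the counting time and the counts are assumed inputs.

```python
import numpy as np
from scipy import stats

def detection_threshold_rate(background_counts, t_count, confidence=0.95):
    """Net count rate that must be exceeded to claim detection.

    background_counts : repeated background counts over equal times t_count (s)
    """
    n = len(background_counts)
    rates = np.asarray(background_counts, dtype=float) / t_count
    s_rate = rates.std(ddof=1)                    # spread of the background rate
    t_q = stats.t.ppf(confidence, df=n - 1)       # one-sided Student's t quantile
    return t_q * s_rate * np.sqrt(1.0 + 1.0 / n)  # prediction-interval style threshold
```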

  7. Quantitative precipitation estimation based on high-resolution numerical weather prediction and data assimilation with WRF – a performance test

    Hans-Stefan Bauer

    2015-04-01

    Quantitative precipitation estimation and forecasting (QPE and QPF) are among the most challenging tasks in atmospheric sciences. In this work, QPE based on numerical modelling and data assimilation is investigated. Key components are the Weather Research and Forecasting (WRF) model in combination with its 3D variational assimilation scheme, applied on the convection-permitting scale with sophisticated model physics over central Europe. The system is operated in a 1-hour rapid update cycle and processes a large set of in situ observations, data from French radar systems, the European GPS network and satellite sensors. Additionally, a free forecast driven by the ECMWF operational analysis is included as a reference run representing current operational precipitation forecasting. The verification is done both qualitatively and quantitatively by comparisons of reflectivity, accumulated precipitation fields and derived verification scores for a complex synoptic situation that developed on 26 and 27 September 2012. The investigation shows that even the downscaling from ECMWF represents the synoptic situation reasonably well. However, significant improvements are seen in the results of the WRF QPE setup, especially when the French radar data are assimilated. The frontal structure is more defined and the timing of the frontal movement is improved compared with observations. Even mesoscale band-like precipitation structures on the rear side of the cold front are reproduced, as seen by radar. The improvement in performance is also confirmed by a quantitative comparison of the 24-hourly accumulated precipitation over Germany. The mean correlation of the model simulations with observations improved from 0.2 in the downscaling experiment and 0.29 in the assimilation experiment without radar data to 0.56 in the WRF QPE experiment including the assimilation of French radar data.

  8. Quantitative estimation of crystalline phases of cordierite-mullite based kiln furniture by X-ray diffraction method

    The thermal endurance of ceramics used as refractory supports is closely related to the crystalline phases present. In the alumino-silicate refractory family, the cordierite-mullite system, produced using commercially available raw materials, offers an attractive proposition up to a temperature of about 1280 deg C. A cordierite body formed in situ with high expansion grog as additive contains the following crystalline phases: (i) cordierite; (ii) mullite; (iii) α-Al2O3; (iv) cristobalite and (v) quartz. Of these, only the first three phases are relevant in enhancing the life cycle of kiln furniture vis-a-vis their thermal endurance. Hence a quantitative phase analysis is necessary to predict the life of kiln furniture and improve upon its performance. The present method of quantitative phase analysis is applicable when the crystal structure of each component phase whose proportion in the mixture is required is known. Simulated intensity patterns based on the crystallographic data, viz. space group symmetry, lattice parameters, positional coordinates of the various atoms and their occupancy factors, crystal density etc., were prepared for each reflection zone. Relevant computer programmes were also developed. The intensity data collected in different 2θ zones were used to find the volume concentrations of the different phases. The error in the quantitative estimation of phases was nearly 6% when the diffraction data were taken using monochromatic X-radiation. The chemical analysis corroborates this result. The materials contained a certain amount of glassy phase. Assuming that the glassy phase was composed mainly of silica glass along with other impurities, the results showed that only 85% of the total phase content was crystalline and the remaining part was amorphous. (author). 10 refs., 1 fig., 4 tabs

  9. Bi-temporal 3D active appearance models with applications to unsupervised ejection fraction estimation

    Stegmann, Mikkel Bille; Pedersen, Dorthe

    Rapid and unsupervised quantitative analysis is of utmost importance to ensure clinical acceptance of many examinations using cardiac magnetic resonance imaging (MRI). We present a framework that aims at fulfilling these goals for the application of left ventricular ejection fraction estimation in...... four-dimensional MRI. The theoretical foundation of our work is the generative two-dimensional Active Appearance Models by Cootes et al., here extended to bi-temporal, three-dimensional models. Further issues treated include correction of respiratory induced slice displacements, systole detection, and...... a texture model pruning strategy. Cross-validation carried out on clinical-quality scans of twelve volunteers indicates that ejection fraction and cardiac blood pool volumes can be estimated automatically and rapidly with accuracy on par with typical inter-observer variability....

  10. Quantitative Structure – Antioxidant Activity Relationships of Flavonoid Compounds

    Károly Héberger

    2004-12-01

    A quantitative structure-antioxidant activity relationship (QSAR) study of 36 flavonoids was performed using the partial least squares projection to latent structures (PLS) method. The chemical structures of the flavonoids were characterized by constitutional descriptors and two-dimensional topological and connectivity indices. Our PLS model gave a proper description and a suitable prediction of the antioxidant activities of a diverse set of flavonoids having a clustering tendency.

  11. Validation of quantitative IR thermography for estimating the U-value by a hot box apparatus

    Nardi, I.; Paoletti, D.; Ambrosini, D.; de Rubeis, T.; Sfarra, S.

    2015-11-01

    Energy saving plays a key role in the reduction of energy consumption and carbon emissions, and is therefore essential for reaching the goals of the 20-20-20 policy for 2020. In particular, buildings are responsible for about 30% of total European energy consumption; increasing their energy efficiency by reducing the thermal transmittance of the envelope is a key element of the actions and strategies of policy makers. Currently, the study of the energy performance of buildings is based on international standards; in particular, the Italian framework allows the U-value to be calculated according to ISO 6946 or determined by in-situ measurements with a heat flow meter (HFM), following the recommendations provided in ISO 9869. In the last few years, a new technique based on infrared thermography (IRT) (also referred to as the Infrared Thermovision Technique, ITT) has been proposed for the in situ determination of the thermal transmittance of opaque building elements. Some case studies have been reported. This method has already been applied to existing buildings, providing reliable results, but also revealing some weaknesses. In order to overcome such weak points and to establish a systematic procedure for the application of IRT, a validation of the method has been performed in a monitored environment. An infrared camera, heat flow meter sensors and a nearby meteorological station were used for the thermal transmittance measurement. The U-values measured in a hot box with IRT were compared with values calculated following international standards and with HFM results. The results give a good description of the advantages, as well as of the open problems, of IR thermography for estimating the U-value. Further studies will help to refine the technique and to identify the best operative conditions.
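
    Since the record above benchmarks IRT-derived U-values against heat flow meter results, a minimal sketch of the ISO 9869 style "average method" computation on HFM data is given below (an illustration under the usual summed-flux over summed-temperature-difference assumption, not the authors' processing chain):

```python
import numpy as np

def u_value_average_method(heat_flux, t_indoor, t_outdoor):
    """Thermal transmittance U (W/m^2K) from time-aligned HFM measurements:
    heat_flux in W/m^2, indoor/outdoor temperatures in degrees C."""
    q = np.asarray(heat_flux, dtype=float)
    dT = np.asarray(t_indoor, dtype=float) - np.asarray(t_outdoor, dtype=float)
    return q.sum() / dT.sum()   # ratio of accumulated flux to accumulated delta-T
```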

  12. A Quantitative Method to Estimate Vulnerability. Case Study: Motozintla de Mendoza, Chiapas

    Rodriguez, F.; Novelo-Casanova, D. A.

    2011-12-01

    The community of Motozintla de Mendoza is located in the State of Chiapas, México (15° 22' N, 92° 15' W), near the international border with Guatemala. Due to its location, this community is continuously exposed to many different hazards. Motozintla has a population of 20,000 inhabitants. This community has suffered the impact of two disasters in recent years. In view of these scenarios, we carried out the present research with the objective of quantifying the vulnerability of this community. We prepared a tool that allows us to document physical vulnerability by conducting interviews with people in risk situations. Our tool included the analysis of five elements: household structure and public services, socioeconomic characteristics, community preparation for facing a disaster situation, and risk perception of the inhabitants, using a statistically significant sample. Three field campaigns were carried out (October and November 2009, and October 2010) and 444 interviews were registered. Five levels of vulnerability were considered: very high, high, middle, moderate and low. Our region of study was classified spatially and the different estimated levels of vulnerability were geo-referenced on maps. Our results indicate that the locality has a high level of physical vulnerability because about 74% of the population reports that their household has suffered damage in the past; 86% of the households are built with low-resistance materials; 70% of the interviewed families have a daily income of under five to fifteen dollars; 66% of the population does not know of any existing Civil Protection Plan; 83% of the population considers that they live at a high level of risk due to floods; finally, community organization is practically nonexistent. In conclusion, the level of vulnerability of Motozintla is high due to the many factors to which it is exposed, in addition to the structural, socioeconomic and cultural characteristics of its inhabitants. Evidently, those elements of

  13. First quantitative bias estimates for tropospheric NO2 columns retrieved from SCIAMACHY, OMI, and GOME-2 using a common standard

    X. Pan

    2012-06-01

    For the intercomparison of tropospheric nitrogen dioxide (NO2) vertical column density (VCD) data from three different satellite sensors (SCIAMACHY, OMI, and GOME-2), we use a common standard to quantitatively evaluate the biases for the respective data sets. As the standard, a regression analysis using a single set of collocated ground-based Multi-Axis Differential Optical Absorption Spectroscopy (MAX-DOAS) observations at several sites in Japan and China in 2006-2011 is adopted. Examination of various spatial coincidence criteria indicates that the slope of the regression line can be influenced by the spatial distribution of NO2 over the area considered. While the slope varies systematically with the distance between the MAX-DOAS and satellite observation points around Tokyo in Japan, such a systematic dependence is not clearly seen and correlation coefficients are generally higher in comparisons at sites in China. On the basis of these results, we focus mainly on comparisons over China and best estimate the biases in SCIAMACHY, OMI, and GOME-2 data (TM4NO2A and DOMINO version 2 products) against the MAX-DOAS observations to be −5±14%, −10±14%, and +1±14%, respectively, which are all small and insignificant. We suggest that these small biases now allow analyses combining these satellite data for air quality studies that are more systematic and quantitative than previously possible.

  14. Quantitative structure-activity relationships for organophosphates binding to trypsin and chymotrypsin.

    Ruark, Christopher D; Hack, C Eric; Robinson, Peter J; Gearhart, Jeffery M

    2011-01-01

    Organophosphate (OP) nerve agents such as sarin, soman, tabun, and O-ethyl S-[2-(diisopropylamino) ethyl] methylphosphonothioate (VX) do not react solely with acetylcholinesterase (AChE). Evidence suggests that cholinergic-independent pathways over a wide range are also targeted, including serine proteases. These proteases comprise nearly one-third of all known proteases and play major roles in synaptic plasticity, learning, memory, neuroprotection, wound healing, cell signaling, inflammation, blood coagulation, and protein processing. Inhibition of these proteases by OP was found to exert a wide range of noncholinergic effects depending on the type of OP, the dose, and the duration of exposure. Consequently, in order to understand these differences, in silico biologically based dose-response and quantitative structure-activity relationship (QSAR) methodologies need to be integrated. Here, QSARs were used to predict OP bimolecular rate constants for trypsin and α-chymotrypsin. A heuristic regression of over 500 topological/constitutional, geometric, thermodynamic, electrostatic, and quantum mechanical descriptors, using the software Ampac 8.0 and Codessa 2.51 (SemiChem, Inc., Shawnee, KS), was developed to obtain statistically verified equations for the models. General models, using all data subsets, resulted in R2 values of 0.94 and 0.92 and leave-one-out Q2 values of 0.90 and 0.87 for trypsin and α-chymotrypsin. To validate the general model, training sets were split into independent subsets for test set evaluation. A y-randomization procedure, used to estimate chance correlation, was performed 10,000 times, resulting in mean R2 values of 0.24 and 0.30 for trypsin and α-chymotrypsin. The results show that these models are highly predictive and capable of delineating the complex mechanism of action between OP and serine proteases, and ultimately, by applying this approach to other OP enzyme reactions such as AChE, facilitate the development of biologically based

  15. Quantitative structure-activity relationships (QSARs) for the transformation of organic micropollutants during oxidative water treatment.

    Lee, Yunho; von Gunten, Urs

    2012-12-01

    Various oxidants such as chlorine, chlorine dioxide, ferrate(VI), ozone, and hydroxyl radicals can be applied for eliminating organic micropollutants by oxidative transformation during water treatment in systems such as drinking water, wastewater, and water reuse. Over the last decades, many second-order rate constants (k) have been determined for the reaction of these oxidants with model compounds and micropollutants. Good correlations (quantitative structure-activity relationships or QSARs) are often found between the k-values for an oxidation reaction of closely related compounds (i.e. having a common organic functional group) and substituent descriptor variables such as Hammett or Taft sigma constants. In this study, we developed QSARs for the oxidation of organic and some inorganic compounds and organic micropollutant transformation during oxidative water treatment. A total of 18 QSARs were developed based on overall 412 k-values for the reaction of chlorine, chlorine dioxide, ferrate, and ozone with organic compounds containing electron-rich moieties such as phenols, anilines, olefins, and amines. On average, 303 out of 412 (74%) k-values were predicted by these QSARs within a factor of 1/3-3 compared to the measured values. For HO(·) reactions, some principles and estimation methods of k-values (e.g. the Group Contribution Method) are discussed. The developed QSARs and the Group Contribution Method could be used to predict the k-values for various emerging organic micropollutants. As a demonstration, 39 out of 45 (87%) predicted k-values were found to be within a factor of 1/3-3 of the measured values for the selected emerging micropollutants. Finally, it is discussed how the uncertainty in the predicted k-values using the QSARs affects the accuracy of the prediction of micropollutant elimination during oxidative water treatment. PMID:22939392

  16. Quantitative Structure--Activity Relationship (QSAR) for the Oxidation of Trace Organic Contaminants by Sulfate Radical.

    Xiao, Ruiyang; Ye, Tiantian; Wei, Zongsu; Luo, Shuang; Yang, Zhihui; Spinney, Richard

    2015-11-17

    The sulfate radical anion (SO4•–) based oxidation of trace organic contaminants (TrOCs) has recently received great attention due to its high reactivity and low selectivity. In this study, a meta-analysis was conducted to better understand the role of functional groups in the reactivity between SO4•– and TrOCs. The results indicate that compounds in which electron transfer and addition channels dominate tend to exhibit faster second-order rate constants (kSO4•–) than those dominated by H-atom abstraction, corroborating the SO4•– reactivity and mechanisms observed in the individual studies. Then, a quantitative structure-activity relationship (QSAR) model was developed using a sequential approach with constitutional, geometrical, electrostatic, and quantum chemical descriptors. Two descriptors, the ELUMO–EHOMO energy gap (ELUMO–EHOMO) and the ratio of oxygen atoms to carbon atoms (#O:C), were found to mechanistically and statistically affect kSO4•– to a great extent, with the standardized QSAR model: ln kSO4•– = 26.8 − 3.97 × (#O:C) − 0.746 × (ELUMO–EHOMO). In addition, the correlation analysis indicates that there is no dominant reaction channel for SO4•– reactions with various structurally diverse compounds. Our QSAR model provides a robust predictive tool for estimating emerging micropollutant removal using SO4•– during wastewater treatment processes. PMID:26451961
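
    The record above quotes a standardized two-descriptor model for ln k. As a hedged illustration of how such a model could be fitted (assuming "standardized" means z-scored descriptors; the array names below are placeholders, not the study's data), a least squares sketch is:

```python
import numpy as np

def fit_standardized_qsar(o_to_c, gap, ln_k):
    """Fit ln k = b0 + b1*z(#O:C) + b2*z(ELUMO-EHOMO) by ordinary least squares."""
    def z(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / x.std(ddof=1)          # standardize each descriptor
    X = np.column_stack([np.ones(len(ln_k)), z(o_to_c), z(gap)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(ln_k, dtype=float), rcond=None)
    return coef    # [intercept, coefficient on #O:C, coefficient on the energy gap]
```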

  17. Estimating Active Transportation Behaviors to Support Health Impact Assessment in the United States.

    Mansfield, Theodore J; Gibson, Jacqueline MacDonald

    2016-01-01

    Health impact assessment (HIA) has been promoted as a means to encourage transportation and city planners to incorporate health considerations into their decision-making. Ideally, HIAs would include quantitative estimates of the population health effects of alternative planning scenarios, such as scenarios with and without infrastructure to support walking and cycling. However, the lack of baseline estimates of time spent walking or biking for transportation (together known as "active transportation"), which are critically related to health, often prevents planners from developing such quantitative estimates. To address this gap, we use data from the 2009 US National Household Travel Survey to develop a statistical model that estimates baseline time spent walking and biking as a function of the type of transportation used to commute to work along with demographic and built environment variables. We validate the model using survey data from the Raleigh-Durham-Chapel Hill, NC, USA, metropolitan area. We illustrate how the validated model could be used to support transportation-related HIAs by estimating the potential health benefits of built environment modifications that support walking and cycling. Our statistical model estimates that on average, individuals who commute on foot spend an additional 19.8 (95% CI 16.9-23.2) minutes per day walking compared to automobile commuters. Public transit riders walk an additional 5.0 (95% CI 3.5-6.4) minutes per day compared to automobile commuters. Bicycle commuters cycle for an additional 28.0 (95% CI 17.5-38.1) minutes per day compared to automobile commuters. The statistical model was able to predict observed transportation physical activity in the Raleigh-Durham-Chapel Hill region to within 0.5 MET-hours per day (equivalent to about 9 min of daily walking time) for 83% of observations. Across the Raleigh-Durham-Chapel Hill region, an estimated 38 (95% CI 15-59) premature deaths potentially could be avoided if the entire
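
    The MET-hour conversion quoted above (0.5 MET-hours per day being equivalent to about 9 minutes of walking) implies a walking intensity of roughly 3.3 MET. The sketch below uses that conversion and a purely illustrative log-linear dose-response to show how extra commute-related activity could be translated into a relative-risk estimate; the cycling MET value and the assumed 2% risk reduction per MET-hour are placeholders, not parameters from this study.

```python
import math

# MET values: the abstract's conversion (0.5 MET-h ~ 9 min walking) implies ~3.3 MET for
# walking; the cycling value and the dose-response slope below are placeholders.
WALK_MET, CYCLE_MET = 3.3, 6.8

def met_hours(walk_min_per_day, cycle_min_per_day):
    return (walk_min_per_day / 60.0) * WALK_MET + (cycle_min_per_day / 60.0) * CYCLE_MET

extra = met_hours(walk_min_per_day=5.0, cycle_min_per_day=0.0)   # e.g. a transit commuter
rr_per_met_hour = 0.98        # hypothetical 2% mortality-risk reduction per MET-h/day
rr = math.exp(math.log(rr_per_met_hour) * extra)
print(f"extra activity: {extra:.2f} MET-h/day, relative risk vs. baseline: {rr:.3f}")
```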

  18. Estimating active transportation behaviors to support health impact assessment in the United States

    Theodore J Mansfield

    2016-05-01

    Full Text Available Health impact assessment (HIA) has been promoted as a means to encourage transportation and city planners to incorporate health considerations into their decision-making. Ideally, HIAs would include quantitative estimates of the population health effects of alternative planning scenarios, such as scenarios with and without infrastructure to support walking and cycling. However, the lack of baseline estimates of time spent walking or biking for transportation (together known as active transportation), which are critically related to health, often prevents planners from developing such quantitative estimates. To address this gap, we use data from the 2009 US National Household Travel Survey to develop a statistical model that estimates baseline time spent walking and biking as a function of the type of transportation used to commute to work along with demographic and built environment variables. We validate the model using survey data from the Raleigh-Durham-Chapel Hill, NC, metropolitan area. We illustrate how the validated model could be used to support transportation-related HIAs by estimating the potential health benefits of built environment modifications that support walking and cycling. Our statistical model estimates that on average, individuals who commute on foot spend an additional 19.8 (95% CI 16.9–23.2) minutes per day walking compared to automobile commuters. Public transit riders walk an additional 5.0 (95% CI 3.5–6.4) minutes per day compared to automobile commuters. Bicycle commuters cycle for an additional 28.0 (95% CI 17.5–38.1) minutes per day compared to automobile commuters. The statistical model was able to predict observed transportation physical activity in the Raleigh-Durham-Chapel Hill region to within 0.5 MET-hours per day (equivalent to about 9 minutes of daily walking time) for 83% of observations. Across the Raleigh-Durham-Chapel Hill region, an estimated 38 (95% CI 15–59) premature deaths potentially could be

  19. Contribution of Quantitative Methods of Estimating Mortality Dynamics to Explaining Mechanisms of Aging.

    Shilovsky, G A; Putyatina, T S; Markov, A V; Skulachev, V P

    2015-12-01

    . makes it possible to approximately divide animals and plants only by their levels of the Gompertz type of senescence (i.e. actuarial senescence), whereas susceptibility to biological senescence can be estimated only when principally different models are applied. PMID:26638679

  20. Quantitative Estimates of Cloudiness over the Gulf Stream Locale Using GOES VAS Observations.

    Alliss, Randall J.; Raman, Sethu

    1995-02-01

    Fields of cloudiness derived from the Geostationary Operational Environmental Satellite VISSR (Visible Infrared Spin Scan Radiometer) Atmospheric Sounder are analyzed over the Gulf Stream locale (GSL) to investigate seasonal and geographical variations. The GSL in this study is defined as the region bounded from 31° to 38°N and 82° to 66°W. This region covers an area that includes the United States mid-Atlantic coast states, the Gulf Stream, and portions of the Sargasso Sea. Clouds over the GSL are found approximately three-quarters of the time between 1985 and 1993. However, large seasonal variations in the frequency of cloudiness exist. These seasonal variations show a distinct relationship to gradients in sea surface temperature (SST). For example, during winter when large SST gradients are present, large gradients in cloudiness are found. Clouds are observed least often during summer over the ocean portion of the GSL. This minimum coincides with an increase in atmospheric stability due to large-scale subsidence. Cloudiness is also found over the GSL in response to mesoscale convergence areas induced by sea surface temperature gradients. Geographical variations in cloudiness are found to be related to the meteorology of the region. During periods of cold-air advection, which are found most frequently in winter, clouds are found less often between the coastline and the core of the Gulf Stream and more often over the Sargasso Sea. During cyclogenesis, large cloud shields often develop and cover the entire domain. Satellite estimates of cloudiness are found to be least reliable over land at night during the cold months. In these situations, the cloud retrieval algorithm often mistakes clear sky for low clouds. Satellite-derived cloudiness over land is compared with daytime surface observations of cloudiness. Results indicate that retrieved cloudiness agrees well with surface observations. Relative humidity fields taken from global analyses are compared with

  1. The effect of sports activities in children and adolescents on the calcaneus - an investigation with quantitative ultrasound

    Purpose: To determine whether the quantitative ultrasound (QUS) parameters speed of sound (SOS) and broadband ultrasound attenuation (BUA) measured on the calcaneus differ between athletic children and a reference population. Patients and Methods: From a college of physical education, 177 children and adolescents (121 boys and 56 girls, aged 11 to 18 years) were included in this study. QUS was performed on the calcaneus using the Sahara™ device (Hologic, USA), and SOS and BUA were estimated. Regional reference values from 3299 children were used to determine significant differences between the athletes and the reference population, and the influence of activity level, age, height, and weight was assessed using correlation analysis. Results: The athletes showed significantly (p<0.05) higher values of the QUS parameters (SOS 1581.1 m/s; BUA 69.7 dB/MHz) compared to the reference data (SOS 1563.9 m/s; BUA 64.2 dB/MHz). Significant correlations were observed between BUA and the level of activity, age, weight, and height (p<0.01) and between SOS and weight and height (p<0.05). In the group of soccer players and athletes, significant correlations were found between BUA and age and between BUA and weight (p<0.05). Furthermore, significant correlations were observed between BUA and age and between BUA and weight in judokas and wrestlers. A significant correlation between the level of activity and BUA was found only in the group of judokas and wrestlers (p<0.01). Conclusion: Quantitative ultrasound parameters on the calcaneus are increased in children and adolescents with increased physical activity. (orig.)

  2. Hyperspectral Estimation of Corn Fraction of Photosynthetically Active Radiation

    YANG Fei; ZHANG Bai; SONG Kai-shan; WANG Zong-ming; YOU Jin-chun; LIU Dian-wei; XU Jing-ping

    2007-01-01

    The fraction of absorbed photosynthetically active radiation (FPAR) is an important variable in many vegetation productivity and biomass estimation models, so retrieving FPAR accurately matters for improving model precision. On the basis of a field experiment, this article analyzed the correlations of corn canopy FPAR with spectral reflectance and with the reflectance derivative. The mechanism of FPAR estimation with different empirical models is discussed on the basis of corn canopy reflectance, the reflectance derivative, NDVI (normalized difference vegetation index), and RVI (ratio vegetation index). Reflectance in the visible bands showed much better correlations with FPAR than the near-infrared bands. The correlation between FPAR and the reflectance derivative varied more frequently and more strongly than that between FPAR and reflectance, with good correlation only around 520, 570, 670, 805, 950, and 1010 nm. Reflectance and the reflectance derivative both correlated closely with FPAR at certain single bands, with maximum R2 values of 0.791 and 0.882, respectively. Overall, the reflectance derivative and the vegetation indices were more effective than reflectance for estimating corn FPAR, and a stepwise regression over multiple bands of the reflectance derivative gave the best fit, with an R2 of 0.944. Reflectance at 375 and 950 nm, bands with water absorption features, showed strong potential for building precise FPAR estimation models. On the whole, the vegetation indices and the reflectance derivative were well related to FPAR and can be used for FPAR estimation; choosing the right bands and mining the hyperspectral data should improve FPAR estimation precision.
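
    As a minimal illustration of the empirical index-based approach described above, the sketch below computes NDVI from red and near-infrared canopy reflectance and fits a simple linear FPAR–NDVI model; all reflectance and FPAR values are invented for illustration and are not the field data used in the study.

```python
import numpy as np

def ndvi(nir, red):
    # normalized difference vegetation index
    return (nir - red) / (nir + red)

# hypothetical canopy reflectances (red ~670 nm, NIR ~805 nm) and measured FPAR
red  = np.array([0.08, 0.06, 0.05, 0.04, 0.035])
nir  = np.array([0.30, 0.35, 0.40, 0.45, 0.50])
fpar = np.array([0.42, 0.55, 0.66, 0.74, 0.80])

x = ndvi(nir, red)
a, b = np.polyfit(x, fpar, 1)            # simple empirical FPAR ~ NDVI model
pred = a * x + b
r2 = 1 - np.sum((fpar - pred) ** 2) / np.sum((fpar - fpar.mean()) ** 2)
print(f"FPAR = {a:.2f} * NDVI + {b:.2f}   (R2 = {r2:.3f})")
```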

  3. Estimation of restraint stress in rats using salivary amylase activity.

    Matsuura, Tetsuya; Takimura, Ryo; Yamaguchi, Masaki; Ichinose, Mitsuyuki

    2012-09-01

    The rat is an ideal model animal for studying physical and psychological stress. Recent human studies have shown that salivary amylase activity is a useful biomarker of stress in everyday social life. To evaluate the usefulness of amylase activity as a biomarker of stress in rats, we analyzed changes in physiological parameters, including amylase activity, and anatomical variables induced by mild restraint of the paws (10 min, 3 times/week, for 9 weeks). Food and water intake and excretion were lower in the stressed rats than in the control rats during the experimental period (5-13 weeks), and the body weight of the stressed rats decreased compared with that of the controls. Moreover, enlargement of the adrenal gland was confirmed in the stressed rats, indicating that the mild restraint caused a chronic stress response. The amylase activities of the stressed rats were significantly greater than those of the controls at 5 weeks of age but decreased relative to the controls after 6 weeks of age. These results indicate that amylase activity is increased by acute stress and reduced by the chronic stress caused by repeated restraint. In conclusion, amylase activity is a useful biomarker of acute and chronic stress in rats. PMID:22753135

  4. A Novel Approach using Hydrotropic Solubilization Technique for Quantitative Estimation of Entacapone in Bulk Drug and Dosage Form

    Ruchi Jain

    2013-08-01

    Full Text Available Purpose: Drug analysis typically relies on organic solvents, which are costly, toxic, and cause environmental pollution. Hydrotropic solutions can preclude the use of organic solvents, so a simple, accurate, novel, safe, and precise method was developed for the estimation of the poorly water-soluble drug entacapone (water solubility 7.97e-02 g/l). Methods: The solubility of entacapone was increased using 8 M urea as a hydrotropic agent; solubility was enhanced more than 67-fold in the hydrotropic solution compared with distilled water. Entacapone (ENT) shows maximum absorbance at 378 nm, and at this wavelength the hydrotropic agent and other tablet excipients do not show any significant interference in the spectrophotometric assay. Results: The developed method was linear in the range of 4-20 μg/ml with a correlation coefficient (r2) of 0.9998. The mean percent label claim of ENT tablets estimated by the proposed method was 99.17±0.63. The method was validated according to ICH guidelines, and the accuracy, precision, and other statistical values were in good accordance with the prescribed limits. Conclusion: Because a hydrotropic agent is used, the proposed method is eco-friendly and can be applied to the routine quantitative analysis of the drug in bulk and dosage forms.
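
    The assay rests on an ordinary external-standard calibration in the stated 4-20 μg/ml range. The sketch below fits such a calibration line and back-calculates an unknown concentration from its absorbance; the absorbance values are hypothetical, not the authors' data.

```python
import numpy as np

# hypothetical calibration data for the 4-20 ug/ml range at 378 nm
conc = np.array([4, 8, 12, 16, 20], dtype=float)        # ug/ml
absorbance = np.array([0.152, 0.300, 0.455, 0.603, 0.752])

slope, intercept = np.polyfit(conc, absorbance, 1)
r = np.corrcoef(conc, absorbance)[0, 1]
print(f"A = {slope:.4f}*C + {intercept:.4f}, r^2 = {r**2:.4f}")

# back-calculate an unknown sample concentration from its absorbance
a_sample = 0.521
c_sample = (a_sample - intercept) / slope
print(f"estimated concentration: {c_sample:.2f} ug/ml")
```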

  5. Quantitative structure-activity relationship study of the anticonvulsant activity of α-substituted acetamido-N-benzylacetamide derivatives

    Usman Abdulfatai

    2016-12-01

    Full Text Available The aim was to develop a quantitative structure–activity relationship (QSAR) for predicting the anticonvulsant activity of α-substituted acetamido-N-benzylacetamide derivatives. The Density Functional Theory (B3LYP/6-31G*) quantum chemical calculation method was used to find the optimized geometries of the studied molecules. Nine types of molecular descriptors were used to derive a quantitative relation between anticonvulsant activity and structural properties, and the relevant descriptors were selected by a genetic algorithm approach. The high squared correlation coefficient (R2 = 0.98) indicates that the model is satisfactory. The proposed model showed good stability, robustness, and predictive ability under internal and external validation.
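
    Internal validation of this kind of regression QSAR is commonly reported as a leave-one-out cross-validated q2 alongside the fitted R2. The sketch below shows that calculation on synthetic descriptor data; it is a generic multiple-linear-regression stand-in, not the descriptor set or model of the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# X: selected molecular descriptors, y: activity values (synthetic data for illustration)
rng = np.random.default_rng(42)
X = rng.normal(size=(30, 4))
y = 1.2 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.2, size=30)

model = LinearRegression()
y_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())   # leave-one-out predictions

press = np.sum((y - y_loo) ** 2)                 # predictive residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)
q2 = 1.0 - press / ss_tot
r2 = model.fit(X, y).score(X, y)
print(f"R2 = {r2:.3f}, LOO q2 = {q2:.3f}")
```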

  6. PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies

    Dai, Xinbin; Wang, Qishan; Xu, Shizhong; Zhao, Patrick X.

    2016-01-01

    The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the ‘missing heritability,’ which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned results identical to those of the original prototype R code at each analysis step, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome-wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publicly available at http://bioinfo.noble.org/PolyGenic_QTL/. PMID:27224861
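
    The first PEPIS sub-pipeline centres on kinship matrices computed from genotype data. A minimal sketch of a marker-based additive kinship matrix (VanRaden-style normalisation) is given below; PEPIS itself computes several kinship matrices, and epistatic kinships are often formed as element-wise products of such matrices, but the exact definitions used by the pipeline may differ from this sketch.

```python
import numpy as np

def additive_kinship(Z):
    """Additive (marker-based) kinship matrix from a genotype matrix Z
    coded 0/1/2, individuals in rows and markers in columns."""
    p = Z.mean(axis=0) / 2.0                  # allele frequencies
    W = Z - 2.0 * p                           # centre each marker
    denom = 2.0 * np.sum(p * (1.0 - p))       # VanRaden-style normalisation
    return W @ W.T / denom

# synthetic genotypes: 100 lines, 500 markers
Z = np.random.default_rng(1).integers(0, 3, size=(100, 500)).astype(float)
K = additive_kinship(Z)
print(K.shape, round(float(np.mean(np.diag(K))), 3))
```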

  7. Quantitative perturbation-based analysis of gene expression predicts enhancer activity in early Drosophila embryo.

    Sayal, Rupinder; Dresch, Jacqueline M; Pushel, Irina; Taylor, Benjamin R; Arnosti, David N

    2016-01-01

    Enhancers constitute one of the major components of the regulatory machinery of metazoans. Although several genome-wide studies have focused on finding and locating enhancers in the genomes, the fundamental principles governing their internal architecture and cis-regulatory grammar remain elusive. Here, we describe an extensive, quantitative perturbation analysis targeting the dorsal-ventral patterning gene regulatory network (GRN) controlled by the Drosophila NF-κB homolog Dorsal. To understand transcription factor interactions on enhancers, we employed an ensemble of mathematical models, testing effects of cooperativity, repression, and factor potency. Models trained on the dataset correctly predict the activity of evolutionarily divergent regulatory regions, providing insights into spatial relationships between repressor and activator binding sites. Importantly, the collective predictions of sets of models were effective at novel enhancer identification and characterization. Our study demonstrates how experimental datasets and modeling can be effectively combined to provide quantitative insights into cis-regulatory information on a genome-wide scale. PMID:27152947
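
    The kind of thermodynamic (fractional-occupancy) modelling referred to above can be sketched very simply: site occupancies are computed from binding affinities and factor concentrations, activator occupancies raise output and repressor occupancies lower it. The toy model below is only meant to convey the structure of such models; the actual ensemble of models, parameters, and cooperativity/quenching terms used in the study is different and more elaborate.

```python
def expression(sites, conc, w_coop=1.0):
    """Toy fractional-occupancy model of an enhancer.

    sites : list of (K, role) tuples, K = binding affinity, role = +1 activator / -1 repressor
    conc  : transcription factor concentration (arbitrary units)
    Returns a number in [0, 1] standing in for relative reporter expression.
    """
    act, rep = 0.0, 0.0
    for K, role in sites:
        occ = K * conc / (1.0 + K * conc)      # independent-site occupancy
        if role > 0:
            act += occ
        else:
            rep += occ
    act *= w_coop                              # crude cooperativity knob
    return act / (1.0 + act + 5.0 * rep)       # repressors weighted more strongly

enhancer = [(2.0, +1), (1.5, +1), (0.8, -1)]   # two activator sites, one repressor site
for c in (0.1, 0.5, 1.0, 2.0):
    print(c, round(expression(enhancer, c), 3))
```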

  8. Quantitative correspondence between the in vivo and in vitro activity of teratogenic agents.

    Braun, A G; Buckner, C A; Emerson, D J; Nichinson, B B

    1982-01-01

    We have tested 74 teratogenic and 28 nonteratogenic agents in a recently developed in vitro teratogen assay system. The assay identifies teratogens by their ability to inhibit attachment of ascites tumor cells to plastic surfaces coated with concanavalin A. There is a qualitative agreement between in vivo animal data and in vitro activity for 81 of the 102 agents (79%). Quantitative analysis shows a highly significant correlation coefficient of 0.69 between the inhibitory in vitro dose and th...

  9. Quantitative Measurement of Protease-Activity with Correction of Probe Delivery and Tissue Absorption Effects

    Salthouse, Christopher D.; Reynolds, Fred; Tam, Jenny M.; Josephson, Lee; Mahmood, Umar

    2009-01-01

    Proteases play important roles in a variety of pathologies from heart disease to cancer. Quantitative measurement of protease activity is possible using a novel spectrally matched dual fluorophore probe and a small animal lifetime imager. The recorded fluorescence from an activatable fluorophore, one that changes its fluorescent amplitude after biological target interaction, is also influenced by other factors including imaging probe delivery and optical tissue absorption of excitation and em...

  10. Quantitative structure–activity relationships (QSARs) for the transformation of organic micropollutants during oxidative water treatment

    Lee, Yunho; von Gunten, Urs

    2012-01-01

    Various oxidants such as chlorine, chlorine dioxide, ferrate(VI), ozone, and hydroxyl radicals can be applied to eliminate organic micropollutants by oxidative transformation during water treatment in systems such as drinking water, wastewater, and water reuse. Over the last decades, many second-order rate constants (k) have been determined for the reactions of these oxidants with model compounds and micropollutants. Good correlations (quantitative structure–activity relationships or QSARs) ar...

  11. Active illumination using a digital micromirror device for quantitative phase imaging

    Shin, Seungwoo; Kim, Kyoohyun; Yoon, Jonghee; Park, YongKeun

    2015-01-01

    We present a powerful and cost-effective method for active illumination using a digital micromirror device (DMD) for quantitative phase imaging techniques. Displaying binary illumination patterns on a DMD with appropriate spatial filtering, plane waves with various illumination angles are generated and impinged onto a sample. Complex optical fields of the sample obtained with various incident angles are then measured via Mach-Zehnder interferometry, from which a high-resolution two-dimensiona...

  12. Activated sludge characterization through microscopy: A review on quantitative image analysis and chemometric techniques

    Mesquita, Daniela P. [IBB-Institute for Biotechnology and Bioengineering, Centre of Biological Engineering, Universidade do Minho, Campus de Gualtar, 4710-057 Braga (Portugal); Amaral, A. Luís [IBB-Institute for Biotechnology and Bioengineering, Centre of Biological Engineering, Universidade do Minho, Campus de Gualtar, 4710-057 Braga (Portugal); Instituto Politécnico de Coimbra, ISEC, DEQB, Rua Pedro Nunes, Quinta da Nora, 3030-199 Coimbra (Portugal); Ferreira, Eugénio C., E-mail: ecferreira@deb.uminho.pt [IBB-Institute for Biotechnology and Bioengineering, Centre of Biological Engineering, Universidade do Minho, Campus de Gualtar, 4710-057 Braga (Portugal)

    2013-11-13

    Highlights: • Quantitative image analysis shows potential to monitor activated sludge systems. • Staining techniques increase the potential for detection of operational problems. • Chemometrics combined with quantitative image analysis is valuable for process monitoring. Abstract: In wastewater treatment processes, and particularly in activated sludge systems, efficiency is quite dependent on the operating conditions, and a number of problems may arise due to sludge structure and proliferation of specific microorganisms. In fact, bacterial communities and protozoa identification by microscopy inspection is already routinely employed in a considerable number of cases. Furthermore, quantitative image analysis techniques have been increasingly used throughout the years for the assessment of aggregates and filamentous bacteria properties. These procedures are able to provide an ever growing amount of data for wastewater treatment processes in which chemometric techniques can be a valuable tool. However, the determination of microbial communities’ properties remains a current challenge in spite of the great diversity of microscopy techniques applied. In this review, activated sludge characterization is discussed highlighting the aggregates structure and filamentous bacteria determination by image analysis on bright-field, phase-contrast, and fluorescence microscopy. An in-depth analysis is performed to summarize the many new findings that have been obtained, and future developments for these biological processes are further discussed.

  13. Activated sludge characterization through microscopy: A review on quantitative image analysis and chemometric techniques

    Highlights: • Quantitative image analysis shows potential to monitor activated sludge systems. • Staining techniques increase the potential for detection of operational problems. • Chemometrics combined with quantitative image analysis is valuable for process monitoring. Abstract: In wastewater treatment processes, and particularly in activated sludge systems, efficiency is quite dependent on the operating conditions, and a number of problems may arise due to sludge structure and proliferation of specific microorganisms. In fact, bacterial communities and protozoa identification by microscopy inspection is already routinely employed in a considerable number of cases. Furthermore, quantitative image analysis techniques have been increasingly used throughout the years for the assessment of aggregates and filamentous bacteria properties. These procedures are able to provide an ever growing amount of data for wastewater treatment processes in which chemometric techniques can be a valuable tool. However, the determination of microbial communities’ properties remains a current challenge in spite of the great diversity of microscopy techniques applied. In this review, activated sludge characterization is discussed highlighting the aggregates structure and filamentous bacteria determination by image analysis on bright-field, phase-contrast, and fluorescence microscopy. An in-depth analysis is performed to summarize the many new findings that have been obtained, and future developments for these biological processes are further discussed.

  14. Estimating cost for integrated harvesting and related forest management activities

    Often the deciding factor in the economical recovery of wood fuel is its relationship with some other objective such as stand establishment, stand improvement, or forest access. The costs and benefits arising from these related management activities are discussed. Two different approaches to estimating the cost of producing conventional products and fuel wood with integrated harvesting systems are also examined. With a marginal cost approach, the cost of common harvesting activities such as felling, forwarding and processing/sorting are fully allocated to the conventional products. Under a joint product approach, the cost of production is distributed among conventional products and fuel wood. A model is developed showing the distribution of cost under both approaches for seven integrated harvesting systems. The results suggest that production costs are highly variable depending on the harvesting system used and the ratio of conventional products to fuel wood. The estimated cost of fuel wood varies from $6.74 (U.S. dollars) gt-1 to $37.05 gt-1 using joint product costing and from nil to $11.04 under the marginal cost method. (Author)

  15. Towards a quantitative, measurement-based estimate of the uncertainty in photon mass attenuation coefficients at radiation therapy energies

    In this study, a quantitative estimate is derived for the uncertainty in the XCOM photon mass attenuation coefficients in the energy range of interest to external beam radiation therapy—i.e. 100 keV (orthovoltage) to 25 MeV—using direct comparisons of experimental data against Monte Carlo models and theoretical XCOM data. Two independent datasets are used. The first dataset is from our recent transmission measurements and the corresponding EGSnrc calculations (Ali et al 2012 Med. Phys. 39 5990–6003) for 10–30 MV photon beams from the research linac at the National Research Council Canada. The attenuators are graphite and lead, with a total of 140 data points and an experimental uncertainty of ∼0.5% (k = 1). An optimum energy-independent cross section scaling factor that minimizes the discrepancies between measurements and calculations is used to deduce cross section uncertainty. The second dataset is from the aggregate of cross section measurements in the literature for graphite and lead (49 experiments, 288 data points). The dataset is compared to the sum of the XCOM data plus the IAEA photonuclear data. Again, an optimum energy-independent cross section scaling factor is used to deduce the cross section uncertainty. Using the average result from the two datasets, the energy-independent cross section uncertainty estimate is 0.5% (68% confidence) and 0.7% (95% confidence). The potential for energy-dependent errors is discussed. Photon cross section uncertainty is shown to be smaller than the current qualitative ‘envelope of uncertainty’ of the order of 1–2%, as given by Hubbell (1999 Phys. Med. Biol 44 R1–22). (paper)
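
    The central step described above, fitting a single energy-independent scaling factor that minimises the measurement–calculation discrepancy, is a one-line weighted least-squares problem. The sketch below shows it on made-up attenuation-coefficient values with an assumed 0.5% experimental uncertainty; the numbers are illustrative, not the study's data.

```python
import numpy as np

# measured vs. calculated attenuation coefficients for one attenuator (hypothetical values)
mu_meas = np.array([0.0215, 0.0198, 0.0187, 0.0179])   # cm^2/g
mu_calc = np.array([0.0218, 0.0200, 0.0188, 0.0181])   # cm^2/g
sigma   = 0.005 * mu_meas                               # ~0.5% experimental uncertainty

# energy-independent scaling factor s minimising chi^2 = sum((meas - s*calc)^2 / sigma^2)
w = 1.0 / sigma**2
s = np.sum(w * mu_meas * mu_calc) / np.sum(w * mu_calc**2)
print(f"optimum scaling factor: {s:.4f}  ->  cross-section shift of {100*(s-1):+.2f}%")
```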

  16. Bone mineral density changes during pregnancy in actively exercising women as measured by quantitative ultrasound

    To, William W. K.; Wong, Margaret W. N.

    2012-01-01

    Objective To evaluate whether bone mineral density (BMD) changes in women engaged in active exercises during pregnancy would be different from non-exercising women. Methods Consecutive patients with singleton pregnancies who were engaged in active exercise training during pregnancy were prospectively recruited over a period of 6 months. Quantitative USG measurements of the os calcis BMD were performed at 14–20 weeks and at 36–38 weeks. These patients were compared to a control cohort of non-e...

  17. Warm dust and aromatic bands as quantitative probes of star-formation activity

    Förster Schreiber, N. M.; Roussel, H.; Sauvage, M.; Charmandaris, V.

    2004-05-01

    We combine samples of spiral galaxies and starburst systems observed with ISOCAM on board ISO to investigate the reliability of mid-infrared dust emission as a quantitative tracer of star formation activity. The total sample covers very diverse galactic environments and probes a much wider dynamic range in star formation rate density than previous similar studies. We find that both the monochromatic 15 μm continuum and the 5-8.5 μm emission constitute excellent indicators of the star formation rate as quantified by the Lyman continuum luminosity LLyc, within specified validity limits which are different for the two tracers. Normalized to projected surface area, the 15 μm continuum luminosity Σ15 μm,ct is directly proportional to ΣLyc over several orders of magnitude. Two regimes are distinguished from the relative offsets in the observed relationship: the proportionality factor increases by a factor of ≈5 between quiescent disks in spiral galaxies, and moderate to extreme star-forming environments in circumnuclear regions of spirals and in starburst systems. The transition occurs near ΣLyc ˜ 102 L⊙ pc-2 and is interpreted as due to very small dust grains starting to dominate the emission at 15 μm over aromatic species above this threshold. The 5-8.5 μm luminosity per unit projected area is also directly proportional to the Lyman continuum luminosity, with a single conversion factor from the most quiescent objects included in the sample up to ΣLyc ˜ 104 L⊙ pc-2, where the relationship then flattens. The turnover is attributed to depletion of aromatic band carriers in the harsher conditions prevailing in extreme starburst environments. The observed relationships provide empirical calibrations useful for estimating star formation rates from mid-infrared observations, much less affected by extinction than optical and near-infrared tracers in deeply embedded H II regions and obscured starbursts, as well as for theoretical predictions from evolutionary

  18. A quantitative estimate on the heat transfer in cylindrical fuel rods to account for flux depression inside fuel

    In a nuclear reactor, the amount of power generation is limited by thermal rather than by nuclear considerations. The reactor core must be operated at a power level that the temperatures of the fuel and cladding anywhere in the core must not exceed safe limits so as to prevent from fuel element damages. Heat transfer from fuel pins can be calculated analytically by using a flat power density in the fuel pin. In actual practice, the neutron flux distribution inside fuel pins results in a smaller effective distance for the heat to be transported to the coolant. This inherent phenomenon gives rise to a heat transfer benefit in fuel pin temperatures. In this research, a quantitative estimate for transferring heat from cylindrical fuel rods is accomplished by considering a non-uniform neutron flux, which leads to a flux depression factor. This, in turn, shifts the temperature inside the fuel pin. A theoretical relationship combining the flux depression factor and a ratio of temperature gradients for uniform and non-uniform is derived, and a computational program, based on energy balance, is developed to validate the considered approximation. (author)
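
    A minimal sketch of the benefit described above, assuming the textbook idealisation that the thermal flux (and hence the volumetric heat source) inside the pellet varies as a modified Bessel function I0(κr): integrating the conduction equation then gives a centre-to-surface temperature rise of 2(I0(κR) − 1)/(κR·I1(κR)) times the flat-source value Q/(4πk) at the same linear power. The paper's exact formulation of the flux depression factor may differ from this idealisation.

```python
from scipy.special import i0, i1

def delta_t_ratio(kappa_R):
    """Ratio of centre-to-surface temperature rise for an I0-shaped heat source
    to that of a flat source delivering the same total power per unit length
    (flat-source value: Q / (4*pi*k))."""
    x = kappa_R
    return 2.0 * (i0(x) - 1.0) / (x * i1(x))

for x in (0.1, 0.5, 1.0, 2.0):
    print(f"kappa*R = {x:3.1f}  ->  dT(non-uniform)/dT(flat) = {delta_t_ratio(x):.3f}")
```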

  19. Quantitative microbial risk assessment combined with hydrodynamic modelling to estimate the public health risk associated with bathing after rainfall events.

    Eregno, Fasil Ejigu; Tryland, Ingun; Tjomsland, Torulv; Myrmel, Mette; Robertson, Lucy; Heistad, Arve

    2016-04-01

    This study investigated the public health risk from exposure to infectious microorganisms at the Sandvika recreational beaches, Norway, by combining hydrodynamic modelling with Quantitative Microbial Risk Assessment (QMRA) and dose-response relationships. Meteorological and hydrological data were collected to produce a calibrated hydrodynamic model using Escherichia coli as an indicator of faecal contamination. Based on average concentrations of reference pathogens (norovirus, Campylobacter, Salmonella, Giardia and Cryptosporidium) relative to E. coli in Norwegian sewage from previous studies, the hydrodynamic model was used to simulate the concentrations of pathogens at the local beaches during and after a heavy rainfall event, using three different decay rates. The simulated concentrations were used as input for QMRA, and the public health risk was estimated as the probability of infection from a single exposure of bathers during the three consecutive days after the rainfall event. The level of risk on the first day after the rainfall event was acceptable for the bacterial and parasitic reference pathogens, but high for the viral reference pathogen at all beaches, and severe at the Kalvøya-small and Kalvøya-big beaches, supporting the advice to avoid swimming in the day(s) after heavy rainfall. The study demonstrates the potential of combining discharge-based hydrodynamic modelling with QMRA in the context of bathing water as a tool to evaluate public health risk and support beach management decisions. PMID:26802355
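
    The QMRA step converts a simulated dose per bathing event into a probability of infection through a dose-response model, typically exponential for protozoa and approximate beta-Poisson for bacteria and viruses. The sketch below shows both forms with placeholder doses and parameters; they are not the values used for the Sandvika beaches.

```python
import numpy as np

def p_inf_exponential(dose, r):
    """Single-hit exponential dose-response model."""
    return 1.0 - np.exp(-r * dose)

def p_inf_beta_poisson(dose, alpha, n50):
    """Approximate beta-Poisson dose-response model."""
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

# placeholder single-exposure doses (organisms ingested per bathing event) and
# placeholder parameters -- not the values used in the study
print(p_inf_exponential(dose=0.5, r=0.06))                  # e.g. a protozoan pathogen
print(p_inf_beta_poisson(dose=10.0, alpha=0.145, n50=896))  # e.g. a bacterial pathogen
```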

  20. Rapid and quantitative measuring of telomerase activity using an electrochemiluminescent sensor

    Zhou, Xiaoming; Xing, Da; Zhu, Debin; Jia, Li

    2007-11-01

    Telomerase is a ribonucleoprotein enzyme that adds telomeric repeats to the 3' end of chromosomal DNA to maintain chromosomal integrity and stability. The strong association of telomerase activity with tumors establishes it as the most widespread cancer marker. A number of assays based on the polymerase chain reaction (PCR) have been developed for the evaluation of telomerase activity; however, those methods require gel electrophoresis and staining procedures. We developed an electrochemiluminescent (ECL) sensor for measuring telomerase activity that overcomes problems of the conventional methods, such as troublesome post-PCR procedures and semi-quantitative assessment. In this assay, a 5'-biotinylated telomerase synthesis (TS) primer serves as the substrate for the extension of telomeric repeats by telomerase. The extension products were amplified with this TS primer and a tris-(2,2'-bipyridyl)ruthenium (TBR)-labeled reverse primer. The amplified products were separated and enriched on the electrode surface by streptavidin-coated magnetic beads and detected by measuring the ECL signal of the TBR label. Measuring telomerase activity with the sensor is easy, sensitive, rapid, and applicable to quantitative analysis, and should be clinically useful for the detection and monitoring of telomerase activity.

  1. Altered resting-state functional activity in posttraumatic stress disorder: A quantitative meta-analysis.

    Wang, Ting; Liu, Jia; Zhang, Junran; Zhan, Wang; Li, Lei; Wu, Min; Huang, Hua; Zhu, Hongyan; Kemp, Graham J; Gong, Qiyong

    2016-01-01

    Many functional neuroimaging studies have reported differential patterns of spontaneous brain activity in posttraumatic stress disorder (PTSD), but the findings are inconsistent and have not so far been quantitatively reviewed. The present study set out to determine consistent, specific regional brain activity alterations in PTSD, using the Effect Size Signed Differential Mapping technique to conduct a quantitative meta-analysis of resting-state functional neuroimaging studies of PTSD that used either a non-trauma (NTC) or a trauma-exposed (TEC) comparison control group. Fifteen functional neuroimaging studies were included, comparing 286 PTSDs, 203 TECs and 155 NTCs. Compared with NTC, PTSD patients showed hyperactivity in the right anterior insula and bilateral cerebellum, and hypoactivity in the dorsal medial prefrontal cortex (mPFC); compared with TEC, PTSD showed hyperactivity in the ventral mPFC. The pooled meta-analysis showed hypoactivity in the posterior insula, superior temporal, and Heschl's gyrus in PTSD. Additionally, subgroup meta-analysis (non-medicated subjects vs. NTC) identified abnormal activation in the prefrontal-limbic system. In meta-regression analyses, mean illness duration was positively associated with activity in the right cerebellum (PTSD vs. NTC), and illness severity was negatively associated with activity in the right lingual gyrus (PTSD vs. TEC). PMID:27251865

  2. Quantitative Metabolomics:Analysis on Active Components in Extracts from Kaki Folium

    DAI Li-peng; GU Yuan; YIN Ren-jie; LIU Chang-xiao; SI Duan-yun

    2012-01-01

    Objective: To analyze the active components in extracts from Kaki Folium (KF), a quantitative metabolomics approach was adopted to investigate the number of active components present in the different extracts and their variation. Methods: An LC-MS method was established for the quantitative determination of the active components, using a mixture with reference substances as the test sample. Results: The number and amount of active components varied among the KF samples extracted with different types of solvents, but rutin, astragalin, and kaempferol were present in all samples. Differences were found between samples extracted from products on the market and from raw KF materials processed with polar solvents of different recipes; however, the three active components were found in all samples examined. Conclusion: These results provide useful information and could be used to optimize the raw-material extraction procedure and enhance productivity.

  3. Impact of high 131I-activities on quantitative 124I-PET

    Braad, P. E. N.; Hansen, S. B.; Høilund-Carlsen, P. F.

    2015-07-01

    Peri-therapeutic 124I-PET/CT is of interest as guidance for radioiodine therapy. Unfortunately, image quality is complicated by dead time effects and increased random coincidence rates from high 131I-activities. A series of phantom experiments with clinically relevant 124I/131I-activities were performed on a clinical PET/CT-system. Noise equivalent count rate (NECR) curves and quantitation accuracy were determined from repeated scans performed over several weeks on a decaying NEMA NU-2 1994 cylinder phantom initially filled with 25 MBq 124I and 1250 MBq 131I. Six spherical inserts with diameters 10-37 mm were filled with 124I (0.45 MBq ml-1) and 131I (22 MBq ml-1) and placed inside the background of the NEMA/IEC torso phantom. Contrast recovery, background variability and the accuracy of scatter and attenuation corrections were assessed at sphere-to-background activity ratios of 20, 10 and 5. Results were compared to pure 124I-acquisitions. The quality of 124I-PET images in the presence of high 131I-activities was good and image quantification unaffected except at very high count rates. Quantitation accuracy and contrast recovery were uninfluenced at 131I-activities below 1000 MBq, whereas image noise was slightly increased. The NECR peaked at 550 MBq of 131I, where it was 2.8 times lower than without 131I in the phantom. Quantitative peri-therapeutic 124I-PET is feasible.

  4. Quantitative estimation of landslide risk from rapid debris slides on natural slopes in the Nilgiri hills, India

    Jaiswal, P.; van Westen, C. J.; Jetten, V.

    2011-06-01

    A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms. An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures. For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide susceptible zones, obtained using a logistic regression model, and (4) distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents. Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructures, buildings, and people both in initiation and run-out areas. Risks were calculated by considering the minimum, average, and maximum landslide volumes in each magnitude class and the
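
    The frequency step described above, relating landslide counts to return periods through a Gumbel distribution, can be sketched as follows; the annual counts below are invented for illustration and are not the Nilgiri inventory data.

```python
import numpy as np
from scipy.stats import gumbel_r

# hypothetical annual landslide counts (illustrative only, not the Nilgiri inventory)
annual_counts = np.array([12, 45, 8, 30, 22, 95, 17, 60, 28, 40, 15, 75, 33, 20, 55])

loc, scale = gumbel_r.fit(annual_counts)
for T in (5, 15, 25, 50):                                  # return periods in years
    n_T = gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
    print(f"{T:>3}-yr return period: ~{n_T:.0f} landslides per year")
```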

  5. Quantitative estimation of landslide risk from rapid debris slides on natural slopes in the Nilgiri hills, India

    P. Jaiswal

    2011-06-01

    Full Text Available A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms.

    An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures.

    For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide susceptible zones, obtained using a logistic regression model, and (4) distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents.

    Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructures, buildings, and people both in initiation and run-out areas. Risks were calculated by considering the minimum, average, and maximum landslide volumes in

  6. Microfluorometric mithramycin assay for quantitating the effects of immunotoxicants on lymphocyte activation

    A semiautomated, microfluorometric assay has been developed for the detection of toxicant-induced changes in lymphocyte DNA content at standard intervals after mitogen activation. DNA is quantitated by solubilizing the cells and determining the fluorescence enhancement that results from formation of the highly specific mithramycin:DNA adduct. The limit of detection is 0.21 μg (30,000 resting cell equivalents) per microliter well. Correlation with the less sensitive, nonautomatable diphenylamine DNA assay gives a correlation coefficient r = 0.91. Prototype substances representative of true immunotoxicants (prostaglandin E2) and common interfering substances (thymidine at 14 M) have been tested. The latter substance produces false positive results in the standard [3H]thymidine assay, whereas the mithramycin assay does not inappropriately detect this interfering substance. It has the characteristics of a highly specific, accurate technique for screening and quantitating immunotoxic drugs, agents, and mediators in patient sera and other complex biological fluids.

  7. Estimated doses from decommissioning activities at commercial nuclear power stations

    This paper reviews generic population dose estimates for decommissioning reference boiling water reactors (BWRs) and pressurized water reactors (PWRs) and provides extrapolated estimates of the total collective dose resulting from decommissioning commercial nuclear reactors operated in the United States. Decontamination and decommissioning of retired nuclear power reactors is a necessary part of the nuclear fuel cycle. During decommissioning of large facilities, radioactivity will be encountered in activated reactor components and in contaminated piping, equipment, and building surfaces. The US Nuclear Regulatory Commission (NRC) sponsored a series of studies to evaluate the technology, safety, and costs of decommissioning a variety of nuclear fuel cycle facilities. The NRC adopted the following standardized definitions concerning decommissioning: (1) decommissioning: the measures taken at the end of a facility's operating lifetime to ensure the protection of the public from any residual radioactivity or other hazards present in the facility; (2) DECON: immediate decontamination leading to the release of the facility for unrestricted use; (3) SAFSTOR: safe storage plus deferred decontamination leading to release of the facility for unrestricted use; and (4) ENTOMB: entombment plus decay leading to release of the facility for unrestricted use. In the NRC studies, the most likely decommissioning alternative for most facilities was assumed to be DECON or SAFSTOR

  8. Towards cheminformatics-based estimation of drug therapeutic index: Predicting the protective index of anticonvulsants using a new quantitative structure-index relationship approach.

    Chen, Shangying; Zhang, Peng; Liu, Xin; Qin, Chu; Tao, Lin; Zhang, Cheng; Yang, Sheng Yong; Chen, Yu Zong; Chui, Wai Keung

    2016-06-01

    The overall efficacy and safety profile of a new drug is partially evaluated by the therapeutic index in clinical studies and by the protective index (PI) in preclinical studies. In-silico predictive methods may facilitate the assessment of these indicators. Although QSAR and QSTR models can be used for predicting PI, their predictive capability has not been evaluated. To test this capability, we developed QSAR and QSTR models for predicting the activity and toxicity of anticonvulsants at accuracy levels above the literature-reported threshold (LT) of good QSAR models, as tested by both internal 5-fold cross-validation and an external validation method. These models showed significantly compromised PI predictive capability due to the cumulative errors of the QSAR and QSTR models. Therefore, in this investigation a new quantitative structure-index relationship (QSIR) model was devised, and it showed improved PI predictive capability that superseded the LT of good QSAR models. The QSAR, QSTR and QSIR models were developed using the support vector regression (SVR) method with parameters optimized by a greedy search method. The molecular descriptors relevant to the prediction of anticonvulsant activities, toxicities and PIs were analyzed by a recursive feature elimination method. The selected molecular descriptors are primarily associated with drug-like, pharmacological and toxicological features and with those used in the published anticonvulsant QSAR and QSTR models. This study suggests that QSIR is useful for estimating the therapeutic index of drug candidates. PMID:27262528
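
    A minimal sketch of an SVR-based QSIR-style model is given below. It uses synthetic descriptors and a grid search with 5-fold cross-validation in place of the authors' greedy parameter search and recursive feature elimination, so it only illustrates the general workflow, not the published model.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: molecular descriptors, y: log-transformed protective index (synthetic illustration data)
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=60)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(
    model,
    {"svr__C": [1, 10, 100], "svr__epsilon": [0.01, 0.1], "svr__gamma": ["scale", 0.01]},
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="r2",
)
grid.fit(X, y)
print("5-fold CV R2:", round(grid.best_score_, 3), "best params:", grid.best_params_)
```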

  9. Estimation of competition activity of hockey players high class taking into account generic models

    Andrey Mikhnov

    2014-12-01

    Full Text Available Purpose: to develop a method for evaluating the competitive activity of elite hockey players based on group model characteristics of technical-tactical actions. Material and Methods: to develop the evaluation method, data on elite hockey players competing in Kontinental Hockey League (KHL) matches in the 2013-2014 season were analysed; the quantitative and qualitative indicators of technical-tactical actions were examined for regular-season matches. The methods used were pedagogical observation, pedagogical analysis and generalization of leading practice, analysis of the specialized scientific-methodical literature, and analysis of Internet data. Results: model characteristics of the competitive activity of elite hockey players in different playing positions were developed and served as the basis of the evaluation method. To determine the efficiency of competitive activity, it is recommended to take into account the degree of positive or negative deviation from the mean model characteristics; a player's actions are evaluated from the combined deviation across all studied technical-tactical actions. Conclusions: the research produced a method for evaluating the efficiency of competitive activity, which makes it possible to determine the efficiency of a player's game.

  10. Behavior, sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation.

    Eickhoff, Simon B; Nichols, Thomas E; Laird, Angela R; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T; Bzdok, Danilo; Eickhoff, Claudia R

    2016-08-15

    Given the increasing number of neuroimaging publications, automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and thus to formulate recommendations for future ALE studies. As a first consequence, we showed that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments in an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. PMID:27179606